# OpenAI-Compatible Runtime Surface
Ship chat completions, embeddings, and agent execution behind one gateway instead of stitching together provider-specific entry points by hand.
Operate inference, vector stores, tracing, guardrails, RAG, config, and incident workflows behind one production-ready console with tenant isolation built in.
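Because the gateway exposes an OpenAI-compatible surface, any client that can speak that wire format can call it. A minimal sketch using `fetch`, assuming a local deployment on port 3000 and a bearer API key (the base URL, model name, and key are placeholders, not values from this project):

```typescript
// Sketch: calling the gateway's OpenAI-compatible chat completions route.
// Base URL, model name, and API key below are assumptions — substitute
// the values from your own deployment.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Body shape follows the OpenAI chat completions schema, which the
// gateway mirrors.
function buildChatRequest(model: string, messages: ChatMessage[]) {
  return { model, messages };
}

async function chat(baseUrl: string, apiKey: string, messages: ChatMessage[]) {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildChatRequest("gpt-4o-mini", messages)),
  });
  if (!res.ok) throw new Error(`gateway error: ${res.status}`);
  return res.json();
}

// Request payload only (no network call):
const payload = buildChatRequest("gpt-4o-mini", [
  { role: "user", content: "Hello" },
]);
console.log(JSON.stringify(payload));
```

The same pattern applies to embeddings and agent execution: one base URL, one credential, provider differences absorbed behind the gateway.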
If you are evaluating or onboarding Cognipeer Console, read the guide top to bottom; if you already know the basics, jump directly to the part that matches your work:
| Start with | Best for | What you get |
|---|---|---|
| Guide | Teams onboarding the platform for the first time | Local setup, architecture, core module docs, and operational guidance |
| API Reference | SDK authors and integrators | Endpoint behavior, request and response models, and OpenAI-compatible surface details |
| Core Modules | Platform engineers extending the runtime | The shared infrastructure primitives that shape behavior across every domain service |
| Console SDK Docs | Application developers integrating from TypeScript or JavaScript | Client initialization, resource methods, examples, and framework integrations |
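The Console SDK docs cover the real client surface; as a hypothetical sketch of the initialization pattern such clients typically follow (a constructor taking a base URL and API key, with resource namespaces grouping related endpoints), consider:

```typescript
// Hypothetical sketch only — the class and method names here are
// illustrative, not the actual Console SDK API. See the SDK docs for
// the real client surface.
class ConsoleClient {
  baseUrl: string;
  apiKey: string;

  constructor(baseUrl: string, apiKey: string) {
    this.baseUrl = baseUrl;
    this.apiKey = apiKey;
  }

  // Resource namespaces mirror the OpenAI-compatible routes.
  chat = {
    completionsUrl: () => `${this.baseUrl}/v1/chat/completions`,
  };
  embeddings = {
    url: () => `${this.baseUrl}/v1/embeddings`,
  };

  // Shared headers applied to every request.
  headers(): Record<string, string> {
    return {
      "Content-Type": "application/json",
      Authorization: `Bearer ${this.apiKey}`,
    };
  }
}

const client = new ConsoleClient("http://localhost:3000", "sk-example");
console.log(client.chat.completionsUrl());
```

Grouping endpoints into namespaces keeps call sites readable (`client.chat…`, `client.embeddings…`) and keeps credentials in one place.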
```bash
# Local development
npm install
cp .env.example .env.local
npm run dev
```

```bash
# Build and run with Docker
docker build -t cognipeer-console .
docker run -p 3000:3000 --env-file .env.local cognipeer-console
```

```bash
# Build the documentation site
npm run docs:build
```
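The `--env-file .env.local` flag passes the runtime configuration copied from `.env.example` into the container. The variable names below are illustrative placeholders only, not this project's actual keys — use the names defined in `.env.example`:

```shell
# Illustrative placeholders — the real variable names come from .env.example.
PORT=3000
DATABASE_URL=postgres://user:pass@localhost:5432/console
OPENAI_API_KEY=sk-...
```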