Cognisafe sits between your application and any LLM API — intercepting every request and response, recording it to a time-series database, and asynchronously scoring it against the OWASP LLM Top 10. Your users see no added latency. You get a full audit trail, cost tracking, and automated threat detection. There is no framework lock-in. Cognisafe works via a lightweight SDK that either rewrites your provider’s base URL (proxy mode) or wraps the client’sDocumentation Index
Fetch the complete documentation index at: https://cognisafeltd.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
create method directly (direct mode). Your existing LLM code stays unchanged beyond a one-line patch call.
Core components
Proxy
An OpenAI-compatible Go reverse proxy that logs every call to the backend without blocking the response path.
Safety Scoring
A Python worker pulls jobs from Redis and runs PyRIT scorers asynchronously — no latency on the hot path.
Dashboard
A Next.js dashboard showing request volume, cost, latency, and safety scores with per-request drill-down.
SDK
A Python (and Node.js) SDK that patches OpenAI, Anthropic, Mistral, and Cohere clients in a single call.
Get started
Quickstart
Up and running in under 5 minutes — install the SDK, patch your provider, and see your first request.
How it works
Detailed walkthrough of the proxy, scoring pipeline, and data model.
Supported providers
| Provider | Mode |
|---|---|
| OpenAI | Proxy |
| Anthropic | Direct |
| Gemini | Direct |
| Mistral | Proxy |
| Azure OpenAI | Proxy |
| Cohere | Direct |
| Ollama | Proxy |
Supported agent frameworks
Cognisafe is framework-agnostic. Any framework that calls a supported LLM provider under the hood is automatically observed once you patch the provider client.- CrewAI
- LangGraph
- AutoGen
- Semantic Kernel
- MCP (Model Context Protocol)
- Pydantic AI
- LlamaIndex

