How it works
Ollama exposes an OpenAI-compatible HTTP API on port 11434. Because the Cognisafe proxy speaks the same protocol, you can observe all Ollama calls by pointing the proxy’s UPSTREAM_URL at your Ollama instance — no changes to Ollama itself.
This is particularly useful in air-gapped environments: all traffic stays on your internal network, and Cognisafe’s safety scoring (if using local scoring) never leaves your infrastructure.
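To make the pass-through concrete, here is a minimal sketch of a request sent through the proxy instead of directly to Ollama. The proxy address (localhost:8080) and the model name are assumptions for illustration; substitute wherever your Cognisafe proxy listens and whichever model you have pulled.

```bash
# Sketch: send a chat completion to the Cognisafe proxy rather than Ollama.
# localhost:8080 is an assumed proxy address; llama3.1:8b is an example model.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1:8b",
    "messages": [{"role": "user", "content": "Hello from behind the proxy"}]
  }'
```

The proxy records the call and forwards it to Ollama on port 11434, so the response body is what Ollama itself would have returned.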
Proxy configuration
Set UPSTREAM_URL on the Cognisafe proxy to your Ollama instance:
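A minimal sketch, assuming the proxy reads its configuration from environment variables and that Ollama runs on the same host. Ollama serves its OpenAI-compatible API under /v1; whether the proxy expects that path suffix in UPSTREAM_URL may vary by deployment, so adjust to match yours.

```bash
# Point the Cognisafe proxy at a local Ollama instance.
# Assumption: the proxy wants the full /v1 base path of the upstream API.
export UPSTREAM_URL=http://localhost:11434/v1
```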
SDK setup
Use patch_openai() — the OpenAI client speaks the same protocol as Ollama’s API:
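A sketch of the client side, assuming patch_openai() is importable from a cognisafe package (the exact import path may differ in your SDK version). The OpenAI client is pointed straight at Ollama; the api_key value is a placeholder, since Ollama does not check it.

```python
from openai import OpenAI
from cognisafe import patch_openai  # assumed import path

# Instrument the OpenAI client so Cognisafe observes every call.
patch_openai()

# Ollama serves an OpenAI-compatible API under /v1 on port 11434.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # placeholder; Ollama ignores the key
)

response = client.chat.completions.create(
    model="llama3.1:8b",  # any model pulled into your Ollama library
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```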
Air-gapped safety scoring
By default, Cognisafe’s safety worker uses gpt-4o-mini (via OPENAI_API_KEY) to score requests. In an air-gapped environment, you have two options:
- Disable scoring: if OPENAI_API_KEY is not set, the worker falls back gracefully with score_label: "unscored". Requests are still logged and cost/latency data is captured.
- Use a local scoring model: configure an Ollama-backed scoring model by pointing the safety worker at a local OpenAI-compatible endpoint:
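A sketch of one way this could be wired up. The variable names below are illustrative assumptions, not confirmed Cognisafe settings; consult your deployment’s configuration reference for the exact keys.

```bash
# Hypothetical safety-worker settings: route scoring to a local Ollama host.
# Variable names are illustrative; the exact keys may differ in your deployment.
export SCORING_BASE_URL=http://ollama.internal:11434/v1
export SCORING_MODEL=llama3.1:8b   # any locally pulled model
```

With a configuration along these lines, scoring traffic stays on your internal network alongside the proxied requests.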
Supported Ollama models
Any model available in the Ollama library works — Cognisafe does not constrain the model field. The proxy passes model through to Ollama unchanged.
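For example, switching models is just a matter of changing the model string, assuming each model has been pulled into Ollama first (e.g. ollama pull mistral). The model names below are examples, not requirements.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# The model field is forwarded verbatim, so any pulled Ollama model works.
for model in ["llama3.1:8b", "mistral", "qwen2.5:7b"]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hi in one word."}],
    )
    print(model, "->", response.choices[0].message.content)
```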

