Installation
The base package includes OpenAI and Mistral support. Install the anthropic extra for Anthropic support and the cohere extra for Cohere, or use [all] to get everything. The three common setups are OpenAI only, OpenAI with Anthropic, and all providers.
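Putting the extras above together (the base package name `cognisafe` is assumed from the import used throughout these docs; the extras match the ones quoted later on this page):

```shell
# OpenAI only (base package; also includes Mistral)
pip install cognisafe

# With Anthropic
pip install "cognisafe[anthropic]"

# All providers
pip install "cognisafe[all]"
```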
Configuration
Call cognisafe.configure() once at application startup, before any LLM calls are made.
```python
import cognisafe

cognisafe.configure(
    api_key="csk_your_key_here",
    project_id="my-app",
    proxy_url="http://localhost:8080",  # default; omit for cloud
    api_url="http://localhost:8000",    # default; omit for cloud
)
```
configure() must be called before any patch function or LLM call. Patching before configuration raises a CognisafeNotConfiguredError.
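The configure-before-patch contract can be sketched as a module-level guard. This is a minimal illustration, not cognisafe's actual internals; only the error name comes from the docs:

```python
class CognisafeNotConfiguredError(RuntimeError):
    """Raised when a patch function runs before configure()."""

_settings = None  # module-level config, set exactly once by configure()

def configure(**kwargs):
    global _settings
    _settings = dict(kwargs)

def patch_openai():
    if _settings is None:
        raise CognisafeNotConfiguredError("call cognisafe.configure() first")
    # ...rewrite the SDK's base_url here...
```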
Parameters
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `api_key` | str | Yes* | `COGNISAFE_API_KEY` env var | Your project API key (`csk_...`) |
| `project_id` | str | Yes* | `COGNISAFE_PROJECT_ID` env var | Logical project name, used to group requests in the dashboard |
| `proxy_url` | str | No | `http://localhost:8080` | URL of the Cognisafe Go proxy |
| `api_url` | str | No | `http://localhost:8000` | URL of the Cognisafe FastAPI backend |
* Required if the corresponding environment variable is not set.
Environment variables
As an alternative to passing values to configure(), set these environment variables:
```shell
export COGNISAFE_API_KEY="csk_your_key_here"
export COGNISAFE_PROJECT_ID="my-app"
```
configure() reads these automatically when the keyword arguments are omitted.
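The fallback order this implies — explicit keyword argument first, environment variable second — can be sketched as below. `resolve_setting` is a hypothetical helper for illustration, not part of the SDK:

```python
import os

def resolve_setting(value, env_name):
    # An explicit keyword argument always wins.
    if value is not None:
        return value
    # Otherwise fall back to the environment.
    resolved = os.environ.get(env_name)
    if resolved is None:
        raise RuntimeError(f"{env_name} is not set and no value was passed")
    return resolved
```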
Provider patches
OpenAI (proxy mode)
```python
import cognisafe
from openai import OpenAI

cognisafe.configure(api_key="csk_...", project_id="my-app")
cognisafe.patch_openai()

client = OpenAI()  # base_url is now the Cognisafe proxy
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarise this document."}],
)
```
patch_openai() rewrites openai.base_url so all calls route through the proxy. The proxy forwards them to https://api.openai.com and logs each request asynchronously.
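Conceptually, a proxy-mode patch only needs to repoint the client at the proxy. A minimal sketch, where the stand-in client class and the header name are hypothetical rather than cognisafe internals:

```python
class StubClient:
    """Stand-in for an SDK client; the real patch mutates openai's module state."""
    def __init__(self):
        self.base_url = "https://api.openai.com/v1"
        self.default_headers = {}

def proxy_patch(client, proxy_url, cognisafe_key):
    client.base_url = proxy_url  # every request now hits the local proxy
    client.default_headers["X-Cognisafe-Key"] = cognisafe_key  # hypothetical header
    return client
```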
Anthropic (direct mode)
```python
import cognisafe
import anthropic

cognisafe.configure(api_key="csk_...", project_id="my-app")
cognisafe.patch_anthropic()

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello."}],
)
```
Requires `pip install "cognisafe[anthropic]"`. The SDK wraps messages.create and ships payloads to the backend after the response is returned.
Mistral (proxy mode)
```python
import cognisafe
from mistralai import Mistral

cognisafe.configure(api_key="csk_...", project_id="my-app")
cognisafe.patch_mistral()

client = Mistral()
response = client.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Hello."}],
)
```
Cohere (direct mode)
```python
import cognisafe
import cohere

cognisafe.configure(api_key="csk_...", project_id="my-app")
cognisafe.patch_cohere()

client = cohere.Client()
response = client.chat(
    model="command-r-plus",
    message="Hello.",
)
```
Requires `pip install "cognisafe[cohere]"` or `pip install "cognisafe[all]"`.
The @cognisafe.trace decorator
Use trace for any LLM call that doesn't go through a patched provider, such as custom endpoints, fine-tuned models, or internal APIs:
```python
import cognisafe
import requests

cognisafe.configure(api_key="csk_...", project_id="my-app")

@cognisafe.trace(model="my-fine-tuned-gpt4o")
def call_internal_model(prompt: str) -> str:
    resp = requests.post(
        "https://internal.example.com/v1/chat",
        json={"messages": [{"role": "user", "content": prompt}]},
        headers={"Authorization": "Bearer sk-internal-..."},
    )
    return resp.json()["choices"][0]["message"]["content"]

result = call_internal_model("Classify this support ticket.")
```
The decorator captures the function’s input arguments and return value, then ships them to /internal/log in a background thread. The decorated function’s execution is not blocked.
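The capture-and-ship flow can be sketched with a queue and a daemon thread; the internals below are hypothetical (the real SDK's transport to /internal/log may differ), but they show why the decorated function is never blocked:

```python
import functools
import queue
import threading

_events = queue.Queue()

def _shipper():
    # The real SDK would POST each event to /internal/log here.
    while True:
        _events.get()
        _events.task_done()

threading.Thread(target=_shipper, daemon=True).start()

def trace(model):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            _events.put({"model": model, "args": args, "result": result})  # enqueue only; no network wait
            return result
        return wrapper
    return decorator

@trace(model="my-fine-tuned-gpt4o")
def classify(prompt: str) -> str:
    return "billing"  # stand-in for a real model call
```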