How it works

The OpenAI integration uses proxy mode. After you call patch_openai(), the Python SDK rewrites openai.base_url to point at the Cognisafe Go proxy (http://localhost:8080 by default). All OpenAI client calls then flow through the proxy, which:
  1. Forwards the request to https://api.openai.com unchanged
  2. Streams the response back to your application
  3. Logs the full request and response payload asynchronously, so logging adds no latency to the request path
Because the Cognisafe proxy is OpenAI-compatible, you are not limited to the official openai Python package: any client library that accepts a base_url parameter can be pointed at the proxy directly.
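
For example, a client can target the proxy without patch_openai() at all. A minimal sketch, assuming the default proxy address and the /v1 path prefix used by the direct-HTTP example further down this page:

from openai import OpenAI

# Point the client straight at the Cognisafe proxy instead of patching.
# The OpenAI API key is read from the OPENAI_API_KEY environment variable
# and forwarded upstream by the proxy along with the rest of the request.
client = OpenAI(base_url="http://localhost:8080/v1")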

Setup

import cognisafe
from openai import OpenAI

cognisafe.configure(
    api_key="csk_your_key_here",
    project_id="my-app",
)
cognisafe.patch_openai()

client = OpenAI()
# All calls through this client are now logged and scored by Cognisafe.

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)

print(response.choices[0].message.content)
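
To sanity-check that the patch took effect, you can inspect the client's base URL. Assuming patch_openai() rewrites the SDK default as described above, it should show the proxy address rather than api.openai.com:

# Sanity check (assumes the default proxy address from "How it works"):
print(client.base_url)  # expected to show http://localhost:8080/... not api.openai.com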

Supported capabilities

Capability             Supported  Notes
Chat completions       Yes        Full logging of request and response
Streaming              Yes        Captured after the stream completes
Embeddings             Yes        Prompt and embedding vector logged
Function / tool calls  Yes        Tool definitions and call results included in payload
Assistants API         Partial    Thread-level logging not yet supported

Streaming responses are buffered by the proxy and logged once the stream closes. Your application receives the stream in real time — only the logging is deferred to stream completion.
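
A streaming call therefore looks exactly as it would against the OpenAI API directly. A minimal sketch using the patched client from the setup above:

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about proxies."}],
    stream=True,
)
for chunk in stream:
    # Chunks arrive in real time; the proxy logs the full exchange
    # only after the stream closes.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")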

Azure OpenAI

For Azure OpenAI, use the same patch_openai() call. The proxy sits in front of Azure’s endpoint rather than https://api.openai.com. Set UPSTREAM_URL on the proxy to your Azure resource URL:
# Proxy environment variable
UPSTREAM_URL=https://your-resource.openai.azure.com
Then configure the SDK normally:
import cognisafe
from openai import AzureOpenAI

cognisafe.configure(api_key="csk_...", project_id="my-app")
cognisafe.patch_openai()

client = AzureOpenAI(
    azure_endpoint=cognisafe.get_proxy_url(),  # points to the Cognisafe proxy
    api_version="2024-02-01",
    # api_key falls back to the AZURE_OPENAI_API_KEY environment variable
    # when not passed explicitly.
)

response = client.chat.completions.create(
    model="gpt-4o-deployment",
    messages=[{"role": "user", "content": "Hello."}],
)
See the Azure OpenAI provider page for full details.

Any OpenAI-compatible library

Because the proxy speaks the OpenAI API protocol, any client that accepts a custom base URL works without patch_openai():
import httpx

# Direct HTTP — set base_url to the proxy
response = httpx.post(
    "http://localhost:8080/v1/chat/completions",
    headers={"Authorization": "Bearer sk-your-openai-key"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello."}],
    },
)
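
The response body follows the OpenAI schema, so the completion text is read the same way regardless of which client produced the request:

response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])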