
Installation

npm install cognisafe
The package is published to npm as cognisafe. It ships with full TypeScript types — no @types/ package needed.

TypeScript support

The SDK is written in TypeScript and exports proper type definitions. It works with both import (ESM) and require (CJS):
// ESM / TypeScript
import cognisafe from 'cognisafe';

// CommonJS
const cognisafe = require('cognisafe');

Configuration

Call cognisafe.configure() once at startup, before any LLM calls:
import cognisafe from 'cognisafe';

cognisafe.configure({
  apiKey: 'csk_your_key_here',    // or reads COGNISAFE_API_KEY
  projectId: 'my-app',            // or reads COGNISAFE_PROJECT_ID
  proxyUrl: 'http://localhost:8080',  // default; omit for cloud
  apiUrl: 'http://localhost:8000',    // default; omit for cloud
});
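
The comments above say to omit proxyUrl and apiUrl for the cloud setup; a minimal sketch of that variant (the fallback to hosted endpoints is an assumption based on those comments):
import cognisafe from 'cognisafe';

cognisafe.configure({
  apiKey: 'csk_your_key_here',
  projectId: 'my-app',
  // proxyUrl and apiUrl omitted: assumed to fall back to the cloud endpoints
});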

Configuration options

Option     Type    Required  Default                Description
apiKey     string  Yes*      COGNISAFE_API_KEY      Your project API key (csk_...)
projectId  string  Yes*      COGNISAFE_PROJECT_ID   Project identifier for grouping requests
proxyUrl   string  No        http://localhost:8080  Cognisafe Go proxy URL
apiUrl     string  No        http://localhost:8000  Cognisafe FastAPI backend URL

* Required if the corresponding environment variable is not set.
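
With both environment variables set, the required options can be dropped entirely; a minimal sketch, assuming configure() accepts an empty options object and resolves everything from the environment and the defaults:
import cognisafe from 'cognisafe';

// COGNISAFE_API_KEY and COGNISAFE_PROJECT_ID are read from the environment;
// proxyUrl and apiUrl fall back to their defaults.
cognisafe.configure({});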

Patching providers

OpenAI

import cognisafe from 'cognisafe';
import OpenAI from 'openai';

cognisafe.configure({ apiKey: 'csk_...', projectId: 'my-app' });
cognisafe.patchOpenAI();

const client = new OpenAI(); // baseURL is now the Cognisafe proxy

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello, world!' }],
});

console.log(response.choices[0].message.content);
patchOpenAI() overrides the default baseURL so that OpenAI clients constructed after the patch point at the Cognisafe proxy. The proxy forwards each request to https://api.openai.com and logs the call asynchronously.
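
To sanity-check the patch, you can read the client's baseURL, which the OpenAI Node SDK exposes as a public property (the expected value below assumes the default local proxy):
import cognisafe from 'cognisafe';
import OpenAI from 'openai';

cognisafe.configure({ apiKey: 'csk_...', projectId: 'my-app' });
cognisafe.patchOpenAI();

const client = new OpenAI();
console.log(client.baseURL); // expected: http://localhost:8080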

Anthropic

import cognisafe from 'cognisafe';
import Anthropic from '@anthropic-ai/sdk';

cognisafe.configure({ apiKey: 'csk_...', projectId: 'my-app' });
cognisafe.patchAnthropic();

const client = new Anthropic();

const message = await client.messages.create({
  model: 'claude-opus-4-5',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello.' }],
});

console.log(message.content[0].text);
Anthropic uses direct mode: rather than routing through the proxy, the SDK wraps messages.create and ships the request and response payloads to the backend after the response has been returned to your code.
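
Conceptually, direct mode behaves like the sketch below. This is an illustration, not the SDK's actual code; sendToBackend is a hypothetical stand-in for the SDK's logging call:
import Anthropic from '@anthropic-ai/sdk';

// Hypothetical helper: stands in for the POST the SDK makes to the backend.
async function sendToBackend(request: unknown, response: unknown): Promise<void> {
  // ...serialize and ship the payloads...
}

function wrapMessagesCreate(client: Anthropic): void {
  const original = client.messages.create.bind(client.messages);
  (client.messages as any).create = async (params: any, options?: any) => {
    const response = await original(params, options); // Anthropic is called directly
    void sendToBackend(params, response);             // fire-and-forget after the response
    return response;                                  // the caller is never blocked
  };
}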

The trace wrapper

For custom LLM endpoints or any call not covered by a patch method:
import cognisafe from 'cognisafe';

const callMyModel = cognisafe.trace(
  async (prompt: string): Promise<string> => {
    const response = await fetch('https://internal.example.com/v1/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: [{ role: 'user', content: prompt }] }),
    });
    const data = await response.json();
    return data.choices[0].message.content;
  },
  { model: 'my-internal-model' }
);

const result = await callMyModel('Summarise this text.');
trace returns a wrapped version of your function. The wrapper captures inputs and outputs and sends them to /internal/log after the function resolves — your code is not blocked.
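
The wrapper is a drop-in replacement for the original function. A sketch of wrapping a multi-argument helper, assuming trace is generic over the wrapped signature (only the single-argument form above is confirmed by these docs):
import cognisafe from 'cognisafe';

const summarise = cognisafe.trace(
  async (text: string, maxWords: number): Promise<string> => {
    // Placeholder body: stands in for a real model call.
    return text.split(/\s+/).slice(0, maxWords).join(' ');
  },
  { model: 'my-internal-model' }
);

const short = await summarise('A long passage to condense.', 20);
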
The Node SDK is published under the same package name (cognisafe) as the Python SDK but is a completely separate implementation. Install via npm install cognisafe, not pip.