The simplest way to instrument your application is to wrap your AI provider clients. Braintrust provides native wrappers for popular providers that automatically log all requests, responses, streaming data, token usage, and timing information.

How it works

Wrapping a client takes just a few lines of code and captures everything automatically:
import { initLogger, wrapOpenAI } from "braintrust";
import OpenAI from "openai";

const logger = initLogger({ projectName: "My Project" });
const client = wrapOpenAI(new OpenAI());

// All calls are automatically logged
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
The wrapper automatically captures:
  • Request inputs (messages, parameters)
  • Model outputs (completions, tool calls)
  • Metadata (model, temperature, token usage)
  • Timing (start time, duration)
  • Streaming chunks (if applicable)
  • Errors and exceptions
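To make the capture behavior concrete, here is a minimal sketch of the wrapper pattern itself: a higher-order function that records inputs, outputs, errors, and timing around any async call. This is illustrative only, not Braintrust's actual implementation; `logSpan` is a hypothetical stand-in for the Braintrust logger.

```typescript
// Hypothetical span shape and sink, standing in for the real logger.
type Span = { input: unknown; output?: unknown; error?: string; durationMs: number };

function logSpan(span: Span): void {
  console.log(JSON.stringify(span));
}

// Wrap an async function so every call records its input, output (or error),
// and duration before the result is returned to the caller.
function withLogging<A, R>(fn: (arg: A) => Promise<R>): (arg: A) => Promise<R> {
  return async (arg: A) => {
    const start = Date.now();
    try {
      const output = await fn(arg);
      logSpan({ input: arg, output, durationMs: Date.now() - start });
      return output;
    } catch (err) {
      logSpan({ input: arg, error: String(err), durationMs: Date.now() - start });
      throw err;
    }
  };
}

// Example: wrap a stand-in "completion" function.
const fakeComplete = async (prompt: string) => `echo: ${prompt}`;
const loggedComplete = withLogging(fakeComplete);
loggedComplete("Hello!").then((r) => console.log(r));
```

Because the wrapped function has the same signature as the original, calling code does not change; this is why `wrapOpenAI(new OpenAI())` is a drop-in replacement.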

Supported providers

Braintrust provides native wrappers for major AI providers:

OpenAI

Wrap the OpenAI client to log GPT models, embeddings, and other OpenAI APIs. See the OpenAI integration guide for complete documentation.
1. Install packages

# pnpm
pnpm add braintrust openai
# npm
npm install braintrust openai
2. Set API keys

Set your API keys as environment variables:
export BRAINTRUST_API_KEY=<your-braintrust-api-key>
export OPENAI_API_KEY=<your-openai-api-key>
Get your Braintrust API key from Settings > API keys.
3. Wrap the client

import { initLogger, wrapOpenAI } from "braintrust";
import OpenAI from "openai";

const logger = initLogger({ projectName: "My Project" });
const client = wrapOpenAI(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  }),
);

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is machine learning?" },
  ],
});

Anthropic

Wrap the Anthropic client to log Claude models. See the Anthropic integration guide for complete documentation.
1. Install packages

# pnpm
pnpm add braintrust @anthropic-ai/sdk
# npm
npm install braintrust @anthropic-ai/sdk
2. Set API keys

Set your API keys as environment variables:
export BRAINTRUST_API_KEY=<your-braintrust-api-key>
export ANTHROPIC_API_KEY=<your-anthropic-api-key>
Get your Braintrust API key from Settings > API keys.
3. Wrap the client

import { initLogger, wrapAnthropic } from "braintrust";
import Anthropic from "@anthropic-ai/sdk";

const logger = initLogger({ projectName: "My Project" });
const client = wrapAnthropic(
  new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY }),
);

const response = await client.messages.create({
  model: "claude-sonnet-4-5-20250929",
  max_tokens: 1024,
  messages: [{ role: "user", content: "What is machine learning?" }],
});

Google Gemini

Wrap the Google GenAI client to log Gemini models. See the Gemini integration guide for complete documentation.
1. Install packages

# pnpm
pnpm add braintrust @google/genai
# npm
npm install braintrust @google/genai
2. Set API keys

Set your API keys as environment variables:
export BRAINTRUST_API_KEY=<your-braintrust-api-key>
export GEMINI_API_KEY=<your-gemini-api-key>
Get your Braintrust API key from Settings > API keys.
3. Wrap the client

import * as googleGenAI from "@google/genai";
import { wrapGoogleGenAI, initLogger } from "braintrust";

initLogger({ projectName: "My Project" });

const { GoogleGenAI } = wrapGoogleGenAI(googleGenAI);
const client = new GoogleGenAI({
  apiKey: process.env.GEMINI_API_KEY || "",
});

const response = await client.models.generateContent({
  model: "gemini-2.5-flash",
  contents: "What is machine learning?",
});

Other providers

Braintrust provides wrappers for many additional AI providers. See the integrations overview for all supported providers.

Streaming support

Wrappers automatically handle streaming responses. No special configuration is needed: enable streaming in your API call, and the wrapper collects all chunks and logs the complete request once streaming finishes.
import { initLogger, wrapOpenAI } from "braintrust";
import OpenAI from "openai";

const logger = initLogger({ projectName: "My Project" });
const client = wrapOpenAI(new OpenAI());

const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Count to 10" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
// Streaming data is automatically logged when complete
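Internally, "collecting all chunks" amounts to accumulating the streamed deltas into one completion before logging. The sketch below illustrates that step using the OpenAI streaming chunk shape; it is not the wrapper's actual code.

```typescript
// Minimal shape of an OpenAI streaming chunk, for illustration.
type Chunk = { choices: { delta: { content?: string } }[] };

// Concatenate the delta content of every chunk into the full completion text.
function assembleStream(chunks: Chunk[]): string {
  return chunks
    .map((chunk) => chunk.choices[0]?.delta?.content ?? "")
    .join("");
}

const chunks: Chunk[] = [
  { choices: [{ delta: { content: "1, " } }] },
  { choices: [{ delta: { content: "2, " } }] },
  { choices: [{ delta: {} }] }, // e.g. a final chunk with no content
  { choices: [{ delta: { content: "3" } }] },
];
console.log(assembleStream(chunks)); // "1, 2, 3"
```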

Stream from prompts

When executing prompts through the Braintrust API, you can stream results using Server-Sent Events (SSE). This works for both direct API calls and playground execution. Braintrust uses a simplified SSE format optimized for common LLM use cases:
  • Text streaming: For chat message content
  • JSON streaming: For structured tool call arguments
  • Progress events: For intermediate function execution steps
See the prompts documentation for detailed streaming examples and the SSE format reference for the complete specification.
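As a rough illustration of consuming SSE on the client side, the sketch below parses a raw SSE string into events using the standard `event:`/`data:` field syntax, where events are separated by a blank line. The `text_delta` event name is a placeholder assumption; consult the SSE format reference for Braintrust's actual event types and fields.

```typescript
type SSEEvent = { event: string; data: string };

// Parse a raw SSE payload: events are blank-line separated; each event has
// optional "event:" and one or more "data:" lines.
function parseSSE(raw: string): SSEEvent[] {
  const events: SSEEvent[] = [];
  for (const block of raw.split("\n\n")) {
    let event = "message"; // SSE default event type
    const data: string[] = [];
    for (const line of block.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) data.push(line.slice(5).trim());
    }
    if (data.length > 0) events.push({ event, data: data.join("\n") });
  }
  return events;
}

const raw = "event: text_delta\ndata: Hello\n\nevent: text_delta\ndata: world";
console.log(parseSSE(raw));
```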

Next steps