Braintrust traces your LLM calls with auto-instrumentation. In most languages, you enable tracing once at startup and every request to a supported AI provider or framework is logged — inputs, outputs, model parameters, latency, token usage, and costs — with no per-call code changes. For languages that don’t yet support auto-instrumentation, you can wrap each client instance to get the same coverage. The examples on this page use OpenAI, but Braintrust supports many providers and frameworks.
Braintrust’s CLI and MCP server can help you instrument your code.

Auto-instrumentation

Auto-instrumentation patches supported AI libraries at startup so every LLM call is captured without wrapping individual clients. This is the recommended way to set up tracing. If you’re using Java or .NET, or if auto-instrumentation isn’t working in your environment, try wrap functions instead.
Install the dependencies:
npm install braintrust openai
This example traces a single OpenAI call:
import { initLogger } from "braintrust";
import OpenAI from "openai";

// Call once at startup — all LLM calls are traced automatically
initLogger({
  apiKey: process.env.BRAINTRUST_API_KEY,
  projectName: "My Project (TypeScript)",
});

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const response = await client.responses.create({
  model: "gpt-5-mini",
  input: "What is the capital of France?",
});
Run with the --import flag to enable auto-instrumentation:
node --import braintrust/hook.mjs app.js
If you’re using a bundler (Vite, Webpack, esbuild, Rollup) or a framework that uses one (Next.js, Nuxt, SvelteKit), use the appropriate bundler plugin (included in Braintrust’s JavaScript SDK) instead of the --import flag.
Requires Node.js 18.19.0+ or 20.6.0+ for --import flag support. Check with node --version.
Run your app and check Braintrust — your LLM calls will appear in the project logs.
Streaming responses are fully supported — Braintrust automatically collects streamed chunks and logs the complete response as a single span.
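To see this in action, here is a minimal sketch of a streamed call under auto-instrumentation. It assumes the same setup as the example above; the event types follow the OpenAI Responses streaming API, and no per-call tracing code is needed:

```typescript
import { initLogger } from "braintrust";
import OpenAI from "openai";

// Same one-time setup as above
initLogger({
  apiKey: process.env.BRAINTRUST_API_KEY,
  projectName: "My Project (TypeScript)",
});

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Request a streamed response; Braintrust collects the chunks and
// logs the assembled response as a single span.
const stream = await client.responses.create({
  model: "gpt-5-mini",
  input: "What is the capital of France?",
  stream: true,
});

for await (const event of stream) {
  // Print text deltas as they arrive
  if (event.type === "response.output_text.delta") {
    process.stdout.write(event.delta);
  }
}
```

Run it the same way as the non-streaming example, with `node --import braintrust/hook.mjs app.js`.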

Wrap functions

Wrap functions let you explicitly instrument individual client instances. This is an alternative to auto-instrumentation, useful if you prefer explicit control or if auto-instrumentation doesn't support the libraries you're using. Unlike auto-instrumentation, wrapping must be applied to every client instance in your application.
import { initLogger, wrapOpenAI } from "braintrust";
import OpenAI from "openai";

initLogger({
  apiKey: process.env.BRAINTRUST_API_KEY,
  projectName: "My Project (TypeScript)",
});

// Wrap the OpenAI client to trace all calls
const client = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));
const response = await client.responses.create({
  model: "gpt-5-mini",
  input: "What is the capital of France?",
});

Braintrust gateway

The Braintrust gateway provides a unified OpenAI-compatible API for accessing models from many providers. When you call a model through the gateway, your requests are automatically traced — no SDK instrumentation or wrap functions needed. The gateway also provides automatic caching and observability across providers.
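Because the gateway is OpenAI-compatible, pointing an existing OpenAI client at it is typically a base-URL change. A hedged sketch follows; the base URL and the use of a Braintrust API key for authentication are assumptions, so check your Braintrust settings for the exact endpoint and credentials:

```typescript
import OpenAI from "openai";

// Point the standard OpenAI SDK at the Braintrust gateway.
// The base URL below is an assumption; use the endpoint shown
// in your Braintrust settings.
const client = new OpenAI({
  baseURL: "https://api.braintrust.dev/v1/proxy",
  apiKey: process.env.BRAINTRUST_API_KEY, // assumed: gateway auth via Braintrust key
});

// Requests made through the gateway are traced automatically,
// with no SDK instrumentation or wrap functions.
const response = await client.chat.completions.create({
  model: "gpt-5-mini",
  messages: [{ role: "user", content: "What is the capital of France?" }],
});
```

Because the request goes through the gateway, the same client can address models from other providers by changing only the `model` string.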

Supported libraries

To help you log traces, Braintrust’s SDKs support auto-instrumentation and wrap functions for many common libraries. Select a language to get started.
Braintrust also integrates with frameworks like LangChain, LangGraph, CrewAI, LlamaIndex, Mastra, and OpenTelemetry, which require framework-specific setup such as callback handlers or OpenTelemetry configuration. See Integrations.

Next steps