Braintrust traces your LLM calls with auto-instrumentation. In most languages, you enable tracing once at startup and every request to a supported AI provider or framework is logged — inputs, outputs, model parameters, latency, token usage, and costs — with no per-call code changes. For languages that don’t yet support auto-instrumentation, you can wrap each client instance to get the same coverage. The examples on this page use OpenAI, but Braintrust supports many providers and frameworks.
Braintrust’s MCP server can help you instrument your code.
Auto-instrumentation patches supported AI libraries at startup so every LLM call is captured without wrapping individual clients. This is the recommended way to set up tracing. The steps below walk you through installing dependencies, setting environment variables, and running a traced LLM call end-to-end.
If you’re using Java or .NET, or if auto-instrumentation isn’t working in your environment, try wrap functions instead.
TypeScript
Python
Ruby
Go
Auto-instrumentation in TypeScript uses a startup hook that patches supported AI libraries automatically.
```typescript
import { initLogger } from "braintrust";
import OpenAI from "openai";

// Call once at startup — all LLM calls are traced automatically
initLogger({
  apiKey: process.env.BRAINTRUST_API_KEY,
  projectName: "My Project (TypeScript)",
});

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await client.responses.create({
  model: "gpt-5-mini",
  input: "What is the capital of France?",
});
```
4. Run your app
Run with the --import flag to enable auto-instrumentation:
```shell
node --import braintrust/hook.mjs app.js
```
Using a bundler?
If you’re using a bundler (Vite, Webpack, esbuild, Rollup) or a framework that uses one (Next.js, Nuxt, SvelteKit), use the appropriate bundler plugin (included in Braintrust’s JavaScript SDK) instead of the --import flag.
Node.js version requirements
Requires Node.js 18.19.0+ or 20.6.0+ for --import flag support. Check with node --version.
Auto-instrumentation in Python uses auto_instrument() to patch supported AI libraries at startup.
```python
import os

import braintrust

# Call once at startup — all LLM calls are traced automatically
braintrust.auto_instrument()
braintrust.init_logger(
    api_key=os.environ["BRAINTRUST_API_KEY"],
    project="My Project (Python)",
)

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.responses.create(
    model="gpt-5-mini",
    input="What is the capital of France?",
)
```
Disabling specific integrations
braintrust.auto_instrument() enables every supported Python integration by default. Disable individual integrations by passing False for a specific keyword.
| Parameter | Default | Library or framework |
| --- | --- | --- |
| `openai` | `True` | OpenAI Python SDK |
| `anthropic` | `True` | Anthropic Python SDK |
| `litellm` | `True` | LiteLLM |
| `pydantic_ai` | `True` | Pydantic AI |
| `google_genai` | `True` | Google GenAI |
| `openrouter` | `True` | OpenRouter native Python SDK |
| `mistral` | `True` | Mistral Python SDK |
| `agno` | `True` | Agno |
| `agentscope` | `True` | AgentScope |
| `claude_agent_sdk` | `True` | Claude Agent SDK |
| `dspy` | `True` | DSPy |
| `adk` | `True` | Google ADK |
| `langchain` | `True` | LangChain and LangGraph |
| `openai_agents` | `True` | OpenAI Agents SDK |
For example:
```python
braintrust.auto_instrument(openrouter=False)
```
4. Run your app
```shell
python app.py
```
Auto-instrumentation in Ruby uses the braintrust/setup require to patch supported AI libraries on load.
1. Install the dependencies
Add the Braintrust gem to your Gemfile, using the braintrust/setup require to enable auto-instrumentation on load:
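As a sketch, a minimal Gemfile might look like the following. The `require: 'braintrust/setup'` option tells Bundler to load the auto-instrumentation hook when `Bundler.require` runs; the `ruby-openai` gem name is an assumption based on the `access_token:` constructor used in the app code below.

```ruby
# Gemfile — a sketch; gem names other than braintrust are assumptions
source 'https://rubygems.org'

# Loading braintrust/setup enables auto-instrumentation when Bundler.require runs
gem 'braintrust', require: 'braintrust/setup'
gem 'ruby-openai'
```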
```ruby
require 'bundler/setup'
Bundler.require

Braintrust.init(
  api_key: ENV['BRAINTRUST_API_KEY'],
  default_project: 'My Project (Ruby)'
)

client = OpenAI::Client.new(access_token: ENV['OPENAI_API_KEY'])

response = client.responses.create(
  parameters: {
    model: 'gpt-5-mini',
    input: 'What is the capital of France?'
  }
)
```
4. Run your app
```shell
ruby app.rb
```
Auto-instrumentation in Go uses Orchestrion for compile-time tracing. Each provider integration is installed as a separate Go module.
1. Install and register the integration
Install the Braintrust SDK, the provider connector module, and Orchestrion:
```shell
go get github.com/braintrustdata/braintrust-sdk-go
go get github.com/braintrustdata/braintrust-sdk-go/trace/contrib/openai
go get github.com/openai/openai-go
go install github.com/DataDog/orchestrion@v1.6.1
```
Then create orchestrion.tool.go in your project root to register which integrations Orchestrion should instrument:
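As a sketch, `orchestrion.tool.go` is typically a build-tagged file whose blank imports tell Orchestrion which instrumentation packages to apply. The exact shape here is an assumption — check the Orchestrion and Braintrust docs for the canonical template:

```go
//go:build tools

// orchestrion.tool.go — registers integrations via blank imports (sketch;
// import paths assumed from the modules installed above).
package main

import (
	_ "github.com/DataDog/orchestrion"
	_ "github.com/braintrustdata/braintrust-sdk-go/trace/contrib/openai"
)
```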
Each tracing integration is published as its own Go module. Install the ones you need and add them to orchestrion.tool.go:
```shell
go get github.com/braintrustdata/braintrust-sdk-go/trace/contrib/anthropic
go get github.com/braintrustdata/braintrust-sdk-go/trace/contrib/genai
go get github.com/braintrustdata/braintrust-sdk-go/trace/contrib/genkit
go get github.com/braintrustdata/braintrust-sdk-go/trace/contrib/adk
go get github.com/braintrustdata/braintrust-sdk-go/trace/contrib/cloudwego/eino
go get github.com/braintrustdata/braintrust-sdk-go/trace/contrib/langchaingo
go get github.com/braintrustdata/braintrust-sdk-go/trace/contrib/github.com/sashabaranov/go-openai
```
Or use the trace/contrib/all meta-module to install and register every integration at once:
```shell
go get github.com/braintrustdata/braintrust-sdk-go/trace/contrib/all
```
This example traces a single OpenAI call. The Go SDK sends traces to Braintrust via OpenTelemetry, so you create a TracerProvider and pass it to Braintrust:
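The code for this step isn’t shown above; the sketch below is reconstructed from the wrap-functions Go example later on this page, minus the middleware option — with Orchestrion, instrumentation is injected at compile time, so the client needs no wrapping:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/braintrustdata/braintrust-sdk-go"
	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
	"github.com/openai/openai-go/responses"
	"go.opentelemetry.io/otel"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Create a TracerProvider and hand it to Braintrust, which exports
	// spans to the Braintrust backend via OpenTelemetry
	tp := sdktrace.NewTracerProvider()
	defer tp.Shutdown(context.Background())
	otel.SetTracerProvider(tp)

	_, err := braintrust.New(tp,
		braintrust.WithProject("My Project (Go)"),
		braintrust.WithAPIKey(os.Getenv("BRAINTRUST_API_KEY")),
	)
	if err != nil {
		log.Fatal(err)
	}

	// No tracing middleware needed — Orchestrion instruments this client
	// at compile time
	client := openai.NewClient(option.WithAPIKey(os.Getenv("OPENAI_API_KEY")))

	response, err := client.Responses.New(context.Background(), responses.ResponseNewParams{
		Model: "gpt-5-mini",
		Input: responses.ResponseNewParamsInputUnion{OfString: openai.String("What is the capital of France?")},
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(response.OutputText())
}
```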
Wrap functions let you explicitly instrument individual client instances. This is an alternative to auto-instrumentation, useful if you prefer explicit control or if auto-instrumentation isn’t supported by the libraries you’re using. Unlike auto-instrumentation, you need to wrap each client instance in your application.
TypeScript
Python
Ruby
Go
Java
.NET
```typescript
import { initLogger, wrapOpenAI } from "braintrust";
import OpenAI from "openai";

initLogger({
  apiKey: process.env.BRAINTRUST_API_KEY,
  projectName: "My Project (TypeScript)",
});

// Wrap the OpenAI client to trace all calls
const client = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

const response = await client.responses.create({
  model: "gpt-5-mini",
  input: "What is the capital of France?",
});
```
```python
import os

import braintrust
from braintrust import wrap_openai
from openai import OpenAI

braintrust.init_logger(
    api_key=os.environ["BRAINTRUST_API_KEY"],
    project="My Project (Python)",
)

# Wrap the OpenAI client to trace all calls
client = wrap_openai(OpenAI(api_key=os.environ["OPENAI_API_KEY"]))

response = client.responses.create(
    model="gpt-5-mini",
    input="What is the capital of France?",
)
```
Use Braintrust.instrument! with a target: to instrument a specific client instance:
```ruby
require 'braintrust'
require 'openai'

Braintrust.init(
  api_key: ENV['BRAINTRUST_API_KEY'],
  default_project: 'My Project (Ruby)',
  auto_instrument: false
)

# Wrap a specific OpenAI client to trace all calls
client = OpenAI::Client.new(access_token: ENV['OPENAI_API_KEY'])
Braintrust.instrument!(:ruby_openai, target: client)

response = client.responses.create(
  parameters: {
    model: 'gpt-5-mini',
    input: 'What is the capital of France?'
  }
)
```
Use :openai if you’re using the openai gem, or :ruby_openai for the ruby-openai gem.
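For the official openai gem, the equivalent call might look like this — a sketch; the `api_key:` keyword is an assumption based on that gem’s constructor:

```ruby
require 'braintrust'
require 'openai'

# Official openai gem: note :openai and api_key: instead of access_token:
client = OpenAI::Client.new(api_key: ENV['OPENAI_API_KEY'])
Braintrust.instrument!(:openai, target: client)
```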
The Go SDK provides tracing middleware that you pass to your AI provider’s client constructor:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/braintrustdata/braintrust-sdk-go"
	traceopenai "github.com/braintrustdata/braintrust-sdk-go/trace/contrib/openai"
	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
	"github.com/openai/openai-go/responses"
	"go.opentelemetry.io/otel"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	tp := sdktrace.NewTracerProvider()
	defer tp.Shutdown(context.Background())
	otel.SetTracerProvider(tp)

	_, err := braintrust.New(tp,
		braintrust.WithProject("My Project (Go)"),
		braintrust.WithAPIKey(os.Getenv("BRAINTRUST_API_KEY")),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Create an OpenAI client with tracing middleware
	client := openai.NewClient(
		option.WithAPIKey(os.Getenv("OPENAI_API_KEY")),
		option.WithMiddleware(traceopenai.NewMiddleware()),
	)

	response, err := client.Responses.New(context.Background(), responses.ResponseNewParams{
		Model: "gpt-5-mini",
		Input: responses.ResponseNewParamsInputUnion{OfString: openai.String("What is the capital of France?")},
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(response.OutputText())
}
```
```csharp
using System;
using Braintrust.Sdk;
using Braintrust.Sdk.Config;
using Braintrust.Sdk.OpenAI;
using OpenAI;
using OpenAI.Chat;

var config = BraintrustConfig.Of(
    ("BRAINTRUST_API_KEY", Environment.GetEnvironmentVariable("BRAINTRUST_API_KEY")),
    ("BRAINTRUST_DEFAULT_PROJECT_NAME", "My Project (.NET)")
);

var braintrust = Braintrust.Sdk.Braintrust.Get(config);
var activitySource = braintrust.GetActivitySource();

// Wrap the OpenAI client to trace all calls
var openAIClient = BraintrustOpenAI.WrapOpenAI(
    activitySource,
    Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);

var chatClient = openAIClient.GetChatClient("gpt-5-mini");

var response = await chatClient.CompleteChatAsync(
    new ChatMessage[] { new UserChatMessage("What is the capital of France?") }
);
```
The Braintrust gateway provides a unified OpenAI-compatible API for accessing models from many providers. When you call a model through the gateway, your requests are automatically traced — no SDK instrumentation or wrap functions needed. The gateway also provides automatic caching and observability across providers.
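Because the gateway is OpenAI-compatible, you can point any OpenAI client at it by changing the base URL. The URL below is an assumption — check your Braintrust settings for your deployment’s gateway endpoint:

```python
import os

from openai import OpenAI

# Point the standard OpenAI client at the Braintrust gateway (URL assumed)
client = OpenAI(
    base_url="https://api.braintrust.dev/v1/proxy",
    api_key=os.environ["BRAINTRUST_API_KEY"],  # authenticate with your Braintrust key
)

# Requests made through the gateway are traced automatically
response = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```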