Braintrust traces your LLM calls with auto-instrumentation. In most languages, you enable tracing once at startup and every request to a supported AI provider or framework is logged — inputs, outputs, model parameters, latency, token usage, and costs — with no per-call code changes. For languages that don't yet support auto-instrumentation, you can wrap each client instance to get the same coverage. The examples on this page use OpenAI, but Braintrust supports many providers and frameworks.
Braintrust’s CLI and MCP server can help you instrument your code.
Auto-instrumentation patches supported AI libraries at startup so every LLM call is captured without wrapping individual clients. This is the recommended way to set up tracing. If you're using Java or .NET, or if auto-instrumentation isn't working in your environment, use wrap functions instead.
TypeScript
Python
Ruby
Go
Install the dependencies:
```shell
npm install braintrust openai
```
This example traces a single OpenAI call:
```typescript
import { initLogger } from "braintrust";
import OpenAI from "openai";

// Call once at startup — all LLM calls are traced automatically
initLogger({
  apiKey: process.env.BRAINTRUST_API_KEY,
  projectName: "My Project (TypeScript)",
});

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await client.responses.create({
  model: "gpt-5-mini",
  input: "What is the capital of France?",
});
```
Run with the --import flag to enable auto-instrumentation:
```shell
node --import braintrust/hook.mjs app.js
```
Using a bundler?
If you’re using a bundler (Vite, Webpack, esbuild, Rollup) or a framework that uses one (Next.js, Nuxt, SvelteKit), use the appropriate bundler plugin (included in Braintrust’s JavaScript SDK) instead of the --import flag.
Node.js version requirements
Requires Node.js 18.19.0+ or 20.6.0+ for --import flag support. Check with node --version.
Install the dependencies:
```shell
pip install braintrust openai
```
This example traces a single OpenAI call:
```python
import os

import braintrust
from openai import OpenAI

# Call once at startup — all LLM calls are traced automatically
braintrust.auto_instrument()
braintrust.init_logger(
    api_key=os.environ["BRAINTRUST_API_KEY"],
    project="My Project (Python)",
)

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

response = client.responses.create(
    model="gpt-5-mini",
    input="What is the capital of France?",
)
```
Add the Braintrust gem to your Gemfile, using the braintrust/setup require to enable auto-instrumentation on load:
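A minimal sketch of what that Gemfile entry might look like, assuming Bundler's standard require: option is how the braintrust/setup file is loaded (check the gem's README for the exact form):

```ruby
# Gemfile
source 'https://rubygems.org'

# require: 'braintrust/setup' enables auto-instrumentation when Bundler loads the gem
gem 'braintrust', require: 'braintrust/setup'
gem 'openai'
```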
This example traces a single OpenAI call. The Go SDK sends traces to Braintrust via OpenTelemetry, so you create a TracerProvider and pass it to Braintrust:
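A minimal setup sketch along those lines, reusing the TracerProvider and braintrust.New calls shown in the wrap-function example later on this page (the option names are taken from that example; treat this as an outline rather than a complete program):

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/braintrustdata/braintrust-sdk-go"
	"go.opentelemetry.io/otel"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Create an OpenTelemetry TracerProvider and register it globally
	tp := sdktrace.NewTracerProvider()
	defer tp.Shutdown(context.Background())
	otel.SetTracerProvider(tp)

	// Pass the TracerProvider to Braintrust so spans are exported there
	_, err := braintrust.New(tp,
		braintrust.WithProject("My Project (Go)"),
		braintrust.WithAPIKey(os.Getenv("BRAINTRUST_API_KEY")),
	)
	if err != nil {
		log.Fatal(err)
	}

	// ... make traced LLM calls here ...
}
```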
Wrap functions let you explicitly instrument individual client instances. They are an alternative to auto-instrumentation, useful if you prefer explicit control or if auto-instrumentation doesn't support the libraries you're using. Unlike with auto-instrumentation, you must wrap each client instance in your application.
TypeScript
Python
Ruby
Go
Java
.NET
```typescript
import { initLogger, wrapOpenAI } from "braintrust";
import OpenAI from "openai";

initLogger({
  apiKey: process.env.BRAINTRUST_API_KEY,
  projectName: "My Project (TypeScript)",
});

// Wrap the OpenAI client to trace all calls
const client = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

const response = await client.responses.create({
  model: "gpt-5-mini",
  input: "What is the capital of France?",
});
```
```python
import os

import braintrust
from braintrust import wrap_openai
from openai import OpenAI

braintrust.init_logger(
    api_key=os.environ["BRAINTRUST_API_KEY"],
    project="My Project (Python)",
)

# Wrap the OpenAI client to trace all calls
client = wrap_openai(OpenAI(api_key=os.environ.get("OPENAI_API_KEY")))

response = client.responses.create(
    model="gpt-5-mini",
    input="What is the capital of France?",
)
```
Use Braintrust.instrument! with a target: to instrument a specific client instance:
```ruby
require 'braintrust'
require 'openai'

Braintrust.init(
  api_key: ENV['BRAINTRUST_API_KEY'],
  default_project: 'My Project (Ruby)',
  auto_instrument: false
)

# Wrap a specific OpenAI client to trace all calls
client = OpenAI::Client.new(access_token: ENV['OPENAI_API_KEY'])
Braintrust.instrument!(:ruby_openai, target: client)

response = client.responses.create(
  parameters: {
    model: 'gpt-5-mini',
    input: 'What is the capital of France?'
  }
)
```
Use :openai if you’re using the openai gem, or :ruby_openai for the ruby-openai gem.
The Go SDK provides tracing middleware that you pass to your AI provider’s client constructor:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/braintrustdata/braintrust-sdk-go"
	traceopenai "github.com/braintrustdata/braintrust-sdk-go/trace/contrib/openai"
	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
	"github.com/openai/openai-go/responses"
	"go.opentelemetry.io/otel"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	tp := sdktrace.NewTracerProvider()
	defer tp.Shutdown(context.Background())
	otel.SetTracerProvider(tp)

	_, err := braintrust.New(tp,
		braintrust.WithProject("My Project (Go)"),
		braintrust.WithAPIKey(os.Getenv("BRAINTRUST_API_KEY")),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Create an OpenAI client with tracing middleware
	client := openai.NewClient(
		option.WithAPIKey(os.Getenv("OPENAI_API_KEY")),
		option.WithMiddleware(traceopenai.NewMiddleware()),
	)

	response, err := client.Responses.New(context.Background(), responses.ResponseNewParams{
		Model: "gpt-5-mini",
		Input: responses.ResponseNewParamsInputUnion{OfString: openai.String("What is the capital of France?")},
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(response.OutputText())
}
```
```csharp
using System;
using Braintrust.Sdk;
using Braintrust.Sdk.Config;
using Braintrust.Sdk.OpenAI;
using OpenAI;
using OpenAI.Chat;

var config = BraintrustConfig.Of(
    ("BRAINTRUST_API_KEY", Environment.GetEnvironmentVariable("BRAINTRUST_API_KEY")),
    ("BRAINTRUST_DEFAULT_PROJECT_NAME", "My Project (.NET)"));

var braintrust = Braintrust.Sdk.Braintrust.Get(config);
var activitySource = braintrust.GetActivitySource();

// Wrap the OpenAI client to trace all calls
var openAIClient = BraintrustOpenAI.WrapOpenAI(
    activitySource,
    Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

var chatClient = openAIClient.GetChatClient("gpt-5-mini");
var response = await chatClient.CompleteChatAsync(
    new ChatMessage[] { new UserChatMessage("What is the capital of France?") });
```
The Braintrust gateway provides a unified OpenAI-compatible API for accessing models from many providers. When you call a model through the gateway, your requests are automatically traced — no SDK instrumentation or wrap functions needed. The gateway also provides automatic caching and observability across providers.
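As a sketch, calling a model through the gateway amounts to pointing a standard OpenAI client at the gateway's base URL. The baseURL value below is an assumption for illustration; check your Braintrust settings for the actual gateway endpoint:

```typescript
import OpenAI from "openai";

// Point a standard OpenAI client at the Braintrust gateway.
// NOTE: this base URL is an illustrative assumption; confirm the
// real gateway endpoint in your Braintrust settings.
const client = new OpenAI({
  apiKey: process.env.BRAINTRUST_API_KEY, // the gateway authenticates with your Braintrust key
  baseURL: "https://api.braintrust.dev/v1/proxy",
});

// Requests made through the gateway are traced automatically,
// with no SDK instrumentation in your code.
const response = await client.chat.completions.create({
  model: "gpt-5-mini",
  messages: [{ role: "user", content: "What is the capital of France?" }],
});
```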