Integrating with Braintrust
Braintrust Traces can be integrated seamlessly with popular platforms and frameworks to capture rich context and power intelligent workflows. This guide walks you through the supported integrations and how to configure them for maximum observability and insight.
OpenTelemetry (OTel)
To set up Braintrust as an OpenTelemetry backend, you'll need to route the traces to Braintrust's OpenTelemetry endpoint, set your API key, and specify a parent project or experiment.
Braintrust supports configuring OTel with our SDK, as well as libraries like OpenLLMetry and the Vercel AI SDK. You can also use OTel's built-in exporters to send traces to Braintrust if you don't want to install additional libraries or write code. OpenLLMetry supports a range of languages including Python, TypeScript, Java, and Go, so you can start logging to Braintrust from many different environments.
Python SDK configuration
Install the Braintrust Python SDK with OpenTelemetry support:
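The original install command was lost in extraction; as a sketch, the Braintrust docs use a package extra named `otel` (verify the extra name against the current SDK release):

```shell
pip install braintrust[otel]
```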
For Python applications, use the `BraintrustSpanProcessor` for simplified configuration:
For more advanced configuration, you can pass the following arguments to `BraintrustSpanProcessor`:

- `api_key`: The API key to use for Braintrust. Defaults to the `BRAINTRUST_API_KEY` environment variable.
- `api_url`: The URL of the Braintrust API. Defaults to the `BRAINTRUST_API_URL` environment variable, or `https://api.braintrust.dev` if not set.
- `parent`: The parent project or experiment to use for Braintrust. Defaults to the `BRAINTRUST_PARENT` environment variable.
- `filter_ai_spans`: Defaults to `False`. If `True`, only AI-related spans will be sent to Braintrust.
- `custom_filter`: A function that gives you fine-grained control over which spans are sent to Braintrust. It takes a span and returns a boolean. If it returns `True`, the span is sent to Braintrust; if `False`, the span is dropped; if `None`, it doesn't influence the sampling decision.
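As a sketch of the `custom_filter` contract, here is a hypothetical filter that always keeps spans whose names start with `ai.`, always drops internal spans, and abstains otherwise. `FakeSpan` is a stand-in for the OTel span object the processor passes in; only the span name is inspected here:

```python
from typing import Optional

class FakeSpan:
    """Minimal stand-in for an OTel span; only the name is used here."""
    def __init__(self, name: str):
        self.name = name

def custom_filter(span) -> Optional[bool]:
    if span.name.startswith("ai."):
        return True   # always send AI spans to Braintrust
    if span.name.startswith("internal."):
        return False  # always drop internal spans
    return None       # don't influence the sampling decision

print(custom_filter(FakeSpan("ai.generate")))   # True
print(custom_filter(FakeSpan("internal.db")))   # False
print(custom_filter(FakeSpan("http.request")))  # None
```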
TypeScript SDK configuration
Install the Braintrust TypeScript SDK with the following OpenTelemetry dependencies:
For TypeScript/JavaScript applications, use the `BraintrustSpanProcessor` with `NodeSDK`:
Or configure it manually with a custom tracer provider:
For more advanced configuration, you can pass the following arguments to `BraintrustSpanProcessor`:

- `apiKey`: The API key to use for Braintrust. Defaults to the `BRAINTRUST_API_KEY` environment variable.
- `apiUrl`: The URL of the Braintrust API. Defaults to the `BRAINTRUST_API_URL` environment variable, or `https://api.braintrust.dev` if not set.
- `parent`: The parent project or experiment to use for Braintrust. Defaults to the `BRAINTRUST_PARENT` environment variable.
- `filterAISpans`: Defaults to `false`. If `true`, only AI-related spans will be sent to Braintrust.
- `customFilter`: A function that gives you fine-grained control over which spans are sent to Braintrust. It takes a span and returns a boolean. If it returns `true`, the span is sent to Braintrust; if `false`, the span is dropped; if `null`, it doesn't influence the sampling decision.
OTLP configuration
If you are using a different language or want to use pure OTel code, you can set up the OpenTelemetry Protocol Exporter (OTLP) to send traces to Braintrust.
Once you set up an OTLP exporter to send traces to Braintrust, we automatically convert LLM calls into Braintrust LLM spans, which can be saved as prompts and evaluated in the playground.
For JavaScript/TypeScript applications, you can use the `BraintrustExporter` directly:
For collectors that use the OpenTelemetry SDK to export traces, set the following environment variables:
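A sketch of the configuration, assuming the standard OTLP exporter variables (substitute your own API key and project ID for the placeholders):

```shell
export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your-api-key>, x-bt-parent=project_id:<your-project-id>"
```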
The trace endpoint URL is `https://api.braintrust.dev/otel/v1/traces`. If your exporter uses signal-specific environment variables, you'll need to set the full path:

`OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://api.braintrust.dev/otel/v1/traces`
If you're self-hosting Braintrust, substitute your stack's Universal API URL. For example:
OTEL_EXPORTER_OTLP_ENDPOINT=https://dfwhllz61x709.cloudfront.net/otel
The `x-bt-parent` header sets the trace's parent project or experiment. You can use a prefix like `project_id:`, `project_name:`, or `experiment_id:` here, or pass in a span slug (`span.export()`) to nest the trace under a span within the parent object.
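The header value is just a prefixed identifier string. A hypothetical Python sketch of assembling the OTLP headers (the key and project name are placeholders):

```python
api_key = "sk-example"          # hypothetical API key
parent = "project_name:my-app"  # or "project_id:...", "experiment_id:...", or a span slug

headers = {
    "Authorization": f"Bearer {api_key}",
    "x-bt-parent": parent,
}

print(headers["x-bt-parent"])  # project_name:my-app
```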
To find your project ID, navigate to your project's configuration page and find the Copy Project ID button at the bottom of the page.
Vercel AI SDK
The Vercel AI SDK natively supports OpenTelemetry and works out of the box with Braintrust, either via Next.js or Node.js.
Next.js
If you are using Next.js, you can use the Braintrust exporter with `@vercel/otel` for the cleanest setup:
Or set the following environment variables in your app's `.env` file, with your API key and project ID:
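A sketch of the `.env` entries (the names follow the standard OTLP exporter variables; substitute your own key and project ID):

```shell
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your-api-key>, x-bt-parent=project_id:<your-project-id>"
```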
Traced LLM calls will appear under the Braintrust project or experiment provided in the `x-bt-parent` header.

When you call the AI SDK, make sure to set `experimental_telemetry`:
The integration supports streaming functions like `streamText`. Each streamed call will produce `ai.streamText` spans in Braintrust.
Node.js
If you are using Node.js without a framework, you must configure the `NodeSDK` directly. Here, it's more straightforward to use the `BraintrustSpanProcessor`.
First, install the necessary dependencies:
Then, set up the OpenTelemetry SDK:
Traceloop
To export OTel traces from Traceloop OpenLLMetry to Braintrust, set the following environment variables:
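A sketch of the Traceloop variables (names follow the OpenLLMetry conventions; substitute your own key and project ID):

```shell
TRACELOOP_BASE_URL=https://api.braintrust.dev/otel
TRACELOOP_HEADERS="Authorization=Bearer%20<your-api-key>, x-bt-parent=project_id:<your-project-id>"
```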
When setting the bearer token, be sure to encode the space between "Bearer" and your API key using `%20`.
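The encoding is ordinary percent-encoding; for example, with Python's standard library (the key shown is a placeholder):

```python
from urllib.parse import quote

# quote() percent-encodes the space between "Bearer" and the key
header_value = quote("Bearer my-api-key")  # hypothetical key
print(header_value)  # Bearer%20my-api-key
```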
Traces will then appear under the Braintrust project or experiment provided in the `x-bt-parent` header.
LlamaIndex
To trace LLM calls with LlamaIndex, you can use the OpenInference `LlamaIndexInstrumentor` to send OTel traces directly to Braintrust. Configure your environment and set the OTel endpoint:
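The environment portion is a sketch using the standard OTLP variables (the instrumentor setup itself, via the `openinference-instrumentation-llama-index` package, is omitted here):

```shell
export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your-api-key>, x-bt-parent=project_id:<your-project-id>"
```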
Now traced LLM calls will appear under the provided Braintrust project or experiment.
Mastra
To use Braintrust with Mastra, configure these environment variables:
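A sketch using the standard OTLP variable names (check Mastra's telemetry docs for the exact wiring in your version):

```shell
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your-api-key>, x-bt-parent=project_id:<your-project-id>"
```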
When you create your agent, enable telemetry and export the data using OpenTelemetry:
This will automatically send interactions, tool calls, and performance metrics to Braintrust for monitoring and evaluation.
Manual tracing
If you want to log LLM calls directly to the OTel endpoint, you can set up a custom OpenTelemetry tracer and add the appropriate attributes to your spans. This gives you fine-grained control over what data gets logged.
Braintrust implements the OpenTelemetry GenAI semantic conventions. When you send traces with these attributes, they are automatically mapped to Braintrust fields.
| Attribute | Braintrust Field | Description |
|---|---|---|
| `gen_ai.prompt` | `input` | User message (string). If you have an array of messages, use `gen_ai.prompt_json` (see below) or set flattened attributes like `gen_ai.prompt.0.role` or `gen_ai.prompt.0.content`. |
| `gen_ai.prompt_json` | `input` | A JSON-serialized string containing an array of OpenAI messages. |
| `gen_ai.completion` | `output` | Assistant message (string). If you have an array of messages, use `gen_ai.completion_json` (see below) or set flattened attributes like `gen_ai.completion.0.role` or `gen_ai.completion.0.content`. |
| `gen_ai.completion_json` | `output` | A JSON-serialized string containing an array of OpenAI messages. |
| `gen_ai.request.model` | `metadata.model` | The model name (e.g. `"gpt-4o"`) |
| `gen_ai.request.max_tokens` | `metadata.max_tokens` | Maximum number of tokens to generate |
| `gen_ai.request.temperature` | `metadata.temperature` | Sampling temperature |
| `gen_ai.request.top_p` | `metadata.top_p` | Nucleus sampling parameter |
| `gen_ai.usage.prompt_tokens` | `metrics.prompt_tokens` | Input tokens |
| `gen_ai.usage.completion_tokens` | `metrics.completion_tokens` | Output tokens |
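To illustrate the mapping, here is a stdlib-only sketch of an attribute payload using these conventions (attribute values only; setting them on a real OTel span is omitted, and the message content is made up):

```python
import json

messages = [{"role": "user", "content": "What is the capital of France?"}]

attributes = {
    "gen_ai.prompt_json": json.dumps(messages),  # maps to Braintrust input
    "gen_ai.completion": "Paris.",               # maps to Braintrust output
    "gen_ai.request.model": "gpt-4o",            # maps to metadata.model
    "gen_ai.usage.prompt_tokens": 14,            # maps to metrics.prompt_tokens
    "gen_ai.usage.completion_tokens": 3,         # maps to metrics.completion_tokens
}

# The *_json attributes must be JSON strings, not Python objects:
assert json.loads(attributes["gen_ai.prompt_json"]) == messages
```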
You can also use the `braintrust` namespace to set fields in Braintrust directly:
| Attribute | Braintrust Field | Notes |
|---|---|---|
| `braintrust.input` | `input` | Typically a single user message (string). If you have an array of messages, use `braintrust.input_json` instead (see below) or set flattened attributes like `braintrust.input.0.role` or `braintrust.input.0.content`. |
| `braintrust.input_json` | `input` | A JSON-serialized string containing an array of OpenAI messages. |
| `braintrust.output` | `output` | Typically a single assistant message (string). If you have an array of messages, use `braintrust.output_json` instead (see below) or set flattened attributes like `braintrust.output.0.role` or `braintrust.output.0.content`. |
| `braintrust.output_json` | `output` | A JSON-serialized string containing an array of OpenAI messages. |
| `braintrust.metadata` | `metadata` | A JSON-serialized dictionary with string keys. Alternatively, you can use flattened attribute names, like `braintrust.metadata.model` or `braintrust.metadata.temperature`. |
| `braintrust.metrics` | `metrics` | A JSON-serialized dictionary with string keys. Alternatively, you can use flattened attribute names, like `braintrust.metrics.prompt_tokens` or `braintrust.metrics.completion_tokens`. |
| `braintrust.tags` | `tags` | An array of strings that can be set on the root span. |
Here's an example of how to set up manual tracing:
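The original example (a full OTel tracer setup) was lost in extraction; as a minimal stdlib-only sketch, these are the `braintrust.*` attribute values you would set on a span (span creation itself would use your OTel tracer, e.g. `start_as_current_span`; the messages and numbers are made up):

```python
import json

span_attributes = {
    "braintrust.input_json": json.dumps([{"role": "user", "content": "Hello"}]),
    "braintrust.output": "Hi! How can I help?",
    # metadata and metrics must be JSON-serialized dicts with string keys
    "braintrust.metadata": json.dumps({"model": "gpt-4o", "temperature": 0.2}),
    "braintrust.metrics": json.dumps({"prompt_tokens": 9, "completion_tokens": 6}),
}

metadata = json.loads(span_attributes["braintrust.metadata"])
print(metadata["model"])  # gpt-4o
```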
Troubleshooting
Why are my traces not showing up?
There are a few common reasons why your traces may not show up in Braintrust:
- Braintrust's logs table only shows traces that have a root span (i.e. `span_parents` is empty). If you only send child spans, they will not appear in the logs table. A common cause is only sending spans to Braintrust that have a `traceparent` header. To fix this, make sure to send a root span for every trace you want to appear in the UI.
- If you are self-hosting Braintrust, make sure you do not use `https://api.braintrust.dev`, and instead use your custom API URL as the `OTLP_ENDPOINT`, for example `https://dfwhllz61x709.cloudfront.net/otel`.
- You must explicitly set up OpenTelemetry in your application. If you're using Next.js, follow the Next.js OpenTelemetry guide. If you are using Node.js without a framework, follow this example to set up a basic exporter.
Vercel AI SDK v4 (native wrapper)
The Vercel AI SDK is an elegant tool for building AI-powered applications. Although you can use OpenTelemetry (see above) to trace your requests, Braintrust also natively supports tracing requests made with the Vercel AI SDK. When deciding which to use, consider the following:
- Use OpenTelemetry (OTel) if:
- You are already using OTel to trace your application, for example your Next.js web app.
- You want to trace to multiple providers, not just Braintrust.
- You want to automatically trace tool calls (although with limited control).
- Use the native tracing wrapper if:
- You are already using Braintrust to trace and want to weave in the Vercel AI SDK.
- You want to avoid setting up OTel.
- You want granular control over tracing (e.g. wrapping and tracing nested function calls from within your tools).
To use the native tracing wrapper, wrap a Vercel AI SDK model with `wrapAISDKModel` and then use it as you would any other model.
Wrapping tools
Wrap tool implementations with `wrapTraced`. Here is a full example, modified from the Node.js Quickstart.
When you run this code, you'll see traces like this in the Braintrust UI:
Vercel AI SDK v5
Just like the v4 version, this native integration automatically traces your model calls with the AI SDK. We now provide a middleware-based approach that is more flexible and allows you to include additional middleware.
OpenAI Agents SDK
When installed with the `openai-agents` extra, the Braintrust SDK provides a `tracing.TracingProcessor` implementation that sends the traces and spans from the OpenAI Agents SDK to Braintrust.
The constructor of `BraintrustTracingProcessor` can take a `braintrust.Span`, `braintrust.Experiment`, or `braintrust.Logger` that serves as the root under which all spans will be logged. If `None` is passed, the current span, experiment, or logger will be selected exactly as in `braintrust.start_span`.
The Agents SDK can also be used to implement a `task` in an `Eval`, making it straightforward to build and evaluate agentic workflows:
Instructor
To use Instructor to generate structured outputs, you need to wrap the OpenAI client with both Instructor and Braintrust. It's important to call Braintrust's `wrap_openai` first, because it uses low-level usage info and headers returned by the OpenAI call to log metrics to Braintrust.
LangChain
Trace your LangChain applications by configuring a global LangChain callback handler.
Learn more about LangChain callbacks in their documentation.
LangGraph
Trace your LangGraph applications by configuring a global LangChain callback handler.
Learn more about LangGraph in their documentation.