Python SDK configuration
Install the Braintrust Python SDK with OpenTelemetry support (the `braintrust[otel]` extra), and set your `BRAINTRUST_API_KEY` environment variable.
Use the `BraintrustSpanProcessor` for simplified configuration. It accepts the following options:
- `api_key`: The API key to use for Braintrust. Defaults to the `BRAINTRUST_API_KEY` environment variable.
- `api_url`: The URL of the Braintrust API. Defaults to the `BRAINTRUST_API_URL` environment variable, or `https://api.braintrust.dev` if not set.
- `parent`: The parent project or experiment to use for Braintrust. Defaults to the `BRAINTRUST_PARENT` environment variable.
- `filter_ai_spans`: Defaults to `False`. If `True`, only AI-related spans will be sent to Braintrust.
- `custom_filter`: A function that gives you fine-grained control over which spans are sent to Braintrust. It takes a span and returns a boolean. If `True`, the span is sent to Braintrust; if `False`, the span is dropped; if `None`, the function does not influence the sampling decision.
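To see how these options fit together, here is a minimal sketch that wires the processor into the OpenTelemetry SDK (assuming `BraintrustSpanProcessor` is importable from `braintrust.otel`; the project name is a placeholder):

```python
# Minimal sketch: attach BraintrustSpanProcessor to the OpenTelemetry SDK.
# Option names follow the list above; "my-project" is a placeholder.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from braintrust.otel import BraintrustSpanProcessor

provider = TracerProvider()
provider.add_span_processor(
    BraintrustSpanProcessor(
        parent="project_name:my-project",  # or set BRAINTRUST_PARENT
        filter_ai_spans=True,              # send only AI-related spans
    )
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("my-app")
with tracer.start_as_current_span("example"):
    pass  # spans created here flow through the processor to Braintrust
```

Once the provider is registered globally, any tracer obtained via `trace.get_tracer` sends its spans through the processor.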
TypeScript SDK configuration
Install the Braintrust TypeScript SDK with the required OpenTelemetry dependencies, then use the `BraintrustSpanProcessor` with the `NodeSDK`.
`BraintrustSpanProcessor` accepts the following options:
- `apiKey`: The API key to use for Braintrust. Defaults to the `BRAINTRUST_API_KEY` environment variable.
- `apiUrl`: The URL of the Braintrust API. Defaults to the `BRAINTRUST_API_URL` environment variable, or `https://api.braintrust.dev` if not set.
- `parent`: The parent project or experiment to use for Braintrust. Defaults to the `BRAINTRUST_PARENT` environment variable.
- `filterAISpans`: Defaults to `false`. If `true`, only AI-related spans will be sent to Braintrust.
- `customFilter`: A function that gives you fine-grained control over which spans are sent to Braintrust. It takes a span and returns a boolean. If `true`, the span is sent to Braintrust; if `false`, the span is dropped; if `null`, the function does not influence the sampling decision.
OTel compatibility mode
OTel compatibility mode is a beta feature.
Setup
OTel compatibility mode requires the following versions of the Braintrust SDKs:
- Python SDK: `braintrust[otel] >= 0.3.1`
- TypeScript SDK: `braintrust >= 0.4.5`
Compatibility mode changes the format of the span identifiers returned by `span.export()`. All machines that read exported spans (via the `x-bt-parent` header or distributed tracing) must use these minimum versions. Upgrade them before enabling compatibility mode.
Example
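As a minimal sketch (assuming compatibility mode is enabled for your application; the project and span names are placeholders), Braintrust and OpenTelemetry spans can interleave in a single trace:

```python
# Hypothetical sketch: with compatibility mode enabled, an OTel span created
# inside a Braintrust span is parented under it automatically.
import braintrust
from opentelemetry import trace

logger = braintrust.init_logger(project="my-project")
otel_tracer = trace.get_tracer("my-app")

with logger.start_span(name="handle_request") as bt_span:
    bt_span.log(input={"query": "What's new?"})
    # The Braintrust span is stored in OTel's context, so this OTel span
    # becomes its child in the same trace.
    with otel_tracer.start_as_current_span("fetch_documents"):
        pass
```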
How it works
OpenTelemetry compatibility mode works by generating OTel-compatible span IDs and storing the current active span in OTel’s context.

Distributed tracing
Distributed tracing requires the following minimum versions:
- Python SDK: `braintrust[otel] >= v0.3.5`
- TypeScript SDK: `braintrust >= v0.4.8`
These examples use `fetch` and `requests` to make HTTP requests. The trace context can also be transmitted via message queue metadata, gRPC metadata, or any other inter-service communication mechanism that supports custom headers.

Create OpenTelemetry spans as children of Braintrust spans
Export the Braintrust span context and use it to create an OpenTelemetry context.

Create Braintrust spans as children of OpenTelemetry spans
Propagate the OpenTelemetry context using W3C Trace Context headers.
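For example, on the OpenTelemetry side, a sketch of header propagation using the standard propagation API might look like this (the downstream URL is a placeholder, and your OTel SDK and propagators are assumed to be configured already):

```python
# Hypothetical sketch: inject W3C Trace Context headers into an outgoing
# request so the downstream service can create child spans in the same trace.
import requests
from opentelemetry import trace
from opentelemetry.propagate import inject

tracer = trace.get_tracer("my-app")

with tracer.start_as_current_span("call_downstream"):
    headers = {}
    inject(headers)  # adds the `traceparent` header for the active span
    requests.post("https://downstream.example.com/api", headers=headers)
```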
OTLP configuration

If you are using a different language or want to use pure OTel code, you can set up the OpenTelemetry Protocol (OTLP) exporter to send traces to Braintrust. Once you set up an OTLP exporter to send traces to Braintrust, we automatically convert LLM calls into Braintrust LLM spans, which can be saved as prompts and evaluated in the playground.
For JavaScript/TypeScript applications, you can use the `BraintrustExporter` directly.
The trace endpoint URL is `https://api.braintrust.dev/otel/v1/traces`. If your exporter uses signal-specific environment variables, you’ll need to set the full path:

`OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://api.braintrust.dev/otel/v1/traces`

If you’re self-hosting Braintrust, substitute your stack’s Universal API URL. For example:

`OTEL_EXPORTER_OTLP_ENDPOINT=https://dfwhllz61x709.cloudfront.net/otel`

The `x-bt-parent` header sets the trace’s parent project or experiment. You can use a prefix like `project_id:`, `project_name:`, or `experiment_id:` here, or pass in a span slug (`span.export()`) to nest the trace under a span within the parent object.
To find your project ID, navigate to your project’s configuration page and find the Copy Project ID button at the bottom of the page.
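Putting the pieces together, a pure-OTel Python setup might look like the following sketch, using the standard OTLP/HTTP exporter package (the project name is a placeholder):

```python
# Hypothetical sketch: export OTel traces to Braintrust over OTLP/HTTP.
# The endpoint and headers follow the configuration described above.
import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://api.braintrust.dev/otel/v1/traces",
    headers={
        "Authorization": f"Bearer {os.environ['BRAINTRUST_API_KEY']}",
        "x-bt-parent": "project_name:my-project",  # placeholder project
    },
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```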
Vercel AI SDK
The Vercel AI SDK natively supports OpenTelemetry and works out of the box with Braintrust, either via Next.js or Node.js.

Next.js
If you are using Next.js, use the Braintrust exporter with `@vercel/otel`, and set the destination project or experiment via the `parent` field.
When you call the AI SDK, make sure to set `experimental_telemetry`.
The integration supports streaming functions like `streamText`. Each streamed call will produce `ai.streamText` spans in Braintrust.

Node.js
If you are using Node.js without a framework, you must configure the `NodeSDK` directly. Here, it’s more straightforward to use the `BraintrustSpanProcessor`.
First, install the necessary dependencies.
Manual tracing
If you want to log LLM calls directly to the OTel endpoint, you can set up a custom OpenTelemetry tracer and add the appropriate attributes to your spans. This gives you fine-grained control over what data gets logged. Braintrust implements the OpenTelemetry GenAI semantic conventions. When you send traces with these attributes, they are automatically mapped to Braintrust fields.

| Attribute | Braintrust Field | Description |
|---|---|---|
| `gen_ai.input.messages` | `input` | The chat history provided to the model as an input. Messages must be structured according to the OpenTelemetry GenAI Input messages JSON schema. |
| `gen_ai.prompt` | `input` | User message (string). If you have an array of messages, you’ll need to use `gen_ai.prompt_json` (see below) or set flattened attributes like `gen_ai.prompt.0.role` or `gen_ai.prompt.0.content`. |
| `gen_ai.prompt_json` | `input` | A JSON-serialized string containing an array of OpenAI messages. |
| `gen_ai.output.messages` | `output` | Messages returned by the model. Messages must be structured according to the OpenTelemetry GenAI Output messages JSON schema. |
| `gen_ai.completion` | `output` | Assistant message (string). If you have an array of messages, you’ll need to use `gen_ai.completion_json` (see below) or set flattened attributes like `gen_ai.completion.0.role` or `gen_ai.completion.0.content`. |
| `gen_ai.completion_json` | `output` | A JSON-serialized string containing an array of OpenAI messages. |
| `gen_ai.request` | `metadata.*` | A JSON object or flattened attributes containing model parameters. The model parameter is cleaned of provider prefixes (e.g., "openai/gpt-4o" becomes "gpt-4o"). |
| `gen_ai.request.model` | `metadata.model` | The model name (e.g. "gpt-4o"). Provider prefixes like "openai/", "anthropic/", "google/" are automatically removed. |
| `gen_ai.request.max_tokens` | `metadata.max_tokens` | Maximum tokens to generate. |
| `gen_ai.request.temperature` | `metadata.temperature` | Sampling temperature. |
| `gen_ai.request.top_p` | `metadata.top_p` | Nucleus sampling parameter. |
| `gen_ai.operation.name` | `span_attributes.type` | The operation type. Value "chat" maps to type "llm"; "execute_tool" maps to type "tool". |
| `gen_ai.agent.tools` | `metadata.tools` | A JSON-serialized array of tool names available to the agent. Tool names are automatically converted into tool definition objects with `type: "function"` and basic schemas. |
| `gen_ai.tool.name` | `metadata.tools` | The name of the tool being executed. Automatically converted into a tool definition object. Also sets `span_attributes.type` to "tool". |
| `gen_ai.usage` | `metrics.*` | A JSON object containing token usage. Can include `prompt_tokens`, `completion_tokens`, `input_tokens`, `output_tokens`, and `total_tokens`. |
| `gen_ai.usage.prompt_tokens` | `metrics.prompt_tokens` | Input tokens (preferred field name). |
| `gen_ai.usage.completion_tokens` | `metrics.completion_tokens` | Output tokens (preferred field name). |
| `gen_ai.usage.input_tokens` | `metrics.prompt_tokens` | Input tokens (alternative field name, normalized to `prompt_tokens`). |
| `gen_ai.usage.output_tokens` | `metrics.completion_tokens` | Output tokens (alternative field name, normalized to `completion_tokens`). |
| `gen_ai.usage.total_tokens` | `metrics.tokens` | Total tokens (normalized to `tokens`). If not provided, automatically calculated from `prompt_tokens` + `completion_tokens`. |
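For instance, a hand-instrumented LLM call using these attributes might look like the following sketch (token counts and messages are illustrative; an OTLP exporter to Braintrust is assumed to be configured as described above):

```python
# Hypothetical sketch: set GenAI semantic-convention attributes on a span.
# Each attribute maps to the Braintrust field noted in the comment.
import json
from opentelemetry import trace

tracer = trace.get_tracer("my-app")

with tracer.start_as_current_span("chat gpt-4o") as span:
    span.set_attribute("gen_ai.operation.name", "chat")    # -> span type "llm"
    span.set_attribute("gen_ai.request.model", "gpt-4o")   # -> metadata.model
    span.set_attribute("gen_ai.request.temperature", 0.2)  # -> metadata.temperature
    span.set_attribute(
        "gen_ai.prompt_json",                              # -> input
        json.dumps([{"role": "user", "content": "Hello!"}]),
    )
    # ... call your model here ...
    span.set_attribute(
        "gen_ai.completion_json",                          # -> output
        json.dumps([{"role": "assistant", "content": "Hi there!"}]),
    )
    span.set_attribute("gen_ai.usage.prompt_tokens", 9)      # -> metrics.prompt_tokens
    span.set_attribute("gen_ai.usage.completion_tokens", 3)  # -> metrics.completion_tokens
```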
You can also use attributes in the `braintrust` namespace to set fields in Braintrust directly:
| Attribute | Braintrust Field | Notes |
|---|---|---|
| `braintrust.input` | `input` | Typically a single user message (string). If you have an array of messages, use `braintrust.input_json` instead (see below) or set flattened attributes like `braintrust.input.0.role` or `braintrust.input.0.content`. |
| `braintrust.input_json` | `input` | A JSON-serialized string containing an array of OpenAI messages. |
| `braintrust.output` | `output` | Typically a single assistant message (string). If you have an array of messages, use `braintrust.output_json` instead (see below) or set flattened attributes like `braintrust.output.0.role` or `braintrust.output.0.content`. |
| `braintrust.output_json` | `output` | A JSON-serialized string containing an array of OpenAI messages. |
| `braintrust.metadata` | `metadata` | A JSON-serialized dictionary with string keys. Alternatively, you can use flattened attribute names, like `braintrust.metadata.model` or `braintrust.metadata.temperature`. If you include tools, you must provide full tool definition objects. |
| `braintrust.metrics` | `metrics` | A JSON-serialized dictionary with string keys. Alternatively, you can use flattened attribute names, like `braintrust.metrics.prompt_tokens` or `braintrust.metrics.completion_tokens`. |
| `braintrust.scores` | `scores` | A JSON-serialized dictionary with string keys, where values are scores for the span. Alternatively, you can use flattened attribute names, like `braintrust.scores.accuracy` or `braintrust.scores.relevance`. |
| `braintrust.expected` | `expected` | The expected output for the span. Can be any value (string, number, object, etc.). |
| `braintrust.expected_json` | `expected` | A JSON-serialized string containing the expected output. Use this when you need to pass complex objects or arrays as the expected value. |
| `braintrust.tags` | `tags` | An array of strings that can be set on the root span. |
| `braintrust.span_attributes` | `span_attributes` | A JSON-serialized dictionary with string keys. Alternatively, you can use flattened attribute names, like `braintrust.span_attributes.type` or `braintrust.span_attributes.name`. The `type` field can be one of: `"llm"`, `"task"`, `"tool"`, `"eval"`, `"score"`, `"function"`. |
During processing, `braintrust.*` attributes are deleted from the span and translated into Braintrust’s native format.
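A sketch of the same idea using `braintrust.*` attributes (values are illustrative, and the tracer setup from the OTLP configuration section is assumed):

```python
# Hypothetical sketch: set Braintrust fields directly via braintrust.*
# attributes, per the table above.
import json
from opentelemetry import trace

tracer = trace.get_tracer("my-app")

with tracer.start_as_current_span("summarize") as span:
    span.set_attribute(
        "braintrust.input_json",
        json.dumps([{"role": "user", "content": "Summarize this document."}]),
    )
    span.set_attribute("braintrust.output", "Here is a summary...")
    span.set_attribute("braintrust.metadata", json.dumps({"model": "gpt-4o"}))
    span.set_attribute("braintrust.scores.relevance", 0.9)  # flattened score
    span.set_attribute("braintrust.tags", ["summarization"])
```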
GenAI Events
In addition to attributes, Braintrust also processes GenAI events on spans to extract input/output messages. These events follow the OpenTelemetry GenAI semantic conventions for events:

| Event Name | Field | Description |
|---|---|---|
| `gen_ai.user.message` | `input` | User message event. Content is extracted from the `content` attribute (supports both string and JSON array format). |
| `gen_ai.system.message` | `input` | System message event. Content is extracted from the `content` attribute. |
| `gen_ai.choice` | `output` | Model response event. Message is extracted from the `message` attribute and can include both text content and tool calls. |
| `gen_ai.assistant.message` | `output` | Assistant message event. Content is extracted from the `content` attribute. |
| `gen_ai.tool.message` | `input` | Tool result event. Content is extracted from the `content` attribute and associated with the tool call via the `id` attribute. |
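A sketch of emitting these events with OTel’s `add_event` API (message contents are illustrative; the same exporter setup is assumed):

```python
# Hypothetical sketch: attach GenAI events to a span. Event attribute values
# must be OTel-compatible primitives, so structured messages are
# JSON-serialized strings.
import json
from opentelemetry import trace

tracer = trace.get_tracer("my-app")

with tracer.start_as_current_span("chat") as span:
    span.add_event("gen_ai.user.message", {"content": "What's the weather?"})
    # ... call your model here ...
    span.add_event(
        "gen_ai.choice",
        {"message": json.dumps({"role": "assistant", "content": "Sunny."})},
    )
```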
Troubleshooting
Why are my traces not showing up?
There are a few common reasons why your traces may not show up in Braintrust:

- Braintrust’s logs table only shows traces that have a root span (i.e., `span_parents` is empty). If you only send child spans, they will not appear in the logs table. A common cause is only sending spans to Braintrust that have a `traceparent` header. To fix this, make sure to send a root span for every trace you want to appear in the UI.
- If you are self-hosting Braintrust, make sure you do not use `https://api.braintrust.dev` and instead use your custom API URL as the `OTLP_ENDPOINT`, for example `https://dfwhllz61x709.cloudfront.net/otel`.
- You must explicitly set up OpenTelemetry in your application. If you’re using Next.js, follow the Next.js OpenTelemetry guide. If you are using Node.js without a framework, follow this example to set up a basic exporter.