OpenTelemetry (OTel)

To set up Braintrust as an OpenTelemetry backend, you'll need to route the traces to Braintrust's OpenTelemetry endpoint, set your API key, and specify a parent project or experiment.

Braintrust supports configuring OTel with our SDK, as well as libraries like OpenLLMetry and the Vercel AI SDK. You can also use OTel's built-in exporters to send traces to Braintrust if you don't want to install additional libraries or write code. OpenLLMetry supports a range of languages including Python, TypeScript, Java, and Go, so you can start logging to Braintrust from many different environments.

Python SDK configuration

Install the Braintrust Python SDK with OpenTelemetry support:

uv add 'braintrust[otel]'
export BRAINTRUST_API_KEY=your-api-key
export BRAINTRUST_PARENT=project_name:my-otel-project
 
# If you are self-hosting Braintrust, set the URL of your hosted dataplane. You can omit this otherwise.
export BRAINTRUST_API_URL=https://api.braintrust.dev

For Python applications, use the BraintrustSpanProcessor for simplified configuration:

from braintrust.otel import BraintrustSpanProcessor
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
 
# Configure the global OTel tracer provider
provider = TracerProvider()
trace.set_tracer_provider(provider)
 
# Send spans to Braintrust.
provider.add_span_processor(BraintrustSpanProcessor())
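
Once the provider is configured, spans you create with any OTel tracer flow through the processor to Braintrust. Here's a minimal usage sketch (the span name and attribute values are illustrative):

tracer = trace.get_tracer(__name__)
 
with tracer.start_as_current_span("my-operation") as span:
    # The braintrust.* attributes map directly to Braintrust fields
    # (see the manual tracing section below).
    span.set_attribute("braintrust.input", "What is 2 + 2?")
    span.set_attribute("braintrust.output", "4")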

For more advanced configuration, you can pass in the following arguments to BraintrustSpanProcessor:

  • api_key: The API key to use for Braintrust. Defaults to the BRAINTRUST_API_KEY environment variable.
  • api_url: The URL of the Braintrust API. Defaults to the BRAINTRUST_API_URL environment variable or https://api.braintrust.dev if not set.
  • parent: The parent project or experiment to use for Braintrust. Defaults to the BRAINTRUST_PARENT environment variable.
  • filter_ai_spans: Defaults to False. If True, only AI-related spans will be sent to Braintrust.
  • custom_filter: A function that gives you fine-grained control over which spans are sent to Braintrust. It receives a span and returns True to keep it, False to drop it, or None to defer to the default filtering behavior (see the sketch below).
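
Here's a minimal sketch of a custom filter that always keeps spans from an OpenAI instrumentation and defers on everything else. The openai. span-name prefix is an assumption about how your instrumentation names its spans:

from braintrust.otel import BraintrustSpanProcessor
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
 
 
def keep_openai_spans(span):
    # Assumed naming convention: the instrumentation prefixes OpenAI spans with "openai.".
    if span.name.startswith("openai."):
        return True  # always send these spans to Braintrust
    return None  # otherwise, defer to the default filtering behavior
 
 
provider = TracerProvider()
trace.set_tracer_provider(provider)
provider.add_span_processor(
    BraintrustSpanProcessor(
        parent="project_name:my-otel-project",
        custom_filter=keep_openai_spans,
    )
)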

TypeScript SDK configuration

Install the Braintrust TypeScript SDK with the following OpenTelemetry dependencies:

pnpm add braintrust @opentelemetry/api @opentelemetry/sdk-node @opentelemetry/sdk-trace-base

For TypeScript/JavaScript applications, use the BraintrustSpanProcessor with NodeSDK:

import { NodeSDK } from "@opentelemetry/sdk-node";
import { BraintrustSpanProcessor } from "braintrust";
 
const sdk = new NodeSDK({
  serviceName: "my-service",
  spanProcessor: new BraintrustSpanProcessor({
    parent: "project_name:your-project-name",
  }),
});
 
sdk.start();

Or configure it manually with a custom tracer provider:

import { BasicTracerProvider } from "@opentelemetry/sdk-trace-base";
import { trace } from "@opentelemetry/api";
import { BraintrustSpanProcessor } from "braintrust";
 
trace.setGlobalTracerProvider(
  new BasicTracerProvider({
    spanProcessors: [
      new BraintrustSpanProcessor({
        parent: "project_name:your-project-name",
      }),
    ],
  }),
);

For more advanced configuration, you can pass in the following arguments to BraintrustSpanProcessor:

  • apiKey: The API key to use for Braintrust. Defaults to the BRAINTRUST_API_KEY environment variable.
  • apiUrl: The URL of the Braintrust API. Defaults to the BRAINTRUST_API_URL environment variable or https://api.braintrust.dev if not set.
  • parent: The parent project or experiment to use for Braintrust. Defaults to the BRAINTRUST_PARENT environment variable.
  • filterAISpans: Defaults to false. If true, only AI-related spans will be sent to Braintrust.
  • customFilter: A function that gives you fine-grained control over which spans are sent to Braintrust. It receives a span and returns true to keep it, false to drop it, or null to defer to the default filtering behavior.

OTLP configuration

If you are using a different language or want to use pure OTel code, you can set up the OpenTelemetry Protocol Exporter (OTLP) to send traces to Braintrust.

Once you set up an OTLP exporter to send traces to Braintrust, we automatically convert LLM calls into Braintrust LLM spans, which can be saved as prompts and evaluated in the playground.

For JavaScript/TypeScript applications, you can use the BraintrustExporter directly:

import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { BraintrustExporter } from "braintrust";
 
const exporter = new BraintrustExporter({
  apiKey: "your-api-key",
  parent: "project_name:your-project",
  filterAISpans: true,
});
 
// Register the processor with your tracer provider (for example, NodeSDK or BasicTracerProvider).
const processor = new BatchSpanProcessor(exporter);

For collectors that use the OpenTelemetry SDK to export traces, set the following environment variables:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"

The trace endpoint URL is https://api.braintrust.dev/otel/v1/traces. If your exporter uses signal-specific environment variables, you'll need to set the full path: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://api.braintrust.dev/otel/v1/traces

If you're self-hosting Braintrust, substitute your stack's Universal API URL. For example: OTEL_EXPORTER_OTLP_ENDPOINT=https://dfwhllz61x709.cloudfront.net/otel

The x-bt-parent header sets the trace's parent project or experiment. You can use a prefix like project_id:, project_name:, or experiment_id: here, or pass in a span slug (span.export()) to nest the trace under a span within the parent object.
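
For example, here's a minimal Python sketch of nesting OTel traces under an existing Braintrust span via its slug (the project and span names are illustrative):

import os
 
import braintrust
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
 
# Start a Braintrust span and export its slug.
logger = braintrust.init_logger(project="my-otel-project")
with logger.start_span(name="request-handler") as span:
    parent_slug = span.export()
 
# OTel traces exported with this header will nest under that span.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://api.braintrust.dev/otel/v1/traces",
            headers={
                "Authorization": f"Bearer {os.environ['BRAINTRUST_API_KEY']}",
                "x-bt-parent": parent_slug,
            },
        )
    )
)
trace.set_tracer_provider(provider)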

To find your project ID, navigate to your project's configuration page and find the Copy Project ID button at the bottom of the page.

Vercel AI SDK

The Vercel AI SDK natively supports OpenTelemetry and works out of the box with Braintrust, either via Next.js or Node.js.

Next.js

If you are using Next.js, you can use the Braintrust exporter with @vercel/otel for the cleanest setup:

import { registerOTel } from "@vercel/otel";
import { BraintrustExporter } from "braintrust";
 
// In your instrumentation.ts file
export function register() {
  registerOTel({
    serviceName: "my-braintrust-app",
    traceExporter: new BraintrustExporter({
      parent: "project_name:your-project-name",
      filterAISpans: true, // Only send AI-related spans
    }),
  });
}

Or set the following environment variables in your app's .env file, with your API key and project ID:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"

Traced LLM calls will appear under the Braintrust project or experiment provided in the x-bt-parent header.

When you call the AI SDK, make sure to set experimental_telemetry:

import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
 
const result = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "What is 2 + 2?",
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      query: "weather",
      location: "San Francisco",
    },
  },
});

The integration supports streaming functions like streamText. Each streamed call will produce ai.streamText spans in Braintrust.

import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";
 
export async function POST(req: Request) {
  const { prompt } = await req.json();
 
  const result = await streamText({
    model: openai("gpt-4o-mini"),
    prompt,
    experimental_telemetry: { isEnabled: true },
  });
 
  return result.toDataStreamResponse();
}

Node.js

If you are using Node.js without a framework, you must configure the NodeSDK directly. Here, it's more straightforward to use the BraintrustSpanProcessor.

First, install the necessary dependencies:

pnpm add ai @ai-sdk/openai braintrust @opentelemetry/sdk-node @opentelemetry/sdk-trace-base zod

Then, set up the OpenTelemetry SDK:

import { NodeSDK } from "@opentelemetry/sdk-node";
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
import { BraintrustSpanProcessor } from "braintrust";
 
const sdk = new NodeSDK({
  spanProcessors: [
    new BraintrustSpanProcessor({
      parent: "project_name:your-project-name",
      filterAISpans: true,
    }),
  ],
});
 
sdk.start();
 
async function main() {
  const result = await generateText({
    model: openai("gpt-4o-mini"),
    messages: [
      {
        role: "user",
        content: "What are my orders and where are they? My user ID is 123",
      },
    ],
    tools: {
      listOrders: tool({
        description: "list all orders",
        parameters: z.object({ userId: z.string() }),
        execute: async ({ userId }) =>
          `User ${userId} has the following orders: 1`,
      }),
      viewTrackingInformation: tool({
        description: "view tracking information for a specific order",
        parameters: z.object({ orderId: z.string() }),
        execute: async ({ orderId }) =>
          `Here is the tracking information for ${orderId}`,
      }),
    },
    experimental_telemetry: {
      isEnabled: true,
      functionId: "my-awesome-function",
      metadata: {
        something: "custom",
        someOtherThing: "other-value",
      },
    },
    maxSteps: 10,
  });
 
  console.log(result.text);
  await sdk.shutdown();
}
 
main().catch(console.error);

Traceloop

To export OTel traces from Traceloop OpenLLMetry to Braintrust, set the following environment variables:

TRACELOOP_BASE_URL=https://api.braintrust.dev/otel
TRACELOOP_HEADERS="Authorization=Bearer%20<Your API Key>, x-bt-parent=project_id:<Your Project ID>"

When setting the bearer token, be sure to encode the space between "Bearer" and your API key using %20.

Traces will then appear under the Braintrust project or experiment provided in the x-bt-parent header.

from openai import OpenAI
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow
 
Traceloop.init(disable_batch=True)
client = OpenAI()
 
 
@workflow(name="story")
def run_story(client):
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me a short story about LLM evals."}],
    )
    return completion.choices[0].message.content
 
 
print(run_story(client))

LlamaIndex

To trace LLM calls with LlamaIndex, you can use the OpenInference LlamaIndexInstrumentor to send OTel traces directly to Braintrust. Configure your environment and set the OTel endpoint:

import os
 
import llama_index.core
 
BRAINTRUST_API_URL = os.environ.get("BRAINTRUST_API_URL", "https://api.braintrust.dev")
BRAINTRUST_API_KEY = os.environ.get("BRAINTRUST_API_KEY", "<Your API Key>")
PROJECT_ID = "<Your Project ID>"
 
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    f"Authorization=Bearer {BRAINTRUST_API_KEY}" + f"x-bt-parent=project_id:{PROJECT_ID}"
)
llama_index.core.set_global_handler("arize_phoenix", endpoint=f"{BRAINTRUST_API_URL}/otel/v1/traces")

Now traced LLM calls will appear under the provided Braintrust project or experiment.

from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI
 
messages = [
    ChatMessage(role="system", content="Speak like a pirate. ARRR!"),
    ChatMessage(role="user", content="What do llamas sound like?"),
]
result = OpenAI().chat(messages)
print(result)

Mastra

To use Braintrust with Mastra, configure these environment variables:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_name:<Your Project Name>"

When you create your agent, enable telemetry and export the data using OpenTelemetry:

import { Mastra } from "@mastra/core";
 
export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "your-service-name",
    enabled: true,
    export: {
      type: "otlp",
    },
  },
});

This will automatically send interactions, tool calls, and performance metrics to Braintrust for monitoring and evaluation.

Manual tracing

If you want to log LLM calls directly to the OTel endpoint, you can set up a custom OpenTelemetry tracer and add the appropriate attributes to your spans. This gives you fine-grained control over what data gets logged.

Braintrust implements the OpenTelemetry GenAI semantic conventions. When you send traces with these attributes, they are automatically mapped to Braintrust fields.

  • gen_ai.prompt → input: User message (string). If you have an array of messages, use gen_ai.prompt_json (see below) or set flattened attributes like gen_ai.prompt.0.role or gen_ai.prompt.0.content.
  • gen_ai.prompt_json → input: A JSON-serialized string containing an array of OpenAI messages.
  • gen_ai.completion → output: Assistant message (string). If you have an array of messages, use gen_ai.completion_json (see below) or set flattened attributes like gen_ai.completion.0.role or gen_ai.completion.0.content.
  • gen_ai.completion_json → output: A JSON-serialized string containing an array of OpenAI messages.
  • gen_ai.request.model → metadata.model: The model name (e.g. "gpt-4o").
  • gen_ai.request.max_tokens → metadata.max_tokens: The max_tokens request parameter.
  • gen_ai.request.temperature → metadata.temperature: The temperature request parameter.
  • gen_ai.request.top_p → metadata.top_p: The top_p request parameter.
  • gen_ai.usage.prompt_tokens → metrics.prompt_tokens: Input token count.
  • gen_ai.usage.completion_tokens → metrics.completion_tokens: Output token count.

You can also use the braintrust namespace to set fields in Braintrust directly:

  • braintrust.input → input: Typically a single user message (string). If you have an array of messages, use braintrust.input_json instead (see below) or set flattened attributes like braintrust.input.0.role or braintrust.input.0.content.
  • braintrust.input_json → input: A JSON-serialized string containing an array of OpenAI messages.
  • braintrust.output → output: Typically a single assistant message (string). If you have an array of messages, use braintrust.output_json instead (see below) or set flattened attributes like braintrust.output.0.role or braintrust.output.0.content.
  • braintrust.output_json → output: A JSON-serialized string containing an array of OpenAI messages.
  • braintrust.metadata → metadata: A JSON-serialized dictionary with string keys. Alternatively, you can use flattened attribute names, like braintrust.metadata.model or braintrust.metadata.temperature.
  • braintrust.metrics → metrics: A JSON-serialized dictionary with string keys. Alternatively, you can use flattened attribute names, like braintrust.metrics.prompt_tokens or braintrust.metrics.completion_tokens.
  • braintrust.tags → tags: An array of strings that can be set on the root span.

Attributes in the braintrust namespace are removed from the span after being translated into Braintrust's native fields.

Here's an example of how to set up manual tracing:

import json
import os
 
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
 
BRAINTRUST_API_URL = os.environ.get("BRAINTRUST_API_URL", "https://api.braintrust.dev")
BRAINTRUST_API_KEY = os.environ.get("BRAINTRUST_API_KEY", "<Your API Key>")
PROJECT_ID = "<Your Project ID>"
 
provider = TracerProvider()
processor = BatchSpanProcessor(
    OTLPSpanExporter(
        endpoint=f"{BRAINTRUST_API_URL}/otel/v1/traces",
        headers={"Authorization": f"Bearer {BRAINTRUST_API_KEY}", "x-bt-parent": f"project_id:{PROJECT_ID}"},
    )
)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)
 
# Export a span with flattened attribute names.
with tracer.start_as_current_span("GenAI Attributes") as span:
    span.set_attribute("gen_ai.prompt.0.role", "system")
    span.set_attribute("gen_ai.prompt.0.content", "You are a helpful assistant.")
    span.set_attribute("gen_ai.prompt.1.role", "user")
    span.set_attribute("gen_ai.prompt.1.content", "What is the capital of France?")
 
    span.set_attribute("gen_ai.completion.0.role", "assistant")
    span.set_attribute("gen_ai.completion.0.content", "The capital of France is Paris.")
 
    span.set_attribute("gen_ai.request.model", "gpt-4o-mini")
    span.set_attribute("gen_ai.request.temperature", 0.5)
    span.set_attribute("gen_ai.usage.prompt_tokens", 10)
    span.set_attribute("gen_ai.usage.completion_tokens", 30)
 
# Export a span using JSON-serialized attributes.
with tracer.start_as_current_span("GenAI JSON-Serialized Attributes") as span:
    span.set_attribute(
        "gen_ai.prompt_json",
        json.dumps(
            [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "What is the capital of Italy?"},
            ]
        ),
    )
    span.set_attribute(
        "gen_ai.completion_json",
        json.dumps(
            [
                {"role": "assistant", "content": "The capital of Italy is Rome."},
            ]
        ),
    )
 
# Export a span using the `braintrust` namespace.
with tracer.start_as_current_span("Braintrust Attributes") as span:
    span.set_attribute("braintrust.input.0.role", "system")
    span.set_attribute("braintrust.input.0.content", "You are a helpful assistant.")
    span.set_attribute("braintrust.input.1.role", "user")
    span.set_attribute("braintrust.input.1.content", "What is the capital of Libya?")
 
    span.set_attribute("braintrust.output.0.role", "assistant")
    span.set_attribute("braintrust.output.0.content", "The capital of Brazil is Brasilia.")
 
    span.set_attribute("braintrust.metadata.model", "gpt-4o-mini")
    span.set_attribute("braintrust.metadata.country", "Brazil")
    span.set_attribute("braintrust.metrics.prompt_tokens", 10)
    span.set_attribute("braintrust.metrics.completion_tokens", 20)
 
# Export a span using JSON-serialized `braintrust` attributes.
with tracer.start_as_current_span("Braintrust JSON-Serialized Attributes") as span:
    span.set_attribute(
        "braintrust.input_json",
        json.dumps(
            [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "What is the capital of Argentina?"},
            ]
        ),
    )
    span.set_attribute(
        "braintrust.output_json",
        json.dumps(
            [
                {"role": "assistant", "content": "The capital of Argentina is Buenos Aires."},
            ]
        ),
    )
    span.set_attribute(
        "braintrust.metadata",
        json.dumps({"model": "gpt-4o-mini", "country": "Argentina"}),
    )
    span.set_attribute(
        "braintrust.metrics",
        json.dumps({"prompt_tokens": 15, "completion_tokens": 45}),
    )
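
The examples above don't cover braintrust.tags. Since tags are set on the root span, here's a minimal sketch that reuses the tracer configured above (the tag values are illustrative):

# Export a root span with tags.
with tracer.start_as_current_span("Braintrust Tags") as span:
    span.set_attribute("braintrust.tags", ["production", "smoke-test"])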

Troubleshooting

Why are my traces not showing up?

There are a few common reasons why your traces may not show up in Braintrust:

  • Braintrust's logs table only shows traces that have a root span (i.e., a span whose span_parents is empty). If you only send child spans, they will not appear in the logs table. A common cause is sending only spans that carry a traceparent header. To fix this, make sure to send a root span for every trace you want to appear in the UI (see the sketch after this list).
  • If you are self-hosting Braintrust, do not use https://api.braintrust.dev as the OTLP endpoint; use your custom API URL instead, for example https://dfwhllz61x709.cloudfront.net/otel.
  • You must explicitly set up OpenTelemetry in your application. If you're using Next.js, follow the Next.js OpenTelemetry guide. If you are using Node.js without a framework, follow the Node.js example above to set up a basic exporter.
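
For the first point, a span started while no parent span is active becomes a root span. A minimal Python sketch (the span names are illustrative):

from opentelemetry import trace
 
tracer = trace.get_tracer(__name__)
 
# With no active parent, this span becomes the root span, so the
# trace will appear in Braintrust's logs table.
with tracer.start_as_current_span("handle-request"):
    # Child spans nest under the root within the same trace.
    with tracer.start_as_current_span("llm-call"):
        pass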
