Integrating with Braintrust

Braintrust traces integrate with popular platforms and frameworks to capture rich context and power intelligent workflows. This guide walks through the supported integrations and how to configure each one.

OpenTelemetry (OTel)

To set up Braintrust as an OpenTelemetry backend, you'll need to route the traces to Braintrust's OpenTelemetry endpoint, set your API key, and specify a parent project or experiment.

Braintrust supports configuring OTel with our SDK, as well as libraries like OpenLLMetry and the Vercel AI SDK. You can also use OTel's built-in exporters to send traces to Braintrust if you don't want to install additional libraries or write code. OpenLLMetry supports a range of languages including Python, TypeScript, Java, and Go, so you can start logging to Braintrust from many different environments.

Python SDK configuration

Install the Braintrust Python SDK with OpenTelemetry support:

pip install braintrust[otel]
 
export BRAINTRUST_API_KEY=your-api-key
export BRAINTRUST_PARENT=project_name:my-otel-project
 
# If you are self-hosting Braintrust, set the URL of your hosted dataplane. You can omit this otherwise.
export BRAINTRUST_API_URL=https://api.braintrust.dev

For Python applications, use the BraintrustSpanProcessor for simplified configuration:

import os
 
from braintrust.otel import BraintrustSpanProcessor
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
 
# Configure the global OTel tracer provider
provider = TracerProvider()
trace.set_tracer_provider(provider)
 
# Send spans to Braintrust.
provider.add_span_processor(BraintrustSpanProcessor())

For more advanced configuration, you can pass the following arguments to BraintrustSpanProcessor (see the sketch after this list):

  • api_key: The API key to use for Braintrust. Defaults to the BRAINTRUST_API_KEY environment variable.
  • api_url: The URL of the Braintrust API. Defaults to the BRAINTRUST_API_URL environment variable or https://api.braintrust.dev if not set.
  • parent: The parent project or experiment to use for Braintrust. Defaults to the BRAINTRUST_PARENT environment variable.
  • filter_ai_spans: Defaults to False. If True, only AI-related spans will be sent to Braintrust.
  • custom_filter: A function that gives you fine-grained control over which spans are sent to Braintrust. It takes a span and returns True to send the span, False to drop it, or None to leave the sampling decision unchanged.
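
As a minimal sketch, here's how these arguments fit together. The drop_health_checks predicate is hypothetical; everything else mirrors the setup above:

import os

from braintrust.otel import BraintrustSpanProcessor
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

def drop_health_checks(span):
    # Return False to drop the span, True to always send it, or None to
    # leave the sampling decision to the other filters.
    if span.name.startswith("health"):
        return False
    return None

provider = TracerProvider()
trace.set_tracer_provider(provider)
provider.add_span_processor(
    BraintrustSpanProcessor(
        api_key=os.environ["BRAINTRUST_API_KEY"],
        parent="project_name:my-otel-project",
        filter_ai_spans=True,
        custom_filter=drop_health_checks,
    )
)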

TypeScript SDK configuration

Install the Braintrust TypeScript SDK with the following OpenTelemetry dependencies:

npm install braintrust @opentelemetry/api @opentelemetry/sdk-node @opentelemetry/sdk-trace-base

For TypeScript/JavaScript applications, use the BraintrustSpanProcessor with NodeSDK:

import { NodeSDK } from "@opentelemetry/sdk-node";
import { BraintrustSpanProcessor } from "braintrust";
 
const sdk = new NodeSDK({
  serviceName: "my-service",
  spanProcessor: new BraintrustSpanProcessor({
    parent: "project_name:your-project-name",
  }),
});
 
sdk.start();

Or configure it manually with a custom tracer provider:

import { BasicTracerProvider } from "@opentelemetry/sdk-trace-base";
import { trace } from "@opentelemetry/api";
import { BraintrustSpanProcessor } from "braintrust";
 
trace.setGlobalTracerProvider(
  new BasicTracerProvider({
    spanProcessors: [
      new BraintrustSpanProcessor({
        parent: "project_name:your-project-name",
      }),
    ],
  }),
);

For more advanced configuration, you can pass in the following arguments to BraintrustSpanProcessor:

  • apiKey: The API key to use for Braintrust. Defaults to the BRAINTRUST_API_KEY environment variable.
  • apiUrl: The URL of the Braintrust API. Defaults to the BRAINTRUST_API_URL environment variable or https://api.braintrust.dev if not set.
  • parent: The parent project or experiment to use for Braintrust. Defaults to the BRAINTRUST_PARENT environment variable.
  • filterAISpans: Defaults to false. If true, only AI-related spans will be sent to Braintrust.
  • customFilter: A function that gives you fine-grained control over which spans are sent to Braintrust. It takes a span and returns true to send the span, false to drop it, or null to leave the sampling decision unchanged.

OTLP configuration

If you are using a different language or want to use pure OTel code, you can set up an OpenTelemetry Protocol (OTLP) exporter to send traces to Braintrust.

Once you set up an OTLP exporter to send traces to Braintrust, we automatically convert LLM calls into Braintrust LLM spans, which can be saved as prompts and evaluated in the playground.

For JavaScript/TypeScript applications, you can use the BraintrustExporter directly:

import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { BraintrustExporter } from "braintrust";
 
const exporter = new BraintrustExporter({
  apiKey: "your-api-key",
  parent: "project_name:your-project",
  filterAISpans: true,
});
 
const processor = new BatchSpanProcessor(exporter);

For collectors that use the OpenTelemetry SDK to export traces, set the following environment variables:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"

The trace endpoint URL is https://api.braintrust.dev/otel/v1/traces. If your exporter uses signal-specific environment variables, you'll need to set the full path: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://api.braintrust.dev/otel/v1/traces

If you're self-hosting Braintrust, substitute your stack's Universal API URL. For example: OTEL_EXPORTER_OTLP_ENDPOINT=https://dfwhllz61x709.cloudfront.net/otel

The x-bt-parent header sets the trace's parent project or experiment. You can use a prefix like project_id:, project_name:, or experiment_id: here, or pass in a span slug (span.export()) to nest the trace under a span within the parent object.
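
A minimal sketch of nesting a trace under a span, assuming you create the parent span with the Braintrust Python SDK (the project name is a placeholder):

import os

from braintrust import init_logger

logger = init_logger(project="my-otel-project")
span = logger.start_span(name="request-handler")

# span.export() returns the slug; set the header before the OTLP exporter
# is constructed so the trace nests under this span.
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    f"Authorization=Bearer {os.environ['BRAINTRUST_API_KEY']}, " f"x-bt-parent={span.export()}"
)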

To find your project ID, navigate to your project's configuration page and find the Copy Project ID button at the bottom of the page.

Vercel AI SDK

The Vercel AI SDK natively supports OpenTelemetry and works out of the box with Braintrust, either via Next.js or Node.js.

Next.js

If you are using Next.js, you can use the Braintrust exporter with @vercel/otel for the cleanest setup:

import { registerOTel } from "@vercel/otel";
import { BraintrustExporter } from "braintrust";
 
// In your instrumentation.ts file
export function register() {
  registerOTel({
    serviceName: "my-braintrust-app",
    traceExporter: new BraintrustExporter({
      parent: "project_name:your-project-name",
      filterAISpans: true, // Only send AI-related spans
    }),
  });
}

Or set the following environment variables in your app's .env file, with your API key and project ID:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"

Traced LLM calls will appear under the Braintrust project or experiment provided in the x-bt-parent header.

When you call the AI SDK, make sure to set experimental_telemetry:

const result = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "What is 2 + 2?",
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      query: "weather",
      location: "San Francisco",
    },
  },
});

The integration supports streaming functions like streamText. Each streamed call will produce ai.streamText spans in Braintrust.

import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";
 
export async function POST(req: Request) {
  const { prompt } = await req.json();
 
  const result = await streamText({
    model: openai("gpt-4o-mini"),
    prompt,
    experimental_telemetry: { isEnabled: true },
  });
 
  return result.toDataStreamResponse();
}

Node.js

If you are using Node.js without a framework, you must configure the NodeSDK directly. Here, it's more straightforward to use the BraintrustSpanProcessor.

First, install the necessary dependencies:

npm install ai @ai-sdk/openai braintrust @opentelemetry/sdk-node @opentelemetry/sdk-trace-base zod

Then, set up the OpenTelemetry SDK:

import { NodeSDK } from "@opentelemetry/sdk-node";
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
import { BraintrustSpanProcessor } from "braintrust";
 
const sdk = new NodeSDK({
  spanProcessors: [
    new BraintrustSpanProcessor({
      parent: "project_name:your-project-name",
      filterAISpans: true,
    }),
  ],
});
 
sdk.start();
 
async function main() {
  const result = await generateText({
    model: openai("gpt-4o-mini"),
    messages: [
      {
        role: "user",
        content: "What are my orders and where are they? My user ID is 123",
      },
    ],
    tools: {
      listOrders: tool({
        description: "list all orders",
        parameters: z.object({ userId: z.string() }),
        execute: async ({ userId }) =>
          `User ${userId} has the following orders: 1`,
      }),
      viewTrackingInformation: tool({
        description: "view tracking information for a specific order",
        parameters: z.object({ orderId: z.string() }),
        execute: async ({ orderId }) =>
          `Here is the tracking information for ${orderId}`,
      }),
    },
    experimental_telemetry: {
      isEnabled: true,
      functionId: "my-awesome-function",
      metadata: {
        something: "custom",
        someOtherThing: "other-value",
      },
    },
    maxSteps: 10,
  });
 
  console.log(result.text);
 
  await sdk.shutdown();
}
 
main().catch(console.error);

Traceloop

To export OTel traces from Traceloop OpenLLMetry to Braintrust, set the following environment variables:

TRACELOOP_BASE_URL=https://api.braintrust.dev/otel
TRACELOOP_HEADERS="Authorization=Bearer%20<Your API Key>, x-bt-parent=project_id:<Your Project ID>"

When setting the bearer token, be sure to encode the space between "Bearer" and your API key using %20.

Traces will then appear under the Braintrust project or experiment provided in the x-bt-parent header.

from openai import OpenAI
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow
 
Traceloop.init(disable_batch=True)
client = OpenAI()
 
 
@workflow(name="story")
def run_story_stream(client):
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me a short story about LLM evals."}],
    )
    return completion.choices[0].message.content
 
 
print(run_story_stream(client))

LlamaIndex

To trace LLM calls with LlamaIndex, you can use the OpenInference LlamaIndexInstrumentor to send OTel traces directly to Braintrust. Configure your environment and set the OTel endpoint:

import os
 
import llama_index.core
 
BRAINTRUST_API_URL = os.environ.get("BRAINTRUST_API_URL", "https://api.braintrust.dev")
BRAINTRUST_API_KEY = os.environ.get("BRAINTRUST_API_KEY", "<Your API Key>")
PROJECT_ID = "<Your Project ID>"
 
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
    f"Authorization=Bearer {BRAINTRUST_API_KEY}" + f"x-bt-parent=project_id:{PROJECT_ID}"
)
llama_index.core.set_global_handler("arize_phoenix", endpoint=f"{BRAINTRUST_API_URL}/otel/v1/traces")

Now traced LLM calls will appear under the provided Braintrust project or experiment.

from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI
 
messages = [
    ChatMessage(role="system", content="Speak like a pirate. ARRR!"),
    ChatMessage(role="user", content="What do llamas sound like?"),
]
result = OpenAI().chat(messages)
print(result)

Mastra

To use Braintrust with Mastra, configure these environment variables:

OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_name:<Your Project Name>"

When you create your agent, enable telemetry and export the data using OpenTelemetry:

import { Mastra } from "@mastra/core";
 
export const mastra = new Mastra({
  // ... other config
  telemetry: {
    serviceName: "your-service-name",
    enabled: true,
    export: {
      type: "otlp",
    },
  },
});

This will automatically send interactions, tool calls, and performance metrics to Braintrust for monitoring and evaluation.

Mastra logs

Manual tracing

If you want to log LLM calls directly to the OTel endpoint, you can set up a custom OpenTelemetry tracer and add the appropriate attributes to your spans. This gives you fine-grained control over what data gets logged.

Braintrust implements the OpenTelemetry GenAI semantic conventions. When you send traces with these attributes, they are automatically mapped to Braintrust fields.

| Attribute | Braintrust Field | Description |
| --- | --- | --- |
| gen_ai.prompt | input | User message (string). If you have an array of messages, you'll need to use gen_ai.prompt_json (see below) or set flattened attributes like gen_ai.prompt.0.role or gen_ai.prompt.0.content. |
| gen_ai.prompt_json | input | A JSON-serialized string containing an array of OpenAI messages. |
| gen_ai.completion | output | Assistant message (string). If you have an array of messages, you'll need to use gen_ai.completion_json (see below) or set flattened attributes like gen_ai.completion.0.role or gen_ai.completion.0.content. |
| gen_ai.completion_json | output | A JSON-serialized string containing an array of OpenAI messages. |
| gen_ai.request.model | metadata.model | The model name (e.g. "gpt-4o") |
| gen_ai.request.max_tokens | metadata.max_tokens | max_tokens |
| gen_ai.request.temperature | metadata.temperature | temperature |
| gen_ai.request.top_p | metadata.top_p | top_p |
| gen_ai.usage.prompt_tokens | metrics.prompt_tokens | Input tokens |
| gen_ai.usage.completion_tokens | metrics.completion_tokens | Output tokens |

You can also use the braintrust namespace to set fields in Braintrust directly:

| Attribute | Braintrust Field | Notes |
| --- | --- | --- |
| braintrust.input | input | Typically a single user message (string). If you have an array of messages, use braintrust.input_json instead (see below) or set flattened attributes like braintrust.input.0.role or braintrust.input.0.content. |
| braintrust.input_json | input | A JSON-serialized string containing an array of OpenAI messages. |
| braintrust.output | output | Typically a single assistant message (string). If you have an array of messages, use braintrust.output_json instead (see below) or set flattened attributes like braintrust.output.0.role or braintrust.output.0.content. |
| braintrust.output_json | output | A JSON-serialized string containing an array of OpenAI messages. |
| braintrust.metadata | metadata | A JSON-serialized dictionary with string keys. Alternatively, you can use flattened attribute names, like braintrust.metadata.model or braintrust.metadata.temperature. |
| braintrust.metrics | metrics | A JSON-serialized dictionary with string keys. Alternatively, you can use flattened attribute names, like braintrust.metrics.prompt_tokens or braintrust.metrics.completion_tokens. |
| braintrust.tags | tags | An array of strings that can be set on the root span. |

Here's an example of how to set up manual tracing:

import json
import os
 
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
 
BRAINTRUST_API_URL = os.environ.get("BRAINTRUST_API_URL", "https://api.braintrust.dev")
BRAINTRUST_API_KEY = os.environ.get("BRAINTRUST_API_KEY", "<Your API Key>")
PROJECT_ID = "<Your Project ID>"
 
provider = TracerProvider()
processor = BatchSpanProcessor(
    OTLPSpanExporter(
        endpoint=f"{BRAINTRUST_API_URL}/otel/v1/traces",
        headers={"Authorization": f"Bearer {BRAINTRUST_API_KEY}", "x-bt-parent": f"project_id:{PROJECT_ID}"},
    )
)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)
 
# Export a span with flattened attribute names.
with tracer.start_as_current_span("GenAI Attributes") as span:
    span.set_attribute("gen_ai.prompt.0.role", "system")
    span.set_attribute("gen_ai.prompt.0.content", "You are a helpful assistant.")
    span.set_attribute("gen_ai.prompt.1.role", "user")
    span.set_attribute("gen_ai.prompt.1.content", "What is the capital of France?")
 
    span.set_attribute("gen_ai.completion.0.role", "assistant")
    span.set_attribute("gen_ai.completion.0.content", "The capital of France is Paris.")
 
    span.set_attribute("gen_ai.request.model", "gpt-4o-mini")
    span.set_attribute("gen_ai.request.temperature", 0.5)
    span.set_attribute("gen_ai.usage.prompt_tokens", 10)
    span.set_attribute("gen_ai.usage.completion_tokens", 30)
 
# Export a span using JSON-serialized attributes.
with tracer.start_as_current_span("GenAI JSON-Serialized Attributes") as span:
    span.set_attribute(
        "gen_ai.prompt_json",
        json.dumps(
            [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "What is the capital of Italy?"},
            ]
        ),
    )
    span.set_attribute(
        "gen_ai.completion_json",
        json.dumps(
            [
                {"role": "assistant", "content": "The capital of Italy is Rome."},
            ]
        ),
    )
 
# Export a span using the `braintrust` namespace.
with tracer.start_as_current_span("Braintrust Attributes") as span:
    span.set_attribute("braintrust.input.0.role", "system")
    span.set_attribute("braintrust.input.0.content", "You are a helpful assistant.")
    span.set_attribute("braintrust.input.1.role", "user")
    span.set_attribute("braintrust.input.1.content", "What is the capital of Libya?")
 
    span.set_attribute("braintrust.output.0.role", "assistant")
    span.set_attribute("braintrust.output.0.content", "The capital of Brazil is Brasilia.")
 
    span.set_attribute("braintrust.metadata.model", "gpt-4o-mini")
    span.set_attribute("braintrust.metadata.country", "Brazil")
    span.set_attribute("braintrust.metrics.prompt_tokens", 10)
    span.set_attribute("braintrust.metrics.completion_tokens", 20)
 
# Export a span using JSON-serialized `braintrust` attributes.
with tracer.start_as_current_span("Braintrust JSON-Serialized Attributes") as span:
    span.set_attribute(
        "braintrust.input_json",
        json.dumps(
            [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "What is the capital of Argentina?"},
            ]
        ),
    )
    span.set_attribute(
        "braintrust.output_json",
        json.dumps(
            [
                {"role": "assistant", "content": "The capital of Argentina is Buenos Aires."},
            ]
        ),
    )
    span.set_attribute(
        "braintrust.metadata",
        json.dumps({"model": "gpt-4o-mini", "country": "Argentina"}),
    )
    span.set_attribute(
        "braintrust.metrics",
        json.dumps({"prompt_tokens": 15, "completion_tokens": 45}),
    )

Troubleshooting

Why are my traces not showing up?

There are a few common reasons why your traces may not show up in Braintrust:

  • Braintrust's logs table only shows traces that have a root span (i.e., a span whose span_parents is empty). If you send only child spans, they will not appear in the logs table. A common cause is sending only spans that carry a traceparent header. To fix this, send a root span for every trace you want to appear in the UI (see the sketch after this list).
  • If you are self-hosting Braintrust, make sure you do not use https://api.braintrust.dev and instead use your custom API URL as the OTLP_ENDPOINT, for example https://dfwhllz61x709.cloudfront.net/otel.
  • You must explicitly set up OpenTelemetry in your application. If you're using Next.js, then follow the Next.js OpenTelemetry guide. If you are using Node.js without a framework, then follow this example to set up a basic exporter.
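
As a minimal sketch of the first point, in Python: a span started with no active parent context becomes a root span, and spans opened inside it are children.

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# No parent context is active here, so this becomes a root span and the
# trace will show up in the logs table.
with tracer.start_as_current_span("handle-request"):
    # Opened while the root span is active, so this is a child span; sent
    # on its own, it would not appear in the logs table.
    with tracer.start_as_current_span("llm-call"):
        pass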

Vercel AI SDK v4 (native wrapper)

The Vercel AI SDK is an elegant tool for building AI-powered applications. Although you can use OpenTelemetry (see above) to trace your requests, Braintrust also natively supports tracing requests made with the Vercel AI SDK. When deciding which to use, consider the following:

  • Use OpenTelemetry (OTel) if:
    • You are already using OTel to trace your application, for example your Next.js web app.
    • You want to trace to multiple providers, not just Braintrust.
    • You want to automatically trace tool calls (although with limited control).
  • Use the native tracing wrapper if:
    • You are already using Braintrust to trace and want to weave in the Vercel AI SDK.
    • You want to avoid setting up OTel.
    • You want granular control over tracing (e.g. wrapping and tracing nested function calls from within your tools).

To use the native tracing wrapper, you can wrap a Vercel AI SDK model with wrapAISDKModel and then use it as you would any other model.

import { initLogger, wrapAISDKModel } from "braintrust";
import { openai } from "@ai-sdk/openai";
 
// `initLogger` sets up your code to log to the specified Braintrust project using your API key.
// By default, all wrapped models will log to this project. If you don't call `initLogger`, then wrapping is a no-op, and you will not see spans in the UI.
initLogger({
  projectName: "My Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const model = wrapAISDKModel(openai.chat("gpt-3.5-turbo"));
 
async function main() {
  // This will automatically log the request, response, and metrics to Braintrust
  const response = await model.doGenerate({
    inputFormat: "messages",
    mode: {
      type: "regular",
    },
    prompt: [
      {
        role: "user",
        content: [{ type: "text", text: "What is the capital of France?" }],
      },
    ],
  });
  console.log(response);
}
 
main();

Wrapping tools

Wrap tool implementations with wrapTraced. Here is a full example, modified from the Node.js Quickstart.

import { openai } from "@ai-sdk/openai";
import { CoreMessage, streamText, tool } from "ai";
import { z } from "zod";
import * as readline from "node:readline/promises";
import { initLogger, traced, wrapAISDKModel, wrapTraced } from "braintrust";
 
const logger = initLogger({
  projectName: "<YOUR PROJECT NAME>",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});
 
const messages: CoreMessage[] = [];
 
async function main() {
  while (true) {
    const userInput = await terminal.question("You: ");
 
    await traced(async (span) => {
      span.log({ input: userInput });
      messages.push({ role: "user", content: userInput });
 
      const result = streamText({
        model: wrapAISDKModel(openai("gpt-4o")),
        messages,
        tools: {
          weather: tool({
            description: "Get the weather in a location (in Celsius)",
            parameters: z.object({
              location: z
                .string()
                .describe("The location to get the weather for"),
            }),
            execute: wrapTraced(
              async function weather({ location }) {
                return {
                  location,
                  temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
                };
              },
              {
                type: "tool",
              },
            ),
          }),
          convertCelsiusToFahrenheit: tool({
            description: "Convert a temperature from Celsius to Fahrenheit",
            parameters: z.object({
              celsius: z
                .number()
                .describe("The temperature in Celsius to convert"),
            }),
            execute: wrapTraced(
              async function convertCelsiusToFahrenheit({ celsius }) {
                const fahrenheit = (celsius * 9) / 5 + 32;
                return { fahrenheit: Math.round(fahrenheit * 100) / 100 };
              },
              {
                type: "tool",
              },
            ),
          }),
        },
        maxSteps: 5,
        onStepFinish: (step) => {
          console.log(JSON.stringify(step, null, 2));
        },
      });
 
      let fullResponse = "";
      process.stdout.write("\nAssistant: ");
      for await (const delta of result.textStream) {
        fullResponse += delta;
        process.stdout.write(delta);
      }
      process.stdout.write("\n\n");
 
      messages.push({ role: "assistant", content: fullResponse });
 
      span.log({ output: fullResponse });
    });
  }
}
 
main().catch(console.error);

When you run this code, you'll see traces like this in the Braintrust UI:

AI SDK with tool calls

Vercel AI SDK v5

Just like the v4 integration, this native integration automatically traces your AI SDK model calls. For v5, Braintrust provides a middleware instead of a model wrapper, a more flexible approach that lets you compose additional middleware of your own.

import { openai } from "@ai-sdk/openai";
import { generateText, streamText, wrapLanguageModel } from "ai";
import { initLogger, BraintrustMiddleware } from "braintrust";
 
// Initialize Braintrust logging
initLogger({
  projectName: "my-ai-project",
});
 
// Wrap your model with Braintrust middleware
const model = wrapLanguageModel({
  model: openai("gpt-4"),
  middleware: BraintrustMiddleware({ debug: true }),
});
 
async function main() {
  // Generate text with automatic tracing
  const result = await generateText({
    model,
    prompt: "What is the capital of France?",
    system: "Provide a concise answer.",
    maxTokens: 100,
  });
 
  console.log(result.text);
 
  // Stream text with automatic tracing
  const stream = streamText({
    model,
    prompt: "Write a haiku about programming.",
  });
 
  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk);
  }
}
 
main().catch(console.error);

OpenAI Agents SDK

When installed with the openai-agents extra, the Braintrust SDK provides a tracing.TracingProcessor implementation that sends the traces and spans from the OpenAI Agents SDK to Braintrust.

pip install braintrust[openai-agents]

import asyncio
 
from agents import Agent, Runner, set_trace_processors
from braintrust import init_logger
from braintrust.wrappers.openai import BraintrustTracingProcessor
 
 
async def main():
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
    )
 
    result = await Runner.run(agent, "Tell me about recursion in programming.")
    print(result.final_output)
 
 
if __name__ == "__main__":
    set_trace_processors([BraintrustTracingProcessor(init_logger("openai-agent"))])
    asyncio.run(main())

The constructor of BraintrustTracingProcessor can take a braintrust.Span, braintrust.Experiment, or braintrust.Logger that serves as the root under which all spans will be logged. If None is passed, the current span, experiment, or logger will be selected exactly as in braintrust.start_span.
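
For example, to log agent traces under an experiment instead of a logger (a minimal sketch; the project name is a placeholder):

import braintrust
from agents import set_trace_processors
from braintrust.wrappers.openai import BraintrustTracingProcessor

# braintrust.init() returns an Experiment; agent spans will be logged under it.
experiment = braintrust.init(project="openai-agent")
set_trace_processors([BraintrustTracingProcessor(experiment)])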

OpenAI Agents SDK Logs

The Agents SDK can also be used to implement a task in an Eval, making it straightforward to build and evaluate agentic workflows:

from agents import Agent, Runner, set_trace_processors
from braintrust import Eval
from braintrust.wrappers.openai import BraintrustTracingProcessor
 
from autoevals import ClosedQA
 
set_trace_processors([BraintrustTracingProcessor()])
 
 
async def task(input: str):
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
    )
 
    result = await Runner.run(agent, input)
    return result.final_output
 
 
Eval(
    name="openai-agent",
    data=[
        {
            "input": "Tell me about recursion in programming.",
        }
    ],
    task=task,
    scores=[
        ClosedQA.partial(
            criteria="The response should respond to the prompt and be a haiku.",
        )
    ],
)

OpenAI Agents SDK Eval

Instructor

To use Instructor to generate structured outputs, you need to wrap the OpenAI client with both Instructor and Braintrust. It's important that you call Braintrust's wrap_openai first, because it uses low-level usage info and headers returned by the OpenAI call to log metrics to Braintrust.

import instructor
from openai import OpenAI
from pydantic import BaseModel

from braintrust import init_logger, load_prompt, wrap_openai

logger = init_logger(project="Your project name")


# Placeholder response schema; replace with your own structured output model.
class MyResponseModel(BaseModel):
    answer: str
 
 
def run_prompt(text: str):
    # Replace with your project name and slug
    prompt = load_prompt("Your project name", "Your prompt name")
 
    # wrap_openai will make sure the client tracks usage of the prompt.
    client = instructor.patch(wrap_openai(OpenAI()))
 
    # Render with parameters
    return client.chat.completions.create(**prompt.build(input=text), response_model=MyResponseModel)

LangChain

Trace your LangChain applications by configuring a global LangChain callback handler.

import {
  BraintrustCallbackHandler,
  setGlobalHandler,
} from "@braintrust/langchain-js";
import { ConsoleCallbackHandler } from "@langchain/core/tracers/console";
import { ChatOpenAI } from "@langchain/openai";
import { initLogger } from "braintrust";
 
initLogger({
  projectName: "My Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const handler = new BraintrustCallbackHandler();
setGlobalHandler(handler);
 
async function main() {
  const model = new ChatOpenAI({ modelName: "gpt-4o-mini" });
 
  await model.invoke("What is the capital of France?", {
    callbacks: [new ConsoleCallbackHandler()], // alternatively, you can manually pass the handler here instead of setting the handler globally
  });
}
 
main();

Learn more about LangChain callbacks in their documentation.

LangGraph

Trace your LangGraph applications by configuring a global LangChain callback handler.

import {
  BraintrustCallbackHandler,
  setGlobalHandler,
} from "@braintrust/langchain-js";
import { END, START, StateGraph, StateGraphArgs } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { initLogger } from "braintrust";
 
const logger = initLogger({
  projectName: "My Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const handler = new BraintrustCallbackHandler({ logger });
setGlobalHandler(handler);
 
// Define the state structure for the graph
type HelloWorldGraphState = Record<string, any>;
 
const graphStateChannels: StateGraphArgs<HelloWorldGraphState>["channels"] = {};
 
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
});
 
async function sayHello(state: HelloWorldGraphState) {
  const res = await model.invoke("Say hello");
  return { message: res.content };
}
 
function sayBye(state: HelloWorldGraphState) {
  console.log(`From the 'sayBye' node: Bye world!`);
  return {};
}
 
async function main() {
  const graphBuilder = new StateGraph({ channels: graphStateChannels })
    .addNode("sayHello", sayHello)
    .addNode("sayBye", sayBye)
    .addEdge(START, "sayHello")
    .addEdge("sayHello", "sayBye")
    .addEdge("sayBye", END);
 
  const helloWorldGraph = graphBuilder.compile();
 
  // Execute the graph - all operations will be logged to Braintrust
  await helloWorldGraph.invoke({});
}
 
main();

Learn more about LangGraph in their documentation.

LangGraph trace visualization in Braintrust showing the execution flow of nodes and their relationships
