OpenAI provides access to cutting-edge language models such as GPT-5. Braintrust integrates seamlessly with OpenAI through direct API access, the wrapOpenAI wrapper functions for automatic tracing, and proxy support.
Setup
To use OpenAI with Braintrust, you’ll need an OpenAI API key.
- Visit OpenAI’s API platform and create a new API key
- Add the OpenAI API key to your organization’s AI providers
- Set the OpenAI API key and your Braintrust API key as environment variables
OPENAI_API_KEY=<your-openai-api-key>
BRAINTRUST_API_KEY=<your-braintrust-api-key>
# If you are self-hosting Braintrust, set the URL of your hosted dataplane
# BRAINTRUST_API_URL=<your-braintrust-api-url>
API keys are encrypted using 256-bit AES-GCM and are never stored or logged in plaintext by Braintrust.
Install the braintrust and openai packages.
pnpm add braintrust openai
Trace with OpenAI
Trace your OpenAI LLM calls for observability and monitoring.
Using the OpenAI Agents SDK? See the OpenAI Agents SDK framework docs.
Trace automatically with wrapOpenAI
Braintrust provides wrapOpenAI (TypeScript) and wrap_openai (Python) functions that automatically log OpenAI API calls. To use them, initialize the logger and pass the OpenAI client to the wrapOpenAI function.
wrapOpenAI is a convenience function that wraps the OpenAI client with the Braintrust logger. For more control, learn how to customize traces.
import { initLogger, wrapOpenAI } from "braintrust";
import OpenAI from "openai";
// Initialize the Braintrust logger
const logger = initLogger({
  projectName: "My Project", // Your project name
  apiKey: process.env.BRAINTRUST_API_KEY,
});

// Wrap the OpenAI client with wrapOpenAI
const client = wrapOpenAI(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  }),
);
// All API calls are automatically logged
const result = await client.chat.completions.create({
  model: "gpt-5",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is machine learning?" },
  ],
});
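If you run this in a short-lived script or serverless function, flush the logger before the process exits so buffered spans are delivered (the SDK batches logs in the background):
// Ensure buffered logs reach Braintrust before exit
await logger.flush();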
Stream OpenAI responses
wrapOpenAI (TypeScript) and wrap_openai (Python) automatically log token metrics such as prompt_tokens, completion_tokens, and total tokens for streaming LLM calls when the API returns them. Set include_usage to true in the stream_options parameter so OpenAI includes usage in the final chunk of the stream.
model: "gpt-5-mini",
messages: [{ role: "user", content: "Count to 10" }],
stream: true,
stream_options: {
include_usage: true, // Required for token metrics
},
});
// Content arrives as deltas; with include_usage, the final chunk carries the usage payload
for await (const chunk of result) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
Evaluate with OpenAI
Evaluations help you distill the non-deterministic outputs of OpenAI models into an effective feedback loop that enables you to ship more reliable, higher quality products. Braintrust Eval is a simple function composed of a dataset of user inputs, a task, and a set of scorers. To learn more about evaluations, see the Experiments guide.
Basic OpenAI eval setup
Evaluate the outputs of OpenAI models with Braintrust.
import { Eval } from "braintrust";
import { OpenAI } from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

Eval("OpenAI Evaluation", {
  // An array of user inputs and expected outputs
  data: () => [
    { input: "What is 2+2?", expected: "4" },
    { input: "What is the capital of France?", expected: "Paris" },
  ],
  task: async (input) => {
    // Your OpenAI LLM call
    const response = await client.chat.completions.create({
      model: "gpt-5-mini",
      messages: [{ role: "user", content: input }],
    });
    return response.choices[0].message.content;
  },
  scores: [
    // A simple scorer: 1 if the output matches the expected output, 0 otherwise
    (args) => ({
      name: "accuracy",
      score: args.output === args.expected ? 1 : 0,
    }),
  ],
});
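If you wrap the client with wrapOpenAI (as in the tracing section above), each task's LLM call is also captured as a span inside the experiment:
import { wrapOpenAI } from "braintrust";

// Wrapped calls made inside the task are traced as spans in the experiment
const client = wrapOpenAI(
  new OpenAI({ apiKey: process.env.OPENAI_API_KEY }),
);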
Use OpenAI as an LLM judge
You can use OpenAI models to score the outputs of other AI systems. This example uses the LLMClassifierFromSpec scorer to score the relevance of the outputs of an AI system.
Install the autoevals package to use the LLMClassifierFromSpec scorer.
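pnpm add autoevals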
Create a relevance scorer with LLMClassifierFromSpec. The spec defines a grading prompt (the one below is illustrative), the score for each choice, and the judge model. You can then include relevanceScorer as a scorer in your Eval function (see above).
import { LLMClassifierFromSpec } from "autoevals";

const relevanceScorer = LLMClassifierFromSpec("Relevance", {
  // Illustrative grading prompt; {{input}} and {{output}} are filled in from each eval case
  prompt:
    "Is the following response relevant to the question?\n\n" +
    "Question: {{input}}\n" +
    "Response: {{output}}\n\n" +
    "Answer by selecting Relevant or Irrelevant.",
  choice_scores: { Relevant: 1, Irrelevant: 0 },
  model: "gpt-5-mini",
  use_cot: true,
});
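For example, pass the judge alongside (or instead of) heuristic scorers in the Eval from the basic setup above (a minimal sketch; the task is a placeholder):
Eval("OpenAI Evaluation with LLM judge", {
  data: () => [{ input: "What is the capital of France?", expected: "Paris" }],
  // Placeholder task; substitute your OpenAI call from the basic setup above
  task: async (input) => `You asked: ${input}`,
  scores: [relevanceScorer],
});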
Additional features
Structured outputs
OpenAI’s structured outputs are supported with the wrapper functions. With a Zod schema, use OpenAI’s zodResponseFormat helper to build the response_format.
import { z } from "zod";
import { zodResponseFormat } from "openai/helpers/zod";

// Define a Zod schema for the response
const ResponseSchema = z.object({
  name: z.string(),
  age: z.number(),
});

const completion = await client.beta.chat.completions.parse({
  model: "gpt-5-mini",
  messages: [
    { role: "system", content: "Extract the person's name and age." },
    { role: "user", content: "My name is John and I'm 30 years old." },
  ],
  // Convert the Zod schema into OpenAI's JSON schema response format
  response_format: zodResponseFormat(ResponseSchema, "person"),
});

// The parsed, schema-validated object
console.log(completion.choices[0].message.parsed);
Function calling
Braintrust supports OpenAI function calling for building AI agents with tools.
const tools = [
  {
    type: "function" as const,
    function: {
      name: "get_weather",
      description: "Get current weather for a location",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string" },
        },
        required: ["location"],
      },
    },
  },
];

const response = await client.chat.completions.create({
  model: "gpt-5-mini",
  messages: [{ role: "user", content: "What's the weather in San Francisco?" }],
  tools,
});
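The model surfaces requested tool invocations on message.tool_calls; your code runs the tool and returns the result in a follow-up request. A minimal sketch, assuming a hypothetical getWeather helper:
const message = response.choices[0].message;

for (const toolCall of message.tool_calls ?? []) {
  const args = JSON.parse(toolCall.function.arguments);
  const weather = await getWeather(args.location); // hypothetical helper

  // Send the tool result back so the model can produce a final answer
  const followUp = await client.chat.completions.create({
    model: "gpt-5-mini",
    messages: [
      { role: "user", content: "What's the weather in San Francisco?" },
      message,
      {
        role: "tool",
        tool_call_id: toolCall.id,
        content: JSON.stringify(weather),
      },
    ],
  });
  console.log(followUp.choices[0].message.content);
}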
Multimodal content, attachments, errors, and masking sensitive data
To learn more about these topics, check out the customize traces guide.
Use OpenAI with Braintrust AI proxy
You can also access OpenAI models through the Braintrust AI Proxy, which provides a unified interface for multiple providers.
const client = new OpenAI({
  baseURL: "https://api.braintrust.dev/v1/proxy",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-5-mini",
  messages: [{ role: "user", content: "What is a proxy?" }],
  seed: 1, // A seed activates the proxy's cache
});
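Because the proxy routes on the model name, the same client can call models from other providers configured in your Braintrust organization. For example, assuming you've added an Anthropic API key (the model name below is illustrative):
const claudeResponse = await client.chat.completions.create({
  model: "claude-sonnet-4-20250514", // Illustrative; use a model your org has configured
  messages: [{ role: "user", content: "What is a proxy?" }],
});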
Cookbooks