
Summary

Goal: Use the OpenAI Responses API with Braintrust tracing, gateway routing, and prompt templates. Features: wrapOpenAI() instrumentation, Braintrust gateway, prompt.build() workaround.

Configuration steps

Step 1: Trace Responses API calls with wrapOpenAI()

wrapOpenAI() automatically instruments responses.create(), .stream(), .parse(), and .compact(). No additional configuration is needed.
import { initLogger, wrapOpenAI } from "braintrust";
import OpenAI from "openai";

initLogger({ projectName: "my-project", apiKey: process.env.BRAINTRUST_API_KEY });
const client = wrapOpenAI(new OpenAI());

const response = await client.responses.create({
  model: "gpt-4o-mini",
  input: "What is the capital of France?",
});
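Streaming calls go through the same instrumentation. A minimal sketch of a traced .stream() call, assuming the openai-node SDK's streaming event names (the prompt text is illustrative):

```typescript
import { initLogger, wrapOpenAI } from "braintrust";
import OpenAI from "openai";

initLogger({ projectName: "my-project", apiKey: process.env.BRAINTRUST_API_KEY });
const client = wrapOpenAI(new OpenAI());

// The wrapped client logs the streamed call to a span, same as create().
const stream = client.responses.stream({
  model: "gpt-4o-mini",
  input: "Write a haiku about tracing.",
});

for await (const event of stream) {
  if (event.type === "response.output_text.delta") {
    process.stdout.write(event.delta);
  }
}

const final = await stream.finalResponse();
```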

Step 2: Route Responses API calls through the Braintrust gateway

Point your OpenAI client's baseURL at the Braintrust gateway; both Responses API and Chat Completions calls are routed correctly.
const client = wrapOpenAI(new OpenAI({
  baseURL: "https://api.braintrust.dev/v1/proxy",
  apiKey: process.env.BRAINTRUST_API_KEY,
}));
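Because both APIs are routed, the same gateway client can serve either call style. A short sketch (model name and prompts are illustrative):

```typescript
import { wrapOpenAI } from "braintrust";
import OpenAI from "openai";

const client = wrapOpenAI(new OpenAI({
  baseURL: "https://api.braintrust.dev/v1/proxy",
  apiKey: process.env.BRAINTRUST_API_KEY,
}));

// Responses API call, routed through the gateway.
const viaResponses = await client.responses.create({
  model: "gpt-4o-mini",
  input: "Hello",
});

// Chat Completions call through the same client and base URL.
const viaChat = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});
```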

Step 3: Use prompt.build() output with responses.create()

At the time of writing, prompt.build() supports only the 'chat' and 'completion' flavors; there is no 'responses' flavor. As a workaround, use the default chat flavor and pass compiled.messages as input: the role-based message structure is compatible.
const prompt = await loadPrompt({ projectId, slug });
const compiled = prompt.build(variables); // { messages, model, ... }

const response = await client.responses.create({
  model: compiled.model,
  input: compiled.messages, // same format, different key
});
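If the Responses input schema rejects chat-only message fields (for example, name), a small normalizer can strip each message down to the shared role/content shape before passing it as input. The helper below is ours, not part of the Braintrust or OpenAI SDKs:

```typescript
// Hypothetical helper: keep only the fields chat messages share with
// Responses input items (role and content), dropping chat-only extras.
type ChatMessage = { role: string; content: string; name?: string };
type ResponsesInputItem = {
  role: "system" | "developer" | "user" | "assistant";
  content: string;
};

function toResponsesInput(messages: ChatMessage[]): ResponsesInputItem[] {
  return messages.map(({ role, content }) => ({
    role: role as ResponsesInputItem["role"],
    content,
  }));
}
```

With this in place, toResponsesInput(compiled.messages) can be passed as input when the raw compiled messages are rejected.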