This quickstart shows you how to automatically log an application’s LLM calls to Braintrust using native SDK wrappers for OpenAI, Anthropic, or Gemini. For guidance on working with other major AI providers and frameworks, see Integrations.

Prerequisites

Before you begin, make sure you have:
  • A Braintrust account and API key
  • An OpenAI API key

1. Install SDKs

Install the Braintrust SDK and OpenAI SDK for your programming language.
# pnpm
pnpm add braintrust openai ts-node
# npm
npm install braintrust openai ts-node
Set both your Braintrust and OpenAI API keys as environment variables:
export BRAINTRUST_API_KEY=<your-braintrust-api-key>
export OPENAI_API_KEY=<your-openai-api-key>
API keys are encrypted using 256-bit AES-GCM encryption and are not stored or logged by Braintrust.

2. Define your application

Create a customer support email classifier that categorizes emails into billing, technical, or feature_request.
classifier.ts
import OpenAI from "openai";

const client = new OpenAI();

// Test emails
const emails = [
  "I was charged twice for my monthly subscription. Can you refund the duplicate charge?",
  "The dashboard keeps showing a 500 error when I try to load my projects.",
  "Can you add support for exporting data to CSV format?",
];

async function classifyEmail(email: string) {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "user",
        content: `Classify this customer support email into one of these categories:
- billing: Payment, subscription, invoices, refunds
- technical: Bugs, errors, system issues
- feature_request: New features or improvements

Email: ${email}

Answer with just the category name.`,
      },
    ],
  });

  return response.choices[0].message.content?.trim();
}

async function main() {
  for (const email of emails) {
    const category = await classifyEmail(email);
    console.log("Email:", email);
    console.log("Category:", category);
    console.log("---");
  }
}

main();
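The prompt asks the model to answer with just the category name, but responses can still vary in casing or pick up stray punctuation. A small validation helper (hypothetical, not part of the quickstart) can normalize the output before your application relies on it:

```typescript
// Hypothetical helper: normalize and validate the model's category answer.
// The category names match the prompt above; everything else is illustrative.
const CATEGORIES = ["billing", "technical", "feature_request"] as const;
type Category = (typeof CATEGORIES)[number];

function parseCategory(raw: string | undefined): Category | null {
  if (!raw) return null;
  // Lowercase, trim whitespace, and strip trailing punctuation the model may add.
  const cleaned = raw.trim().toLowerCase().replace(/[.!]+$/, "");
  return (CATEGORIES as readonly string[]).includes(cleaned)
    ? (cleaned as Category)
    : null;
}
```

For example, `parseCategory("Billing.")` returns `"billing"`, while an off-script answer returns `null` so you can route it for review instead of miscategorizing it.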

3. Trace LLM calls

To enable tracing, make two changes to your code: initialize Braintrust and wrap your OpenAI client. This captures every LLM call automatically.
  • TypeScript & Python: Use wrapOpenAI / wrap_openai wrapper functions
  • Go: Use the tracing middleware with the OpenAI client
  • Ruby: Use Braintrust::Trace::OpenAI.wrap to wrap the OpenAI client
  • Java: Use the tracing interceptor with the OpenAI client
  • C#: Use BraintrustOpenAI.WrapOpenAI() to wrap the OpenAI client
classifier.ts
import { initLogger, wrapOpenAI } from "braintrust";
import OpenAI from "openai";

// Test emails
const emails = [
  "I was charged twice for my monthly subscription. Can you refund the duplicate charge?",
  "The dashboard keeps showing a 500 error when I try to load my projects.",
  "Can you add support for exporting data to CSV format?",
];

// Initialize Braintrust logger
const logger = initLogger({ projectName: "Email Classifier" });
const client = wrapOpenAI(new OpenAI());

async function classifyEmail(email: string) {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "user",
        content: `Classify this customer support email into one of these categories:
- billing: Payment, subscription, invoices, refunds
- technical: Bugs, errors, system issues
- feature_request: New features or improvements

Email: ${email}

Answer with just the category name.`,
      },
    ],
  });

  return response.choices[0].message.content?.trim();
}

async function main() {
  for (const email of emails) {
    const category = await classifyEmail(email);
    console.log("Email:", email);
    console.log("Category:", category);
    console.log("---");
  }
}

main();
Run this code:
npx ts-node classifier.ts
All 3 classification requests are automatically logged to Braintrust.

4. View traces

In the Braintrust UI, go to the “Email Classifier” project and select Logs. You’ll see a trace for each email classification request. Click into any trace to see:
  • Complete input prompt and model output
  • Token counts, latency, and cost
  • Model configuration (temperature, max tokens, etc.)
  • Request and response metadata
This is the value of observability: you can see every request, identify issues, and understand how your application behaves in production.

Next steps

  • Observe - View, filter, and analyze your logs
  • Annotate - Create datasets from your logs and add human feedback
  • Evaluate - Run experiments to test and validate improvements
  • Deploy - Ship changes and monitor production

Troubleshooting

Check your API key:
echo $BRAINTRUST_API_KEY
Make sure it’s set and starts with sk-.

Verify the project name: The project is created automatically when you call initLogger({ projectName: "Email Classifier" }). Check that you’re looking at the correct project in the dashboard.

Look for errors: Check your console output for any error messages from Braintrust. Common issues:
  • Invalid API key
  • Network connectivity issues
  • Firewall blocking requests to api.braintrust.dev
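Before debugging further, it can help to confirm both keys are actually visible to your process. This preflight check is a hypothetical helper (not part of the Braintrust SDK) that reports any required environment variables that are unset or blank:

```typescript
// Hypothetical preflight check: report which required env vars are missing.
const REQUIRED_KEYS = ["BRAINTRUST_API_KEY", "OPENAI_API_KEY"];

function missingKeys(env: NodeJS.ProcessEnv): string[] {
  // Treat unset and whitespace-only values as missing.
  return REQUIRED_KEYS.filter((key) => !env[key] || env[key]!.trim() === "");
}

const missing = missingKeys(process.env);
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(", ")}`);
}
```

Run it at the top of your script so a missing key fails fast with a clear message instead of surfacing later as an opaque authentication error.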
Enable debug logging:
const logger = initLogger({
  projectName: "Email Classifier",
  logLevel: "debug"
});
Check wrapper coverage: Make sure you’re wrapping the client before making API calls. Calls made with an unwrapped client won’t be traced.

Verify async/await: If you’re using async functions, ensure you’re awaiting API calls properly. Unawaited promises may not be fully traced.

Check for errors: If your LLM call throws an error, the trace may be incomplete. Check your logs for error messages.
Logging is async: Braintrust logs data asynchronously to avoid blocking your application. Traces should appear in the dashboard within seconds.

Check network: If you’re experiencing delays, verify network connectivity to api.braintrust.dev.

Batch size: Braintrust batches logs for efficiency. You can adjust batch settings if needed; see the SDK reference.