Vercel AI SDK

The Vercel AI SDK is an elegant tool for building AI-powered applications. Braintrust natively supports tracing requests made with the Vercel AI SDK.

Vercel AI SDK v4 (native wrapper)

To use the native tracing wrapper, you can wrap a Vercel AI SDK model with wrapAISDKModel and then use it as you would any other model.

import { initLogger, wrapAISDKModel } from "braintrust";
import { openai } from "@ai-sdk/openai";
 
// `initLogger` sets up your code to log to the specified Braintrust project using your API key.
// By default, all wrapped models will log to this project. If you don't call `initLogger`, then wrapping is a no-op, and you will not see spans in the UI.
initLogger({
  projectName: "My Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const model = wrapAISDKModel(openai.chat("gpt-3.5-turbo"));
 
async function main() {
  // This will automatically log the request, response, and metrics to Braintrust
  const response = await model.doGenerate({
    inputFormat: "messages",
    mode: {
      type: "regular",
    },
    prompt: [
      {
        role: "user",
        content: [{ type: "text", text: "What is the capital of France?" }],
      },
    ],
  });
  console.log(response);
}
 
main();

Wrapping tools

Wrap tool implementations with wrapTraced so that each tool call appears as its own span in the trace. Here is a full example, adapted from the Node.js Quickstart.

import { openai } from "@ai-sdk/openai";
import { CoreMessage, streamText, tool } from "ai";
import { z } from "zod";
import * as readline from "node:readline/promises";
import { initLogger, traced, wrapAISDKModel, wrapTraced } from "braintrust";
 
const logger = initLogger({
  projectName: "<YOUR PROJECT NAME>",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});
 
const messages: CoreMessage[] = [];
 
async function main() {
  while (true) {
    const userInput = await terminal.question("You: ");
 
    await traced(async (span) => {
      span.log({ input: userInput });
      messages.push({ role: "user", content: userInput });
 
      const result = streamText({
        model: wrapAISDKModel(openai("gpt-4o")),
        messages,
        tools: {
          weather: tool({
            description: "Get the weather in a location (in Celsius)",
            parameters: z.object({
              location: z
                .string()
                .describe("The location to get the weather for"),
            }),
            execute: wrapTraced(
              async function weather({ location }) {
                return {
                  location,
                  temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
                };
              },
              {
                type: "tool",
              },
            ),
          }),
          convertCelsiusToFahrenheit: tool({
            description: "Convert a temperature from Celsius to Fahrenheit",
            parameters: z.object({
              celsius: z
                .number()
                .describe("The temperature in Celsius to convert"),
            }),
            execute: wrapTraced(
              async function convertCelsiusToFahrenheit({ celsius }) {
                const fahrenheit = (celsius * 9) / 5 + 32;
                return { fahrenheit: Math.round(fahrenheit * 100) / 100 };
              },
              {
                type: "tool",
              },
            ),
          }),
        },
        maxSteps: 5,
        onStepFinish: (step) => {
          console.log(JSON.stringify(step, null, 2));
        },
      });
 
      let fullResponse = "";
      process.stdout.write("\nAssistant: ");
      for await (const delta of result.textStream) {
        fullResponse += delta;
        process.stdout.write(delta);
      }
      process.stdout.write("\n\n");
 
      messages.push({ role: "assistant", content: fullResponse });
 
      span.log({ output: fullResponse });
    });
  }
}
 
main().catch(console.error);

When you run this code, you'll see traces like this in the Braintrust UI:

[Image: AI SDK trace with tool calls]

Vercel AI SDK v5

Just like the v4 wrapper, this native integration automatically traces your model calls made with the AI SDK. For v5, Braintrust instead provides a middleware, which is a more flexible approach: you can compose Braintrust tracing with any additional middleware you use.

import { openai } from "@ai-sdk/openai";
import { generateText, streamText, wrapLanguageModel } from "ai";
import { initLogger, BraintrustMiddleware } from "braintrust";
 
// Initialize Braintrust logging
initLogger({
  projectName: "my-ai-project",
});
 
// Wrap your model with Braintrust middleware
const model = wrapLanguageModel({
  model: openai("gpt-4"),
  middleware: BraintrustMiddleware({ debug: true }),
});
 
async function main() {
  // Generate text with automatic tracing
  const result = await generateText({
    model,
    prompt: "What is the capital of France?",
    system: "Provide a concise answer.",
    maxOutputTokens: 100,
  });
 
  console.log(result.text);
 
  // Stream text with automatic tracing
  const stream = streamText({
    model,
    prompt: "Write a haiku about programming.",
  });
 
  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk);
  }
}
 
main().catch(console.error);
