Vercel

Braintrust integrates with Vercel in two ways: through the Vercel AI SDK for tracing AI applications, and through the Vercel Marketplace for project setup and observability through your Vercel dashboard.

Vercel Marketplace vs. Vercel AI SDK

Both the Vercel Marketplace and Vercel AI SDK approaches enable you to send your Vercel application's AI traces to Braintrust. Which method you choose depends on how you want to set up and manage the integration, and how much flexibility and control you want over how your traces are logged in Braintrust.

The Vercel Marketplace integration provides less fine-grained control over what parts of your app are traced to Braintrust, but requires almost no code changes to your project. This is a good method for getting set up with Braintrust quickly.

The Vercel AI SDK integration requires installing the Braintrust SDK and adding several lines of code to your project, but allows you more control over what parts of your application are traced. This is a good method for more complex applications where you only want to capture specific traces.

Vercel Marketplace

Braintrust is available as a native integration on the Vercel Marketplace. This integration allows you to run evals, monitor model quality and user experience, and benchmark across models from OpenAI, Anthropic, Gemini, and more from your Vercel dashboard.

Set up the Vercel Marketplace integration

Install the Braintrust integration

From the Vercel Marketplace listing, select Install. You will be prompted to create a new Braintrust account. On the next screen, select either the Free plan or the Pro installation plan and select Continue. Create a name for your Braintrust project in the Product Name field and select Create. This creates a Braintrust project and links it to your Vercel account.

From here, either select Done to go to your integration page or Add Drain to start sending logs to Braintrust.

Add Drain

In the Add Drain panel, select Traces and then Next. Create a name for the drain and choose which Vercel projects should send traces to Braintrust. You can select All Projects or designate one or more specific projects. Adjust the sampling rate to control what percentage of Vercel logs are sent to Braintrust.

Configure OpenTelemetry

Once you've added the integration, you need to configure OpenTelemetry on each project that sends traces to Braintrust. In your Next.js project, create an instrumentation.ts file and call registerOTel. Check out the Vercel docs on initializing OTel for an example.
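As a minimal sketch, the instrumentation file can look like the following. This assumes a Next.js App Router project with the @vercel/otel package installed; the service name is illustrative, so substitute your own.

```typescript
// instrumentation.ts (at the project root, or in src/ if you use a src directory)
import { registerOTel } from "@vercel/otel";

export function register() {
  // "my-app" is an illustrative service name; replace it with your project's name.
  registerOTel({ serviceName: "my-app" });
}
```

Next.js automatically calls the exported register function once when the server starts, which initializes OpenTelemetry for the project.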

Vercel AI SDK

Braintrust natively supports tracing requests made with the Vercel AI SDK. The Vercel AI SDK is an elegant tool for building AI-powered applications.

Vercel AI SDK v5 (wrapAISDK)

wrapAISDK wraps the top-level AI SDK functions (generateText, streamText, generateObject, streamObject) and automatically creates spans with full input/output logging, metrics, and tool call tracing.

trace-vercel-ai-sdk-v5.ts
import { initLogger, wrapAISDK } from "braintrust";
import * as ai from "ai";
import { openai } from "@ai-sdk/openai";
 
// `initLogger` sets up your code to log to the specified Braintrust project using your API key.
// By default, all wrapped models will log to this project. If you don't call `initLogger`, then wrapping is a no-op, and you will not see spans in the UI.
initLogger({
  projectName: "My AI Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const { generateText } = wrapAISDK(ai);
 
async function main() {
  // This will automatically log the request, response, and metrics to Braintrust
  const { text } = await generateText({
    model: openai("gpt-4"),
    prompt: "What is the capital of France?",
  });
  console.log(text);
}
 
main();

Tool calls with wrapAISDK

wrapAISDK automatically traces both the LLM's tool call suggestions and the actual tool executions. It supports both the array-based and object-based tools formats from the AI SDK.

wrap-ai-sdk-tools.ts
import { initLogger, wrapAISDK } from "braintrust";
import * as ai from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
 
initLogger({
  projectName: "Tool Tracing",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const { generateText } = wrapAISDK(ai);
 
async function main() {
  const { text } = await generateText({
    model: openai("gpt-4"),
    prompt: "What's the weather like in San Francisco?",
    tools: {
      getWeather: {
        description: "Get weather for a location",
        parameters: z.object({
          location: z.string().describe("The city name"),
        }),
        // Tool executions are automatically wrapped and traced
        execute: async ({ location }: { location: string }) => {
          // This execution will appear as a child span
          return {
            location,
            temperature: 72,
            conditions: "sunny",
          };
        },
      },
    },
  });
 
  console.log(text);
}
 
main();

Vercel AI SDK v4 (model-level wrapper)

To wrap individual models, use wrapAISDKModel on specific model instances.

trace-vercel-ai-sdk.ts
import { initLogger, wrapAISDKModel } from "braintrust";
import { openai } from "@ai-sdk/openai";
 
// `initLogger` sets up your code to log to the specified Braintrust project using your API key.
// By default, all wrapped models will log to this project. If you don't call `initLogger`, then wrapping is a no-op, and you will not see spans in the UI.
initLogger({
  projectName: "My Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const model = wrapAISDKModel(openai.chat("gpt-3.5-turbo"));
 
async function main() {
  // This will automatically log the request, response, and metrics to Braintrust
  const response = await model.doGenerate({
    inputFormat: "messages",
    mode: {
      type: "regular",
    },
    prompt: [
      {
        role: "user",
        content: [{ type: "text", text: "What is the capital of France?" }],
      },
    ],
  });
  console.log(response);
}
 
main();

Wrapping tools

Wrap tool implementations with wrapTraced. Here is a full example, modified from the Node.js Quickstart.

trace-vercel-ai-sdk-tools.ts
import { openai } from "@ai-sdk/openai";
import { CoreMessage, streamText, tool } from "ai";
import { z } from "zod";
import * as readline from "node:readline/promises";
import { initLogger, traced, wrapAISDKModel, wrapTraced } from "braintrust";
 
const logger = initLogger({
  projectName: "<YOUR PROJECT NAME>",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});
 
const messages: CoreMessage[] = [];
 
async function main() {
  while (true) {
    const userInput = await terminal.question("You: ");
 
    await traced(async (span) => {
      span.log({ input: userInput });
      messages.push({ role: "user", content: userInput });
 
      const result = streamText({
        model: wrapAISDKModel(openai("gpt-4o")),
        messages,
        tools: {
          weather: tool({
            description: "Get the weather in a location (in Celsius)",
            parameters: z.object({
              location: z
                .string()
                .describe("The location to get the weather for"),
            }),
            execute: wrapTraced(
              async function weather({ location }) {
                return {
                  location,
                  temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
                };
              },
              {
                type: "tool",
              },
            ),
          }),
          convertCelsiusToFahrenheit: tool({
            description: "Convert a temperature from Celsius to Fahrenheit",
            parameters: z.object({
              celsius: z
                .number()
                .describe("The temperature in Celsius to convert"),
            }),
            execute: wrapTraced(
              async function convertCelsiusToFahrenheit({ celsius }) {
                const fahrenheit = (celsius * 9) / 5 + 32;
                return { fahrenheit: Math.round(fahrenheit * 100) / 100 };
              },
              {
                type: "tool",
              },
            ),
          }),
        },
        maxSteps: 5,
        onStepFinish: (step) => {
          console.log(JSON.stringify(step, null, 2));
        },
      });
 
      let fullResponse = "";
      process.stdout.write("\nAssistant: ");
      for await (const delta of result.textStream) {
        fullResponse += delta;
        process.stdout.write(delta);
      }
      process.stdout.write("\n\n");
 
      messages.push({ role: "assistant", content: fullResponse });
 
      span.log({ output: fullResponse });
    });
  }
}
 
main().catch(console.error);

When you run this code, you'll see traces like this in the Braintrust UI:

[Screenshot: AI SDK trace with tool calls]
