Gemini

Google's Gemini models include Gemini 2.0 Flash, Gemini 2.5 Pro, and other advanced multimodal language models. Braintrust integrates seamlessly with Gemini through direct API access, wrapper functions for automatic tracing, and proxy support.

Setup

To use Gemini models, configure your Gemini API key in Braintrust.

  1. Get a Gemini API key from Google AI Studio
  2. Add the Gemini API key to your organization's AI providers
  3. Set the Gemini API key and your Braintrust API key as environment variables
.env
GEMINI_API_KEY=<your-gemini-api-key>
BRAINTRUST_API_KEY=<your-braintrust-api-key>
 
# If you are self-hosting Braintrust, set the URL of your hosted dataplane
# BRAINTRUST_API_URL=<your-braintrust-api-url-here>

API keys are encrypted using 256-bit AES-GCM encryption and are not stored or logged by Braintrust.

Use Gemini with Braintrust AI proxy

The Braintrust AI Proxy allows you to access Gemini models through a unified OpenAI-compatible interface.

Install the braintrust and openai packages.

pnpm add braintrust openai

Then, initialize the client and make a request to a Gemini model via the Braintrust AI Proxy.

gemini_proxy.ts
import { OpenAI } from "openai";
 
const client = new OpenAI({
  baseURL: "https://api.braintrust.dev/v1/proxy",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const response = await client.chat.completions.create({
  model: "gemini-2.0-flash",
  messages: [{ role: "user", content: "Hello, world!" }],
});
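
The proxy returns standard OpenAI-style responses, so the completion text is at `response.choices[0].message.content`. If you make the same kind of call in several places, a small helper can keep request construction in one spot. `geminiRequest` below is a hypothetical name, not part of either SDK:

```typescript
// Sketch: a hypothetical helper that builds an OpenAI-style chat request
// body for a Gemini model, so the same shape can be reused across calls.
type ChatParams = {
  model: string;
  messages: Array<{ role: "system" | "user" | "assistant"; content: string }>;
};

function geminiRequest(prompt: string, model = "gemini-2.0-flash"): ChatParams {
  return { model, messages: [{ role: "user", content: prompt }] };
}

// Usage: client.chat.completions.create(geminiRequest("Hello, world!"))
```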

Trace logs with Gemini

Trace your Gemini LLM calls for observability and monitoring.

When using the Braintrust AI Proxy, API calls are automatically logged to the specified project.

gemini_trace.ts
import { OpenAI } from "openai";
import { initLogger } from "braintrust";
 
initLogger({
  projectName: "My Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const client = new OpenAI({
  baseURL: "https://api.braintrust.dev/v1/proxy",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
// All API calls are automatically logged
const result = await client.chat.completions.create({
  model: "gemini-2.0-flash",
  messages: [{ role: "user", content: "What is machine learning?" }],
});

The Braintrust AI Proxy is not required to trace Gemini API calls. For more control, learn how to customize traces.

Stream Gemini responses

Gemini models support streaming:

gemini_stream.ts
const stream = await client.chat.completions.create({
  model: "gemini-2.0-flash",
  messages: [{ role: "user", content: "Count to 10" }],
  stream: true,
});
 
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
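
If you need the full completion as a single string (for logging or scoring), the loop above can be factored into a small helper. The chunk shape mirrors the OpenAI-compatible streaming format; `collectStream` itself is an illustrative name, not an SDK function:

```typescript
// Sketch: accumulate streamed delta chunks into the full completion text.
type StreamChunk = { choices: Array<{ delta?: { content?: string } }> };

async function collectStream(
  stream: AsyncIterable<StreamChunk>,
): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    // Each chunk carries an incremental delta; content may be absent
    // (e.g. in the final chunk), so fall back to the empty string.
    text += chunk.choices[0]?.delta?.content ?? "";
  }
  return text;
}
```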

Evaluate with Gemini

Evaluations distill the non-deterministic outputs of Gemini models into an effective feedback loop that enables you to ship more reliable, higher quality products. Braintrust Eval is a simple function composed of a dataset of user inputs, a task, and a set of scorers. To learn more about evaluations, see the Experiments guide.

gemini_eval.ts
import { Eval } from "braintrust";
import { OpenAI } from "openai";
 
const client = new OpenAI({
  baseURL: "https://api.braintrust.dev/v1/proxy",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
Eval("Gemini Evaluation", {
  data: () => [
    { input: "What is 2+2?", expected: "4" },
    { input: "What is the capital of France?", expected: "Paris" },
  ],
  task: async (input) => {
    const response = await client.chat.completions.create({
      model: "gemini-2.0-flash",
      messages: [{ role: "user", content: input }],
    });
    return response.choices[0].message.content;
  },
  scores: [
    (args) => ({
      name: "accuracy",
      score: args.output === args.expected ? 1 : 0,
    }),
  ],
});
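
Exact string equality is brittle for chat models, which often add framing ("The capital of France is Paris."). A slightly more forgiving scorer, sketched here under the hypothetical name `normalizedMatch`, normalizes case and whitespace and accepts outputs that contain the expected answer:

```typescript
// Sketch: a more tolerant scorer that normalizes case/whitespace and
// accepts outputs that contain the expected answer. The args shape
// matches the scorer arguments used in the Eval example above.
function normalizedMatch(args: { output: string; expected?: string }) {
  const norm = (s: string) => s.trim().toLowerCase();
  const hit =
    args.expected !== undefined &&
    norm(args.output).includes(norm(args.expected));
  return { name: "normalized_match", score: hit ? 1 : 0 };
}
```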

Gemini reasoning model support

When accessed through the Braintrust AI Proxy, Gemini reasoning models accept the proxy's unified reasoning parameters, `reasoning_enabled` and `reasoning_budget`.

gemini_reasoning.ts
import { OpenAI } from "openai";
import "@braintrust/proxy/types"; // for type safety
 
const client = new OpenAI({
  baseURL: "https://api.braintrust.dev/v1/proxy",
  apiKey: process.env.BRAINTRUST_API_KEY,
});
 
const response = await client.chat.completions.create({
  model: "gemini-2.5-flash-preview-05-20",
  reasoning_enabled: true,
  reasoning_budget: 1024,
  messages: [{ role: "user", content: "How many rs in 'ferrocarril'?" }],
});
 
console.log(response.choices[0].reasoning); // Access reasoning steps
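
Questions like the letter count above have a programmatically checkable answer, which makes them convenient targets for evaluating reasoning output. A trivial ground-truth function (illustrative only):

```typescript
// Sketch: compute the ground-truth letter count locally so the model's
// reasoning output can be scored against it.
function countChar(text: string, ch: string): number {
  return [...text].filter((c) => c === ch).length;
}

console.log(countChar("ferrocarril", "r")); // → 4
```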

To learn more about tool use, multimodal support, attachments, and masking sensitive data with Gemini, visit the customize traces guide.

Models and capabilities

| Model | Max input | Max output | Input $/1M | Output $/1M |
| --- | --- | --- | --- | --- |
| gemini-2.5-flash | 1,048,576 | 65,535 | $0.30 | $2.50 |
| gemini-2.5-flash-preview-05-20 | 1,048,576 | 65,535 | $0.30 | $2.50 |
| gemini-2.5-flash-preview-04-17 | 1,048,576 | 65,535 | $0.15 | $0.60 |
| gemini-2.5-pro | 1,048,576 | 65,535 | $1.25 | $10.00 |
| gemini-2.5-pro-preview-06-05 | 1,048,576 | 65,535 | $1.25 | $10.00 |
| gemini-2.5-pro-preview-05-06 | 1,048,576 | 65,535 | $1.25 | $10.00 |
| gemini-2.5-pro-preview-03-25 | 1,048,576 | 65,535 | $1.25 | $10.00 |
| gemini-2.5-pro-exp-03-25 | 1,048,576 | 65,535 | $1.25 | $10.00 |
| gemini-2.0-pro-exp-02-05 | 2,097,152 | 8,192 | $1.25 | $10.00 |
| gemini-2.5-flash-lite | 1,048,576 | 65,535 | $0.10 | $0.40 |
| gemini-2.5-flash-lite-preview-06-17 | 1,048,576 | 65,535 | $0.10 | $0.40 |
| gemini-2.0-flash | 1,048,576 | 8,192 | $0.10 | $0.40 |
| gemini-2.0-flash-001 | 1,048,576 | 8,192 | $0.15 | $0.60 |
| gemini-2.0-flash-exp | 1,048,576 | 8,192 | $0.15 | $0.60 |
| gemini-2.0-flash-thinking-exp-01-21 | 1,048,576 | 65,536 | $0 | $0 |
| gemini-2.0-flash-lite | 1,048,576 | 8,192 | $0.075 | $0.30 |
| gemini-2.0-flash-lite-001 | 1,048,576 | 8,192 | $0.075 | $0.30 |
| gemini-1.5-flash | 1,000,000 | 8,192 | $0.075 | $0.30 |
| gemini-1.5-flash-latest | 1,048,576 | 8,192 | $0.075 | $0.30 |
| gemini-1.5-flash-001 | 1,000,000 | 8,192 | $0.075 | $0.30 |
| gemini-1.5-flash-002 | 1,048,576 | 8,192 | $0.075 | $0.30 |
| gemini-1.5-flash-8b | 1,048,576 | 8,192 | $0 | $0 |
| gemini-1.5-flash-8b-latest | | | $0.038 | $0.15 |
| gemini-1.5-flash-8b-001 | | | $0.038 | $0.15 |
| gemini-1.5-pro | 2,097,152 | 8,192 | $1.25 | $5.00 |
| gemini-1.5-pro-latest | 1,048,576 | 8,192 | $3.50 | $1.05 |
| gemini-1.5-pro-001 | 1,000,000 | 8,192 | $1.25 | $5.00 |
| gemini-1.5-pro-002 | 2,097,152 | 8,192 | $1.25 | $5.00 |
| learnlm-1.5-pro-experimental | 32,767 | 8,192 | $0 | $0 |
| gemini-exp-1206 | 2,097,152 | 8,192 | $0 | $0 |
| gemini-1.0-pro | 32,760 | 8,192 | $0.50 | $1.50 |
| gemini-pro | 32,760 | 8,192 | $0.50 | $1.50 |
