Setup
To use Gemini models, configure your Gemini API key in Braintrust:

- Get a Gemini API key from Google AI Studio
- Add the Gemini API key to your organization's AI providers
- Set the Gemini API key and your Braintrust API key as environment variables
.env
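For example, a `.env` file with both keys might look like the following (placeholder values shown; keep real keys out of version control):

```
GEMINI_API_KEY=<your-gemini-api-key>
BRAINTRUST_API_KEY=<your-braintrust-api-key>
```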
API keys are encrypted using 256-bit AES-GCM encryption and are not stored or logged by Braintrust.
Trace with Gemini
Trace your Gemini LLM calls for observability and monitoring using either the native Google GenAI SDK or the Braintrust AI proxy.

Trace automatically with native Google GenAI SDK
Braintrust provides wrapper functions that automatically log Google GenAI API calls. All subsequent API calls will be automatically traced. These wrapper functions are convenience functions that integrate the Braintrust logger with the Google GenAI client. For more control, see the manual wrapping section below.
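A minimal sketch of automatic tracing. The wrapper name `wrap_google_genai` and the project name `gemini-demo` are assumptions for illustration; check your installed braintrust SDK version for the exact import.

```python
import os

from braintrust import init_logger, wrap_google_genai  # wrapper name assumed
from google import genai  # pip install google-genai

# Initialize a Braintrust logger for the target project (placeholder name).
init_logger(project="gemini-demo")

# Wrap the client so each generate_content call is logged as a span,
# including inputs, outputs, and token counts.
client = wrap_google_genai(genai.Client(api_key=os.environ["GEMINI_API_KEY"]))

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Summarize what distributed tracing is in one sentence.",
)
print(response.text)
```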
Stream responses with native Google GenAI SDK
The native Google GenAI client supports streaming with automatic tracing of token metrics.
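A sketch of a streamed, traced call, assuming the same wrapper and project name as above (both are illustrative, not confirmed API names):

```python
import os

from braintrust import init_logger, wrap_google_genai  # wrapper name assumed
from google import genai

init_logger(project="gemini-demo")  # placeholder project name
client = wrap_google_genai(genai.Client(api_key=os.environ["GEMINI_API_KEY"]))

stream = client.models.generate_content_stream(
    model="gemini-2.0-flash",
    contents="Write a limerick about latency.",
)

for chunk in stream:
    if chunk.text:
        print(chunk.text, end="")
# Token usage arrives on the final chunk's usage_metadata and is picked up
# by the tracing wrapper without extra code.
```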
Manual wrapping for more control
If you need more control over when tracing is enabled, you can manually wrap the client.
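One way to do this is to leave the client unwrapped and open spans yourself around only the calls you want traced. This sketch uses the Braintrust logger's `start_span`; the span name, project name, and logged fields are illustrative choices:

```python
import os

from braintrust import init_logger
from google import genai

logger = init_logger(project="gemini-demo")  # placeholder project name
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])  # not wrapped

# Trace only this call, logging exactly the fields you choose.
with logger.start_span(name="gemini-completion", type="llm") as span:
    prompt = "Explain eventual consistency in two sentences."
    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents=prompt,
    )
    span.log(
        input=prompt,
        output=response.text,
        metadata={"model": "gemini-2.0-flash"},
    )
```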
Use Gemini with Braintrust AI proxy
The Braintrust AI Proxy allows you to access Gemini models through a unified OpenAI-compatible interface. Install the braintrust and openai packages.
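Once the packages are installed, point the standard OpenAI client at the proxy endpoint and authenticate with your Braintrust API key. A minimal sketch (the model name is an example):

```python
# pip install braintrust openai
import os

from openai import OpenAI

# The proxy speaks the OpenAI API but routes requests to Gemini models.
client = OpenAI(
    base_url="https://api.braintrust.dev/v1/proxy",
    api_key=os.environ["BRAINTRUST_API_KEY"],
)

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": "Name three uses of an LLM proxy."}],
)
print(response.choices[0].message.content)
```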
Trace AI proxy calls
When using the Braintrust AI Proxy, API calls are automatically logged to the specified project.
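A sketch combining the proxy with Braintrust's `wrap_openai` so each completion is logged as a span (project name is a placeholder):

```python
import os

from braintrust import init_logger, wrap_openai
from openai import OpenAI

init_logger(project="gemini-demo")  # placeholder project name

# wrap_openai logs each chat.completions call to the project above.
client = wrap_openai(
    OpenAI(
        base_url="https://api.braintrust.dev/v1/proxy",
        api_key=os.environ["BRAINTRUST_API_KEY"],
    )
)

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": "What is span-based tracing?"}],
)
print(response.choices[0].message.content)
```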
Stream with proxy
Gemini models support streaming through the proxy.
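Because the proxy is OpenAI-compatible, streaming uses the standard `stream=True` flag. A minimal sketch:

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.braintrust.dev/v1/proxy",
    api_key=os.environ["BRAINTRUST_API_KEY"],
)

stream = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": "Stream a haiku about caching."}],
    stream=True,
)

# Print deltas as they arrive; some chunks (e.g. the final one) may have
# empty choices, so guard before indexing.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```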
Evaluate with Gemini
Evaluations distill the non-deterministic outputs of Gemini models into an effective feedback loop that enables you to ship more reliable, higher-quality products. A Braintrust Eval is a simple function composed of a dataset of user inputs, a task, and a set of scorers. To learn more about evaluations, see the Experiments guide.
Evaluate with native SDK
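A sketch of an `Eval` whose task calls Gemini through the native SDK. The project name, prompts, and scorer choice (`Levenshtein` from autoevals) are illustrative:

```python
import os

from autoevals import Levenshtein
from braintrust import Eval
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

def task(input: str) -> str:
    # The task under evaluation: one Gemini call per dataset row.
    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents=f"Answer in one word: {input}",
    )
    return response.text.strip()

Eval(
    "gemini-demo",  # placeholder project name
    data=lambda: [
        {"input": "What color is the sky on a clear day?", "expected": "Blue"},
        {"input": "What is 2 + 2?", "expected": "4"},
    ],
    task=task,
    scores=[Levenshtein],
)
```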
Evaluate with proxy
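The same evaluation can run through the proxy by swapping the task's client for an OpenAI client pointed at the proxy endpoint (project name and dataset are again illustrative):

```python
import os

from autoevals import Levenshtein
from braintrust import Eval
from openai import OpenAI

client = OpenAI(
    base_url="https://api.braintrust.dev/v1/proxy",
    api_key=os.environ["BRAINTRUST_API_KEY"],
)

def task(input: str) -> str:
    response = client.chat.completions.create(
        model="gemini-2.0-flash",
        messages=[{"role": "user", "content": f"Answer in one word: {input}"}],
    )
    return response.choices[0].message.content.strip()

Eval(
    "gemini-demo",  # placeholder project name
    data=lambda: [
        {"input": "What is the capital of France?", "expected": "Paris"},
    ],
    task=task,
    scores=[Levenshtein],
)
```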
Additional features
Reasoning models
Gemini’s reasoning models, like gemini-2.0-flash-thinking-exp-1219, provide detailed thought processes before generating responses. The wrapper automatically captures both the reasoning tokens and the final response.
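A sketch of separating reasoning from the final answer. This assumes the google-genai response exposes a `thought` flag on each content part; whether thought parts are returned depends on the model and API version:

```python
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.0-flash-thinking-exp-1219",
    contents=(
        "A bat and a ball cost $1.10 total; the bat costs $1.00 more "
        "than the ball. How much is the ball?"
    ),
)

# Thinking models may return thought parts alongside the answer; the
# `thought` flag (assumed attribute) distinguishes reasoning from output.
for part in response.candidates[0].content.parts:
    label = "thought" if getattr(part, "thought", False) else "answer"
    print(f"[{label}] {part.text}")
```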
Structured outputs
Gemini supports structured JSON outputs using response schemas.
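A sketch of a schema-constrained call using a Pydantic model as the response schema (the `Recipe` model and prompt are illustrative):

```python
import os

from google import genai
from google.genai import types
from pydantic import BaseModel

class Recipe(BaseModel):
    name: str
    ingredients: list[str]

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Give me a simple pancake recipe.",
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=Recipe,  # output must conform to this schema
    ),
)
print(response.text)  # JSON matching the Recipe schema
```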
Function calling and tools
Gemini supports function calling for building AI agents with tools.
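With the google-genai SDK, passing a plain Python function as a tool enables automatic function calling: the SDK invokes the function when the model requests it and feeds the result back. The `get_weather` tool here is a hypothetical stand-in for a real API:

```python
import os

from google import genai
from google.genai import types

def get_weather(city: str) -> str:
    """Return a canned weather report for a city (stand-in for a real API)."""
    return f"It is sunny and 22 degrees Celsius in {city}."

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What's the weather in Lisbon right now?",
    config=types.GenerateContentConfig(tools=[get_weather]),
)
print(response.text)  # incorporates the tool's result
```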
Multimodal content
Gemini models support multimodal inputs including images, audio, and video.
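A sketch of an image-plus-text prompt; `image.jpg` is a placeholder path, and audio or video parts work the same way with the appropriate MIME type:

```python
import os

from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

with open("image.jpg", "rb") as f:  # placeholder file
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Describe this image in one sentence.",
    ],
)
print(response.text)
```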
Streaming with token metrics
Stream responses with automatic token tracking.
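When streaming with the native SDK, per-call token counts are reported via `usage_metadata`, which is populated as the stream completes. A sketch of reading them directly:

```python
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

stream = client.models.generate_content_stream(
    model="gemini-2.0-flash",
    contents="List three benefits of streaming responses.",
)

usage = None
for chunk in stream:
    if chunk.text:
        print(chunk.text, end="")
    if chunk.usage_metadata:
        usage = chunk.usage_metadata  # filled in as the stream finishes

if usage:
    print(f"\nprompt tokens: {usage.prompt_token_count}, "
          f"output tokens: {usage.candidates_token_count}")
```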
Context caching
Gemini supports context caching for efficient reuse of large contexts.
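A sketch of creating a cache once and referencing it in later calls instead of resending the context. Note that caching requires an explicitly versioned model and enforces a minimum context size, so a real call needs a sufficiently large document in place of the placeholder string:

```python
import os

from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# Cache the large context once (placeholder string stands in for a long
# reference document).
cache = client.caches.create(
    model="gemini-2.0-flash-001",  # caching needs a versioned model
    config=types.CreateCachedContentConfig(
        contents=["<contents of a long reference document>"],
        system_instruction="Answer questions using the cached document.",
    ),
)

# Later calls reference the cache by name instead of resending the context.
response = client.models.generate_content(
    model="gemini-2.0-flash-001",
    contents="Summarize the cached document.",
    config=types.GenerateContentConfig(cached_content=cache.name),
)
print(response.text)
```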
Error handling, attachments, and masking sensitive data
To learn more about multimodal support, attachments, error handling, and masking sensitive data with Gemini, visit the customize traces guide.