Setup
To use Gemini models, configure your Gemini API key in Braintrust.

- Get a Gemini API key from Google AI Studio
- Add the Gemini API key to your organization’s AI providers or to a project’s AI providers
- Set the Gemini API key and your Braintrust API key as environment variables
.env
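The `.env` file might look like the following (these variable names are the conventional ones read by the Braintrust SDK and the Google GenAI SDK; adjust them to your setup):

```
GEMINI_API_KEY=your-gemini-api-key
BRAINTRUST_API_KEY=your-braintrust-api-key
```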
API keys are encrypted at rest using transparent data encryption with a unique 256-bit key and nonce.
Trace with Gemini
Trace your Gemini LLM calls for observability and monitoring using either the native Google GenAI SDK or the Braintrust AI proxy.

Trace automatically with native Google GenAI SDK
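As a sketch of the flow in Python: install the packages, initialize a logger, and wrap the GenAI client. The project name is hypothetical, and the wrapper name `wrap_genai` is an assumption — check the Braintrust SDK reference for the exact export.

```python
# pip install braintrust google-genai
import os

PROJECT = "gemini-demo"  # hypothetical project name
MODEL = "gemini-2.5-flash"

def make_traced_client():
    from braintrust import init_logger
    from google import genai

    init_logger(project=PROJECT)  # send traces to this Braintrust project

    # Hypothetical wrapper name — check the Braintrust SDK for the exact import.
    from braintrust import wrap_genai  # assumption
    client = wrap_genai(genai.Client(api_key=os.environ["GEMINI_API_KEY"]))

    # Every call made through the wrapped client is now logged automatically, e.g.:
    #   client.models.generate_content(model=MODEL, contents="Say hello.")
    return client
```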
Braintrust provides wrapper functions that automatically log Google GenAI API calls: install the braintrust and google-genai packages, then wrap your client, and all subsequent API calls are traced automatically.

Stream responses with native Google GenAI SDK
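A minimal streaming sketch using the native SDK's `generate_content_stream` method (the helper name is illustrative):

```python
# pip install google-genai
import os

MODEL = "gemini-2.5-flash"

def stream_reply(prompt: str) -> str:
    from google import genai  # native Google GenAI SDK

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    chunks = []
    # generate_content_stream yields partial responses as they arrive;
    # with a Braintrust-wrapped client, token metrics are captured as well.
    for chunk in client.models.generate_content_stream(model=MODEL, contents=prompt):
        if chunk.text:
            chunks.append(chunk.text)
    return "".join(chunks)
```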
The native Google GenAI client supports streaming with automatic tracing of token metrics.

Manual wrapping for more control
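For example, you might wrap the client only when a feature flag is set. This is a sketch: the wrapper name `wrap_genai` and the project name are assumptions — check the Braintrust SDK for the exact export.

```python
# pip install braintrust google-genai
import os

# Hypothetical feature flag controlling whether tracing is enabled
TRACING_ENABLED = os.environ.get("ENABLE_TRACING", "1") == "1"

def make_client():
    from google import genai

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    if TRACING_ENABLED:
        # Hypothetical wrapper name — check the Braintrust SDK for the exact import.
        from braintrust import init_logger, wrap_genai  # assumption
        init_logger(project="gemini-demo")  # hypothetical project name
        client = wrap_genai(client)  # only wrap when tracing is wanted
    return client
```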
If you need more control over when tracing is enabled, you can manually wrap the client.

Use Gemini with Braintrust AI proxy
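A minimal sketch of calling Gemini through the proxy with the standard OpenAI client (the helper name is illustrative; the proxy routes to Gemini based on the model name):

```python
# pip install braintrust openai
import os

PROXY_URL = "https://api.braintrust.dev/v1/proxy"
MODEL = "gemini-2.5-flash"

def ask(prompt: str) -> str:
    from openai import OpenAI

    # The proxy speaks the OpenAI API; authenticate with your Braintrust API key.
    client = OpenAI(base_url=PROXY_URL, api_key=os.environ["BRAINTRUST_API_KEY"])
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```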
The Braintrust AI Proxy allows you to access Gemini models through a unified OpenAI-compatible interface. Install the braintrust and openai packages.
Trace AI proxy calls
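A sketch of combining the proxy with Braintrust's `wrap_openai` so each call is logged (the project name is hypothetical):

```python
# pip install braintrust openai
import os

PROJECT = "gemini-demo"  # hypothetical project name

def make_traced_client():
    from braintrust import init_logger, wrap_openai
    from openai import OpenAI

    init_logger(project=PROJECT)  # traces land in this Braintrust project
    # wrap_openai logs every chat.completions call made through the client
    return wrap_openai(OpenAI(
        base_url="https://api.braintrust.dev/v1/proxy",
        api_key=os.environ["BRAINTRUST_API_KEY"],
    ))
```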
When using the Braintrust AI Proxy, API calls are automatically logged to the specified project.

Stream with proxy
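A streaming sketch through the proxy, using the OpenAI client's `stream=True` option (the helper name is illustrative):

```python
# pip install openai
import os

MODEL = "gemini-2.5-flash"

def stream_via_proxy(prompt: str) -> str:
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.braintrust.dev/v1/proxy",
        api_key=os.environ["BRAINTRUST_API_KEY"],
    )
    pieces = []
    # stream=True returns an iterator of chunks carrying incremental deltas
    stream = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            pieces.append(delta)
    return "".join(pieces)
```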
Gemini models support streaming through the proxy.

Evaluate with Gemini
Evaluations distill the non-deterministic outputs of Gemini models into an effective feedback loop that enables you to ship more reliable, higher-quality products. In Braintrust, an Eval is a simple function composed of a dataset of user inputs, a task, and a set of scorers. To learn more about evaluations, see the Experiments guide.
Evaluate with native SDK
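A minimal sketch of an Eval whose task calls Gemini through the native SDK (the project name and toy dataset are illustrative; `Levenshtein` is a string-similarity scorer from the autoevals package):

```python
# pip install braintrust autoevals google-genai
import os

DATASET = [
    {"input": "What color is the sky on a clear day?", "expected": "blue"},  # toy data
]

def task(input: str) -> str:
    from google import genai

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    return client.models.generate_content(
        model="gemini-2.5-flash", contents=input
    ).text

def run_eval():
    from braintrust import Eval
    from autoevals import Levenshtein

    Eval(
        "gemini-demo",  # hypothetical project name
        data=lambda: DATASET,
        task=task,
        scores=[Levenshtein],
    )
```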
Evaluate with proxy
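The same Eval can run against the proxy instead: only the task function changes, calling Gemini via the OpenAI-compatible endpoint (project name and dataset are illustrative):

```python
# pip install braintrust autoevals openai
import os

MODEL = "gemini-2.5-flash"

def task(input: str) -> str:
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.braintrust.dev/v1/proxy",
        api_key=os.environ["BRAINTRUST_API_KEY"],
    )
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": input}]
    )
    return resp.choices[0].message.content

def run_eval():
    from braintrust import Eval
    from autoevals import Levenshtein

    Eval(
        "gemini-demo",  # hypothetical project name
        data=lambda: [{"input": "1 + 1 = ?", "expected": "2"}],
        task=task,
        scores=[Levenshtein],
    )
```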
Additional features
Reasoning models
Gemini 2.5 models (gemini-2.5-flash, gemini-2.5-pro) have built-in reasoning capabilities enabled by default. You can configure reasoning behavior using thinkingConfig.
Native SDK
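A sketch of configuring the thinking budget with the native Python SDK (the camelCase `thinkingConfig` field is `thinking_config` in Python; the budget value is illustrative):

```python
# pip install google-genai
import os

THINKING_BUDGET = 1024  # tokens the model may spend on internal reasoning

def ask_with_thinking(prompt: str) -> str:
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    resp = client.models.generate_content(
        model="gemini-2.5-pro",
        contents=prompt,
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(thinking_budget=THINKING_BUDGET),
        ),
    )
    return resp.text
```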
Structured outputs
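As a sketch, a response schema can be supplied so the model returns JSON matching it (the schema here is illustrative; the SDK also accepts Pydantic models as `response_schema`):

```python
# pip install google-genai
import os

# A plain JSON schema for the expected response shape (illustrative)
RECIPE_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "minutes": {"type": "integer"},
    },
    "required": ["name", "minutes"],
}

def get_recipe(prompt: str) -> str:
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    resp = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=prompt,
        config=types.GenerateContentConfig(
            response_mime_type="application/json",
            response_schema=RECIPE_SCHEMA,
        ),
    )
    return resp.text  # a JSON string matching the schema
```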
Gemini supports structured JSON outputs using response schemas.

Function calling and tools
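As a sketch, the Python SDK accepts plain Python functions as tools and can call them automatically (the `get_weather` tool here is a toy implementation):

```python
# pip install google-genai
import os

def get_weather(city: str) -> str:
    """Return a short weather summary for a city (toy implementation)."""
    return f"Sunny in {city}"

def ask_with_tools(prompt: str) -> str:
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    # Passing a Python function as a tool enables automatic function calling:
    # the SDK invokes get_weather when the model requests it.
    resp = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=prompt,
        config=types.GenerateContentConfig(tools=[get_weather]),
    )
    return resp.text
```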
Gemini supports function calling for building AI agents with tools.

Multimodal content
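A sketch of mixing an image with a text prompt in a single request (the helper name is illustrative):

```python
# pip install google-genai
import os

MODEL = "gemini-2.5-flash"

def describe_image(image_bytes: bytes, mime_type: str = "image/jpeg") -> str:
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    # Media parts and text can be mixed in a single contents list
    resp = client.models.generate_content(
        model=MODEL,
        contents=[
            types.Part.from_bytes(data=image_bytes, mime_type=mime_type),
            "Describe this image in one sentence.",
        ],
    )
    return resp.text
```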
Gemini models support multimodal inputs including images, audio, and video.

Streaming with token metrics
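As a sketch, streamed chunks expose `usage_metadata`, so token totals can be read off the final chunk (the helper name is illustrative):

```python
# pip install google-genai
import os

MODEL = "gemini-2.5-flash"

def stream_with_usage(prompt: str) -> None:
    from google import genai

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    usage = None
    for chunk in client.models.generate_content_stream(model=MODEL, contents=prompt):
        if chunk.text:
            print(chunk.text, end="")
        usage = chunk.usage_metadata or usage  # the final chunk carries token totals
    if usage:
        print(f"\ntotal tokens: {usage.total_token_count}")
```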
Stream responses with automatic token tracking.

Context caching
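A sketch of caching a large context once and reusing it across requests. The call shapes follow the google-genai Python SDK as best understood here; note that context caching has model- and size-related minimums, so check the Gemini API documentation before relying on this.

```python
# pip install google-genai
import os

MODEL = "gemini-2.5-flash"

def ask_with_cache(big_document: str, question: str) -> str:
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    # Cache the large context once...
    cache = client.caches.create(
        model=MODEL,
        config=types.CreateCachedContentConfig(contents=[big_document]),
    )
    # ...then reference it from subsequent requests by name.
    resp = client.models.generate_content(
        model=MODEL,
        contents=question,
        config=types.GenerateContentConfig(cached_content=cache.name),
    )
    return resp.text
```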
Gemini supports context caching for efficient reuse of large contexts.

Use with Spring AI
For Java applications using Spring AI, you can integrate Braintrust by wrapping the underlying Google GenAI client and passing it to Spring AI's GoogleGenAiChatModel. Calls made through the ChatModel are then automatically traced to Braintrust.