Trace your LiteLLM applications by patching LiteLLM with Braintrust’s wrapper. LiteLLM is a unified interface for calling 100+ LLM APIs using the OpenAI format, supporting providers like OpenAI, Azure, Anthropic, Cohere, Replicate, and more.

Installation

uv add braintrust litellm
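
If you manage dependencies with pip rather than uv, the equivalent install (an assumption about your environment, not part of the original instructions) is:

```shell
pip install braintrust litellm
```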

Usage

trace-litellm.py
from braintrust.wrappers.litellm import patch_litellm

# Patch LiteLLM before importing it so every subsequent call is traced
patch_litellm()

import litellm
from braintrust import init_logger

# Initialize Braintrust
logger = init_logger(project="litellm-example")

# Use LiteLLM as normal - all calls will be automatically traced
response = litellm.completion(
    model="gpt-4o-mini", messages=[{"role": "user", "content": "What is the capital of France?"}]
)
This will automatically send all LiteLLM interactions to Braintrust, including:
  • Model calls across different providers
  • Request and response data
  • Token usage and costs
  • Latency metrics
  • Error tracking
