- LLM calls (chat completions, agent responses, embeddings)
- Token usage and cost metrics
- Request and response data
- Latency and performance data
- Hierarchical trace trees showing relationships between calls
Prerequisites
Before configuring the integration, ensure you have:
- A TrueFoundry account (sign up at truefoundry.com)
- A Braintrust account (sign up at braintrust.dev)
- Your Braintrust API key from Settings → API Keys
- Your Braintrust project ID from your project’s configuration page
Trace with TrueFoundry
TrueFoundry exports traces to Braintrust using OpenTelemetry. Configure the integration through the TrueFoundry dashboard:
1. Enable OpenTelemetry export
In the TrueFoundry dashboard, navigate to AI Gateway → Controls → OTEL Config and enable the “Otel Traces Exporter Configuration” toggle.
2. Configure the Braintrust endpoint
Select the HTTP Configuration tab and configure these settings:
- Traces endpoint: https://api.braintrust.dev/otel/v1/traces
- Encoding: Proto
3. Add authentication headers
Add two HTTP headers to authenticate and route traces to your Braintrust project. Replace <YOUR_BRAINTRUST_API_KEY> with your API key from Braintrust settings, and <YOUR_PROJECT_ID> with your project ID.
The x-bt-parent header supports multiple prefixes for organizing traces: project_id:, project_name:, or experiment_id:.
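For reference, here is a minimal sketch of the same export settings expressed with opentelemetry-python, useful for verifying the endpoint and headers before relying on the gateway-wide export. It assumes the standard Bearer authorization scheme for Braintrust's OTel endpoint and the opentelemetry-sdk and opentelemetry-exporter-otlp-proto-http packages:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Mirrors the dashboard settings: a protobuf-over-HTTP exporter pointed at
# the Braintrust traces endpoint, authenticated with the two headers.
exporter = OTLPSpanExporter(
    endpoint="https://api.braintrust.dev/otel/v1/traces",
    headers={
        "Authorization": "Bearer <YOUR_BRAINTRUST_API_KEY>",
        # project_name: and experiment_id: prefixes also work here.
        "x-bt-parent": "project_id:<YOUR_PROJECT_ID>",
    },
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Emit a single test span; it should appear on the Braintrust Logs page.
with trace.get_tracer("otel-config-check").start_as_current_span("connectivity-check"):
    pass

provider.shutdown()  # flush the batch processor before the script exits
```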
4. Save the configuration
Click Save to apply the settings. TrueFoundry will automatically export all subsequent LLM traces to Braintrust.
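To confirm traces are flowing end to end, you can send a single request through the gateway. Below is a minimal sketch assuming your TrueFoundry AI Gateway exposes an OpenAI-compatible endpoint; the base URL, API key, and model name are placeholders for your own deployment:

```python
from openai import OpenAI

# Placeholders: substitute your gateway's base URL, your TrueFoundry API key,
# and a model configured in the gateway.
client = OpenAI(
    base_url="<YOUR_GATEWAY_BASE_URL>",
    api_key="<YOUR_TRUEFOUNDRY_API_KEY>",
)

response = client.chat.completions.create(
    model="<YOUR_MODEL_ID>",
    messages=[{"role": "user", "content": "Tracing smoke test"}],
)
print(response.choices[0].message.content)
```

If the export is configured correctly, this call appears as a new trace in your Braintrust project shortly after it completes.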
View traces in Braintrust
After configuration, all LLM interactions through TrueFoundry AI Gateway appear in the Logs page of your Braintrust project. Each trace includes:
- Request parameters and prompts
- Response data and completions
- Token usage and estimated costs
- Latency measurements
- Hierarchical span relationships
Self-hosted Braintrust
For self-hosted Braintrust deployments, replace the standard endpoint with your custom Braintrust URL, keep the encoding set to Proto, and use the same authentication headers with your self-hosted instance's API key.
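As a sketch, assuming your self-hosted deployment serves the OTel endpoint under the same /otel/v1/traces path as the hosted API (the hostname below is a placeholder), only the endpoint in the earlier exporter example changes:

```python
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Same headers as before; only the host changes for self-hosted deployments.
exporter = OTLPSpanExporter(
    endpoint="https://<YOUR_BRAINTRUST_HOST>/otel/v1/traces",
    headers={
        "Authorization": "Bearer <YOUR_SELF_HOSTED_API_KEY>",
        "x-bt-parent": "project_id:<YOUR_PROJECT_ID>",
    },
)
```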