Custom providers
Braintrust supports custom AI providers, allowing you to integrate any AI model or endpoint into your evaluation and tracing workflows. This includes custom models from existing providers, self-hosted models, and proprietary AI services.
Setup
If you have custom models as part of your OpenAI or other accounts, or if you're running your own AI endpoints, you can add them to Braintrust by configuring a custom provider.
- Navigate to AI providers in your Braintrust dashboard
- Select Add provider, then Custom
- Configure your custom endpoint with the required parameters

Configuration options
Specify the following for your custom provider.
- Provider name: A unique name for your custom provider
- Model name: The name of your custom model (e.g., `gpt-3.5-acme`, `my-custom-llama`)
- Endpoint URL: The API endpoint for your custom model
- Format: The API format (`openai`, `anthropic`, `google`, `window`, or `js`)
- Flavor: Whether it's a `chat` or `completion` model (default: `chat`)
- Headers: Any custom headers required for authentication or configuration
Custom headers and templating
Any headers you add to the configuration are passed through in the request to the custom endpoint. The values of the headers can be templated using Mustache syntax with these supported variables:
- `{{email}}`: Email of the user associated with the Braintrust API key
- `{{model}}`: The model name being requested
Example header configuration:
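For instance, a provider that authenticates with a static key and receives the requesting user and model via templated headers might be configured like this (the `X-...` header names are hypothetical; use whatever your endpoint expects):

```json
{
  "Authorization": "Bearer YOUR_API_KEY",
  "X-Braintrust-User": "{{email}}",
  "X-Requested-Model": "{{model}}"
}
```

At request time, Braintrust substitutes the Mustache variables, so your endpoint receives the resolved values.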
Streaming support
If your endpoint doesn't support streaming natively, set the "Endpoint supports streaming" flag to false. Braintrust will automatically convert the response to streaming format, allowing your models to work in the playground and other streaming contexts.
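Conceptually, the conversion works by taking the complete response and re-emitting it as a sequence of chunks. The sketch below is illustrative only, not Braintrust's actual implementation:

```python
def to_stream(full_text: str, chunk_size: int = 16):
    """Re-emit a complete (non-streaming) completion as a stream of
    chunks, so consumers that expect incremental tokens still work."""
    for i in range(0, len(full_text), chunk_size):
        yield full_text[i:i + chunk_size]

# Joining the emitted chunks reproduces the original response exactly.
chunks = list(to_stream("The quick brown fox jumps over the lazy dog."))
assert "".join(chunks) == "The quick brown fox jumps over the lazy dog."
```

The important property is losslessness: streaming consumers see the same content, just delivered incrementally.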
Model metadata
You can optionally specify:
- Multimodal: Whether the model supports multimodal inputs
- Input cost: Cost per million input tokens (for experiment cost estimation)
- Output cost: Cost per million output tokens (for experiment cost estimation)
API keys are encrypted using 256-bit AES-GCM encryption and are not stored or logged by Braintrust.
Trace logs with custom providers
Trace custom provider LLM calls for observability and monitoring.
Automatic tracing
Once your custom provider is configured, tracing works automatically.
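Calls made through the Braintrust AI proxy using your configured model name are traced without extra code. A stdlib sketch of the request shape (the hosted proxy URL is shown; verify the path against your deployment, and `my-custom-llama` is a placeholder model name):

```python
import json
import urllib.request

def build_chat_request(model: str, messages: list, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request routed through the
    Braintrust AI proxy. The proxy resolves the model name to your
    custom provider configuration."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        "https://api.braintrust.dev/v1/proxy/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "my-custom-llama",  # the model name you configured for the custom provider
    [{"role": "user", "content": "Hello"}],
    "YOUR_BRAINTRUST_API_KEY",
)
# urllib.request.urlopen(req) would send the call; tracing happens server-side.
```

Any OpenAI-compatible client pointed at the proxy base URL works the same way.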
Manual tracing
For more control over tracing, you can manually log calls to your custom provider.
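The sketch below illustrates the fields a manually logged LLM span typically captures (input, output, latency, metadata). It is a plain-Python illustration of the data shape, not the Braintrust SDK; see the SDK docs for the real span API:

```python
import time
from contextlib import contextmanager

@contextmanager
def llm_span(trace: list, name: str, **metadata):
    """Minimal span recorder: append a span dict to `trace`, let the
    caller attach input/output, and record the call's duration."""
    span = {"name": name, "metadata": metadata, "start": time.time()}
    trace.append(span)
    try:
        yield span
    finally:
        span["duration"] = time.time() - span["start"]

trace = []
with llm_span(trace, "custom-model-call", model="my-custom-llama") as span:
    span["input"] = [{"role": "user", "content": "Hello"}]
    span["output"] = "Hi there!"  # response from your custom endpoint
```

With the SDK, you would log the same fields on a real span so they appear in the Braintrust UI.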
Evaluations
Evaluations distill the non-deterministic outputs of custom models into an effective feedback loop that enables you to ship more reliable, higher-quality products. A Braintrust Eval is a simple function composed of a dataset of user inputs, a task, and a set of scorers. To learn more about evaluations, see the Experiments guide.
Basic evaluation setup
Use your custom models in Braintrust experiments just like any built-in provider.
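The dataset/task/scorers structure can be sketched in plain Python (illustrative only; the Braintrust SDK's Eval handles this orchestration plus logging and aggregation):

```python
def run_eval(dataset, task, scorers):
    """Run each dataset row through the task, then apply every scorer
    to the output. Mirrors the dataset/task/scorers shape of an Eval."""
    results = []
    for row in dataset:
        output = task(row["input"])  # e.g. a call to your custom model
        scores = {s.__name__: s(output, row.get("expected")) for s in scorers}
        results.append({"input": row["input"], "output": output, "scores": scores})
    return results

def exact_match(output, expected):
    return 1.0 if output == expected else 0.0

# Toy task standing in for a custom-model call:
dataset = [{"input": "2+2", "expected": "4"}]
results = run_eval(dataset, task=lambda q: {"2+2": "4"}.get(q, ""), scorers=[exact_match])
```

In a real experiment, the task would call your custom provider and the scorers could range from string checks to model-based judges.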
Use custom providers for LLM-as-a-judge
Custom models can serve as evaluators for other AI systems.
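A judge model typically returns free text from which a numeric score is extracted. The prompt and reply format below are assumptions for illustration, not a Braintrust convention:

```python
import re

JUDGE_PROMPT = (
    "Rate the response from 0 to 1 for correctness. "
    "Reply in the form 'Score: <number>'."
)  # hypothetical prompt you would send to your custom judge model

def parse_judge_score(judge_text: str) -> float:
    """Extract a 0-1 score from the judge's reply; clamp to [0, 1]."""
    match = re.search(r"Score:\s*([01](?:\.\d+)?)", judge_text)
    if match is None:
        raise ValueError(f"Unparseable judge reply: {judge_text!r}")
    return min(max(float(match.group(1)), 0.0), 1.0)

# e.g. a reply produced by the custom judge model:
score = parse_judge_score("The answer is mostly right. Score: 0.8")
```

A scorer built this way sends the candidate output plus `JUDGE_PROMPT` to your custom provider and parses the reply into a score.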
Compare custom models
You can run experiments comparing your custom models against standard providers.
Common use cases
Self-hosted models
For self-hosted models (e.g., Ollama, vLLM, or a custom deployment):
- Set the endpoint URL to your self-hosted service
- Choose the appropriate format based on your API compatibility
- Configure any required authentication headers
- Set streaming support based on your implementation
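As a concrete example, a local Ollama instance exposes an OpenAI-compatible API, so a configuration along these lines would work (values are illustrative; check the exact path your deployment expects):

```
Provider name: local-ollama
Model name:    llama3
Endpoint URL:  http://localhost:11434/v1/chat/completions
Format:        openai
Flavor:        chat
Headers:       (none needed for a local instance)
```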
Fine-tuned models
For fine-tuned versions of existing models:
- Use the same format as the base model
- Set the model name to your fine-tuned model identifier
- Configure the endpoint URL if using a custom deployment
- Add any provider-specific headers for accessing fine-tuned models
Proprietary AI services
For proprietary or enterprise AI services:
- Configure the endpoint URL provided by your AI service
- Set up authentication headers as required
- Choose the format that best matches your service's API
- Enable or disable streaming based on service capabilities
Test your custom provider configuration in a Braintrust Playground before running large-scale evaluations to ensure everything is working correctly.