## Why deploy with Braintrust

Deploying through Braintrust gives you:

- Unified API: Call any AI provider (OpenAI, Anthropic, Google, AWS, etc.) through a single interface
- Automatic observability: Every production request is logged and traceable
- Caching: Reduce costs and latency with built-in response caching
- Version control: Deploy prompts and functions with full version history
- Environment management: Separate dev, staging, and production configurations
- Fallbacks: Automatically retry failed requests with backup providers
## Deploy prompts and functions
Prompts and functions created in Braintrust can be called from your application code. Changes to prompts in the UI immediately affect production behavior, enabling rapid iteration without redeployment.
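As a minimal sketch of calling a deployed prompt by slug from application code: the `invoke` helper, the project name, and the `summarize-v1` slug below are assumptions for illustration, not values from this page; check the Braintrust SDK reference for the exact signature.

```python
def invoke_args(project_name: str, slug: str, input: dict) -> dict:
    # Pure helper: the keyword arguments we would pass to the (assumed)
    # SDK call, kept separate so the shape is easy to inspect and test.
    return {"project_name": project_name, "slug": slug, "input": input}

def summarize(text: str) -> str:
    # `braintrust.invoke` is an assumption about the SDK surface; imported
    # lazily so the helper above works without the SDK installed.
    from braintrust import invoke
    return invoke(**invoke_args("my-project", "summarize-v1", {"text": text}))
```

Because the prompt is referenced by slug, editing it in the UI changes what this call returns without a code deploy.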
## Use the AI Proxy

The AI Proxy provides a unified interface to call any AI provider through Braintrust. Use the OpenAI SDK with any provider:
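A minimal sketch under assumptions: the proxy endpoint URL and the model names in the comments are illustrative and should be verified against the proxy docs.

```python
import os

# The proxy speaks the OpenAI wire protocol, so the standard OpenAI SDK
# works unchanged; only the base URL and the API key differ.
PROXY_URL = "https://api.braintrust.dev/v1/proxy"  # assumed endpoint

def make_client():
    from openai import OpenAI  # imported lazily; requires the openai package
    return OpenAI(
        base_url=PROXY_URL,
        api_key=os.environ["BRAINTRUST_API_KEY"],  # Braintrust key, not a provider key
    )

def ask(model: str, question: str) -> str:
    # Swap `model` (e.g. "gpt-4o-mini" or an Anthropic model name) to route
    # the same call through a different provider.
    resp = make_client().chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content
```

The application code never changes per provider; only the `model` string does.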
## Manage environments

Environments separate your development, staging, and production configurations. Set different prompts, functions, or API keys per environment:
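One way to sketch this, with the prompt slugs, model names, and `APP_ENV` variable as illustrative assumptions:

```python
import os

# Per-environment settings: a different prompt slug and model for
# development, staging, and production.
CONFIGS = {
    "development": {"prompt_slug": "summarize-dev", "model": "gpt-4o-mini"},
    "staging": {"prompt_slug": "summarize-staging", "model": "gpt-4o-mini"},
    "production": {"prompt_slug": "summarize-v1", "model": "gpt-4o"},
}

def active_config(env: str = "") -> dict:
    # Resolve the environment from an explicit argument or APP_ENV,
    # defaulting to development.
    name = env or os.environ.get("APP_ENV", "development")
    return CONFIGS[name]
```

Promoting a change then means updating the staging entry, verifying, and copying it to production, rather than editing code.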
## Monitor deployments

Every production request flows through the same observability system you used during development. View logs, filter by errors, score online, and create dashboards to track performance. Set up alerts to notify you when error rates spike or latency exceeds thresholds.
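The alert conditions described above can be sketched as a check over a window of request logs; the record shape and the thresholds here are illustrative assumptions, not Braintrust's alerting API.

```python
def should_alert(logs, max_error_rate=0.05, max_p95_ms=2000.0):
    # Each log record is assumed to look like {"error": bool, "latency_ms": float}.
    # Flag the window when the error rate spikes or p95 latency crosses a threshold.
    if not logs:
        return False
    error_rate = sum(1 for r in logs if r["error"]) / len(logs)
    latencies = sorted(r["latency_ms"] for r in logs)
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    return error_rate > max_error_rate or p95 > max_p95_ms
```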
## Use the API

Access all Braintrust functionality through the Data API. Export logs, create datasets, run experiments, and manage prompts programmatically:
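A hedged sketch of a REST call: the base URL, the project-listing endpoint, the bearer-token scheme, and the response shape are assumptions to check against the API reference.

```python
import json
import os
import urllib.request

API_URL = "https://api.braintrust.dev/v1"  # assumed base URL

def auth_headers(api_key: str) -> dict:
    # The API is assumed to use a bearer token.
    return {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}

def list_projects() -> list:
    # Requires BRAINTRUST_API_KEY in the environment; the "objects" key
    # in the response is an assumption about the payload shape.
    req = urllib.request.Request(
        f"{API_URL}/project",
        headers=auth_headers(os.environ["BRAINTRUST_API_KEY"]),
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["objects"]
```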
## Next steps

- Use the AI Proxy: Call any AI provider through a unified interface
- Deploy prompts: Ship and version prompts in production
- Deploy functions: Deploy tools, scorers, and agents
- Monitor deployments: Track production performance and errors
- Manage environments: Separate dev, staging, and production
- Use the API: Access Braintrust programmatically