Add provider API keys
Configure API keys for standard providers:
- Navigate to Settings > Organization > AI providers.
- Click the provider you want to configure.
- Enter your API key.
- Click Save.
Supported providers
Standard providers include:
- OpenAI (GPT-4o, GPT-4o-mini, o4-mini, etc.).
- Anthropic (Claude 4 Sonnet, Claude 3.5 Sonnet, etc.).
- Google (Gemini 2.5 Flash, Gemini 2.5 Pro, etc.).
- AWS Bedrock (Claude, Llama, Mistral models).
- Azure OpenAI Service.
- Third-party providers (Together AI, Fireworks, Groq, Replicate, etc.).
Add custom providers
Configure custom models or private endpoints (a usage sketch follows these steps):
- Navigate to Settings > Organization > AI providers.
- Click + Custom provider.
- Enter provider details:
  - Name: Display name.
  - Model name: Identifier for code (e.g., `gpt-4o-acme`).
  - Endpoint URL: API endpoint.
  - Headers: Optional authentication headers.
  - Flavor: Chat or completion.
  - Format: OpenAI, Anthropic, Google, Window, or JS.
  - Streaming: Whether the endpoint supports streaming.
  - Multimodal: Whether the model accepts images.
  - Input cost: Price per million input tokens.
  - Output cost: Price per million output tokens.
- Click Save.
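Once saved, the model is callable through the proxy by its configured model name. A minimal sketch using the OpenAI Node SDK, assuming an OpenAI-compatible proxy endpoint; the base URL and `PROXY_API_KEY` environment variable are placeholders for your deployment:

```ts
import OpenAI from "openai";

// Point an OpenAI-compatible client at the proxy endpoint.
// Base URL and key are placeholders; substitute your own values.
const client = new OpenAI({
  baseURL: "https://YOUR_API_URL/v1/proxy",
  apiKey: process.env.PROXY_API_KEY,
});

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o-acme", // must match the configured Model name exactly
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(response.choices[0].message.content);
}

main();
```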
Templated headers
Headers support Mustache templates with these variables:
- `{{email}}`: Email of the user making the request.
- `{{model}}`: Model name being requested.
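For example, a hypothetical Headers configuration might forward both values to your endpoint (the header names here are illustrative, not required):

```
Authorization: Bearer YOUR_GATEWAY_TOKEN
x-requesting-user: {{email}}
x-requested-model: {{model}}
```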
Non-streaming endpoints
If your endpoint doesn’t support streaming, disable Endpoint supports streaming. The proxy will convert responses to streaming format for compatibility with playgrounds.
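Because the proxy performs this conversion, clients can still request streamed output from a non-streaming endpoint. A sketch under the same placeholder assumptions as above:

```ts
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://YOUR_API_URL/v1/proxy", // placeholder proxy URL
  apiKey: process.env.PROXY_API_KEY,
});

async function main() {
  // stream: true still works when the underlying endpoint is non-streaming;
  // the proxy converts the complete response into streaming chunks.
  const stream = await client.chat.completions.create({
    model: "gpt-4o-acme", // placeholder custom model name
    messages: [{ role: "user", content: "Write a short haiku." }],
    stream: true,
  });

  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}

main();
```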
Load balance across providers
Configure multiple API keys for the same model to automatically load balance requests:
- Add your primary provider key (e.g., OpenAI).
- Add Azure OpenAI as a custom provider for the same models.
- The proxy automatically distributes requests across both.
Load balancing provides:
- Resilience if one provider is down.
- Higher effective rate limits.
- Geographic distribution.
Set up for self-hosted
For self-hosted deployments, configure proxy URLs (example values follow the steps below):
- Navigate to Settings > Organization > API URL.
- Enter your URLs:
  - API URL: Main API endpoint.
  - Proxy URL: AI Proxy endpoint (usually `<API_URL>/v1/proxy`).
  - Realtime URL: Realtime API endpoint.
- Click Save.
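For example, under a hypothetical self-hosted domain:

```
API URL:   https://api.your-company.com
Proxy URL: https://api.your-company.com/v1/proxy
```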
Access the proxy
Users and applications access the proxy through the configured endpoints.
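For example, any OpenAI-compatible client or plain HTTP request can target the proxy URL. A minimal sketch with fetch; the URL, key, and model are placeholders, and the `/chat/completions` path assumes an OpenAI-format endpoint:

```ts
// Placeholders: substitute your configured proxy URL and API key.
async function callProxy() {
  const res = await fetch("https://YOUR_API_URL/v1/proxy/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.PROXY_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o", // any model configured for your organization
      messages: [{ role: "user", content: "Ping" }],
    }),
  });
  const data = await res.json();
  console.log(data.choices[0].message.content);
}

callProxy();
```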
Monitor proxy usage
Track proxy usage across your organization:
- Create a project for proxy logs.
- Enable logging by setting the `x-bt-parent` header when calling the proxy (see the sketch after this list).
- View logs in the Logs page.
- Create dashboards to track usage, costs, and errors.
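A sketch of attaching every request to a project via the `x-bt-parent` header; the base URL is a placeholder, and the parent value format shown is an assumption that depends on your setup:

```ts
import OpenAI from "openai";

// defaultHeaders adds x-bt-parent to every request made with this client,
// routing proxy logs to your project. The value format here is a guess;
// use the identifier your organization's setup expects.
const client = new OpenAI({
  baseURL: "https://YOUR_API_URL/v1/proxy", // placeholder proxy URL
  apiKey: process.env.PROXY_API_KEY,
  defaultHeaders: { "x-bt-parent": "project_id:YOUR_PROJECT_ID" },
});
```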
Next steps
- Use the AI Proxy for detailed usage instructions
- Manage organizations to configure AI providers
- Deploy prompts that use the proxy
- Monitor deployments to track proxy usage