Together
Together AI provides access to a wide range of open-source language models including Llama, Mixtral, Code Llama, and other state-of-the-art models. Braintrust integrates seamlessly with Together through direct API access, wrapper functions for automatic tracing, and proxy support.
Setup
To use Together models, configure your Together API key in Braintrust.
- Get a Together API key from Together AI Console
- Add the Together API key to your organization's AI providers
- Set the Together API key and your Braintrust API key as environment variables
API keys are encrypted using 256-bit AES-GCM encryption and are not stored or logged by Braintrust.
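For example, in a terminal session (placeholder values shown; `TOGETHER_API_KEY` and `BRAINTRUST_API_KEY` are the variable names assumed by the examples below):

```shell
# Substitute your real keys; never commit these to source control.
export TOGETHER_API_KEY=<your-together-api-key>
export BRAINTRUST_API_KEY=<your-braintrust-api-key>
```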
Use Together with Braintrust AI proxy
The Braintrust AI Proxy allows you to access Together models through a unified OpenAI-compatible interface.
Install the `braintrust` and `openai` packages.
pnpm add braintrust openai
Then, initialize the client and make a request to a Together model via the Braintrust AI Proxy.
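A minimal sketch, assuming the proxy endpoint `https://api.braintrust.dev/v1/proxy` and a Together model slug (swap in any model from the table below):

```typescript
import { OpenAI } from "openai";

// Point the standard OpenAI client at the Braintrust AI Proxy.
// Your Braintrust API key authenticates the request; the proxy
// routes it to Together based on the model slug.
const client = new OpenAI({
  baseURL: "https://api.braintrust.dev/v1/proxy",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

async function main() {
  const response = await client.chat.completions.create({
    model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo", // any Together model slug
    messages: [{ role: "user", content: "What is a proxy?" }],
  });
  console.log(response.choices[0]?.message.content);
}

main();
```

Because the interface is OpenAI-compatible, switching models is a one-line change to the `model` field.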
Trace logs with Together
Trace your Together LLM calls for observability and monitoring.
When using the Braintrust AI Proxy, API calls are automatically logged to the specified project.
The Braintrust AI Proxy is not required to trace Together API calls. For more control, learn how to customize traces.
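One way to trace calls without the proxy is to wrap an OpenAI-compatible client pointed directly at Together with Braintrust's `wrapOpenAI` helper. A sketch, assuming Together's OpenAI-compatible base URL `https://api.together.xyz/v1` (verify against Together's docs) and an illustrative project name:

```typescript
import { OpenAI } from "openai";
import { initLogger, wrapOpenAI } from "braintrust";

// Logs spans to the named Braintrust project.
const logger = initLogger({ projectName: "My Together project" });

// wrapOpenAI instruments the client so each chat completion call
// is recorded as a trace automatically.
const client = wrapOpenAI(
  new OpenAI({
    baseURL: "https://api.together.xyz/v1",
    apiKey: process.env.TOGETHER_API_KEY,
  }),
);

const response = await client.chat.completions.create({
  model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
  messages: [{ role: "user", content: "Summarize tracing in one sentence." }],
});
console.log(response.choices[0]?.message.content);
```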
Evaluate with Together
Evaluations distill the non-deterministic outputs of Together models into an effective feedback loop that enables you to ship more reliable, higher quality products. A Braintrust `Eval` is a simple function composed of a dataset of user inputs, a task, and a set of scorers. To learn more about evaluations, see the Experiments guide.
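As a sketch, here is an `Eval` over two hand-written cases, scored with `Levenshtein` from the `autoevals` package (installed separately; the project name, dataset, and model choice are illustrative):

```typescript
import { Eval } from "braintrust";
import { Levenshtein } from "autoevals";
import { OpenAI } from "openai";

// Route model calls through the Braintrust AI Proxy.
const client = new OpenAI({
  baseURL: "https://api.braintrust.dev/v1/proxy",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

Eval("Together eval demo", {
  // Dataset: user inputs paired with expected outputs.
  data: () => [
    { input: "What is the capital of France?", expected: "Paris" },
    { input: "What is 2 + 2? Answer with the number only.", expected: "4" },
  ],
  // Task: call a Together model for each input.
  task: async (input) => {
    const response = await client.chat.completions.create({
      model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
      messages: [{ role: "user", content: input }],
    });
    return response.choices[0]?.message.content ?? "";
  },
  // Scorers: compare each output to the expected answer.
  scores: [Levenshtein],
});
```

Run the file with the `braintrust eval` CLI command to record an experiment in your project.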
To learn more about tool use, multimodal support, attachments, and masking sensitive data with Together, visit the customize traces guide.
Models and capabilities
Model | Multimodal | Reasoning | Max input | Max output | Input $/1M | Output $/1M |
---|---|---|---|---|---|---|
openai/gpt-oss-120b | ﹣ | ﹣ | ﹣ | ﹣ | $0.15 | $0.60 |
meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 | ﹣ | ﹣ | ﹣ | ﹣ | ﹣ | ﹣ |
meta-llama/Llama-4-Scout-17B-16E-Instruct | ﹣ | ﹣ | ﹣ | ﹣ | ﹣ | ﹣ |
meta-llama/Llama-3.3-70B-Instruct-Turbo | ﹣ | ﹣ | ﹣ | ﹣ | $0.88 | $0.88 |
meta-llama/Llama-3.3-70B-Instruct-Turbo-Free | ﹣ | ﹣ | ﹣ | ﹣ | $0 | $0 |
meta-llama/Llama-3.2-90B-Vision-Instruct-Turbo | ﹣ | ﹣ | ﹣ | ﹣ | $1.20 | $1.20 |
meta-llama/Llama-3.2-11B-Vision-Instruct-Turbo | ﹣ | ﹣ | ﹣ | ﹣ | $0.18 | $0.18 |
meta-llama/Llama-Vision-Free | ﹣ | ﹣ | ﹣ | ﹣ | $0 | $0 |
meta-llama/Llama-3.2-3B-Instruct-Turbo | ﹣ | ﹣ | ﹣ | ﹣ | $0.06 | $0.06 |
meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo | ﹣ | ﹣ | ﹣ | ﹣ | $3.50 | $3.50 |
meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo | ﹣ | ﹣ | ﹣ | ﹣ | $0.88 | $0.88 |
meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo | ﹣ | ﹣ | ﹣ | ﹣ | $0.18 | $0.18 |
meta-llama/Llama-3-70b-chat-hf | ﹣ | ﹣ | ﹣ | ﹣ | $0.90 | $0.90 |
meta-llama/Meta-Llama-3-70B-Instruct-Turbo | ﹣ | ﹣ | ﹣ | ﹣ | $0.88 | $0.88 |
meta-llama/Meta-Llama-3-70B-Instruct-Lite | ﹣ | ﹣ | ﹣ | ﹣ | $0.54 | $0.54 |
meta-llama/Llama-3-8b-chat-hf | ﹣ | ﹣ | ﹣ | ﹣ | $0.20 | $0.20 |
meta-llama/Meta-Llama-3-8B-Instruct-Turbo | ﹣ | ﹣ | ﹣ | ﹣ | $0.18 | $0.18 |
meta-llama/Meta-Llama-3-8B-Instruct-Lite | ﹣ | ﹣ | ﹣ | ﹣ | $0.10 | $0.10 |
google/gemma-2-27b-it | ﹣ | ﹣ | 8,192 | ﹣ | $0.35 | $1.05 |
google/gemma-2-9b-it | ﹣ | ﹣ | 8,192 | ﹣ | $0.35 | $1.05 |
google/gemma-2b-it | ﹣ | ﹣ | ﹣ | ﹣ | $0.10 | $0.10 |
mistralai/Mistral-Small-24B-Instruct-2501 | ﹣ | ﹣ | ﹣ | ﹣ | $0.80 | $0.80 |
mistralai/Mistral-7B-Instruct-v0.3 | ﹣ | ﹣ | ﹣ | ﹣ | $0.20 | $0.20 |
mistralai/Mistral-7B-Instruct-v0.2 | ﹣ | ﹣ | ﹣ | ﹣ | $0.20 | $0.20 |
mistralai/Mistral-7B-Instruct-v0.1 | ﹣ | ﹣ | ﹣ | ﹣ | $0.20 | $0.20 |
mistralai/Mixtral-8x22B-Instruct-v0.1 | ﹣ | ﹣ | ﹣ | ﹣ | $1.20 | $1.20 |
mistralai/Mixtral-8x7B-Instruct-v0.1 | ﹣ | ﹣ | ﹣ | ﹣ | $0.60 | $0.60 |
deepseek-ai/DeepSeek-V3 | ﹣ | ﹣ | ﹣ | ﹣ | $1.25 | $1.25 |
deepseek-ai/DeepSeek-R1 | ﹣ | ﹣ | ﹣ | ﹣ | $7.00 | $7.00 |
deepseek-ai/DeepSeek-R1-Distill-Llama-70B | ﹣ | ﹣ | ﹣ | ﹣ | $2.00 | $2.00 |
deepseek-ai/DeepSeek-R1-Distill-Llama-70B-Free | ﹣ | ﹣ | ﹣ | ﹣ | $0 | $0 |
deepseek-ai/DeepSeek-R1-Distill-Qwen-14B | ﹣ | ﹣ | ﹣ | ﹣ | $1.60 | $1.60 |
deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | ﹣ | ﹣ | ﹣ | ﹣ | $0.18 | $0.18 |
deepseek-ai/deepseek-llm-67b-chat | ﹣ | ﹣ | ﹣ | ﹣ | $0.90 | $0.90 |
Qwen/Qwen2.5-72B-Instruct-Turbo | ﹣ | ﹣ | ﹣ | ﹣ | $1.20 | $1.20 |
Qwen/Qwen2.5-7B-Instruct-Turbo | ﹣ | ﹣ | ﹣ | ﹣ | $0.30 | $0.30 |
Qwen/Qwen2.5-Coder-32B-Instruct | ﹣ | ﹣ | ﹣ | ﹣ | $0.80 | $0.80 |
Qwen/QwQ-32B | ﹣ | ﹣ | ﹣ | ﹣ | $0.80 | $0.80 |
Qwen/Qwen2-VL-72B-Instruct | ﹣ | ﹣ | ﹣ | ﹣ | $1.20 | $1.20 |
Qwen/Qwen2-72B-Instruct | ﹣ | ﹣ | ﹣ | ﹣ | $0.90 | $0.90 |
nvidia/Llama-3.1-Nemotron-70B-Instruct-HF | ﹣ | ﹣ | ﹣ | ﹣ | $0.88 | $0.88 |
microsoft/WizardLM-2-8x22B | ﹣ | ﹣ | ﹣ | ﹣ | $1.20 | $1.20 |
databricks/dbrx-instruct | ﹣ | ﹣ | ﹣ | ﹣ | $1.20 | $1.20 |
NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO | ﹣ | ﹣ | ﹣ | ﹣ | $0.60 | $0.60 |
Gryphe/MythoMax-L2-13b | ﹣ | ﹣ | ﹣ | ﹣ | $0.30 | $0.30 |
Gryphe/MythoMax-L2-13b-Lite | ﹣ | ﹣ | ﹣ | ﹣ | $0.10 | $0.10 |
meta-llama/Meta-Llama-3-70B | ﹣ | ﹣ | ﹣ | ﹣ | $0.90 | $0.90 |
meta-llama/Llama-3-8b-hf | ﹣ | ﹣ | ﹣ | ﹣ | $0.20 | $0.20 |
meta-llama/Llama-2-70b-chat-hf | ﹣ | ﹣ | ﹣ | ﹣ | $0.90 | $0.90 |
deepseek-ai/deepseek-coder-33b-instruct | ﹣ | ﹣ | ﹣ | ﹣ | $0.80 | $0.80 |
Qwen/QwQ-32B-Preview | ﹣ | ﹣ | ﹣ | ﹣ | $0.80 | $0.80 |
NousResearch/Nous-Hermes-2-Yi-34B | ﹣ | ﹣ | ﹣ | ﹣ | $0.80 | $0.80 |
mistralai/mixtral-8x7b-32kseqlen | ﹣ | ﹣ | ﹣ | ﹣ | $0.06 | $0.06 |
mistralai/Mixtral-8x7B-Instruct-v0.1-json | ﹣ | ﹣ | ﹣ | ﹣ | $0.60 | $0.60 |
mistralai/Mixtral-8x22B | ﹣ | ﹣ | ﹣ | ﹣ | $1.08 | $1.08 |