Braintrust integrates with Anthropic through the `wrapAnthropic` wrapper functions for automatic tracing, and through proxy support.
Setup
To use Anthropic with Braintrust, you'll need an Anthropic API key.
- Visit Anthropic's Console and create a new API key
- Add the Anthropic API key to your organization’s AI providers or to a project’s AI providers
- Set the Anthropic API key and your Braintrust API key as environment variables
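For example, a `.env` file might look like the following. The variable names `ANTHROPIC_API_KEY` and `BRAINTRUST_API_KEY` are the ones the SDKs read by default; the values are placeholders.

```shell
# .env — values are placeholders; replace with your real keys
ANTHROPIC_API_KEY=<your-anthropic-api-key>
BRAINTRUST_API_KEY=<your-braintrust-api-key>
```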
API keys are encrypted at rest using transparent data encryption with a unique 256-bit key and nonce.
Install the `braintrust` and `@anthropic-ai/sdk` packages.
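With npm, for example:

```shell
npm install braintrust @anthropic-ai/sdk
```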
Trace with Anthropic
Trace your Anthropic LLM calls for observability and monitoring.
Trace automatically
Braintrust provides automatic tracing for Anthropic API calls, handling streaming, metric collection (including cached tokens), and other details.
- TypeScript & Python: Use the `wrapAnthropic`/`wrap_anthropic` wrapper functions
- Go: Use the tracing middleware with the Anthropic client
- Ruby: Use `Braintrust::Trace::Anthropic.wrap` to wrap the Anthropic client
- Java: Use the tracing interceptor with the Anthropic client
Evaluate with Anthropic
Evaluations distill the non-deterministic outputs of Anthropic models into an effective feedback loop that enables you to ship more reliable, higher-quality products. The Braintrust `Eval` function is composed of a dataset of user inputs, a task, and a set of scorers. To learn more about evaluations, see the Experiments guide.
Basic Anthropic eval setup
Evaluate the outputs of Anthropic models with Braintrust.
Use Anthropic as an LLM judge
You can use Anthropic models to score the outputs of other AI systems. This example uses the `LLMClassifierFromSpec` scorer to score the relevance of an AI system's outputs.
Install the autoevals package to use the LLMClassifierFromSpec scorer.
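With npm, for example:

```shell
npm install autoevals
```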
This example defines a relevanceScorer with the `LLMClassifierFromSpec` scorer to score the relevance of the output. You can then include `relevanceScorer` as a scorer in your `Eval` function (see above).