- Inline in SDK code: Define scorers directly in your evaluation scripts for local development or application-specific logic.
- Pushed via CLI: Define scorers in TypeScript or Python files and push them to Braintrust for team-wide sharing and automatic evaluation of production logs.
- Created in UI: Build scorers in the Braintrust web interface for rapid prototyping and simple configurations.
## Score spans
Span-level scorers evaluate individual operations or outputs. Use them to measure single LLM responses, check specific tool calls, or validate individual outputs. Each matching span receives an independent score. Your prompt template can reference these variables:

- `{{input}}`: The input to your task
- `{{output}}`: The output from your task
- `{{expected}}`: The expected output (optional)
- `{{metadata}}`: Custom metadata from the test case
- SDK
- CLI
- UI
Use scorers inline in your evaluation code:
llm_scorer.eval.ts
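A minimal sketch of an inline LLM-as-a-judge scorer, assuming the `LLMClassifierFromTemplate` helper from the `autoevals` package; the project name, data, task, and prompt wording here are placeholders:

```typescript
import { Eval } from "braintrust";
import { LLMClassifierFromTemplate } from "autoevals";

// Hypothetical judge: asks an LLM whether the output answers the question.
// The {{input}} and {{output}} variables are filled in per test case.
const answersQuestion = LLMClassifierFromTemplate({
  name: "Answers question",
  promptTemplate: `Question: {{input}}
Response: {{output}}

Does the response directly answer the question? Answer Y or N.`,
  choiceScores: { Y: 1, N: 0 },
  useCoT: true,
});

Eval("my-project", {
  data: () => [
    { input: "What is the capital of France?", expected: "Paris" },
  ],
  task: async (input: string) => {
    // Placeholder task; in practice this calls your LLM application.
    return "Paris is the capital of France.";
  },
  scores: [answersQuestion],
});
```

Each score the classifier produces is attached to the matching span in the eval run.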
## Score traces
Trace-level scorers evaluate entire execution traces, including all spans and conversation history. Use these for assessing multi-turn conversation quality, overall workflow completion, or when your scorer needs access to the full execution context. The scorer runs once per trace. Your prompt template can reference the `{{thread}}` variable, which provides the full conversation formatted as human-readable text. `input`, `output`, `expected`, and `metadata` are automatically populated from the root span of the trace.
Trace-level scoring requires TypeScript SDK v2.2.1+, Python SDK v0.5.6+, or Ruby SDK v0.2.1+.
- SDK
- CLI
- UI
Use scorers inline in your evaluation code:
trace_llm_scorer.eval.ts
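A sketch of a trace-level judge prompt, again assuming the `autoevals` `LLMClassifierFromTemplate` helper; the prompt wording is a placeholder, and the mechanism for attaching the scorer at trace level (rather than span level) depends on your SDK version and is not shown here:

```typescript
import { LLMClassifierFromTemplate } from "autoevals";

// Hypothetical trace-level judge: rates whether the conversation as a
// whole resolved the user's request. {{thread}} expands to the full
// conversation formatted as human-readable text.
const conversationResolved = LLMClassifierFromTemplate({
  name: "Conversation resolved",
  promptTemplate: `Here is a full conversation between a user and an assistant:

{{thread}}

Did the assistant resolve the user's request by the end of the conversation? Answer Y or N.`,
  choiceScores: { Y: 1, N: 0 },
  useCoT: true,
});
```

Because the scorer runs once per trace, a single score covers the entire conversation rather than any individual turn.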
## Set pass thresholds
Define minimum acceptable scores to automatically mark results as passing or failing. When configured, scores at or above the threshold are marked as passing (green highlighting with a checkmark), while scores below it are marked as failing (red highlighting).

- CLI
- UI
Add `__pass_threshold` to the scorer's metadata, with a value between 0 and 1.

## Next steps
- Autoevals for pre-built scorers you can drop in without writing a prompt
- Custom code for deterministic logic or when you need full control
- Run evaluations using your scorers
- Score production logs with online scoring rules