Once you are observing your application in production, the next step is to annotate and curate that data into evaluation datasets. This process turns raw production logs into high-quality test cases that help you systematically improve your application.

Why annotate

Annotation creates the ground truth data needed for evaluation. By collecting feedback, adding labels, and curating examples from production, you build datasets that:
  • Represent real user interactions and edge cases
  • Include expected outputs and quality assessments
  • Enable systematic testing and comparison
  • Support automated and human evaluation
Braintrust integrates annotation seamlessly with logs and experiments, making it easy to capture feedback and build datasets without context switching.

Build datasets

Datasets are versioned collections of test cases that you use to run evaluations. Each record contains:
  • Input: The data sent to your application
  • Expected: The ideal output (optional but recommended)
  • Metadata: Tags, user IDs, or other contextual information
Create datasets from any of the following; a short SDK sketch follows this list:
  • Production logs with interesting patterns
  • User feedback (thumbs up/down, corrections)
  • Manual curation by subject matter experts
  • Generated examples from Loop
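
The following is a minimal sketch of that workflow using the Braintrust Python SDK, assuming the `init_dataset` / `insert` / `flush` interface; the project name, dataset name, and record contents are placeholders.

```python
import braintrust

# Open (or create) a dataset in your project. Names here are placeholders.
dataset = braintrust.init_dataset(project="Support Bot", name="curated-production-cases")

# Each record mirrors the fields above: input, optional expected, and metadata.
record_id = dataset.insert(
    input={"question": "How do I reset my password?"},
    expected={"answer": "Go to Settings > Security and click 'Reset password'."},
    metadata={"source": "production-log", "user_segment": "free-tier"},
)

# Ensure buffered records are written before the script exits.
dataset.flush()
```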

Gather human feedback

Human review provides qualitative assessments that complement automated scoring. Configure review scores in your project to collect:
  • Continuous scores: Numeric ratings with slider controls (0-100%)
  • Categorical scores: Predefined options with assigned values
  • Expected values: Corrections showing what the output should be
  • Comments: Free-form feedback and context
Use focused review mode to efficiently evaluate large batches of logs or experiments with keyboard navigation.
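
Feedback captured in the review UI needs no code, but if you collect end-user signals (thumbs up/down, corrections) in your own application, you can attach them to an existing log programmatically. This is a sketch assuming the logger's `log_feedback` call and its `id`, `scores`, `expected`, and `comment` parameters; the span ID, score name, and values are placeholders.

```python
import braintrust

logger = braintrust.init_logger(project="Support Bot")

# ID of the logged span you are annotating, captured when the request
# was originally logged. Placeholder value here.
span_id = "..."

logger.log_feedback(
    id=span_id,
    scores={"user_rating": 0.75},  # continuous score, e.g. a thumbs up/down mapped to 1/0
    expected="Go to Settings > Security and click 'Reset password'.",  # correction
    comment="User said the original answer pointed to the wrong menu.",
)
```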

Add labels and corrections

Beyond scores, you can annotate spans with:
  • Tags: Categorize traces for organization and filtering
  • Comments: Provide context or explain issues
  • Expected values: Specify correct outputs
  • Metadata: Add custom fields for analysis
These annotations flow between logs, datasets, and experiments, maintaining context throughout your workflow.
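
These annotations can also be attached when a trace is first logged, so curated records arrive with context already in place. The sketch below assumes the logger's `log` call accepts `tags`, `metadata`, and `expected` on the top-level span; all values shown are placeholders.

```python
import braintrust

logger = braintrust.init_logger(project="Support Bot")

# Annotate the trace as it is logged: tags for filtering, metadata for
# analysis, and an expected value when the correct output is known.
logger.log(
    input={"question": "How do I cancel my subscription?"},
    output={"answer": "Open Billing and choose 'Cancel plan'."},
    expected={"answer": "Open Billing > Plan and choose 'Cancel plan'."},
    tags=["billing", "needs-review"],
    metadata={"user_id": "u_123", "channel": "web"},
)
```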

Export data

Extract annotated data for use in:
  • External evaluation frameworks
  • Custom analysis pipelines
  • Reporting and documentation
  • Training data for fine-tuning
Export via the UI or programmatically through the API to integrate with your existing tools and workflows.
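
For programmatic export, one option is to read a dataset through the Python SDK and write it out as JSONL for downstream tools. This sketch assumes that iterating a dataset yields dict-like records with input, expected, and metadata keys; the project, dataset, and file names are placeholders.

```python
import json
import braintrust

dataset = braintrust.init_dataset(project="Support Bot", name="curated-production-cases")

# Write each record as one JSON line for use in external pipelines.
with open("curated-production-cases.jsonl", "w") as f:
    for record in dataset:
        f.write(json.dumps({
            "input": record["input"],
            "expected": record.get("expected"),
            "metadata": record.get("metadata", {}),
        }) + "\n")
```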

Next steps