
Open Loop
Select Loop in the bottom right corner of the Logs page to open the chat window. Loop keeps track of your queries in a queue, so you can ask multiple follow-ups while it’s running. Use the Enter key to interrupt the current operation and execute the next query in the queue.
Configure Loop
Select a model
Change the AI model in the dropdown at the bottom of the Loop chat window. Supported models:
- claude-4.5-sonnet (recommended)
- claude-4.5-haiku
- claude-4.5-opus
- claude-4-sonnet
- claude-4.1-opus
- gpt-5.1
- gpt-5.2
Only models from organization-level AI providers are available to Loop. Administrators can configure these providers for the organization and choose which models Loop can use.
Toggle auto-accept
By default, Loop asks for confirmation before executing certain actions. To enable auto-accept, select settings in your Loop chat window and select Auto-accept edits.
Select data sources
Loop can access different parts of your project. Select add context and search for the data sources you want Loop to query, such as specific datasets or experiments.
Analyze logs
Ask Loop to analyze your logs and provide insights about health, activity trends, errors, performance, and recommendations.
- “What are the most common errors?”
- “What user retention trends do you see?”
- “Find common failure modes”
- “Show me traces where users were frustrated”
- “What patterns do you see in high-latency requests?”
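Loop's analysis draws on whatever your application has already logged to the project. As a rough sketch only, assuming the Braintrust Python SDK with a placeholder project name and illustrative score and metadata fields, logging a trace looks something like this:

```python
import braintrust

# Assumes BRAINTRUST_API_KEY is set in the environment; project name is a placeholder.
logger = braintrust.init_logger(project="my-chatbot")

def answer(question: str) -> str:
    # Stand-in for your real LLM or application logic.
    return f"Echo: {question}"

question = "How do I reset my password?"
output = answer(question)

# Each logged row becomes a trace Loop can inspect for errors, latency,
# score trends, and other patterns. The score and metadata keys here are
# illustrative, not required fields.
logger.log(
    input=question,
    output=output,
    scores={"factuality": 0.9},
    metadata={"user": "john.smith", "latency_ms": 420},
)
```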
Generate filters
Use Loop to create SQL queries from natural language descriptions:
- Select Filter to open the filter editor
- Switch to SQL mode
- Select Generate and describe the filter you want
Example descriptions:
- “Only LLM spans”
- “From user John Smith”
- “Logs from the last 5 days where factuality score is less than 0.5”
- “Traces that took longer than 60 seconds”
Find similar traces
Select rows in the logs table and use Find similar traces. Loop analyzes the selected traces to identify common traits and returns semantically similar traces. This helps you:
- Discover patterns across different user interactions
- Find edge cases with similar characteristics
- Group related issues together
- Build datasets from similar examples
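One way to think about "semantically similar" is nearest neighbors in an embedding space. The sketch below is a conceptual illustration of that idea only, not Loop's actual implementation; the trace IDs and vectors are made-up stand-ins for trace embeddings.

```python
# Conceptual illustration: rank traces by cosine similarity to a selected trace.
# NOT Loop's implementation; embeddings here are fabricated placeholders.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

trace_embeddings = {
    "trace_1": [0.9, 0.1, 0.0],
    "trace_2": [0.8, 0.2, 0.1],
    "trace_3": [0.0, 0.1, 0.9],
}

selected = trace_embeddings["trace_1"]
ranked = sorted(
    ((tid, cosine_similarity(selected, vec)) for tid, vec in trace_embeddings.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranked)  # trace_2 ranks above trace_3 for the selected trace
```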
Generate datasets
Create datasets from your logs based on specific criteria:
- “Create a dataset from the most common inputs in the logs”
- “Generate a dataset from logs with errors”
- “Build a dataset from high-scoring examples”
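If you prefer to build the same kind of dataset programmatically instead of through Loop, a minimal sketch with the Braintrust Python SDK might look like the following; the project name, dataset name, and rows are illustrative assumptions.

```python
import braintrust

# Placeholder project and dataset names.
dataset = braintrust.init_dataset(project="my-chatbot", name="error-cases")

# In practice Loop selects these rows from your logs; here they are hard-coded
# examples standing in for log rows that matched your criteria.
error_rows = [
    {"input": "How do I reset my password?", "output": "Internal error"},
    {"input": "Cancel my subscription", "output": "Timeout after 60s"},
]

for row in error_rows:
    dataset.insert(
        input=row["input"],
        expected=None,  # fill in a corrected or ideal output if you have one
        metadata={"source": "project logs", "reason": "error"},
    )
```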
Generate scorers
Create scorers based on patterns you identify in logs:
- “Generate a code-based scorer based on project logs”
- “Write a scorer that detects the errors I just identified”
- “Create an LLM-as-a-judge scorer for helpfulness based on these logs”
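Roughly speaking, a code-based scorer is a function that receives a log's input and output and returns a score between 0 and 1. The sketch below is a hand-written illustration of the kind of scorer Loop might produce; the error markers are assumptions, not actual Loop output.

```python
# Illustrative code-based scorer: returns 1.0 when the output contains none of
# the previously identified error markers, 0.0 otherwise. The marker list is a
# placeholder for whatever failure modes you found in your logs.
KNOWN_ERROR_MARKERS = ["internal error", "timeout after", "traceback"]

def detects_known_errors(input, output, expected=None):
    text = str(output).lower()
    hits = [marker for marker in KNOWN_ERROR_MARKERS if marker in text]
    return 0.0 if hits else 1.0
```

A scorer like this can then be attached to experiments or used for online scoring on incoming logs.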
Search documentation
Ask Loop to search through Braintrust documentation for relevant information and guidance:
- “How do I use the Braintrust SDK?”
- “What is the difference between a prompt and a scorer?”
- “How do I configure online scoring?”
Next steps
- Build datasets from patterns you identify
- Create scorers based on log analysis
- Run experiments to validate improvements
- Try the Loop cookbook for more examples