Why use SQL?
SQL gives you precise control over your AI application data. You can:
- Filter and search for relevant logs and experiments
- Create consistent, reusable queries for monitoring
- Build automated reporting and analysis pipelines
- Write complex queries to analyze model performance
SQL in Braintrust
Use SQL when filtering logs and experiments, in the SQL sandbox, and programmatically through the Braintrust API.

Filter logs and experiments
Use SQL to filter logs and experiments based on specific criteria. You can filter logs by tags, metadata, or any other relevant fields. Filtering in logs and experiments only supports `WHERE` clauses.
At the top of your experiment or log view, select Filter to open the filter editor and select SQL to switch to SQL mode.
SQL sandbox
To test SQL with autocomplete, validation, and a table of results, use the SQL sandbox in the dashboard. In your project, select SQL sandbox at the bottom of the sidebar.

API access
Access SQL programmatically with the Braintrust API:
- `query` (required): your SQL query string
- `fmt`: response format (`json` or `parquet`, defaults to `json`)
- `tz_offset`: timezone offset in minutes for time-based operations
- `audit_log`: include audit log data
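The `query` parameter carries an ordinary SQL string. For example, a minimal sketch (the project ID is a placeholder):

```sql
-- Sent as the `query` parameter of an API request
SELECT metadata.model, count(1) AS calls
FROM project_logs('my-project-id')
GROUP BY metadata.model
```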
For correct day boundaries, set `tz_offset` to match your timezone. For example, use 480 for US Pacific Standard Time.

Query structure
SQL queries follow a familiar structure that lets you define what data you want, how you want it returned, and how to analyze it. This example returns every log from a project where Factuality is greater than 0.8, sorts by created date descending, and limits the results to 100 (see the sketch after this list).
- `SELECT` / `select:`: choose which fields to retrieve
- `FROM` / `from:`: specify the data source. Has an optional designator for the shape of the data: `spans`, `traces`, `summary`. If not specified, defaults to `spans`
- `WHERE` / `filter:`: define conditions to filter the data
- `sample:`: (BTQL-only) randomly sample a subset of the filtered data (rate- or count-based)
- `ORDER BY` / `sort:`: set the order of results (`ASC`/`DESC` or `asc`/`desc`)
- `LIMIT` / `limit:`: control result size
- `cursor:`: (BTQL-only) enable pagination
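A sketch of the query described above (assumes a "Factuality" scorer and a placeholder project ID):

```sql
SELECT *
FROM project_logs('my-project-id')
WHERE scores.Factuality > 0.8
ORDER BY created DESC
LIMIT 100
```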
BTQL syntax
Braintrust also supports BTQL, an alternative pipe-delimited clause syntax. The parser automatically detects whether your query is SQL or BTQL:
- SQL queries start with `SELECT`, `WITH`, etc. followed by whitespace
- BTQL queries use clause syntax like `select:`, `filter:`, etc.
| SQL Clause | BTQL Clause |
|---|---|
| `SELECT ...` | `select: ...` |
| `FROM table('id', shape => 'traces')` | `from: table('id') traces` |
| `WHERE ...` | `filter: ...` |
| `GROUP BY ...` | `dimensions: ...` |
| `ORDER BY ...` | `sort: ...` |
| `LIMIT n` | `limit: n` |
SQL syntax specifies the shape with a named parameter (e.g., `FROM experiment('id', shape => 'traces')`), while BTQL uses a trailing token (e.g., `from: experiment('id') traces`). Table aliases (e.g., `AS t`) are reserved for future use.

Full-text search: Use the `MATCH` infix operator for full-text search:
- `WHERE input MATCH 'search term'` → `filter: input MATCH 'search term'`
- Multiple columns require `OR`: `WHERE input MATCH 'x' OR output MATCH 'x'` → `filter: input MATCH 'x' OR output MATCH 'x'`
FROM data source options
The FROM clause in SQL specifies the data source for your query.
- `experiment('<experiment_id1>', '<experiment_id2>')`: a specific experiment or list of experiments
- `dataset('<dataset_id1>', '<dataset_id2>')`: a specific dataset or list of datasets
- `prompt('<prompt_id1>', '<prompt_id2>')`: a specific prompt or list of prompts
- `function('<function_id1>', '<function_id2>')`: a specific function or list of functions
- `view('<view_id1>', '<view_id2>')`: a specific saved view or list of saved views
- `project_logs('<project_id1>', '<project_id2>')`: all logs for a specific project or list of projects
- `project_prompts('<project_id1>', '<project_id2>')`: all prompts for a specific project or list of projects
- `project_functions('<project_id1>', '<project_id2>')`: all functions for a specific project or list of projects
- `org_prompts('<org_id1>', '<org_id2>')`: all prompts for a specific organization or list of organizations
- `org_functions('<org_id1>', '<org_id2>')`: all functions for a specific organization or list of organizations
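For example, a sketch that queries two experiments at once (the IDs are placeholders):

```sql
SELECT *
FROM experiment('experiment-id-1', 'experiment-id-2')
LIMIT 50
```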
Retrieve records
When retrieving records with SQL, you can use either `SELECT` or `SELECT ... GROUP BY`. Most features work with either method, but you must use `GROUP BY` if you want to use aggregate functions in your results.
Both retrieval methods work with all data shapes (spans, traces, and summary). Using GROUP BY with the summary shape enables trace-level aggregations.
SELECT
SELECT in SQL lets you choose specific fields, compute values, or use * to retrieve every field.
Implicit aliasing: Multi-part identifiers like `metadata.model` automatically create implicit aliases using their last component (e.g., `model`), which you can use in `WHERE`, `ORDER BY`, and `GROUP BY` clauses when unambiguous. See Field access for details.

You can compute values directly in the `SELECT` clause. This query returns `metadata.model`, whether `metrics.tokens` is greater than 1000, and a quality indicator of either “high” or “low” depending on whether the Factuality score is greater than 0.8.
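A sketch of this query (assumes a "Factuality" scorer; the ternary operator is described under Conditional expressions below):

```sql
SELECT
  metadata.model,
  metrics.tokens > 1000 AS high_token_usage,
  (scores.Factuality > 0.8 ? 'high' : 'low') AS quality
FROM project_logs('my-project-id')
```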
You can also use the `SELECT` clause to transform values and create meaningful aliases for your results. This query extracts the day the log was created, the hour, and a Factuality score rounded to 2 decimal places.
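A sketch; the day and hour extraction helpers below are assumptions, so check the SQL functions reference for the exact names:

```sql
SELECT
  date_trunc('day', created) AS day,    -- assumed date helper
  date_trunc('hour', created) AS hour,  -- assumed date helper
  round(scores.Factuality, 2) AS factuality_rounded
FROM project_logs('my-project-id')
```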
GROUP BY for aggregations
Instead of a simple SELECT, you can use SELECT ... GROUP BY to group and aggregate data. This query returns a row for each distinct model with the day it was created, the total number of calls, the average Factuality score, and the latency percentile.
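A sketch (the day-bucketing helper and the `metrics.duration` field are assumptions):

```sql
SELECT
  metadata.model,
  date_trunc('day', created) AS day,                 -- assumed date helper
  count(1) AS total_calls,
  avg(scores.Factuality) AS avg_factuality,
  percentile(metrics.duration, 0.95) AS p95_latency  -- assumed metric field
FROM project_logs('my-project-id')
GROUP BY metadata.model, day
```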
The following aggregate functions are available:
- `count(expr)`: number of rows
- `count_distinct(expr)`: number of distinct values
- `sum(expr)`: sum of numeric values
- `avg(expr)`: mean (average) of numeric values
- `min(expr)`: minimum value
- `max(expr)`: maximum value
- `percentile(expr, p)`: a percentile where `p` is between 0 and 1
FROM
The FROM clause identifies where the records are coming from. This can be an identifier like project_logs or a function call like experiment("id").
You can add an optional parameter to the FROM clause that defines how the data is returned. The options are spans (default), traces, and summary.
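For example, a sketch that requests trace-shaped results (the experiment ID is a placeholder):

```sql
SELECT *
FROM experiment('my-experiment-id', shape => 'traces')
```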
spans
spans returns individual spans that match the filter criteria. This example returns 10 LLM call spans that took more than 0.2 seconds to produce the first token.
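A sketch; the span-type and time-to-first-token field names are assumptions, so adjust them to your schema:

```sql
SELECT *
FROM project_logs('my-project-id')        -- shape defaults to spans
WHERE span_attributes.type = 'llm'        -- assumed field
  AND metrics.time_to_first_token > 0.2   -- assumed field
LIMIT 10
```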
traces
traces returns all spans from traces that contain at least one matching span. This is useful when you want to see the full context of a specific event or behavior, for example if you want to see all spans in traces where an error occurred.
This example returns all spans from traces in which at least one span had an error.
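A sketch:

```sql
SELECT *
FROM project_logs('my-project-id', shape => 'traces')
WHERE error IS NOT NULL
```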
summary
summary provides trace-level views of your data by aggregating metrics across all spans in a trace. This shape is useful for analyzing overall performance and comparing results across experiments.
The summary shape can be used in two ways:
- Individual trace summaries (using `SELECT`): Returns one row per trace with aggregated span metrics. Use this to see trace-level details. Example: “What are the details of traces with errors?”
- Aggregated trace analytics (using `GROUP BY`): Groups multiple traces and computes statistics. Use this to analyze patterns across many traces. Example: “What’s the average cost per model per day?”
Individual trace summaries
Use `SELECT` with the summary shape to retrieve individual traces with aggregated metrics. This is useful for inspecting specific trace details, debugging issues, or exporting trace-level data.
This example returns 10 summary rows from the project logs for `'my-project-id'`:
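```sql
-- A sketch: one row per trace with aggregated metrics
SELECT *
FROM project_logs('my-project-id', shape => 'summary')
LIMIT 10
```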
Each summary row includes the following fields:
- `scores`: an object with all scores averaged across all spans
- `metrics`: an object with aggregated metrics across all spans
  - `prompt_tokens`: total number of prompt tokens used
  - `completion_tokens`: total number of completion tokens used
  - `prompt_cached_tokens`: total number of cached prompt tokens used
  - `prompt_cache_creation_tokens`: total number of tokens used to create cache entries
  - `total_tokens`: total number of tokens used (prompt + completion)
  - `estimated_cost`: total estimated cost of the trace in US dollars (prompt + completion costs)
  - `llm_calls`: total number of LLM calls
  - `tool_calls`: total number of tool calls
  - `errors`: total number of errors (LLM + tool errors)
  - `llm_errors`: total number of LLM errors
  - `tool_errors`: total number of tool errors
  - `start`: Unix timestamp of the first span start time
  - `end`: Unix timestamp of the last span end time
  - `duration`: maximum duration of any span in seconds. Note: this is not the total trace duration.
  - `llm_duration`: sum of all durations across LLM spans in seconds
  - `time_to_first_token`: the average time to first token across LLM spans in seconds
- `span_type_info`: an object with span type info. Some fields in this object are aggregated across all spans and some reflect attributes from the root span.
  - `cached`: true only if all LLM spans were cached
  - `has_error`: true if any span had an error
Summary rows also include the `input`, `output`, `expected`, `error`, and `metadata` fields.
Aggregated trace analytics
Use `GROUP BY` with the summary shape to group and aggregate traces. This is useful for analyzing patterns, monitoring performance trends, and comparing metrics across models or time periods.
This example shows how to group traces by model to track performance over time:
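A sketch (the day-bucketing helper is an assumption):

```sql
SELECT
  metadata.model,
  date_trunc('day', created) AS day,   -- assumed date helper
  count(1) AS traces,
  avg(metrics.estimated_cost) AS avg_cost,
  sum(metrics.errors) AS total_errors
FROM project_logs('my-project-id', shape => 'summary')
GROUP BY metadata.model, day
ORDER BY day DESC
```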
WHERE
The WHERE clause lets you specify conditions to narrow down results. It supports a wide range of operators and functions, including complex conditions.
This example WHERE clause only retrieves data where:
- Factuality score is greater than 0.8
- model is “gpt-4”
- tag list includes “triage”
- input contains the word “question” (case-insensitive)
- created date is later than January 1, 2024
- more than 1000 tokens were used, or the trace came from production
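A sketch of these conditions combined (the tag operator and the `metadata.env` production check are assumptions):

```sql
WHERE scores.Factuality > 0.8
  AND metadata.model = 'gpt-4'
  AND tags includes 'triage'       -- assumed array operator
  AND input ILIKE '%question%'
  AND created > '2024-01-01'
  AND (metrics.tokens > 1000 OR metadata.env = 'production')  -- metadata.env is hypothetical
```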
Single span filters
By default, each returned trace includes at least one span that matches all filter conditions. Use single span filters to find traces where different spans match different conditions. This is helpful for finding errors in tagged traces where the error may not be on the root span. Wrap any filter expression with `any_span()` to mark it as a single span filter. This WHERE example returns traces with a “production” tag that encountered an error.
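A sketch (the tag operator is an assumption):

```sql
WHERE any_span(tags includes 'production')  -- assumed array operator
  AND any_span(error IS NOT NULL)
```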
Single span filters are supported for the `traces` and `summary` data shapes.
Pattern matching
SQL supports the `%` wildcard for pattern matching with `LIKE` (case-sensitive) and `ILIKE` (case-insensitive).
The % wildcard matches any sequence of zero or more characters.
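For example (a sketch):

```sql
WHERE input ILIKE '%refund%'       -- case-insensitive match
  AND metadata.model LIKE 'gpt-%'  -- case-sensitive match
```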
Time intervals
SQL supports intervals for time-based operations. This query returns all project logs from `'my-project-id'` that were created in the last day.
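A sketch (the `now()` helper is an assumption):

```sql
SELECT *
FROM project_logs('my-project-id')
WHERE created > now() - interval 1 day  -- now() is an assumed function
```

ORDER BY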
The ORDER BY clause determines the order of results. The options are `DESC` (descending) and `ASC` (ascending). You can sort by a single field, multiple fields, or computed values.
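For example (a sketch):

```sql
SELECT *
FROM project_logs('my-project-id')
ORDER BY scores.Factuality DESC, created ASC
```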
PIVOT and UNPIVOT
PIVOT and UNPIVOT are advanced operations that transform your results for easier analysis and comparison. Both SQL and BTQL syntax support these operations.
PIVOT
PIVOT transforms rows into columns, which makes comparisons easier by creating a column for each distinct value. This is useful when comparing metrics across different categories, models, or time periods.
Structure:
- The pivot column must be a single identifier (e.g., `metadata.model`)
- Must include at least one aggregate measure (e.g., `SUM(value)`, `AVG(score)`)
- Only `IN (ANY)` is supported (explicit value lists, subqueries, `ORDER BY`, and `DEFAULT ON NULL` are not supported)
- The `SELECT` list must include the pivot column, all measures, and all `GROUP BY` columns (or use `SELECT *`)
Output column names combine the pivot value and the measure name: when pivoting on `metadata.model` with a model named “gpt-4” and measure `avg_score`, the column becomes `gpt-4_avg_score`. When using aliases, the alias replaces the measure name in the output column.
Single aggregate - pivot one metric across categories:
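A sketch only; the exact PIVOT grammar here is an assumption built from the rules above, not confirmed syntax:

```sql
-- One output column per model, named like gpt-4_avg_score
SELECT metadata.model, avg_score
FROM project_logs('my-project-id')
PIVOT (avg(scores.Factuality) AS avg_score FOR metadata.model IN (ANY))
```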
PIVOT with GROUP BY for multi-dimensional analysis:
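A sketch with the same assumed grammar (the day helper is also an assumption):

```sql
-- One row per day, one avg_score column per model
SELECT metadata.model, date_trunc('day', created) AS day, avg_score
FROM project_logs('my-project-id')
GROUP BY day
PIVOT (avg(scores.Factuality) AS avg_score FOR metadata.model IN (ANY))
```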
SELECT * - automatically includes all required columns:
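A sketch with the same assumed grammar:

```sql
-- SELECT * expands to the pivot column, measures, and GROUP BY columns
SELECT *
FROM project_logs('my-project-id')
PIVOT (avg(scores.Factuality) AS avg_score FOR metadata.model IN (ANY))
```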
UNPIVOT
UNPIVOT transforms columns into rows, which is useful when you need to analyze arbitrary scores and metrics without specifying each field name in advance. This is helpful when working with dynamic sets of metrics or when you want to normalize data for aggregation.
Key-value unpivot - transforms an object into rows with key-value pairs:
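A sketch; the UNPIVOT grammar here is an assumption:

```sql
-- Expand the scores object into one (name, value) row per score
SELECT id, name, value
FROM project_logs('my-project-id')
UNPIVOT scores AS (name, value)
```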
When using key-value unpivot, the source column must be an object (e.g., `scores`). When using array unpivot with `_`, the source column must be an array (e.g., `tags`).

Array unpivot - uses `_` as the name column:
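```sql
-- Sketch (same assumed grammar): _ takes the name position for arrays
SELECT id, value AS tag
FROM project_logs('my-project-id')
UNPIVOT tags AS (_, value)
```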
Chain multiple UNPIVOT operations to expand multiple columns:
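```sql
-- Sketch (same assumed grammar): expand scores and metrics together
SELECT id, score_name, score_value, metric_name, metric_value
FROM project_logs('my-project-id')
UNPIVOT scores AS (score_name, score_value)
UNPIVOT metrics AS (metric_name, metric_value)
```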
Combine UNPIVOT with GROUP BY to aggregate across unpivoted rows:
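```sql
-- Sketch (same assumed grammar): average every score by name
SELECT name, avg(value) AS avg_value
FROM project_logs('my-project-id')
UNPIVOT scores AS (name, value)
GROUP BY name
```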
LIMIT and cursors
LIMIT
The LIMIT clause controls the size of the result in number of records.
Cursors for pagination
Cursors are only supported in BTQL syntax, not in SQL syntax.
Cursors work together with `limit:`. To implement pagination, run an initial query, then provide the cursor token returned in the results in the `cursor:` clause of follow-on queries. When a cursor has reached the end of the result set, the data array will be empty, and no cursor token will be returned by the query.
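A BTQL sketch (the cursor token is a placeholder copied from a previous response):

```
select: *
| from: project_logs('my-project-id')
| limit: 100
| cursor: '<cursor-token-from-previous-response>'
```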
Cursor-based pagination is not available when a `sort:` clause is specified. If you need sorted results, you'll need to implement offset-based pagination by using the last value from your sort field as a filter in the next query.
Expressions
SQL operators
You can use the following operators in your SQL queries.

SQL functions
You can use the following functions in `SELECT`, `WHERE`, and `GROUP BY` clauses, and in aggregate measures.
Field access
SQL provides flexible ways to access nested data in arrays and objects. Array indices are 0-based, and negative indices count from the end (-1 is the last element).
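For example (a sketch; `metadata.user.id` is a hypothetical nested field):

```sql
SELECT
  metadata.user.id AS user_id,  -- nested object access (hypothetical field)
  tags[0] AS first_tag,         -- 0-based array index
  tags[-1] AS last_tag          -- negative index counts from the end
FROM project_logs('my-project-id')
```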
If a field contains JSON stored as a string, use the `json_extract` function to access values within it. The path parameter is treated as a literal string key name:
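A sketch (`metadata.raw_response` is a hypothetical string field containing JSON):

```sql
SELECT json_extract(metadata.raw_response, 'finish_reason') AS finish_reason
FROM project_logs('my-project-id')
```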
Implicit aliasing
When you reference multi-part identifiers (e.g., `metadata.category`), SQL automatically creates an implicit alias using the last component of the path (e.g., `category`). This allows you to use the short form in your queries when unambiguous.
- Ambiguity prevention: If multiple fields share the same last component (e.g., `metadata.name` and `user.name`), the short form `name` becomes ambiguous and cannot be used. You must use the full path instead.
- Top-level field priority: Top-level fields take precedence over nested fields. If you have both `id` and `metadata.id`, the short form `id` refers to the top-level field.
- Explicit aliases override: When you provide an explicit alias (e.g., `metadata.category AS cat`), the implicit alias is disabled and you must use either the explicit alias or the full path.
- Duplicate alias detection: SQL will detect and reject queries with duplicate aliases in the SELECT list, whether explicit or implicit. For example, `SELECT id, user.number AS id` will raise an error.
Conditional expressions
SQL supports conditional logic using the ternary operator (? :):
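For example (a sketch):

```sql
SELECT (scores.Factuality > 0.8 ? 'pass' : 'fail') AS verdict
FROM project_logs('my-project-id')
```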
Examples
Track token usage
This query helps you monitor token consumption across your application.
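A sketch (the day-bucketing helper is an assumption):

```sql
SELECT
  metadata.model,
  date_trunc('day', created) AS day,  -- assumed date helper
  sum(metrics.total_tokens) AS total_tokens,
  sum(metrics.prompt_tokens) AS prompt_tokens,
  sum(metrics.completion_tokens) AS completion_tokens
FROM project_logs('my-project-id', shape => 'summary')
GROUP BY metadata.model, day
ORDER BY day DESC
```

Monitor model quality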
Track model performance across different versions and configurations.
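A sketch (assumes a "Factuality" scorer; `metadata.version` is a hypothetical field):

```sql
SELECT
  metadata.model,
  metadata.version,  -- hypothetical field
  count(1) AS calls,
  avg(scores.Factuality) AS avg_factuality,
  percentile(scores.Factuality, 0.5) AS median_factuality
FROM project_logs('my-project-id')
GROUP BY metadata.model, metadata.version
```

Analyze errors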
Identify and investigate errors in your application.
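A sketch using the summary shape's aggregated error counts:

```sql
SELECT *
FROM project_logs('my-project-id', shape => 'summary')
WHERE metrics.errors > 0
ORDER BY metrics.start DESC  -- first span start time, newest first
LIMIT 50
```

Analyze latency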
Monitor and optimize response times.
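A sketch using the summary shape's duration metrics:

```sql
SELECT
  metadata.model,
  avg(metrics.duration) AS avg_duration,
  percentile(metrics.duration, 0.95) AS p95_duration,
  avg(metrics.time_to_first_token) AS avg_ttft
FROM project_logs('my-project-id', shape => 'summary')
GROUP BY metadata.model
```

Analyze prompts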
Analyze prompt effectiveness and patterns.
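A sketch (`metadata.prompt_name` is a hypothetical field; adjust to however you record prompt identity):

```sql
SELECT
  metadata.prompt_name,  -- hypothetical field
  count(1) AS uses,
  avg(scores.Factuality) AS avg_factuality
FROM project_logs('my-project-id')
GROUP BY metadata.prompt_name
ORDER BY uses DESC
```

Analyze based on tags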
Use tags to track and analyze specific behaviors.
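A sketch (the tag operator is an assumption; see the operators reference):

```sql
SELECT count(1) AS triage_count, avg(scores.Factuality) AS avg_factuality
FROM project_logs('my-project-id')
WHERE tags includes 'triage'  -- assumed array operator
```

Extract data from JSON strings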
Use `json_extract` to extract values from a JSON string using a key name. This is useful when you have JSON data stored as a string field and need to access specific values within it. The path parameter is treated as a literal key name (not a path expression with traversal).
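A sketch (`metadata.response_json` is a hypothetical string field):

```sql
SELECT json_extract(metadata.response_json, 'finish_reason') AS finish_reason
FROM project_logs('my-project-id')
WHERE json_extract(metadata.response_json, 'finish_reason') = 'length'
```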
json_extract returns null for invalid JSON or missing keys rather than raising an error, making it safe to use in filters and aggregations. The path parameter is a literal key name, not a path expression - characters like dots, brackets, etc. are treated as part of the key name itself.