SQL queries in Braintrust provide a precise, standard syntax for querying Braintrust experiments, logs, and datasets. Use SQL to better analyze and understand your data. Braintrust supports two syntax styles: standard SQL syntax, and the native BTQL syntax with pipe-delimited clauses. The parser automatically detects which style you’re using.

Why use SQL?

SQL gives you precise control over your AI application data. You can:
  • Filter and search for relevant logs and experiments
  • Create consistent, reusable queries for monitoring
  • Build automated reporting and analysis pipelines
  • Write complex queries to analyze model performance

SQL in Braintrust

Use SQL when filtering logs and experiments, in the SQL sandbox, and programmatically through the Braintrust API.

Filter logs and experiments

Use SQL to filter logs and experiments based on specific criteria. You can filter logs by tags, metadata, or any other relevant fields. Filtering in logs and experiments only supports WHERE clauses. At the top of your experiment or log view, select Filter to open the filter editor and select SQL to switch to SQL mode.
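For example, a filter condition like the following (a minimal sketch; the tag name and score threshold are placeholders) narrows a log view to low-scoring traces tagged for triage:
tags INCLUDES 'triage' AND scores.Factuality < 0.5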

SQL sandbox

To test SQL with autocomplete, validation, and a table of results, use the SQL sandbox in the dashboard. In your project, select SQL sandbox at the bottom of the sidebar.

API access

Access SQL programmatically with the Braintrust API:
curl -X POST https://api.braintrust.dev/btql \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d "{\"query\": \"SELECT * FROM project_logs('<YOUR_PROJECT_ID>') WHERE tags INCLUDES 'triage'\"}"
The API accepts these parameters:
  • query (required): your SQL query string
  • fmt: response format (json or parquet, defaults to json)
  • tz_offset: timezone offset in minutes for time-based operations
  • audit_log: include audit log data
For correct day boundaries, set tz_offset to match your timezone. For example, use 480 for US Pacific Standard Time.
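For example, a request that buckets results by day in US Pacific time and asks for JSON output might look like this (a sketch; the query body is illustrative):
curl -X POST https://api.braintrust.dev/btql \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d "{\"query\": \"SELECT day(created) AS date, count(1) AS calls FROM project_logs('<YOUR_PROJECT_ID>') GROUP BY 1\", \"fmt\": \"json\", \"tz_offset\": 480}"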

Query structure

SQL queries follow a familiar structure that lets you define what data you want, how you want it returned, and how to analyze it. This example returns every log from a project where Factuality is greater than 0.8, sorts by created date descending, and limits the results to 100.
SELECT *
FROM project_logs('<PROJECT_ID>', shape => 'spans')
WHERE scores.Factuality > 0.8
ORDER BY created DESC
LIMIT 100
  • SELECT (BTQL select:): choose which fields to retrieve
  • FROM (BTQL from:): specify the data source. Takes an optional shape designator (spans, traces, or summary); defaults to spans
  • WHERE (BTQL filter:): define conditions to filter the data
  • sample: (BTQL only): randomly sample a subset of the filtered data (rate- or count-based)
  • ORDER BY (BTQL sort:): set the order of results (ASC/DESC or asc/desc)
  • LIMIT (BTQL limit:): control result size
  • cursor: (BTQL only): enable pagination

BTQL syntax

Braintrust also supports BTQL, an alternative pipe-delimited clause syntax. The parser automatically detects whether your query is SQL or BTQL:
  • SQL queries start with SELECT, WITH, etc. followed by whitespace
  • BTQL queries use clause syntax like select:, filter:, etc.
SQL Clause                               BTQL Clause
SELECT ...                               select: ...
FROM table('id', shape => 'traces')      from: table('id') traces
WHERE ...                                filter: ...
GROUP BY ...                             dimensions: ...
ORDER BY ...                             sort: ...
LIMIT n                                  limit: n
SQL syntax specifies the shape with a named parameter (e.g., FROM experiment('id', shape => 'traces')), while BTQL uses a trailing token (e.g., from: experiment('id') traces). Table aliases (e.g., AS t) are reserved for future use.
Full-text search: Use MySQL’s MATCH...AGAINST syntax for BTQL’s MATCH operator:
  • WHERE MATCH(input) AGAINST ('search term') → filter: input MATCH 'search term'
  • Multiple columns are OR’d: MATCH(col1, col2) AGAINST ('x') → col1 MATCH 'x' OR col2 MATCH 'x'
Unsupported SQL features: The SQL parser does not support JOIN, subqueries, UNION/INTERSECT/EXCEPT, HAVING, or window functions. Use BTQL’s native syntax for queries that would require these features.

FROM data source options

The FROM clause in SQL specifies the data source for your query.
  • experiment('<experiment_id1>', '<experiment_id2>'): a specific experiment or list of experiments
  • dataset('<dataset_id1>', '<dataset_id2>'): a specific dataset or list of datasets
  • prompt('<prompt_id1>', '<prompt_id2>'): a specific prompt or list of prompts
  • function('<function_id1>', '<function_id2>'): a specific function or list of functions
  • view('<view_id1>', '<view_id2>'): a specific saved view or list of saved views
  • project_logs('<project_id1>', '<project_id2>'): all logs for a specific project or list of projects
  • project_prompts('<project_id1>', '<project_id2>'): all prompts for a specific project or list of projects
  • project_functions('<project_id1>', '<project_id2>'): all functions for a specific project or list of projects
  • org_prompts('<org_id1>', '<org_id2>'): all prompts for a specific organization or list of organizations
  • org_functions('<org_id1>', '<org_id2>'): all functions for a specific organization or list of organizations
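For example, to query two experiments together (a minimal sketch; the IDs are placeholders):
SELECT *
FROM experiment('<EXPERIMENT_ID_1>', '<EXPERIMENT_ID_2>')
LIMIT 10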

Retrieve records

When retrieving records with SQL, you can use either a plain SELECT or SELECT ... GROUP BY. Most query features work with either method, but you must use GROUP BY to apply aggregate functions to your results. Both retrieval methods work with all data shapes (spans, traces, and summary). Using GROUP BY with the summary shape enables trace-level aggregations.

SELECT

SELECT in SQL lets you choose specific fields, compute values, or use * to retrieve every field.
-- Get specific fields
SELECT
  metadata.model AS model,
  scores.Factuality AS score,
  created AS timestamp
FROM project_logs('my-project-id')
SQL allows you to transform data directly in the SELECT clause. This query returns metadata.model, whether metrics.tokens is greater than 1000, and a quality indicator of either “high” or “low” depending on whether or not the Factuality score is greater than 0.8.
SELECT
  -- Simple field access
  metadata.model,

  -- Computed values
  metrics.tokens > 1000 AS is_long_response,

  -- Conditional logic
  (scores.Factuality > 0.8 ? "high" : "low") AS quality
FROM project_logs('my-project-id')
You can also use functions in the SELECT clause to transform values and create meaningful aliases for your results. This query extracts the day the log was created, the hour, and a Factuality score rounded to 2 decimal places.
SELECT
  -- Date/time functions
  day(created) AS date,
  hour(created) AS hour,

  -- Numeric calculations
  round(scores.Factuality, 2) AS rounded_score
FROM project_logs('my-project-id')

GROUP BY for aggregations

Instead of a simple SELECT, you can use SELECT ... GROUP BY to group and aggregate data. This query returns a row for each distinct model with the day it was created, the total number of calls, the average Factuality score, and the latency percentile.
-- Analyze model performance over time
SELECT
  metadata.model AS model,
  day(created) AS date,
  count(1) AS total_calls,
  avg(scores.Factuality) AS avg_score,
  percentile(latency, 0.95) AS p95_latency
FROM project_logs('my-project-id')
GROUP BY metadata.model, day(created)
The available aggregate functions are:
  • count(expr): number of rows
  • count_distinct(expr): number of distinct values
  • sum(expr): sum of numeric values
  • avg(expr): mean (average) of numeric values
  • min(expr): minimum value
  • max(expr): maximum value
  • percentile(expr, p): a percentile where p is between 0 and 1
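A query like the following combines several of these aggregates per model (a minimal sketch; metadata.user_id is an assumed metadata field):
SELECT
  metadata.model AS model,
  count_distinct(metadata.user_id) AS unique_users,  -- assumes a user_id metadata field
  min(created) AS first_seen,
  max(created) AS last_seen
FROM project_logs('my-project-id')
GROUP BY 1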

FROM

The FROM clause identifies where the records are coming from. This can be an identifier like project_logs or a function call like experiment('id'). You can add an optional shape parameter to the FROM clause that defines how the data is returned. The options are spans (default), traces, and summary.

spans

spans returns individual spans that match the filter criteria. This example returns 10 LLM call spans that took more than 0.1 seconds to produce the first token.
SELECT *
FROM project_logs('my-project-id', shape => 'spans')
WHERE span_attributes.type = 'llm' AND metrics.time_to_first_token > 0.1
LIMIT 10
The response is an array of spans. Check out the Extend traces page for more details on span structure.

traces

traces returns all spans from traces that contain at least one matching span. This is useful when you want to see the full context of a specific event or behavior, for example if you want to see all spans in traces where an error occurred. This example returns all spans for a specific trace where one span in the trace had an error.
SELECT *
FROM project_logs('my-project-id', shape => 'traces')
WHERE root_span_id = 'trace-id' AND error IS NOT NULL
The response is an array of spans. Check out the Extend traces page for more details on span structure.

summary

summary provides trace-level views of your data by aggregating metrics across all spans in a trace. This shape is useful for analyzing overall performance and comparing results across experiments. The summary shape can be used in two ways:
  • Individual trace summaries (using SELECT): Returns one row per trace with aggregated span metrics. Use this to see trace-level details. Example: “What are the details of traces with errors?”
  • Aggregated trace analytics (using GROUP BY): Groups multiple traces and computes statistics. Use this to analyze patterns across many traces. Example: “What’s the average cost per model per day?”

Individual trace summaries

Use SELECT with the summary shape to retrieve individual traces with aggregated metrics. This is useful for inspecting specific trace details, debugging issues, or exporting trace-level data. This example returns 10 summary rows from the project logs for ‘my-project-id’.
SELECT *
FROM project_logs('my-project-id', shape => 'summary')
LIMIT 10
Summary rows include some aggregated metrics and some preview fields that show data from the root span of the trace. The following fields are aggregated metrics across all spans in the trace.
  • scores: an object with all scores averaged across all spans
  • metrics: an object with aggregated metrics across all spans
    • prompt_tokens: total number of prompt tokens used
    • completion_tokens: total number of completion tokens used
    • prompt_cached_tokens: total number of cached prompt tokens used
    • prompt_cache_creation_tokens: total number of tokens used to create cache entries
    • total_tokens: total number of tokens used (prompt + completion)
    • estimated_cost: total estimated cost of the trace in US dollars (prompt + completion costs)
    • llm_calls: total number of LLM calls
    • tool_calls: total number of tool calls
    • errors: total number of errors (LLM + tool errors)
    • llm_errors: total number of LLM errors
    • tool_errors: total number of tool errors
    • start: Unix timestamp of the first span start time
    • end: Unix timestamp of the last span end time
    • duration: maximum duration of any span in seconds. Note: this is not the total trace duration.
    • llm_duration: sum of all durations across LLM spans in seconds
    • time_to_first_token: the average time to first token across LLM spans in seconds
  • span_type_info: an object with span type info. Some fields in this object are aggregated across all spans and some reflect attributes from the root span.
    • cached: true only if all LLM spans were cached
    • has_error: true if any span had an error
Root span preview fields include input, output, expected, error, and metadata.
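For example, to pull the summary rows for traces that contained errors, you can filter on the aggregated metrics (a sketch using the metrics.errors field described above):
SELECT *
FROM project_logs('my-project-id', shape => 'summary')
WHERE metrics.errors > 0
LIMIT 10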

Aggregated trace analytics

Use GROUP BY with the summary shape to group and aggregate traces. This is useful for analyzing patterns, monitoring performance trends, and comparing metrics across models or time periods. These examples show how to group traces by model to track performance over time, and how to compare workflows across experiments:
-- Group traces by model to analyze performance over time
SELECT
  metadata.model AS model,
  day(created) AS date,
  count(1) AS trace_count,
  avg(scores.Factuality) AS avg_score,
  avg(metrics.estimated_cost) AS avg_cost
FROM project_logs('my-project-id', shape => 'summary')
GROUP BY 1, 2
ORDER BY date DESC
-- Compare workflows across experiments
SELECT
  metadata.workflow_type AS workflow,
  origin.experiment_id AS experiment,
  count(1) AS trace_count,
  avg(metrics.estimated_cost) AS avg_cost,
  avg(scores.Success) AS success_rate
FROM experiment('<EXPERIMENT_ID_1>', '<EXPERIMENT_ID_2>', shape => 'summary')
GROUP BY 1, 2

WHERE

The WHERE clause lets you specify conditions to narrow down results. It supports a wide range of operators and functions, including complex conditions. This example WHERE clause only retrieves data where:
  • Factuality score is greater than 0.8
  • model is “gpt-4”
  • tag list includes “triage”
  • input contains the word “question” (case-insensitive)
  • created date is later than January 1, 2024
  • more than 1000 tokens were used, or the request was made in production
SELECT *
FROM project_logs('my-project-id')
WHERE
  -- Simple comparisons
  scores.Factuality > 0.8 AND
  metadata.model = 'gpt-4' AND

  -- Array operations
  tags INCLUDES 'triage' AND

  -- Text search (case-insensitive)
  input ILIKE '%question%' AND

  -- Date ranges
  created > '2024-01-01' AND

  -- Complex conditions
  (
    metrics.tokens > 1000 OR
    metadata.is_production = true
  )

Single span filters

By default, each returned trace includes at least one span that matches all filter conditions. Use single span filters to find traces where different spans match different conditions. This is helpful for finding errors in tagged traces where the error may not be on the root span. Wrap any filter expression with any_span() to mark it as a single span filter. This WHERE example returns traces with a “production” tag that encountered an error.
WHERE
  any_span(tags INCLUDES 'production') AND
  any_span(error IS NOT NULL)
Single span filters work with the traces and summary data shapes.
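As a complete query, the same filters look like this (a minimal sketch using the traces shape):
SELECT *
FROM project_logs('my-project-id', shape => 'traces')
WHERE any_span(tags INCLUDES 'production')
  AND any_span(error IS NOT NULL)
LIMIT 100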

Pattern matching

SQL supports the % wildcard for pattern matching with LIKE (case-sensitive) and ILIKE (case-insensitive). The % wildcard matches any sequence of zero or more characters.
-- Match any input containing "question"
WHERE input ILIKE '%question%'

-- Match inputs starting with "How"
WHERE input LIKE 'How%'

-- Match emails ending with specific domains
WHERE metadata.email ILIKE '%@braintrust.com'

-- Escape literal wildcards with backslash
WHERE metadata.description LIKE '%50\% off%'  -- Matches "50% off"

Time intervals

SQL supports intervals for time-based operations. This query returns all project logs from ‘my-project-id’ that were created in the last day.
SELECT *
FROM project_logs('my-project-id')
WHERE created > now() - interval 1 day
This query returns all project logs from ‘my-project-id’ that were created within the last hour.
SELECT *
FROM project_logs('my-project-id')
WHERE created > now() - interval 1 hour
  AND created < now()
This query demonstrates intervals with different units. Both conditions apply together, so the stricter one (the last 7 days) determines which logs from ‘my-project-id’ are returned.
-- Examples with different units
SELECT *
FROM project_logs('my-project-id')
WHERE created > now() - interval 7 day    -- Last week
  AND created > now() - interval 1 month  -- Last month
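Intervals also combine naturally with aggregations. This sketch counts calls per hour over the last day, using the hour() function documented below:
SELECT
  hour(created) AS hour,
  count(1) AS calls
FROM project_logs('my-project-id')
WHERE created > now() - interval 1 day
GROUP BY 1
ORDER BY hour ASC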

ORDER BY

The ORDER BY clause determines the order of results. The options are DESC (descending) and ASC (ascending). You can sort by a single field, multiple fields, or computed values.
-- Sort by single field
ORDER BY created DESC

-- Sort by multiple fields
ORDER BY scores.Factuality DESC, created ASC

-- Sort by computed values
ORDER BY len(tags) DESC

pivot and unpivot

pivot and unpivot are advanced clauses that transform your results for easier analysis and comparison.
These clauses are currently only available in BTQL syntax.

pivot

The pivot clause transforms your results to make comparisons easier by converting rows into columns. This is useful when comparing metrics across different categories or time periods. Syntax:
pivot: <measure1>, <measure2>, ...
-- Compare model performance metrics across models
dimensions: metadata.model as model, day(created) as date
measures:
  avg(scores.Factuality) as avg_factuality,
  avg(metrics.tokens) as avg_tokens,
  count(1) as call_count
from: project_logs('my-project-id')
pivot: avg_factuality, avg_tokens, call_count

-- Results will look like:
-- {
--   "date": "2024-01-01",
--   "gpt-4_avg_factuality": 0.92,
--   "gpt-4_avg_tokens": 150,
--   "gpt-4_call_count": 1000,
--   "gpt-3.5-turbo_avg_factuality": 0.85,
--   "gpt-3.5-turbo_avg_tokens": 120,
--   "gpt-3.5-turbo_call_count": 2000
-- }
This query returns a record for each model with Factuality score and latency percentile across time periods.
-- Compare metrics across time periods
dimensions: metadata.model as model
measures:
  avg(scores.Factuality) as avg_score,
  percentile(latency, 0.95) as p95_latency
from: project_logs('my-project-id')
pivot: avg_score, p95_latency

-- Results will look like:
-- {
--   "model": "gpt-4",
--   "0_avg_score": 0.91,
--   "0_p95_latency": 2.5,
--   "1_avg_score": 0.89,
--   "1_p95_latency": 2.8,
--   ...
-- }
This query returns a record for each tag and aggregates the number of instances of that tag per model.
-- Compare tag distributions across models
dimensions: tags[0] as primary_tag
measures: count(1) as tag_count
from: project_logs('my-project-id')
pivot: tag_count

-- Results will look like:
-- {
--   "primary_tag": "quality",
--   "gpt-4_tag_count": 500,
--   "gpt-3.5-turbo_tag_count": 300
-- }
Pivot columns are automatically named by combining the dimension value and the measure name. For example, if you pivot by metadata.model and one of the model values is “gpt-4”, the avg_score measure produces a column named gpt-4_avg_score.

unpivot

The unpivot clause transforms columns into rows, which is useful when you need to analyze arbitrary scores and metrics without specifying each score name. This is helpful when working with dynamic sets of metrics or when you don’t know all possible score names in advance.
-- Convert wide format to long format for arbitrary scores
dimensions: day(created) as date, score_name
measures: avg(score_value) as score_value
from: project_logs('my-project-id')
unpivot: scores as (score_name, score_value)

-- Results will look like:
-- {
--   "date": "2024-01-01",
--   "score_name": "Factuality",
--   "score_value": 0.92
-- },
-- {
--   "date": "2024-01-01",
--   "score_name": "Coherence",
--   "score_value": 0.88
-- }

LIMIT and cursors

LIMIT

The LIMIT clause controls the size of the result in number of records.
-- Basic limit
LIMIT 100

Cursors for pagination

Cursors are only supported in BTQL syntax, not in SQL syntax.
Cursors implement pagination in BTQL queries and are returned automatically in query responses. A query without a limit clause gets a default limit, which you can override with an explicit limit. To paginate, run an initial query, then pass the cursor token from its response in the cursor clause of follow-on queries. When a cursor reaches the end of the result set, the data array is empty and no cursor token is returned.
BTQL syntax
-- Pagination using cursor (only works without sort)
select: *
from: project_logs('<PROJECT_ID>')
limit: 100
cursor: '<CURSOR_TOKEN>'  -- From previous query response
Cursors can only be used for pagination when no sort clause is specified. If you need sorted results, you’ll need to implement offset-based pagination by using the last value from your sort field as a filter in the next query.
-- Offset-based pagination with sorting
-- Page 1 (first 100 results)
SELECT *
FROM project_logs('<PROJECT_ID>')
ORDER BY created DESC
LIMIT 100

-- Page 2 (next 100 results)
SELECT *
FROM project_logs('<PROJECT_ID>')
WHERE created < '2024-01-15T10:30:00Z'  -- Last created timestamp from previous page
ORDER BY created DESC
LIMIT 100

Expressions

SQL operators

You can use the following operators in your SQL queries.
-- Comparison operators
=           -- Equal to (alias for 'eq')
!=          -- Not equal to (alias for 'ne', can also use '<>')
>           -- Greater than (alias for 'gt')
<           -- Less than (alias for 'lt')
>=          -- Greater than or equal (alias for 'ge')
<=          -- Less than or equal (alias for 'le')
IN          -- Check if value exists in a list of values

-- Null operators
IS NULL     -- Check if value is null
IS NOT NULL -- Check if value is not null
ISNULL      -- Unary operator to check if null
ISNOTNULL   -- Unary operator to check if not null

-- Text matching
LIKE        -- Case-sensitive pattern matching (supports % wildcard)
NOT LIKE    -- Negated case-sensitive pattern matching
ILIKE       -- Case-insensitive pattern matching (supports % wildcard)
NOT ILIKE   -- Negated case-insensitive pattern matching
MATCH       -- Full-text word search (faster than LIKE but requires exact word matches, e.g. 'apple' won't match 'app')
NOT MATCH   -- Negated full-text word search

-- Array operators
INCLUDES    -- Check if array/object contains value (alias: CONTAINS)
NOT INCLUDES -- Check if array/object does not contain value

-- Logical operators
AND         -- Both conditions must be true
OR          -- Either condition must be true
NOT         -- Unary operator to negate condition

-- Arithmetic operators
+           -- Addition (alias: add)
-           -- Subtraction (alias: sub)
*           -- Multiplication (alias: mul)
/           -- Division (alias: div)
%           -- Modulo (alias: mod)
-x          -- Unary negation (alias: neg)
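A condition combining several of these operators might look like this (a minimal sketch; the field values are placeholders):
SELECT *
FROM project_logs('my-project-id')
WHERE metadata.model IN ('gpt-4', 'gpt-4-turbo')
  AND input NOT ILIKE '%test%'
  AND error IS NULL
LIMIT 50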

SQL functions

You can use the following functions in SELECT, WHERE, GROUP BY clauses, and aggregate measures.
-- Date/time functions
second(timestamp)                -- Extract second from timestamp
minute(timestamp)                -- Extract minute from timestamp
hour(timestamp)                  -- Extract hour from timestamp
day(timestamp)                   -- Extract day from timestamp
week(timestamp)                  -- Extract week from timestamp
month(timestamp)                 -- Extract month from timestamp
year(timestamp)                  -- Extract year from timestamp
date_trunc(interval, timestamp)  -- Truncate timestamp to specified interval
                                 -- Intervals: 'second', 'minute', 'hour', 'day', 'week', 'month', 'year'
current_timestamp()              -- Get current timestamp (alias: now())
current_date()                   -- Get current date

-- String functions
lower(text)                       -- Convert text to lowercase
upper(text)                       -- Convert text to uppercase
concat(text1, text2, ...)         -- Concatenate strings

-- Array functions
len(array)                        -- Get length of array
contains(array, value)            -- Check if array contains value (alias: includes)

-- JSON functions
json_extract(json_str, path)      -- Extract value from a JSON string; path is a literal key name, not a path expression

-- Null handling functions
coalesce(val1, val2, ...)        -- Return first non-null value
nullif(val1, val2)               -- Return null if val1 equals val2
least(val1, val2, ...)           -- Return smallest non-null value
greatest(val1, val2, ...)        -- Return largest non-null value

-- Numeric functions
round(number, precision)          -- Round to specified precision

-- Cast functions
to_string(value)                 -- Cast value to string
to_boolean(value)                -- Cast value to boolean
to_integer(value)                -- Cast value to integer
to_number(value)                 -- Cast value to number
to_date(value)                   -- Cast value to date
to_datetime(value)               -- Cast value to datetime
to_interval(value)               -- Cast value to interval

-- Aggregate functions (only in measures/with GROUP BY)
count(expr)                       -- Count number of rows
count_distinct(expr)              -- Count number of distinct values
sum(expr)                        -- Sum numeric values
avg(expr)                        -- Calculate mean of numeric values
min(expr)                        -- Find minimum value
max(expr)                        -- Find maximum value
percentile(expr, p)              -- Calculate percentile (p between 0 and 1)
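Several of these functions compose naturally. This sketch buckets logs by hour and substitutes a default for missing model names, using the date_trunc and coalesce functions documented above:
SELECT
  date_trunc('hour', created) AS bucket,
  coalesce(metadata.model, 'unknown') AS model,
  count(1) AS calls
FROM project_logs('my-project-id')
GROUP BY 1, 2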

Field access

SQL provides flexible ways to access nested data in arrays and objects:
-- Object field access
metadata.model             -- Access nested object field, e.g. {"metadata": {"model": "value"}}
metadata."field name"      -- Access field with spaces, e.g. {"metadata": {"field name": "value"}}
metadata."field-name"      -- Access field with hyphens, e.g. {"metadata": {"field-name": "value"}}
metadata."field.name"      -- Access field with dots, e.g. {"metadata": {"field.name": "value"}}

-- Array access (0-based indexing)
tags[0]                    -- First element
tags[-1]                   -- Last element

-- Combined array and object access
metadata.models[0].name    -- Field in first array element
responses[-1].tokens       -- Field in last array element
spans[0].children[-1].id   -- Nested array traversal
Array indices are 0-based, and negative indices count from the end (-1 is the last element).
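Put together in a full query (a minimal sketch; the nested field names are placeholders):
SELECT
  metadata.models[0].name AS first_model,  -- field in first array element
  tags[-1] AS last_tag                     -- last element of the tags array
FROM project_logs('my-project-id')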
When you have JSON data stored as a string field (rather than as native SQL objects), use the json_extract function to access values within it. The path parameter is treated as a literal string key name:
-- Extract from JSON string fields
json_extract(metadata.config, 'api_key')           -- Extract the "api_key" field
json_extract(metadata.config, 'user_id')           -- Extract the "user_id" field
json_extract(output, 'result')                     -- Extract the "result" field

Conditional expressions

SQL supports conditional logic using the ternary operator (? :):
-- Basic conditions
SELECT
  (scores.Factuality > 0.8 ? "high" : "low") AS quality,
  (error IS NOT NULL ? -1 : metrics.tokens) AS valid_tokens
FROM project_logs('my-project-id')
-- Nested conditions
SELECT
  (scores.Factuality > 0.9 ? "excellent" :
   scores.Factuality > 0.7 ? "good" :
   scores.Factuality > 0.5 ? "fair" : "poor") AS rating
FROM project_logs('my-project-id')
-- Use in calculations
SELECT
  (metadata.model = "gpt-4" ? metrics.tokens * 2 : metrics.tokens) AS adjusted_tokens,
  (error IS NULL ? metrics.latency : 0) AS valid_latency
FROM project_logs('my-project-id')

Examples

Track token usage

This query helps you monitor token consumption across your application.
SELECT
  day(created) AS time,
  sum(metrics.total_tokens) AS total_tokens,
  sum(metrics.prompt_tokens) AS input_tokens,
  sum(metrics.completion_tokens) AS output_tokens
FROM project_logs('<YOUR_PROJECT_ID>')
WHERE created > '<ISO_8601_TIME>'
GROUP BY 1
ORDER BY time ASC
The response shows daily token usage:
{
  "time": "2024-11-09T00:00:00Z",
  "total_tokens": 100000,
  "input_tokens": 50000,
  "output_tokens": 50000
}

Monitor model quality

Track model performance across different versions and configurations.
-- Compare factuality scores across models
SELECT
  metadata.model AS model,
  day(created) AS date,
  avg(scores.Factuality) AS avg_factuality,
  percentile(scores.Factuality, 0.05) AS p05_factuality,
  percentile(scores.Factuality, 0.95) AS p95_factuality,
  count(1) AS total_calls
FROM project_logs('<PROJECT_ID>')
WHERE created > '2024-01-01'
GROUP BY 1, 2
ORDER BY date DESC, model ASC
-- Find potentially problematic responses
SELECT *
FROM project_logs('<PROJECT_ID>')
WHERE scores.Factuality < 0.5
  AND metadata.is_production = true
  AND created > now() - interval 1 day
ORDER BY scores.Factuality ASC
LIMIT 100
-- Compare performance across specific models
SELECT *
FROM project_logs('<PROJECT_ID>')
WHERE metadata.model IN ('gpt-4', 'gpt-4-turbo', 'claude-3-opus')
  AND scores.Factuality IS NOT NULL
  AND created > now() - interval 7 day
ORDER BY scores.Factuality DESC
LIMIT 500

Analyze errors

Identify and investigate errors in your application.
-- Error rate by model
SELECT
  metadata.model AS model,
  hour(created) AS hour,
  count(1) AS total,
  count(error) AS errors,
  count(error) / count(1) AS error_rate
FROM project_logs('<PROJECT_ID>')
WHERE created > now() - interval 1 day
GROUP BY 1, 2
ORDER BY error_rate DESC
-- Find common error patterns
SELECT
  error.type AS error_type,
  metadata.model AS model,
  count(1) AS error_count,
  avg(metrics.latency) AS avg_latency
FROM project_logs('<PROJECT_ID>')
WHERE error IS NOT NULL
  AND created > now() - interval 7 day
GROUP BY 1, 2
ORDER BY error_count DESC
-- Exclude known error types from analysis
SELECT *
FROM project_logs('<PROJECT_ID>')
WHERE error IS NOT NULL
  AND error.type NOT IN ('rate_limit', 'timeout', 'network_error')
  AND metadata.is_production = true
  AND created > now() - interval 1 day
ORDER BY created DESC
LIMIT 100

Analyze latency

Monitor and optimize response times.
-- Track p95 latency by endpoint
SELECT
  metadata.endpoint AS endpoint,
  hour(created) AS hour,
  percentile(metrics.latency, 0.95) AS p95_latency,
  percentile(metrics.latency, 0.50) AS median_latency,
  count(1) AS requests
FROM project_logs('<PROJECT_ID>')
WHERE created > now() - interval 1 day
GROUP BY 1, 2
ORDER BY hour DESC, p95_latency DESC
-- Find slow requests
SELECT
  metadata.endpoint,
  metrics.latency,
  metrics.tokens,
  input,
  created
FROM project_logs('<PROJECT_ID>')
WHERE metrics.latency > 5000  -- Requests over 5 seconds
  AND created > now() - interval 1 hour
ORDER BY metrics.latency DESC
LIMIT 20

Analyze prompts

Analyze prompt effectiveness and patterns.
-- Track prompt token efficiency
SELECT
  metadata.prompt_template AS template,
  day(created) AS date,
  avg(metrics.prompt_tokens) AS avg_prompt_tokens,
  avg(metrics.completion_tokens) AS avg_completion_tokens,
  avg(metrics.completion_tokens) / avg(metrics.prompt_tokens) AS token_efficiency,
  avg(scores.Factuality) AS avg_factuality
FROM project_logs('<PROJECT_ID>')
WHERE created > now() - interval 7 day
GROUP BY 1, 2
ORDER BY date DESC, token_efficiency DESC
-- Find similar prompts
SELECT *
FROM project_logs('<PROJECT_ID>')
WHERE input MATCH 'explain the concept of recursion'
  AND scores.Factuality > 0.8
ORDER BY created DESC
LIMIT 10

Analyze based on tags

Use tags to track and analyze specific behaviors.
-- Monitor feedback patterns
SELECT
  tags[0] AS primary_tag,
  metadata.model AS model,
  count(1) AS feedback_count,
  avg(scores.Factuality > 0.8 ? 1 : 0) AS high_quality_rate
FROM project_logs('<PROJECT_ID>')
WHERE tags INCLUDES 'feedback'
  AND created > now() - interval 30 day
GROUP BY 1, 2
ORDER BY feedback_count DESC
-- Track issue resolution
SELECT
  created,
  tags,
  metadata.model,
  scores.Factuality,
  response
FROM project_logs('<PROJECT_ID>')
WHERE tags INCLUDES 'needs-review'
  AND NOT tags INCLUDES 'resolved'
  AND created > now() - interval 1 day
ORDER BY scores.Factuality ASC

Extract data from JSON strings

Use json_extract to extract values from a JSON string using a key name. This is useful when you have JSON data stored as a string field and need to access specific values within it. The path parameter is treated as a literal key name (not a path expression with traversal).
-- Extract a simple field
SELECT
  id,
  json_extract(metadata.config, 'api_key') AS api_key
FROM project_logs('my-project-id')
-- Extract fields with special characters in the key name
SELECT
  id,
  json_extract(metadata.settings, 'user.preferences.theme') AS theme_key
FROM project_logs('my-project-id')
-- Note: This extracts a key literally named "user.preferences.theme", not a nested path
-- Extract and filter
SELECT *
FROM project_logs('my-project-id')
WHERE json_extract(metadata.config, 'environment') = 'production'
  AND json_extract(metadata.config, 'version') > 2.0
json_extract returns null for invalid JSON or missing keys rather than raising an error, making it safe to use in filters and aggregations. The path parameter is a literal key name, not a path expression; characters like dots and brackets are treated as part of the key name itself.