SQL queries in Braintrust provide a precise, standard syntax for querying Braintrust experiments, logs, and datasets. Use SQL to better analyze and understand your data. Braintrust supports two syntax styles: standard SQL syntax and the native BTQL syntax with pipe-delimited clauses. The parser automatically detects which style you're using.
Use SQL to filter logs and experiments based on specific criteria. You can filter logs by tags, metadata, or any other relevant fields. Filtering in logs and experiments only supports WHERE clauses. At the top of your experiment or log view, select Filter to open the filter editor and select SQL to switch to SQL mode.
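For example, a filter expression like the following (a sketch; the tag and metadata names depend on what you log) finds triaged GPT-4 logs:

tags INCLUDES 'triage' AND metadata.model = 'gpt-4'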
To test SQL with autocomplete, validation, and a table of results, use the SQL sandbox in the dashboard. In your project, select SQL sandbox at the bottom of the sidebar.
SQL queries follow a familiar structure that lets you define what data you want, how you want it returned, and how to analyze it. This example returns every log from a project where Factuality is greater than 0.8, sorts by created date descending, and limits the results to 100.
SELECT *
FROM project_logs('<PROJECT_ID>', shape => 'spans')
WHERE scores.Factuality > 0.8
ORDER BY created DESC
LIMIT 100
SELECT / select:: choose which fields to retrieve
FROM / from:: specify the data source. Takes an optional designator for the shape of the data: spans, traces, or summary. If not specified, defaults to spans
WHERE / filter:: define conditions to filter the data
sample:: (BTQL-only) randomly sample a subset of the filtered data (rate or count-based)
ORDER BY / sort:: set the order of results (ASC/DESC or asc/desc)
Braintrust also supports BTQL, an alternative pipe-delimited clause syntax. The parser automatically detects whether your query is SQL or BTQL:
SQL queries start with SELECT, WITH, etc. followed by whitespace
BTQL queries use clause syntax like select:, filter:, etc.
SQL Clause → BTQL Clause
SELECT ... → select: ...
FROM table('id', shape => 'traces') → from: table('id') traces
WHERE ... → filter: ...
GROUP BY ... → dimensions: ...
ORDER BY ... → sort: ...
LIMIT n → limit: n
SQL syntax specifies the shape with a named parameter (e.g., FROM experiment('id', shape => 'traces')), while BTQL uses a trailing token (e.g., from: experiment('id') traces). Table aliases (e.g., AS t) are reserved for future use.
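For example, these two queries are equivalent; the first uses SQL syntax and the second BTQL (with '<EXPERIMENT_ID>' standing in for a real experiment ID):

-- SQL
SELECT *
FROM experiment('<EXPERIMENT_ID>', shape => 'traces')
WHERE error IS NOT NULL

-- BTQL
select: *
from: experiment('<EXPERIMENT_ID>') traces
filter: error IS NOT NULL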
Full-text search: Use MySQL’s MATCH...AGAINST syntax for BTQL’s MATCH operator:
WHERE MATCH(input) AGAINST ('search term') → filter: input MATCH 'search term'
Multiple columns are OR’d: MATCH(col1, col2) AGAINST ('x') → col1 MATCH 'x' OR col2 MATCH 'x'
Unsupported SQL features: The SQL parser does not support JOIN, subqueries, UNION/INTERSECT/EXCEPT, HAVING, or window functions. Use BTQL’s native syntax for queries that would require these features.
When retrieving records with SQL, you can use either a plain SELECT or SELECT ... GROUP BY. Most tools work with either method, but you must use GROUP BY if you want to use aggregate functions. Both retrieval methods work with all data shapes (spans, traces, and summary). Using GROUP BY with the summary shape enables trace-level aggregations.
SELECT in SQL lets you choose specific fields, compute values, or use * to retrieve every field.
-- Get specific fields
SELECT
  metadata.model AS model,
  scores.Factuality AS score,
  created AS timestamp
FROM project_logs('my-project-id')
SQL allows you to transform data directly in the SELECT clause. This query returns metadata.model, whether metrics.tokens is greater than 1000, and a quality indicator of either "high" or "low" depending on whether the Factuality score is greater than 0.8.
SELECT
  -- Simple field access
  metadata.model,
  -- Computed values
  metrics.tokens > 1000 AS is_long_response,
  -- Conditional logic
  (scores.Factuality > 0.8 ? "high" : "low") AS quality
FROM project_logs('my-project-id')
You can also use functions in the SELECT clause to transform values and create meaningful aliases for your results. This query extracts the day the log was created, the hour, and a Factuality score rounded to 2 decimal places.
SELECT
  -- Date/time functions
  day(created) AS date,
  hour(created) AS hour,
  -- Numeric calculations
  round(scores.Factuality, 2) AS rounded_score
FROM project_logs('my-project-id')
Instead of a simple SELECT, you can use SELECT ... GROUP BY to group and aggregate data. This query returns a row for each distinct model and day with the total number of calls, the average Factuality score, and the 95th-percentile latency.
-- Analyze model performance over time
SELECT
  metadata.model AS model,
  day(created) AS date,
  count(1) AS total_calls,
  avg(scores.Factuality) AS avg_score,
  percentile(latency, 0.95) AS p95_latency
FROM project_logs('my-project-id')
GROUP BY metadata.model, day(created)
The available aggregate functions are listed below, with a short example after the list:
count(expr): number of rows
count_distinct(expr): number of distinct values
sum(expr): sum of numeric values
avg(expr): mean (average) of numeric values
min(expr): minimum value
max(expr): maximum value
percentile(expr, p): a percentile where p is between 0 and 1
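For instance, a sketch that combines a few of these aggregates, using the same field names as the examples above:

SELECT
  day(created) AS date,
  count_distinct(metadata.model) AS distinct_models,
  percentile(metrics.tokens, 0.5) AS median_tokens
FROM project_logs('my-project-id')
GROUP BY day(created)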
The FROM clause identifies where the records are coming from. This can be an identifier like project_logs or a function call like experiment('id'). You can add an optional shape parameter to the FROM clause that defines how the data is returned. The options are spans (default), traces, and summary.
spans returns individual spans that match the filter criteria. This example returns 10 LLM call spans that took more than 0.2 seconds to produce the first token.
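A sketch of such a query, assuming LLM spans are identified by span_attributes.type = 'llm' and expose a metrics.time_to_first_token value in seconds (adjust these names to your schema):

SELECT *
FROM project_logs('my-project-id', shape => 'spans')
WHERE span_attributes.type = 'llm' -- assumed way of identifying LLM spans
  AND metrics.time_to_first_token > 0.2
LIMIT 10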
traces returns all spans from traces that contain at least one matching span. This is useful when you want to see the full context of a specific event or behavior, for example if you want to see all spans in traces where an error occurred. This example returns all spans for a specific trace where one span in the trace had an error.
SELECT *
FROM project_logs('my-project-id', shape => 'traces')
WHERE root_span_id = 'trace-id'
  AND error IS NOT NULL
The response is an array of spans. Check out the Extend traces page for more details on span structure.
summary provides trace-level views of your data by aggregating metrics across all spans in a trace. This shape is useful for analyzing overall performance and comparing results across experiments. The summary shape can be used in two ways:
Individual trace summaries (using SELECT): Returns one row per trace with aggregated span metrics. Use this to see trace-level details. Example: “What are the details of traces with errors?”
Aggregated trace analytics (using GROUP BY): Groups multiple traces and computes statistics. Use this to analyze patterns across many traces. Example: “What’s the average cost per model per day?”
Use SELECT with the summary shape to retrieve individual traces with aggregated metrics. This is useful for inspecting specific trace details, debugging issues, or exporting trace-level data. This example returns 10 summary rows from the project logs for 'my-project-id'.
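A minimal version of that query:

SELECT *
FROM project_logs('my-project-id', shape => 'summary')
LIMIT 10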
Summary rows include some aggregated metrics and some preview fields that show data from the root span of the trace. The following fields are aggregated metrics across all spans in the trace.
scores: an object with all scores averaged across all spans
metrics: an object with aggregated metrics across all spans
  prompt_tokens: total number of prompt tokens used
  completion_tokens: total number of completion tokens used
  prompt_cached_tokens: total number of cached prompt tokens used
  prompt_cache_creation_tokens: total number of tokens used to create cache entries
  total_tokens: total number of tokens used (prompt + completion)
  estimated_cost: total estimated cost of the trace in US dollars (prompt + completion costs)
  llm_calls: total number of LLM calls
  tool_calls: total number of tool calls
  errors: total number of errors (LLM + tool errors)
  llm_errors: total number of LLM errors
  tool_errors: total number of tool errors
  start: Unix timestamp of the first span start time
  end: Unix timestamp of the last span end time
  duration: maximum duration of any span in seconds. Note: this is not the total trace duration.
  llm_duration: sum of all durations across LLM spans in seconds
  time_to_first_token: the average time to first token across LLM spans in seconds
span_type_info: an object with span type info. Some fields in this object are aggregated across all spans and some reflect attributes from the root span.
  cached: true only if all LLM spans were cached
  has_error: true if any span had an error
Root span preview fields include input, output, expected, error, and metadata.
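For example, a sketch that pulls root-span previews alongside aggregated metrics for traces with errors (field names come from the list above; it assumes the aggregated metrics are filterable as shown):

SELECT
  input,
  output,
  error,
  metrics.estimated_cost AS cost,
  metrics.errors AS error_count
FROM project_logs('my-project-id', shape => 'summary')
WHERE metrics.errors > 0
LIMIT 20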
Use GROUP BY with the summary shape to group and aggregate traces. This is useful for analyzing patterns, monitoring performance trends, and comparing metrics across models or time periods. These examples show how to group traces by model to track performance over time, and how to compare workflows across experiments:
-- Group traces by model to analyze performance over time
SELECT
  metadata.model AS model,
  day(created) AS date,
  count(1) AS trace_count,
  avg(scores.Factuality) AS avg_score,
  avg(metrics.estimated_cost) AS avg_cost
FROM project_logs('my-project-id', shape => 'summary')
GROUP BY 1, 2
ORDER BY date DESC
-- Compare workflows across experiments
SELECT
  metadata.workflow_type AS workflow,
  origin.experiment_id AS experiment,
  count(1) AS trace_count,
  avg(metrics.estimated_cost) AS avg_cost,
  avg(scores.Success) AS success_rate
FROM experiment('<EXPERIMENT_ID_1>', '<EXPERIMENT_ID_2>', shape => 'summary')
GROUP BY 1, 2
The WHERE clause lets you specify conditions to narrow down results. It supports a wide range of operators and functions, including complex conditions. This example WHERE clause only retrieves data where:
Factuality score is greater than 0.8
model is “gpt-4”
tag list includes “triage”
input contains the word “question” (case-insensitive)
created date is later than January 1, 2024
more than 1000 tokens were used, or the record was logged in production
SELECT *
FROM project_logs('my-project-id')
WHERE
  -- Simple comparisons
  scores.Factuality > 0.8
  AND metadata.model = 'gpt-4'
  -- Array operations
  AND tags INCLUDES 'triage'
  -- Text search (case-insensitive)
  AND input ILIKE '%question%'
  -- Date ranges
  AND created > '2024-01-01'
  -- Complex conditions
  AND (
    metrics.tokens > 1000
    OR metadata.is_production = true
  )
By default, each returned trace includes at least one span that matches all filter conditions. Use single span filters to find traces where different spans match different conditions. This is helpful for finding errors in tagged traces where the error may not be on the root span. Wrap any filter expression with any_span() to mark it as a single span filter. This WHERE example returns traces with a "production" tag that encountered an error.
WHERE
  any_span(tags INCLUDES "production")
  AND any_span(error IS NOT NULL)
Single span filters work with the traces and summary data shapes.
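Putting it together, a complete query using the WHERE clause above might look like this:

SELECT *
FROM project_logs('my-project-id', shape => 'traces')
WHERE
  any_span(tags INCLUDES "production")
  AND any_span(error IS NOT NULL)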
SQL supports the % wildcard for pattern matching with LIKE (case-sensitive) and ILIKE (case-insensitive). The % wildcard matches any sequence of zero or more characters.
-- Match any input containing "question"
WHERE input ILIKE '%question%'

-- Match inputs starting with "How"
WHERE input LIKE 'How%'

-- Match emails ending with specific domains
WHERE metadata.email ILIKE '%@braintrust.com'

-- Escape literal wildcards with backslash
WHERE metadata.description LIKE '%50\% off%' -- Matches "50% off"
SQL supports intervals for time-based operations. This query returns all project logs from 'my-project-id' that were created in the last day.
SELECT *
FROM project_logs('my-project-id')
WHERE created > now() - interval 1 day
This query returns all project logs from 'my-project-id' that were created within the last hour.
SELECT *
FROM project_logs('my-project-id')
WHERE created > now() - interval 1 hour
  AND created < now()
This query combines interval conditions with different units (the last 7 days and the last month); because they are joined with AND, the narrower 7-day window determines the results.
-- Examples with different units
SELECT *
FROM project_logs('my-project-id')
WHERE created > now() - interval 7 day -- Last week
  AND created > now() - interval 1 month -- Last month
The ORDER BY clause determines the order of results. The options are DESC (descending) and ASC (ascending). You can sort by a single field, multiple fields, or computed values.
-- Sort by single field
ORDER BY created DESC

-- Sort by multiple fields
ORDER BY scores.Factuality DESC, created ASC

-- Sort by computed values
ORDER BY len(tags) DESC
The pivot clause (shown here in BTQL syntax) transforms your results to make comparisons easier by converting rows into columns. This is useful when comparing metrics across different categories or time periods. Syntax:
pivot: <measure1>, <measure2>, ...
-- Compare model performance metrics across models
dimensions: day(created) as date
measures: avg(scores.Factuality) as avg_factuality, avg(metrics.tokens) as avg_tokens, count(1) as call_count
from: project_logs('my-project-id')
pivot: avg_factuality, avg_tokens, call_count

-- Results will look like:
-- {
--   "date": "2024-01-01",
--   "gpt-4_avg_factuality": 0.92,
--   "gpt-4_avg_tokens": 150,
--   "gpt-4_call_count": 1000,
--   "gpt-3.5-turbo_avg_factuality": 0.85,
--   "gpt-3.5-turbo_avg_tokens": 120,
--   "gpt-3.5-turbo_call_count": 2000
-- }
This query returns a record for each model with the average Factuality score and p95 latency across time periods.
-- Compare metrics across time periods
dimensions: metadata.model as model
measures: avg(scores.Factuality) as avg_score, percentile(latency, 0.95) as p95_latency
from: project_logs('my-project-id')
pivot: avg_score, p95_latency

-- Results will look like:
-- {
--   "model": "gpt-4",
--   "0_avg_score": 0.91,
--   "0_p95_latency": 2.5,
--   "1_avg_score": 0.89,
--   "1_p95_latency": 2.8,
--   ...
-- }
This query returns a record for each tag and aggregates the number of instances of that tag per model.
-- Compare tag distributions across models
dimensions: tags[0] as primary_tag
measures: count(1) as tag_count
from: project_logs('my-project-id')
pivot: tag_count

-- Results will look like:
-- {
--   "primary_tag": "quality",
--   "gpt-4_tag_count": 500,
--   "gpt-3.5-turbo_tag_count": 300
-- }
Pivot columns are automatically named by combining the dimension value and the measure name. For example, if you pivot by metadata.model, the column for a model named "gpt-4" and a measure named avg_score becomes gpt-4_avg_score.
The unpivot clause transforms columns into rows, which is useful when you need to analyze arbitrary scores and metrics without specifying each score name. This is helpful when working with dynamic sets of metrics or when you don't know all possible score names in advance.
-- Convert wide format to long format for arbitrary scores
dimensions: day(created) as date, score_name
measures: avg(score_value) as score_value
from: project_logs('my-project-id')
unpivot: scores as (score_name, score_value)

-- Results will look like:
-- {
--   "date": "2024-01-01",
--   "score_name": "Factuality",
--   "score_value": 0.92
-- },
-- {
--   "date": "2024-01-01",
--   "score_name": "Coherence",
--   "score_value": 0.88
-- }
Cursors are only supported in BTQL syntax, not in SQL syntax.
Cursors implement pagination in BTQL queries. A cursor token is automatically returned in each query response. A query without a limit clause uses a default limit, which you can override with an explicit limit. To paginate, pass the cursor token from one response in the cursor clause of the next query. When a cursor reaches the end of the result set, the data array is empty and no cursor token is returned.
BTQL syntax
-- Pagination using cursor (only works without sort)
select: *
from: project_logs('<PROJECT_ID>')
limit: 100
cursor: '<CURSOR_TOKEN>' -- From previous query response
Cursors can only be used for pagination when no sort clause is specified. If you need sorted results, implement keyset pagination by using the last value from your sort field as a filter in the next query.
-- Keyset pagination with sorting

-- Page 1 (first 100 results)
SELECT *
FROM project_logs('<PROJECT_ID>')
ORDER BY created DESC
LIMIT 100

-- Page 2 (next 100 results)
SELECT *
FROM project_logs('<PROJECT_ID>')
WHERE created < '2024-01-15T10:30:00Z' -- Last created timestamp from previous page
ORDER BY created DESC
LIMIT 100
You can use the following operators in your SQL queries.
-- Comparison operators
=           -- Equal to (alias for 'eq')
!=          -- Not equal to (alias for 'ne', can also use '<>')
>           -- Greater than (alias for 'gt')
<           -- Less than (alias for 'lt')
>=          -- Greater than or equal (alias for 'ge')
<=          -- Less than or equal (alias for 'le')
IN          -- Check if value exists in a list of values

-- Null operators
IS NULL     -- Check if value is null
IS NOT NULL -- Check if value is not null
ISNULL      -- Unary operator to check if null
ISNOTNULL   -- Unary operator to check if not null

-- Text matching
LIKE        -- Case-sensitive pattern matching (supports % wildcard)
NOT LIKE    -- Negated case-sensitive pattern matching
ILIKE       -- Case-insensitive pattern matching (supports % wildcard)
NOT ILIKE   -- Negated case-insensitive pattern matching
MATCH       -- Full-word semantic search (faster but requires exact word matches, e.g. 'apple' won't match 'app')
NOT MATCH   -- Negated full-word semantic search

-- Array operators
INCLUDES     -- Check if array/object contains value (alias: CONTAINS)
NOT INCLUDES -- Check if array/object does not contain value

-- Logical operators
AND -- Both conditions must be true
OR  -- Either condition must be true
NOT -- Unary operator to negate condition

-- Arithmetic operators
+  -- Addition (alias: add)
-  -- Subtraction (alias: sub)
*  -- Multiplication (alias: mul)
/  -- Division (alias: div)
%  -- Modulo (alias: mod)
-x -- Unary negation (alias: neg)
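A sketch combining several of these operators in one filter:

SELECT *
FROM project_logs('my-project-id')
WHERE metadata.model IN ('gpt-4', 'gpt-4-turbo') -- List membership
  AND tags INCLUDES 'production'                 -- Array containment
  AND input NOT ILIKE '%test%'                   -- Negated pattern match
  AND error IS NULL                              -- Null check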
You can use the following functions in SELECT, WHERE, and GROUP BY clauses, as well as in aggregate measures.
-- Date/time functions
second(timestamp) -- Extract second from timestamp
minute(timestamp) -- Extract minute from timestamp
hour(timestamp)   -- Extract hour from timestamp
day(timestamp)    -- Extract day from timestamp
week(timestamp)   -- Extract week from timestamp
month(timestamp)  -- Extract month from timestamp
year(timestamp)   -- Extract year from timestamp
date_trunc(interval, timestamp) -- Truncate timestamp to specified interval
                                -- Intervals: 'second', 'minute', 'hour', 'day', 'week', 'month', 'year'
current_timestamp() -- Get current timestamp (alias: now())
current_date()      -- Get current date

-- String functions
lower(text) -- Convert text to lowercase
upper(text) -- Convert text to uppercase
concat(text1, text2, ...) -- Concatenate strings

-- Array functions
len(array)             -- Get length of array
contains(array, value) -- Check if array contains value (alias: includes)

-- JSON functions
json_extract(json_str, path) -- Extract value from JSON string using a literal key name

-- Null handling functions
coalesce(val1, val2, ...) -- Return first non-null value
nullif(val1, val2)        -- Return null if val1 equals val2
least(val1, val2, ...)    -- Return smallest non-null value
greatest(val1, val2, ...) -- Return largest non-null value

-- Type conversion
round(number, precision) -- Round to specified precision

-- Cast functions
to_string(value)   -- Cast value to string
to_boolean(value)  -- Cast value to boolean
to_integer(value)  -- Cast value to integer
to_number(value)   -- Cast value to number
to_date(value)     -- Cast value to date
to_datetime(value) -- Cast value to datetime
to_interval(value) -- Cast value to interval

-- Aggregate functions (only in measures/with GROUP BY)
count(expr)          -- Count number of rows
count_distinct(expr) -- Count number of distinct values
sum(expr)            -- Sum numeric values
avg(expr)            -- Calculate mean of numeric values
min(expr)            -- Find minimum value
max(expr)            -- Find maximum value
percentile(expr, p)  -- Calculate percentile (p between 0 and 1)
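A sketch that exercises a few of these functions together, using the same field names as the earlier examples:

SELECT
  date_trunc('week', created) AS week,
  coalesce(metadata.model, 'unknown') AS model,
  count(1) AS calls,
  round(avg(metrics.tokens), 1) AS avg_tokens
FROM project_logs('my-project-id')
GROUP BY 1, 2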
SQL provides flexible ways to access nested data in arrays and objects:
-- Object field access
metadata.model        -- Access nested object field, e.g. {"metadata": {"model": "value"}}
metadata."field name" -- Access field with spaces, e.g. {"metadata": {"field name": "value"}}
metadata."field-name" -- Access field with hyphens, e.g. {"metadata": {"field-name": "value"}}
metadata."field.name" -- Access field with dots, e.g. {"metadata": {"field.name": "value"}}

-- Array access (0-based indexing)
tags[0]  -- First element
tags[-1] -- Last element

-- Combined array and object access
metadata.models[0].name  -- Field in first array element
responses[-1].tokens     -- Field in last array element
spans[0].children[-1].id -- Nested array traversal
Array indices are 0-based, and negative indices count from the end (-1 is the last element).
When you have JSON data stored as a string field (rather than as native SQL objects), use the json_extract function to access values within it. The path parameter is treated as a literal string key name:
-- Extract from JSON string fields
json_extract(metadata.config, 'api_key') -- Extract the "api_key" field
json_extract(metadata.config, 'user_id') -- Extract the "user_id" field
json_extract(output, 'result')           -- Extract the "result" field
Conditional (ternary) expressions can also be used directly in calculations:

-- Use in calculations
SELECT
  (metadata.model = "gpt-4" ? metrics.tokens * 2 : metrics.tokens) AS adjusted_tokens,
  (error IS NULL ? metrics.latency : 0) AS valid_latency
FROM project_logs('my-project-id')
This query helps you monitor token consumption across your application.
SELECT
  day(created) AS time,
  sum(metrics.total_tokens) AS total_tokens,
  sum(metrics.prompt_tokens) AS input_tokens,
  sum(metrics.completion_tokens) AS output_tokens
FROM project_logs('<YOUR_PROJECT_ID>')
WHERE created > '<ISO_8601_TIME>'
GROUP BY 1
ORDER BY time ASC
Track model performance across different versions and configurations.
-- Compare factuality scores across models
SELECT
  metadata.model AS model,
  day(created) AS date,
  avg(scores.Factuality) AS avg_factuality,
  percentile(scores.Factuality, 0.05) AS p05_factuality,
  percentile(scores.Factuality, 0.95) AS p95_factuality,
  count(1) AS total_calls
FROM project_logs('<PROJECT_ID>')
WHERE created > '2024-01-01'
GROUP BY 1, 2
ORDER BY date DESC, model ASC
-- Find potentially problematic responses
SELECT *
FROM project_logs('<PROJECT_ID>')
WHERE scores.Factuality < 0.5
  AND metadata.is_production = true
  AND created > now() - interval 1 day
ORDER BY scores.Factuality ASC
LIMIT 100
-- Compare performance across specific models
SELECT *
FROM project_logs('<PROJECT_ID>')
WHERE metadata.model IN ('gpt-4', 'gpt-4-turbo', 'claude-3-opus')
  AND scores.Factuality IS NOT NULL
  AND created > now() - interval 7 day
ORDER BY scores.Factuality DESC
LIMIT 500
Identify and investigate errors in your application.
-- Error rate by model
SELECT
  metadata.model AS model,
  hour(created) AS hour,
  count(1) AS total,
  count(error) AS errors,
  count(error) / count(1) AS error_rate
FROM project_logs('<PROJECT_ID>')
WHERE created > now() - interval 1 day
GROUP BY 1, 2
ORDER BY error_rate DESC
-- Find common error patterns
SELECT
  error.type AS error_type,
  metadata.model AS model,
  count(1) AS error_count,
  avg(metrics.latency) AS avg_latency
FROM project_logs('<PROJECT_ID>')
WHERE error IS NOT NULL
  AND created > now() - interval 7 day
GROUP BY 1, 2
ORDER BY error_count DESC
-- Exclude known error types from analysis
SELECT *
FROM project_logs('<PROJECT_ID>')
WHERE error IS NOT NULL
  AND error.type NOT IN ('rate_limit', 'timeout', 'network_error')
  AND metadata.is_production = true
  AND created > now() - interval 1 day
ORDER BY created DESC
LIMIT 100
-- Track p95 latency by endpoint
SELECT
  metadata.endpoint AS endpoint,
  hour(created) AS hour,
  percentile(metrics.latency, 0.95) AS p95_latency,
  percentile(metrics.latency, 0.50) AS median_latency,
  count(1) AS requests
FROM project_logs('<PROJECT_ID>')
WHERE created > now() - interval 1 day
GROUP BY 1, 2
ORDER BY hour DESC, p95_latency DESC
-- Find slow requests
SELECT
  metadata.endpoint,
  metrics.latency,
  metrics.tokens,
  input,
  created
FROM project_logs('<PROJECT_ID>')
WHERE metrics.latency > 5000 -- Requests over 5 seconds
  AND created > now() - interval 1 hour
ORDER BY metrics.latency DESC
LIMIT 20
-- Track prompt token efficiency
SELECT
  metadata.prompt_template AS template,
  day(created) AS date,
  avg(metrics.prompt_tokens) AS avg_prompt_tokens,
  avg(metrics.completion_tokens) AS avg_completion_tokens,
  avg(metrics.completion_tokens) / avg(metrics.prompt_tokens) AS token_efficiency,
  avg(scores.Factuality) AS avg_factuality
FROM project_logs('<PROJECT_ID>')
WHERE created > now() - interval 7 day
GROUP BY 1, 2
ORDER BY date DESC, token_efficiency DESC
-- Find similar prompts
SELECT *
FROM project_logs('<PROJECT_ID>')
WHERE input MATCH 'explain the concept of recursion'
  AND scores.Factuality > 0.8
ORDER BY created DESC
LIMIT 10
-- Monitor feedback patterns
SELECT
  tags[0] AS primary_tag,
  metadata.model AS model,
  count(1) AS feedback_count,
  avg(scores.Factuality > 0.8 ? 1 : 0) AS high_quality_rate
FROM project_logs('<PROJECT_ID>')
WHERE tags INCLUDES 'feedback'
  AND created > now() - interval 30 day
GROUP BY 1, 2
ORDER BY feedback_count DESC
-- Track issue resolution
SELECT
  created,
  tags,
  metadata.model,
  scores.Factuality,
  response
FROM project_logs('<PROJECT_ID>')
WHERE tags INCLUDES 'needs-review'
  AND NOT tags INCLUDES 'resolved'
  AND created > now() - interval 1 day
ORDER BY scores.Factuality ASC
Use json_extract to extract values from a JSON string using a key name. This is useful when you have JSON data stored as a string field and need to access specific values within it. The path parameter is treated as a literal key name (not a path expression with traversal).
-- Extract a simple field
SELECT
  id,
  json_extract(metadata.config, 'api_key') AS api_key
FROM project_logs('my-project-id')
-- Extract fields with special characters in the key name
SELECT
  id,
  json_extract(metadata.settings, 'user.preferences.theme') AS theme_key
FROM project_logs('my-project-id')
-- Note: This extracts a key literally named "user.preferences.theme", not a nested path
-- Extract and filter
SELECT *
FROM project_logs('my-project-id')
WHERE json_extract(metadata.config, 'environment') = 'production'
  AND json_extract(metadata.config, 'version') > 2.0
json_extract returns null for invalid JSON or missing keys rather than raising an error, making it safe to use in filters and aggregations. The path parameter is a literal key name, not a path expression: characters like dots and brackets are treated as part of the key name itself.
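For example, you can pair json_extract with coalesce to supply a default when the key is missing (a sketch using the config field from the examples above):

SELECT
  coalesce(json_extract(metadata.config, 'environment'), 'unknown') AS environment,
  count(1) AS logs
FROM project_logs('my-project-id')
GROUP BY 1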