Export to S3
Periodically export logs, experiments, or datasets to AWS S3 buckets.
Create S3 export
- Navigate to Configuration > Automations.
- Click + Automation.
- Select S3 export type.
- Configure export settings:
- Name: Identify the export.
- Data to export: Logs (traces), Logs (spans), or Custom BTQL query.
- S3 path: Target bucket and prefix (e.g., s3://my-bucket/braintrust/logs). Once the automation is created, this path cannot be changed.
- Role ARN: IAM role ARN that Braintrust will assume.
- Format: JSON Lines or Parquet.
- Interval: How often to export (5 min to 24 hr).
- Click Test automation to verify S3 access.
- Click Save.
Configure AWS IAM
Create an IAM role that Braintrust can assume:
- In AWS IAM, create a new role with Custom trust policy.
- Use a trust policy that requires an external ID of the form bt:<your organization ID>:* (see the trust policy sketch after this list).
- Attach a policy with S3 write permissions (see the bucket policy sketch after this list).
- Copy the role ARN and paste it into the Braintrust export configuration.
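A minimal sketch of such a trust policy. The principal ARN is a placeholder for the Braintrust-owned principal shown in the export configuration; only the external ID pattern is prescribed here:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "<BRAINTRUST_PRINCIPAL_ARN>" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "sts:ExternalId": "bt:<your organization ID>:*"
        }
      }
    }
  ]
}
```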
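And a sketch of the permissions policy, assuming the example bucket and prefix from earlier (s3://my-bucket/braintrust/logs); s3:PutObject is the minimum required to write objects, and the bucket-level statement is a common addition that your deployment may not need:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/braintrust/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}
```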
Export data types
Logs (traces): One row per trace with scores, metrics, and metadata. Use this for high-level analysis.
Logs (spans): One row per span for detailed execution traces. Use this for debugging or fine-grained analysis.
Custom query: Define exactly what data to export using SQL or BTQL, as in the sketch below.
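For example, a hypothetical BTQL query that exports only low-scoring traces; the project ID and the scores.accuracy field are placeholders, and the exact clause syntax is documented in the BTQL reference:

```
select: *
from: project_logs('<YOUR_PROJECT_ID>')
filter: scores.accuracy < 0.5
```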
S3 folder structure
Exported files are organized by export run date, as in the sketch below.
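An illustrative layout, assuming the example path s3://my-bucket/braintrust/logs and JSON Lines format; the file names are placeholders, and only the date-based grouping is the point:

```
s3://my-bucket/braintrust/logs/
└── 2025-06-01/            # date of the export run
    ├── <part-0001>.jsonl
    └── <part-0002>.jsonl
```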
Export throughput
Each export interval can process up to 100,000 rows:
- For traces: 100,000 traces per interval
- For spans: 100,000 spans per interval
- For custom queries: 100,000 result rows per interval
Historical data
New S3 exports start from the beginning of your data, not from the automation's creation time. The automation processes all historical records before catching up to current data. For large datasets, the initial catch-up may take multiple intervals. This is expected behavior.
Monitor exports
View export status and history:
- Navigate to Configuration > Automations.
- Click the status icon next to your export.
- View run history, rows processed, data size, and timing.
- Run once: Manually trigger an immediate export.
- Reset automation: Clear history and restart from the beginning.
- View errors: See failure details and troubleshoot issues.
Troubleshooting
Export falling behind: If you see “Max iterations reached” warnings:
- This is normal during initial historical data processing.
- If it persists after catch-up, decrease the interval to run more frequently.
- Consider splitting into multiple exports with BTQL filters.
- Ensure you’re on data plane v1.1.27 or later.
- Use the Reset automation button to restart.
- If problems persist, create a new trace export automation.
S3 access errors: If Test automation or export runs fail with permission errors:
- Verify the IAM role ARN is correct.
- Check the trust policy includes the correct external ID.
- Ensure S3 policy grants required permissions.
- Confirm the bucket and prefix exist.
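To narrow down access issues, you can reproduce the role assumption outside Braintrust with the AWS CLI. This is a sketch, assuming you have AWS credentials permitted to assume the role; the account ID, role name, and external ID suffix are placeholders:

```bash
# Assume the export role; the external ID must match the
# bt:<your organization ID>:* pattern in the trust policy.
aws sts assume-role \
  --role-arn "arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME>" \
  --role-session-name braintrust-export-test \
  --external-id "bt:<your organization ID>:test"

# Using the returned temporary credentials, confirm the role can
# write to the configured bucket and prefix.
aws s3 cp test.jsonl s3://my-bucket/braintrust/logs/test.jsonl
```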
Create S3 exports via API
When creating an S3 export automation via the API, you must perform two steps:
- Create the automation using POST /v1/project_automation.
- Register the cron job using POST /automation/cron, as in the sketch below.
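A sketch of the second call; the endpoint path and the service_token field come from this guide, while the API host and the automation_id field name are assumptions to verify against the API reference:

```bash
# Register the cron job for an existing export automation.
curl -X POST "https://api.braintrust.dev/automation/cron" \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "automation_id": "<YOUR_AUTOMATION_ID>",
    "service_token": "<YOUR_API_KEY>"
  }'
```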
In the sketch, replace <YOUR_AUTOMATION_ID> with the ID returned from the POST /v1/project_automation call, and <YOUR_API_KEY> with your API key or service token. Use your API key in the Authorization header to authenticate the call. For the service_token field in the request body, you can use either an API key (sk-*) or a service token (bt-st-*) that has read permission on the project containing the automation; this can be the same API key used for authentication.
Configure retention
Automatically delete old data to manage storage and comply with regulations.
Create retention policy
- Navigate to Configuration > Automations.
- Click + Automation.
- Select Data retention type.
- Configure settings:
- Object type: Logs, Experiments, or Datasets.
- Retention period: Days to keep data before deletion.
- Click Save.
How retention works
Logs: Individual logs are deleted when their creation timestamp exceeds the retention period.
Experiments: Entire experiments (metadata and all rows) are deleted when the experiment creation timestamp exceeds the retention period.
Datasets: Individual dataset rows are deleted when their creation timestamp exceeds the retention period. The dataset itself remains and can accept new rows.
Soft deletion
For hybrid deployments (v1.1.21+), data is soft-deleted by marking it unused. A background process purges unused files within 24 hours, providing a grace period to restore accidentally deleted data.
Configure a service token for your data plane to enable retention. See Data plane manager for details.
Common retention patterns
Production logs: 90 days
Next steps
- Set up alerts to monitor data quality
- View logs to understand what gets exported
- Export data via the API for one-time exports
- Self-hosting advanced for data plane configuration