Summary

When iterating over large datasets with Python SDK versions before 0.3.8, the SDK issues a separate BTQL query for each page of results. On large datasets this exceeds the rate limit of 20 requests per 60 seconds, raising a TooManyRequestsError. Upgrading to SDK version 0.3.8 or later resolves the issue, as those versions handle rate limits automatically during dataset operations.

Error Message

braintrust.util.AugmentedHTTPError: {
  "Code": "TooManyRequestsError",
  "Message": "Too many requests. Source: checkBtqlOrgRateLimit. Rate limit: 20 requests per 60 seconds. Consumed: 21..."
}
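If you cannot upgrade immediately, the rate limit can be worked around by retrying with exponential backoff. Below is a minimal, self-contained sketch of that pattern; `TooManyRequestsError` here is a hypothetical stand-in class, and in practice you would catch the SDK's `braintrust.util.AugmentedHTTPError` and inspect its message instead.

```python
import random
import time

# Hypothetical stand-in for the SDK's rate-limit error; substitute
# braintrust.util.AugmentedHTTPError (checking its message) in real code.
class TooManyRequestsError(Exception):
    pass

def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on rate-limit errors."""
    for attempt in range(max_retries):
        try:
            return fn()
        except TooManyRequestsError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            # Sleep base_delay, 2x, 4x, ... plus jitter to spread retries out.
            time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)
```

This is a generic sketch under stated assumptions, not the SDK's own retry logic; upgrading to 0.3.8+ remains the proper fix.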

Solution 1: Upgrade the SDK (recommended)

Step 1: Check current version

Verify your current SDK version.
import braintrust
print(braintrust.__version__)
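To check the version programmatically, you can compare it against 0.3.8. The helper below is a hypothetical sketch that assumes a plain "X.Y.Z" numeric version string; for versions with pre-release suffixes, use `packaging.version` instead.

```python
def is_at_least(version, minimum="0.3.8"):
    """Return True if a dotted "X.Y.Z" version string meets the minimum."""
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(version) >= to_tuple(minimum)

# e.g. is_at_least(braintrust.__version__) -> upgrade needed if False
```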

Step 2: Upgrade SDK

Install version 0.3.8 or later, which includes automatic rate limit handling.
pip install --upgrade "braintrust>=0.3.8"

Step 3: Resume normal iteration

Dataset iteration will now handle rate limits automatically.
dataset = braintrust.init_dataset(project="my-project", name="my-dataset")
for row in dataset:
    # SDK handles rate limiting automatically
    process_row(row)

Solution 2: Use fetch() method (immediate workaround)

Step 1: Fetch all data at once

Use fetch() to retrieve all rows in a single API call.
dataset = braintrust.init_dataset(project="my-project", name="my-dataset")
all_rows = dataset.fetch()  # Single operation, no pagination

Step 2: Process locally

Iterate through the fetched data without additional API calls.
for row in all_rows:
    process_row(row)
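If you need multiple passes over the data, materialize the fetched rows into a list first, so that repeated iteration runs entirely in memory. The sketch below uses a stand-in generator in place of `dataset.fetch()` to illustrate the pattern:

```python
# simulated_fetch stands in for dataset.fetch(); it yields rows the way
# an iterator returned by the SDK would.
def simulated_fetch():
    for i in range(3):
        yield {"id": i, "input": f"example-{i}"}

all_rows = list(simulated_fetch())  # consume the iterator exactly once

first_pass = [row["id"] for row in all_rows]
second_pass = [row["id"] for row in all_rows]  # no further fetching
```

Materializing once avoids re-consuming an exhausted iterator and keeps all subsequent processing free of API calls.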