Summary

Issue: Traces containing large conversational data (i.e., hundreds of runs with hundreds of thousands of tokens each) load slowly in the UI, making browsing painful. Cause: Large JSON payloads are rendered inline, so opening a trace is slow. Resolution: Use JSONAttachment to move large data out of inline rendering while keeping essential metadata inline and searchable.

Implementation

Use JSONAttachment for large conversational data, and keep essential metadata inline for indexing and filtering.

Python:
from braintrust import JSONAttachment

# `span` is an active Braintrust span (e.g. created with start_span() or inside @traced)
span.log(
    input={
        # Large data stored as attachment
        "transcript": JSONAttachment(large_conversation, filename="chat.json")
    },
    # Essential metadata stays inline for filtering/search
    metadata={
        "model": "gpt-4",
        "turns": 156,
        "summary": "Customer support chat about billing"
    }
)

TypeScript:
import { JSONAttachment } from "braintrust";

// `span` is an active Braintrust span (e.g. created with startSpan() or inside traced())
span.log({
  input: {
    // Large data stored as attachment
    transcript: new JSONAttachment(largeConversation, { filename: "chat.json" })
  },
  // Essential metadata stays inline for filtering/search
  metadata: {
    model: "gpt-4",
    turns: 156,
    summary: "Customer support chat about billing"
  }
});

Key benefits:
  • The trace UI loads quickly, since attachments are not rendered inline
  • The full conversation remains accessible through the attachment viewer in the UI
  • Essential metadata stays visible at a glance for filtering and search
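To decide when the pattern above is worth applying, one option is to measure the serialized size of a payload before logging it. The sketch below is a hypothetical helper, not part of the Braintrust SDK; the 64 KB threshold and the summarize_transcript fields are assumptions to tune per workload:

```python
import json

# Assumed cutoff: payloads larger than this go into an attachment
ATTACHMENT_THRESHOLD_BYTES = 64 * 1024

def should_offload(payload, threshold=ATTACHMENT_THRESHOLD_BYTES):
    """Return (offload, size_bytes) for a JSON-serializable payload."""
    size = len(json.dumps(payload).encode("utf-8"))
    return size > threshold, size

def summarize_transcript(transcript):
    """Extract lightweight inline metadata from a list of {role, content} messages."""
    return {
        "turns": len(transcript),
        "approx_chars": sum(len(m.get("content", "")) for m in transcript),
    }

# Example: 200 messages of ~1 KB each easily exceed the 64 KB cutoff
transcript = [{"role": "user", "content": "x" * 1000} for _ in range(200)]
offload, size = should_offload(transcript)
meta = summarize_transcript(transcript)
```

When offload is True, pass the transcript to JSONAttachment as in the examples above and log meta inline; otherwise the payload is small enough to log directly.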