
Summary

When OTel spans are ingested into Braintrust, attributes such as ai.prompt, gen_ai.input.messages, and llm.input_messages are parsed into the span’s structured input and output fields. Historically, those same raw attributes were also kept on the span’s metadata, which duplicated large payloads. Braintrust now supports an option that strips parsed attributes from metadata so each payload is stored only once, on the structured field it was parsed into.
  • Braintrust-hosted: this option is enabled. Parsed OTel attributes no longer appear in metadata.
  • Self-hosted: the option is off by default. There is no change to your data until you opt in.
  • Per-span escape hatch: set the braintrust.otel.preserve_attributes attribute to true on any span where you need the raw attributes preserved on metadata for forensics or debugging.

What changes for Braintrust-hosted customers

For spans ingested via OTel, attribute keys that Braintrust successfully parses into structured fields are removed from metadata. The data is not lost. It is available on the parsed field (input, output, metrics, expected, tags, span_attributes, or origin). Examples of attributes that are parsed and therefore stripped:
  • Vercel AI SDK: ai.prompt, ai.prompt.messages, ai.prompt.format, ai.response.text
  • OpenInference / generic Gen AI: gen_ai.input.messages, gen_ai.output.messages
  • LlamaIndex / OpenInference: llm.input_messages, llm.output_messages
  • Traceloop, CrewAI, Pydantic AI, LiveKit, and other supported integrations: their parsed attributes are stripped as well.
Attributes that Braintrust does not parse are unaffected and remain on metadata as before.
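As an illustration, here is a simplified sketch of the same log row before and after the change, using the Vercel AI SDK's ai.prompt attribute. The field layout is abridged and the custom myapp.tenant attribute is a made-up example of a key Braintrust does not parse:

# Abridged sketch of one log row; not the exact Braintrust row schema.
before = {
    "input": [{"role": "user", "content": "What is the capital of France?"}],
    "metadata": {
        # Raw payload duplicated alongside the parsed input field.
        "ai.prompt": '{"messages": [{"role": "user", "content": "What is the capital of France?"}]}',
        "myapp.tenant": "acme",  # unparsed attribute, kept either way
    },
}

after = {
    "input": [{"role": "user", "content": "What is the capital of France?"}],
    "metadata": {
        "myapp.tenant": "acme",  # only attributes Braintrust does not parse remain
    },
}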

Escape hatch: preserve raw attributes per span

If you need the raw OTel attributes on metadata for a specific code path, for example to compare what an upstream library actually emitted against what Braintrust parsed, set the attribute braintrust.otel.preserve_attributes to true on that span. The override is evaluated per-span, so you can scope it to one suspect call without changing the rest of your tracing. Both the boolean true and the string "true" are accepted.
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("debug-llm-call") as span:
    span.set_attribute("braintrust.otel.preserve_attributes", True)
    # ... your instrumented LLM call ...
When this attribute is set, Braintrust skips the strip step for that span and writes all raw attributes to metadata exactly as before.
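If you want to flip the flag for a group of spans without touching every call site, one option is a small OpenTelemetry span processor that tags matching spans as they start. This is a sketch using the standard OTel Python SDK; the PreserveAttributesProcessor class and the "debug-" name prefix are illustrative choices, not Braintrust APIs:

from opentelemetry import trace
from opentelemetry.sdk.trace import SpanProcessor, TracerProvider


class PreserveAttributesProcessor(SpanProcessor):
    """Sets the Braintrust preserve flag on spans whose name matches a prefix."""

    def __init__(self, name_prefix: str):
        self._name_prefix = name_prefix

    def on_start(self, span, parent_context=None):
        if span.name.startswith(self._name_prefix):
            span.set_attribute("braintrust.otel.preserve_attributes", True)


provider = TracerProvider()
provider.add_span_processor(PreserveAttributesProcessor("debug-"))
# ... also register the exporter you already use to send spans to Braintrust ...
trace.set_tracer_provider(provider)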

Self-hosted: enabling the feature

Self-hosted deployments default to the prior behavior (raw attributes are kept on metadata). To opt in, set STRIP_OTEL_ATTRIBUTES_FROM_METADATA=true on the api-ts service through your deployment configuration. The per-span braintrust.otel.preserve_attributes escape hatch works the same way once the env var is enabled.
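For example, in a Docker Compose based deployment the variable would go on the api-ts service's environment. The snippet below is a sketch; the exact file and service layout depend on how you deploy the stack (Helm, ECS, and other setups differ):

services:
  api-ts:
    environment:
      - STRIP_OTEL_ATTRIBUTES_FROM_METADATA=true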

Verifying the behavior

  1. Send a known OTel span (for example, a Vercel AI SDK call producing ai.prompt) to Braintrust; a minimal sketch follows this list.
  2. Open the resulting log row. The parsed payload should appear in input / output, and the corresponding attribute keys should be absent from metadata.
  3. Re-run the same call after setting the braintrust.otel.preserve_attributes attribute to true on the span. The raw attribute keys should reappear in metadata while still being parsed into input / output.
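For steps 1 and 3, here is a minimal Python sketch using the OTel SDK and the OTLP HTTP exporter. The endpoint and Authorization header are placeholders for whatever your existing Braintrust OTel exporter is already configured with, and gen_ai.input.messages stands in for any attribute Braintrust parses:

import json
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Reuse the endpoint and headers from your existing Braintrust OTel setup.
exporter = OTLPSpanExporter(
    endpoint=os.environ["BRAINTRUST_OTEL_ENDPOINT"],
    headers={"Authorization": f"Bearer {os.environ['BRAINTRUST_API_KEY']}"},
)
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
messages = json.dumps([{"role": "user", "content": "ping"}])

# Step 1: the attribute should be parsed into input and absent from metadata.
with tracer.start_as_current_span("verify-strip") as span:
    span.set_attribute("gen_ai.input.messages", messages)

# Step 3: same payload with the escape hatch; the raw attribute should stay in metadata.
with tracer.start_as_current_span("verify-preserve") as span:
    span.set_attribute("gen_ai.input.messages", messages)
    span.set_attribute("braintrust.otel.preserve_attributes", True)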