AI observability beyond Python and TypeScript
Most AI observability tools only support Python and TypeScript. That's a problem for banks adding LLM features to Java backends, infrastructure teams building AI tooling in Go, and startups shipping products in Ruby or C#. These developers are left with two options: build custom instrumentation from scratch, or bolt on generic tracing tools that don't understand AI-specific concepts like token usage, costs, or prompt/completion pairs.
We've heard this directly from customers, so we built native SDKs for Java, Go, Ruby, and C#, each on top of OpenTelemetry for vendor-neutral observability.
Every SDK provides:
- Automatic tracing that fits into existing infrastructure. Export traces to Braintrust, Datadog, Honeycomb, or any OTLP-compatible backend (see the exporter sketch after this list).
- Client wrappers for OpenAI, Anthropic, and other providers that automatically capture inputs, outputs, latency, token usage, and costs.
- Evaluation frameworks for running evals in CI/CD with custom scorers.
- Prompt management for fetching prompts from Braintrust at runtime.
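Because every SDK is built on OpenTelemetry, pointing traces at a backend is ordinary OTLP exporter configuration. Here's a minimal Go sketch using the upstream OpenTelemetry packages; the endpoint is a placeholder, and each SDK's README has the exact endpoint and auth headers for your backend.

import (
    "context"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
    sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// initTracing sets up a global tracer provider that ships spans over OTLP/HTTP.
// Swap the endpoint (and add auth headers) for Braintrust, Datadog, Honeycomb,
// or any other OTLP-compatible collector.
func initTracing(ctx context.Context) (*sdktrace.TracerProvider, error) {
    exporter, err := otlptracehttp.New(ctx,
        otlptracehttp.WithEndpoint("collector.example.com"), // placeholder endpoint
    )
    if err != nil {
        return nil, err
    }
    tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
    otel.SetTracerProvider(tp)
    return tp, nil
}

Call Shutdown on the returned provider when your process exits so any buffered spans get flushed.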
Get started
Each SDK is open source and available now:
- Java: github.com/braintrustdata/braintrust-sdk-java (Maven Central)
- Go: github.com/braintrustdata/braintrust-sdk-go
- Ruby: github.com/braintrustdata/braintrust-sdk-ruby (RubyGems)
- C#: github.com/braintrustdata/braintrust-sdk-dotnet (NuGet)
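For Go, the module path matches the repository (it's the prefix of the import in the example below), so go get github.com/braintrustdata/braintrust-sdk-go pulls it into your module; the Java, Ruby, and C# packages install from Maven Central, RubyGems, and NuGet as listed above.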
Here's how to instrument an OpenAI client, shown here in Go:
package main

import (
    "context"
    "fmt"
    "log"

    traceopenai "github.com/braintrustdata/braintrust-sdk-go/trace/contrib/openai"
    "github.com/openai/openai-go"
    "github.com/openai/openai-go/option"
)

func main() {
    // Wrap your OpenAI client with automatic tracing
    client := openai.NewClient(
        option.WithMiddleware(traceopenai.NewMiddleware()),
    )

    // Use the client as normal - all calls are automatically traced
    resp, err := client.Chat.Completions.New(context.Background(), openai.ChatCompletionNewParams{
        Messages: []openai.ChatCompletionMessageParamUnion{
            openai.UserMessage("Explain quantum computing"),
        },
        Model: openai.ChatModelGPT4oMini,
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(resp.Choices[0].Message.Content)
}
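A note on the design: in Go the wrapper plugs in as openai-go client middleware, so tracing happens at the HTTP layer and the rest of your calling code stays unchanged; the other SDKs expose equivalent client wrappers in their own idioms.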
Check out the README for each SDK for full documentation and examples. If you have questions or run into issues, reach out on Discord.