AgentScope is a framework for building multi-agent systems. Braintrust traces AgentScope agents, pipelines, tool calls, and model requests so you can follow complete agent workflows in one trace.
This guide covers manual instrumentation. For quicker setup, use auto-instrumentation.

Setup

Install Braintrust and AgentScope:
pip install braintrust agentscope
Set your API keys before you run your app:
.env
BRAINTRUST_API_KEY=<your-braintrust-api-key>
OPENAI_API_KEY=<your-openai-api-key>
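If you prefer not to use a .env file, the keys can also be set in-process before Braintrust or AgentScope is imported. A minimal sketch (the placeholder values below are not real keys):

```python
import os

# Set the keys in-process before importing braintrust or agentscope.
# setdefault keeps any values already present in the environment, so a
# real key exported in the shell wins over the placeholder.
os.environ.setdefault("BRAINTRUST_API_KEY", "<your-braintrust-api-key>")
os.environ.setdefault("OPENAI_API_KEY", "<your-openai-api-key>")
```

Either approach works; the only requirement is that both variables are set before the first Braintrust or model call.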

Trace with AgentScope

Use setup_agentscope() when you want to trace AgentScope explicitly.
trace-agentscope.py
import asyncio

from braintrust.integrations.agentscope import setup_agentscope

# Initialize tracing before the AgentScope imports below.
setup_agentscope(project_name="agentscope-example")

from agentscope.agent import ReActAgent
from agentscope.formatter import OpenAIChatFormatter
from agentscope.memory import InMemoryMemory
from agentscope.message import Msg
from agentscope.model import OpenAIChatModel
from agentscope.tool import Toolkit


async def main():
    agent = ReActAgent(
        name="Friday",
        sys_prompt="You are a concise assistant. Answer in one sentence.",
        model=OpenAIChatModel(model_name="gpt-5-mini"),
        formatter=OpenAIChatFormatter(),
        toolkit=Toolkit(),
        memory=InMemoryMemory(),
    )
    # Silence console output if the installed AgentScope version exposes a
    # method for it (the method name has varied across releases).
    if hasattr(agent, "set_console_output_enabled"):
        agent.set_console_output_enabled(False)
    elif hasattr(agent, "disable_console_output"):
        agent.disable_console_output()

    response = await agent(
        Msg(name="user", content="Say hello in exactly two words.", role="user")
    )
    print(response)


if __name__ == "__main__":
    asyncio.run(main())
setup_agentscope() initializes the Braintrust logger if one is not already active, then patches AgentScope.

What Braintrust traces

Braintrust creates spans for:
  • Agent runs: each agent invocation appears as a named task span
  • Sequential pipelines: the pipeline run becomes a parent span, with child spans for each agent step
  • Tool execution: tool calls appear as child tool spans under the active agent run
  • Underlying model calls: LLM requests keep their normal model metadata, token metrics, and outputs

Supported versions

The integration is tested against AgentScope 1.0.0 and the latest released version.

Resources