Functions in Braintrust are atomic, reusable building blocks for executing AI-related logic. Functions are hosted and executed remotely in a performant serverless environment and are intended for production use. Functions can be invoked through the REST API, SDK, or UI, and have built-in support for streaming and structured outputs. Function types:
  • Prompts: LLM prompts with model configuration and templating (see Deploy prompts)
  • Tools: General-purpose code that LLMs can invoke to perform operations or access external data
  • Scorers: Functions for evaluating LLM output quality (returning a number from 0 to 1)
  • Workflows: Chains of two or more prompts for multi-step tasks
Security: In Braintrust-hosted deployments and self-hosted deployments on AWS, functions run in isolated AWS Lambda environments within a dedicated VPC that has no access to internal infrastructure. See code execution security for details.

Composability

Functions can be composed together to produce sophisticated applications without complex orchestration logic. In the functions-flow diagram, a prompt is invoked with an input and calls two different tools and scorers to ultimately produce a streaming output. Out of the box, you also get automatic tracing, including the tool calls and scores.

Any function can be used as a tool. For example, a RAG agent can be defined as just two components:
  • A vector search tool that embeds a query, searches for relevant documents, and returns them
  • A system prompt with instructions for how to retrieve content and synthesize answers using the tool
For a complete example, see the cookbook for Using functions to build a RAG agent.

Deploy tools

Tools are functions that LLMs can call to perform complex operations or access external data. Create tools in code and push them to Braintrust:
calculator.ts
import * as braintrust from "braintrust";
import { z } from "zod";

const project = braintrust.projects.create({ name: "calculator" });

project.tools.create({
  handler: ({ op, a, b }) => {
    switch (op) {
      case "add":
        return a + b;
      case "subtract":
        return a - b;
      case "multiply":
        return a * b;
      case "divide":
        return a / b;
    }
  },
  name: "Calculator",
  slug: "calculator",
  description: "A simple calculator that can add, subtract, multiply, and divide.",
  parameters: z.object({
    op: z.enum(["add", "subtract", "multiply", "divide"]),
    a: z.number(),
    b: z.number(),
  }),
  returns: z.number(),
  ifExists: "replace",
});
Push to Braintrust:
npx braintrust push calculator.ts

View and manage tools

Go to Tools to view all deployed tools in your project. Use Filter or the search bar to find specific tools. Click a tool to view its code. To test the tool, enter input data and click Test.

Add tools to prompts

Once deployed, you can add tools to prompts in the UI or via code. See Add tools for more details.

Call tools directly

Call tools via the API without going through a prompt:
import { invoke } from "braintrust";

const result = await invoke({
  projectName: "calculator",
  slug: "calculator",
  input: {
    op: "add",
    a: 5,
    b: 3,
  },
});

console.log(result); // 8

Deploy scorers

Scorers evaluate the quality of LLM outputs. See Write scorers for details on creating scorers in the UI or via code.

Deploy workflows

Workflows chain multiple prompts together into multi-step pipelines. Create workflows in playgrounds:
  1. Navigate to Playgrounds
  2. Click + Workflow
  3. Add prompt nodes by selecting + in the comparison pane
  4. Use template variables to pass data between prompts:
    • {{dataset.input}} - Access dataset inputs
    • {{input}} - Access previous prompt’s output
    • {{input.field}} - Access structured output fields
  5. Save the workflow
Workflows automatically chain prompts and pass outputs between them. View deployed workflows in the Workflows library.
Workflows are in beta and currently work only in playgrounds. Workflow deployment via the SDK is coming soon.

Invoke functions

Functions can be invoked through the REST API, SDK, or UI. When invoking a function, you can reference it by:
  • Slug: The unique identifier within a project for any function type (e.g., slug: "calculator")
  • Global function name: Built-in Braintrust scorers only, i.e. globally unique functions such as Factuality from autoevals
Reference a function by its slug within a specific project:
import { invoke } from "braintrust";

const result = await invoke({
  projectName: "my-project",
  slug: "my-function",
  input: { query: "hello" },
});

Handle dependencies

Braintrust automatically bundles dependencies for your functions:
  • TypeScript: Uses esbuild to bundle code and dependencies (excludes native libraries like SQLite)
  • Python: Uses uv to cross-bundle dependencies to Linux (supports most binary dependencies)
If you encounter bundling issues, file an issue on GitHub.

Version functions

Like prompts, functions are automatically versioned. Pin specific versions in code:
import { invoke } from "braintrust";

const result = await invoke({
  projectName: "calculator",
  slug: "calculator",
  version: "a1b2c3d4", // Pin to specific version
  input: { op: "add", a: 5, b: 3 },
});

Use the REST API

Call any function via HTTP:
curl https://api.braintrust.dev/v1/function \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $BRAINTRUST_API_KEY" \
  -d '{
    "project_name": "calculator",
    "slug": "calculator",
    "input": {
      "op": "add",
      "a": 5,
      "b": 3
    }
  }'
See the Data API reference for complete details.

Function features

All functions in Braintrust support:
  • Well-defined parameters and return types: Type-safe interfaces using Zod (TypeScript) or Pydantic (Python)
  • Streaming and non-streaming invocation: Handle real-time and batch operations
  • Automatic tracing and logging: All function calls are traced in Braintrust
  • OpenAI argument format: Prompts can be loaded directly in OpenAI-compatible format
  • Version control: Functions are automatically versioned with each deployment

Organize functions

Functions are organized into projects using the projects.create() method. This method returns a handle to the project (creating it if it doesn’t exist) that you can use to create tools, prompts, and scorers:
import * as braintrust from "braintrust";

// Get a handle to the project (creates if it doesn't exist)
const project = braintrust.projects.create({ name: "my-project" });

// Use the project to create functions
project.tools.create({...});
project.prompts.create({...});
project.scorers.create({...});
If a project already exists, projects.create() returns a handle to it. There is no separate .get() method.

Next steps