Prompts created in Braintrust can be called directly from your application code. Changes made in the UI immediately affect production behavior, enabling rapid iteration without redeployment.

Call a prompt

Use invoke() to call a deployed prompt by its slug:
import { invoke } from "braintrust";

const result = await invoke({
  projectName: "My Project",
  slug: "summarizer",
  input: {
    text: "Long text to summarize...",
  },
});

console.log(result);
The input parameter values map to template variables in your prompt. For example, {{text}} in your prompt gets replaced with the text value from input.
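The substitution itself is ordinary mustache-style interpolation. As a minimal local sketch of those semantics (this is an illustration of the mapping, not the SDK's actual implementation):

```typescript
// Sketch of mustache-style variable substitution: each {{name}} in the
// template is replaced by the matching key from the input object.
function fillTemplate(template: string, input: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => input[key] ?? "");
}

const prompt = fillTemplate("Summarize this text: {{text}}", {
  text: "Long text to summarize...",
});
console.log(prompt); // → "Summarize this text: Long text to summarize..."
```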

Use within a trace

When you call a prompt from instrumented code, its execution automatically nests within your parent trace:
import { initLogger, invoke, wrapTraced } from "braintrust";

const logger = initLogger({ projectName: "My Project" });

const summarize = wrapTraced(async function summarize(text: string) {
  return await invoke({
    projectName: "My Project",
    slug: "summarizer",
    input: { text },
  });
});

// This creates a trace with "summarize" as parent
const result = await summarize("Long text to summarize...");
This creates a hierarchical trace where the prompt execution appears as a child span of your function.

Handle tool calls

When a prompt includes tools, the response contains tool calls that your code must handle:
import { invoke } from "braintrust";

const result = await invoke({
  projectName: "RAG App",
  slug: "document-search",
  input: { question: "What is Braintrust?" },
});

// Handle tool calls
if (result.toolCalls) {
  for (const toolCall of result.toolCalls) {
    console.log(`Tool: ${toolCall.function.name}`);
    console.log(`Arguments: ${toolCall.function.arguments}`);
    // Execute tool and return results...
  }
}
See Deploy functions for details on deploying tools alongside prompts.

Version prompts

Every prompt save creates a new version with a unique ID. Pin specific versions in production code:
import { invoke } from "braintrust";

const result = await invoke({
  projectName: "My Project",
  slug: "summarizer",
  version: "5878bd218351fb8e", // Pin to specific version
  input: { text: "Long text to summarize..." },
});
Without a version parameter, invoke() uses the latest version.
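One way to make the pin a configuration concern rather than a code change is to build the `invoke()` arguments from an environment variable. A sketch; `SUMMARIZER_PROMPT_VERSION` is a hypothetical variable name:

```typescript
// Sketch: resolve the version pin from configuration so a rollback is a
// config change rather than a redeploy. SUMMARIZER_PROMPT_VERSION is a
// hypothetical environment variable.
function summarizerInvokeArgs(version?: string): Record<string, unknown> {
  return {
    projectName: "My Project",
    slug: "summarizer",
    ...(version ? { version } : {}), // omit `version` to fall back to latest
    input: { text: "Long text to summarize..." },
  };
}

// Usage:
// const result = await invoke(summarizerInvokeArgs(process.env.SUMMARIZER_PROMPT_VERSION));
```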

Use environments

Environments separate dev, staging, and production configurations. Set the environment when calling prompts:
import { invoke } from "braintrust";

const result = await invoke({
  projectName: "My Project",
  slug: "summarizer",
  environment: "production",
  input: { text: "Long text to summarize..." },
});
This uses the prompt version assigned to the production environment. See Manage environments for details.
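In practice, the environment name is often derived from the runtime stage so each deployment picks up its own prompt version. A sketch; the mapping and the environment names ("development", "staging", "production") are assumptions about how you have configured your project:

```typescript
// Sketch: map the process's runtime stage to a Braintrust environment name.
// The environment names are assumptions about your project configuration.
function braintrustEnvironment(nodeEnv = process.env.NODE_ENV): string {
  switch (nodeEnv) {
    case "production":
      return "production";
    case "staging":
      return "staging";
    default:
      return "development";
  }
}

// Usage:
// await invoke({ projectName: "My Project", slug: "summarizer",
//                environment: braintrustEnvironment(), input: { text } });
```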

Build prompts locally

Use build() to compile a prompt’s template without making an API call. This is useful for testing or generating messages to pass to your own LLM client:
import { loadPrompt } from "braintrust";

const prompt = await loadPrompt({
  projectName: "My Project",
  slug: "summarizer",
});

const { messages, model, temperature } = prompt.build({
  text: "Long text to summarize...",
});

console.log(messages);
// Use messages with your own LLM client
The build() method returns the compiled messages, model, and parameters without executing the prompt.
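Because the compiled output is a plain object of parameters, you can adjust it per call before handing it to your client. A sketch, where the `BuiltPrompt` shape is an assumption about chat-completion-style parameters:

```typescript
// Sketch: apply per-call overrides (e.g. temperature) to built prompt
// parameters before sending them to your own LLM client. The BuiltPrompt
// shape is an assumption, not the SDK's declared type.
type BuiltPrompt = {
  model: string;
  temperature?: number;
  messages: { role: string; content: string }[];
};

function withOverrides(built: BuiltPrompt, overrides: Partial<BuiltPrompt>): BuiltPrompt {
  return { ...built, ...overrides };
}
```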

Stream responses

Enable streaming to receive responses incrementally:
import { invoke } from "braintrust";

const stream = await invoke({
  projectName: "My Project",
  slug: "summarizer",
  input: { text: "Long text to summarize..." },
  stream: true,
});

for await (const chunk of stream) {
  if (chunk.type === "text_delta") {
    process.stdout.write(chunk.data);
  }
}
Streaming works through the AI Proxy and automatically logs the complete response to Braintrust.
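If you need the full response as a single string (for example, to return it from an API handler), you can accumulate the stream. A sketch, assuming chunks arrive as event objects with `type` and `data` fields:

```typescript
// Sketch: accumulate streamed text deltas into the complete response.
// The chunk shape ({ type, data }) is an assumption about the stream's events.
type StreamChunk = { type: string; data: string };

async function collectText(stream: AsyncIterable<StreamChunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    if (chunk.type === "text_delta") {
      text += chunk.data;
    }
  }
  return text;
}
```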

Use the REST API

Call prompts directly via HTTP:
curl https://api.braintrust.dev/v1/function \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $BRAINTRUST_API_KEY" \
  -d '{
    "project_name": "My Project",
    "slug": "summarizer",
    "input": {
      "text": "Long text to summarize..."
    }
  }'
The REST API supports all the same parameters as the SDK, including versioning, environments, and streaming.
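The same request can be issued from code with `fetch`. A sketch that mirrors the curl example above (endpoint and body fields are taken from that example; the request-builder helper is hypothetical):

```typescript
// Sketch: build the fetch options for the REST call shown above.
// The API key is passed in rather than hard-coded.
function buildInvokeRequest(apiKey: string, body: object) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  };
}

// Usage:
// await fetch("https://api.braintrust.dev/v1/function", buildInvokeRequest(apiKey, {
//   project_name: "My Project",
//   slug: "summarizer",
//   input: { text: "Long text to summarize..." },
// }));
```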

Next steps