Create prompts
Create prompts directly in the Braintrust UI:
- Go to Prompts and click + Prompt
- Configure the prompt:
- Name: Descriptive display name
- Slug: Unique identifier for code references (remains constant across updates)
- Model and parameters: Model selection, temperature, max tokens, etc.
- Messages: System, user, assistant, or tool messages with text or images
- Templating syntax: Mustache or Nunjucks for variable substitution
- Response format: Freeform text, JSON object, or structured JSON schema
- Description: Optional context about the prompt’s purpose
- Metadata: Optional additional information
- Click Save as custom prompt
Use templating
Use templates to inject variables into prompts at runtime. Braintrust supports Mustache and Nunjucks templating:
- Mustache (default): Simple variable substitution and basic logic
- Nunjucks: Advanced templating with loops, conditionals, and filters
Nunjucks is a UI-only feature. It works in playgrounds but not when invoked via SDK.
Mustache
Mustache is the default templating language.
Basic variable substitution
Use {{variable}} to insert values:
Nested properties
Access nested object properties with dot notation:
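An illustrative template with both plain and dot-notation substitution (the variable names are made up):

```
Template:
Hello, {{name}}! Your plan is {{account.plan}}.

Variables:
{ "name": "Ada", "account": { "plan": "Pro" } }

Rendered:
Hello, Ada! Your plan is Pro.
```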
Sections and iteration
Use sections to iterate over arrays or conditionally show content:
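For example, a section repeats its contents once per array element, and renders conditionally for a truthy value (illustrative names):

```
Template:
{{#items}}
- {{name}}: {{price}}
{{/items}}
{{#is_member}}Thanks for being a member!{{/is_member}}

Variables:
{ "items": [ { "name": "Widget", "price": "$10" } ], "is_member": true }

Rendered:
- Widget: $10
Thanks for being a member!
```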
Inverted sections
Use ^ to show content when a value is falsy or empty:
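For instance, an inverted section can supply fallback text when an array is empty (illustrative names):

```
Template:
{{#results}}{{title}}{{/results}}
{{^results}}No results found.{{/results}}

Variables:
{ "results": [] }

Rendered:
No results found.
```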
Comments
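Use {{! comment }} for comments that won’t appear in output:

```
Template:
{{! TODO: tighten this instruction }}Summarize the document.

Rendered:
Summarize the document.
```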
Preserve special characters
If you want to preserve double curly brackets {{ and }} as plain text when using Mustache, change the delimiter tags:
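For example, a set-delimiter tag switches interpolation to a different pair of delimiters, so literal {{ and }} pass through unchanged (the delimiter choice here is illustrative):

```
Template:
{{=<% %>=}}
Return the answer as {{"answer": "..."}}.
Question: <%question%>

Variables:
{ "question": "What is 2 + 2?" }

Rendered:
Return the answer as {{"answer": "..."}}.
Question: What is 2 + 2?
```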
Strict mode
Mustache supports strict mode, which throws an error when required template variables are missing:
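For example, with strict mode enabled, a template that references an unsupplied variable fails instead of silently rendering an empty string (the exact error surface depends on where the prompt runs):

```
Template:
Summarize {{document}} in {{language}}.

Variables:
{ "document": "Quarterly report" }

Result:
error — the required variable "language" was not provided
```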
Nunjucks
For more complex templating needs, use Nunjucks, which implements Jinja2 syntax in JavaScript.
Loops
Process arrays and iterate over data. Inside a loop, Nunjucks exposes useful metadata through loop variables: loop.index (1-indexed), loop.index0 (0-indexed), loop.first, loop.last, and loop.length.
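An illustrative loop that uses loop.index for numbering and loop.last to control separators (names are made up):

```
Template:
{% for item in items %}{{ loop.index }}. {{ item.name }}{% if not loop.last %}, {% endif %}{% endfor %}

Variables:
{ "items": [ { "name": "apple" }, { "name": "banana" } ] }

Rendered:
1. apple, 2. banana
```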
Conditionals
Add logic to your prompts. Conditionals can also be combined with loops:
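For example, branching on a variable with if/elif/else (the variable and values are illustrative):

```
Template:
{% if tier == "pro" %}Answer in full technical detail.{% elif tier == "free" %}Keep the answer brief.{% else %}Answer normally.{% endif %}

Variables:
{ "tier": "free" }

Rendered:
Keep the answer brief.
```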
Filters
Transform data with built-in filters. Common filters:
- upper, lower: Change case
- title, capitalize: Capitalize text
- join(separator): Join array elements
- length: Get array or string length
- default(value): Provide a default value
- replace(old, new): Replace text
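Filters can be chained with the pipe character; an illustrative example (names are made up):

```
Template:
Hello, {{ name | default("friend") | title }}! Topics: {{ topics | join(", ") }} ({{ topics | length }} total)

Variables:
{ "topics": ["pricing", "support"] }

Rendered:
Hello, Friend! Topics: pricing, support (2 total)
```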
String operations
Concatenate strings with ~:
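For example, joining literal text and variables into one string (illustrative names):

```
Template:
{{ "Dear " ~ user.first_name ~ " " ~ user.last_name ~ "," }}

Variables:
{ "user": { "first_name": "Ada", "last_name": "Lovelace" } }

Rendered:
Dear Ada Lovelace,
```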
Nested data access
Access nested properties and array elements:
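An illustrative example combining dot notation with array indexing (names are made up):

```
Template:
First result: {{ results[0].title }} (score {{ results[0].metadata.score }})

Variables:
{ "results": [ { "title": "Intro", "metadata": { "score": 0.92 } } ] }

Rendered:
First result: Intro (score 0.92)
```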
Add tools
Tools extend your prompt’s capabilities by allowing the LLM to call functions during execution:
- Query external APIs or databases
- Perform calculations or data transformations
- Retrieve information from vector stores or search engines
- Execute custom business logic
To add tools to a prompt in the UI:
- When creating or editing a prompt, click Tools.
- Select tool functions from your library or add raw tools as JSON.
- Click Save tools.
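A raw tool is defined with standard function-calling JSON. A minimal sketch, assuming an OpenAI-style schema (the function name and parameters are hypothetical):

```json
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Look up the current weather for a city",
    "parameters": {
      "type": "object",
      "properties": {
        "city": { "type": "string", "description": "City name" }
      },
      "required": ["city"]
    }
  }
}
```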
Add MCP servers
Use public MCP (Model Context Protocol) servers to give your prompts access to external tools and data:
- Evaluate complex tool calling workflows
- Experiment with external APIs and services
- Tune public MCP servers
MCP servers are a UI-only feature. They work in playgrounds and experiments but not when invoked via SDK.
Add to a prompt
To add an MCP server to a prompt:
- When creating or editing a prompt, click MCP.
- Enable any available project-wide servers.
- To add a prompt-specific MCP server, click + MCP server:
- Provide a name, the public URL of the server, and an optional description.
- Click Add server.
- Authenticate the MCP server in your browser.
Add to a project
Project-wide MCP servers are accessible across all projects in your organization:
- Go to Configuration > MCP.
- Click + MCP server and provide a name, the public URL of the server, and an optional description.
- Click Authenticate to authenticate the MCP server in your browser.
- Click Save.
Use in code
Reference prompts by slug to use them in your application. Invoking a prompt this way:
- Automatically logs inputs and outputs
- Tracks which prompt version was used
- Enables A/B testing different prompt versions
- Lets you update prompts without code changes
Load a prompt
The loadPrompt() / load_prompt() function loads a prompt with caching support:
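In Python, the end-to-end pattern might look like this sketch — loading, pinning a version, streaming, and appending extra messages. The project name, slug, version id, and template variable are placeholders, and it assumes the braintrust and openai packages plus API credentials:

```python
import braintrust
from openai import OpenAI

# Wrap the client so calls are logged to Braintrust
client = braintrust.wrap_openai(OpenAI())

# Load the latest version by project name and slug (cached after first fetch)
prompt = braintrust.load_prompt("my-project", "summarizer")

# Or pin a specific version (version id is a placeholder)
# prompt = braintrust.load_prompt("my-project", "summarizer", version="abc123")

# build() fills template variables and returns completion parameters
params = prompt.build(article="Braintrust launched a new prompt editor today.")
response = client.chat.completions.create(**params)
print(response.choices[0].message.content)

# Stream results chunk by chunk
for chunk in client.chat.completions.create(**params, stream=True):
    print(chunk.choices[0].delta.content or "", end="")

# Append extra messages for a multi-turn conversation
params["messages"].append({"role": "user", "content": "Make it one sentence."})
followup = client.chat.completions.create(**params)
```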
Pin a specific version
Reference a specific version when loading prompts:
Stream results
Stream prompt responses for real-time output:
Add extra messages
Append additional messages to prompts for multi-turn conversations:
Test in playgrounds
Playgrounds provide a no-code environment for rapid prompt iteration:
- Create or select a prompt
- Add a dataset or enter test inputs
- Run the prompt and view results
- Adjust parameters or messages
- Compare different versions side-by-side
Version prompts
Every prompt change creates a new version automatically. This lets you:
- Compare performance across versions
- Roll back to previous versions
- Pin experiments to specific versions
- Track which version is used in production
Optimize with Loop
Use Loop to generate and improve prompts. Example queries:
- “Generate a prompt for a chatbot that can answer questions about the product”
- “Add few-shot examples based on project logs”
- “Optimize this prompt to be friendlier and more engaging”
- “Improve this prompt based on the experiment results”
Best practices
- Start simple: Begin with clear, direct instructions. Add complexity only when needed.
- Use few-shot examples: Include 2-3 examples in your prompt to guide model behavior.
- Be specific: Define exactly what you want, including format, tone, and constraints.
- Test with real data: Use production logs to build test datasets that reflect actual usage.
- Iterate systematically: Change one thing at a time and measure impact with experiments.
- Version everything: Save prompt changes so you can track what works and roll back if needed.
Next steps
- Write scorers to evaluate prompt quality
- Run evaluations to compare prompt versions
- Use playgrounds for rapid iteration
- Use the Loop to optimize prompts