Prompts
Prompt engineering is a core activity in AI engineering. Braintrust allows you to create prompts, test them out in the playground, use them in your code, update them, and track their performance over time. Our goal is to provide a world-class authoring experience in Braintrust, to let you seamlessly, securely, and reliably integrate prompts into your code, and to help you debug issues as they arise.
Creating a prompt
To create a prompt, navigate to your Library in the top menu bar and select Prompts, then Create prompt. Pick a name and unique slug for your prompt. The slug is an identifier that you can use to reference it in your code. As you change the prompt's name, description, or contents, its slug stays constant.
Prompts can use mustache templating syntax to refer to variables. These variables are substituted automatically in the API, in the playground, and via the .build() function in your code. More on that below.
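For example, a prompt's messages might reference a {{question}} variable (a sketch; the variable name is illustrative):

```typescript
// A chat prompt using mustache templating. The {{question}} placeholder
// is replaced with the caller-supplied value at build/invoke time.
const messages = [
  { role: "system", content: "You are a concise, helpful assistant." },
  { role: "user", content: "{{question}}" },
];
```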
In code
To create a prompt in code, you can write a script and push it to Braintrust:
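A minimal TypeScript sketch (the project name, slug, and model here are illustrative):

```typescript
import * as braintrust from "braintrust";

const project = braintrust.projects.create({ name: "my-project" });

// Running `braintrust push` on this file creates (or updates) the prompt.
project.prompts.create({
  name: "Summarizer",
  slug: "summarizer", // stable identifier used to reference the prompt in code
  model: "gpt-4o",
  messages: [
    { role: "system", content: "Summarize the text the user provides." },
    { role: "user", content: "{{text}}" },
  ],
});
```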
Each prompt change is versioned (e.g. 5878bd218351fb8e). You can use this identifier to pin a specific version of the prompt in your code.
Testing in the playground
While developing a prompt, it can be useful to test it out on real-world data in the Playground. You can open a prompt in the playground, tweak it, and save a new version once you're ready.
Using tools
You can use any custom tools you've created during prompt execution. To reference a tool when creating a prompt via the SDK, add the names of the tools you want to use to the tools parameter:
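A sketch, assuming a hypothetical adder tool defined in the same script (see the Tools guide for the details of tool definitions):

```typescript
import * as braintrust from "braintrust";
import { z } from "zod";

const project = braintrust.projects.create({ name: "my-project" });

// A hypothetical tool; see the Tools guide for more on defining tools.
const adder = project.tools.create({
  name: "Adder",
  slug: "adder",
  description: "Add two numbers",
  parameters: z.object({ a: z.number(), b: z.number() }),
  handler: async ({ a, b }) => a + b,
});

project.prompts.create({
  name: "Calculator assistant",
  slug: "calculator-assistant",
  model: "gpt-4o",
  messages: [{ role: "user", content: "{{question}}" }],
  tools: [adder], // the tools this prompt is allowed to call
});
```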
In Python, the prompt and the tool need to be defined in the same file and pushed to Braintrust together. In TypeScript, they can be defined and pushed separately.
To add a tool to a prompt via the UI, select the Tools dropdown in the prompt creation window and select a tool from your library, then save the prompt.
For more information about creating and using tools, check out the Tools guide.
Using prompts in your code
Executing directly
In Braintrust, a prompt is a simple function that can be invoked directly through the SDK and REST API. When invoked, prompt functions leverage the proxy to access a wide range of providers and models with managed secrets, and are automatically traced and logged to your Braintrust project. All functions are fully managed and versioned via the UI and API.
Functions are a broad concept that encompass prompts, code snippets, HTTP endpoints, and more. When using the functions API, you can use a prompt's slug or ID as the function's slug or ID, respectively. To learn more about functions, see the functions reference.
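For example, invoking the summarizer prompt sketched earlier (names are illustrative):

```typescript
import { invoke } from "braintrust";

const result = await invoke({
  projectName: "my-project",
  slug: "summarizer",
  input: { text: "A long article..." }, // substituted into {{text}}
});

console.log(result);
```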
The return value, result, is a string unless you have tool calls, in which case it returns the arguments of the first tool call. In TypeScript, you can assert this by using the schema argument, which ensures your code matches a particular zod schema:
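A sketch, assuming a hypothetical prompt whose output matches this shape:

```typescript
import { invoke } from "braintrust";
import { z } from "zod";

const result = await invoke({
  projectName: "my-project",
  slug: "summarizer",
  input: { text: "A long article..." },
  // The result is checked against this schema, so `result` is fully typed.
  schema: z.object({ summary: z.string() }),
});

console.log(result.summary);
```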
Adding extra messages
If you're building a chat app, it's often useful to send back additional messages of context as you gather them. You can provide OpenAI-style messages to the invoke function by adding messages, which are appended to the end of the built-in messages:
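A sketch (the slug, input, and message contents are illustrative):

```typescript
import { invoke } from "braintrust";

const result = await invoke({
  projectName: "my-project",
  slug: "assistant",
  input: { question: "What is Braintrust?" },
  // Appended after the prompt's built-in messages.
  messages: [
    { role: "assistant", content: "Braintrust is an AI engineering platform." },
    { role: "user", content: "How does prompt versioning work?" },
  ],
});
```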
Streaming
You can also stream results in an easy-to-parse format.
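A sketch, assuming that passing stream: true returns an async-iterable stream of chunks:

```typescript
import { invoke } from "braintrust";

const stream = await invoke({
  projectName: "my-project",
  slug: "summarizer",
  input: { text: "A long article..." },
  stream: true,
});

// Each chunk is a small, structured event (e.g. a text delta).
for await (const chunk of stream) {
  console.log(chunk);
}
```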
Vercel AI SDK
If you're using Next.js and the Vercel AI SDK, you can use the Braintrust adapter by installing the @braintrust/vercel-ai-sdk package and converting the stream to Vercel's format:
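A sketch of a Next.js route handler using the adapter (the route shape and prompt slug are illustrative):

```typescript
import { invoke } from "braintrust";
import { BraintrustAdapter } from "@braintrust/vercel-ai-sdk";

export async function POST(req: Request) {
  const stream = await invoke({
    projectName: "my-project",
    slug: "assistant",
    input: await req.json(),
    stream: true,
  });
  // Convert the Braintrust stream into a Vercel AI SDK response.
  return BraintrustAdapter.toAIStreamResponse(stream);
}
```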
Logging
Any invoke requests you make will be logged using the active logging state, just like a function decorated with @traced or wrapTraced. You can also pass in the parent argument, a string that you can derive from span.export() while doing distributed tracing.
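A sketch of wiring an invocation into an existing trace (names are illustrative):

```typescript
import { invoke, traced } from "braintrust";

await traced(async (span) => {
  // Pass the exported span so the invocation is logged under this trace.
  const result = await invoke({
    projectName: "my-project",
    slug: "summarizer",
    input: { text: "A long article..." },
    parent: await span.export(),
  });
  return result;
});
```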
Fetching in code
If you'd like to run prompts directly, you can fetch them using the Braintrust SDK. The loadPrompt()/load_prompt() function loads a prompt into a simple format that you can pass along to the OpenAI client. Prompts are cached upon initial load, so subsequent retrievals are fast.
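A sketch using the OpenAI client (project name and slug are illustrative):

```typescript
import { loadPrompt } from "braintrust";
import OpenAI from "openai";

const client = new OpenAI();

const prompt = await loadPrompt({
  projectName: "my-project",
  slug: "summarizer",
});

// build() substitutes variables and returns OpenAI-compatible parameters.
const completion = await client.chat.completions.create(
  prompt.build({ text: "A long article..." }),
);
```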
If you need to use another model provider, then you can use the Braintrust proxy to access a wide range of models using the OpenAI format. You can also grab the messages and other parameters directly from the returned object to use a model library of your choice.
Pinning a specific version
To pin a specific version of a prompt, use the loadPrompt()/load_prompt() function with the version identifier.
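For example, using the version identifier shown earlier:

```typescript
import { loadPrompt } from "braintrust";

const prompt = await loadPrompt({
  projectName: "my-project",
  slug: "summarizer",
  version: "5878bd218351fb8e", // the version to pin
});
```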
Pulling prompts locally
You can also download prompts to your local filesystem and ensure a specific version is used via version control. You should use the pull command to:
- Download prompts in public projects so others can use them
- Pin your production environment to a specific version without running them through Braintrust on the request path
- Review changes to prompts in pull requests
Currently, braintrust pull only supports TypeScript.
When you run braintrust pull, you can specify a project name, prompt slug, or version to pull. If you don't specify any of these, all prompts across projects will be pulled into a separate file per project. For example, a project named Summary will generate the following file:
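The pulled file is a script much like the one you would write by hand. An illustrative sketch (the exact contents depend on your prompts):

```typescript
// summary.ts: illustrative sketch of a file generated by `braintrust pull`
import * as braintrust from "braintrust";

const project = braintrust.projects.create({ name: "Summary" });

export const summarizer = project.prompts.create({
  name: "Summarizer",
  slug: "summarizer",
  model: "gpt-4o",
  messages: [
    { role: "system", content: "Summarize the text the user provides." },
    { role: "user", content: "{{text}}" },
  ],
});
```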
To pin your production environment to a specific version, you can run braintrust pull with the --version flag.
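For example, using the version hash shown earlier (other flags omitted for brevity):

```bash
npx braintrust pull --version 5878bd218351fb8e
```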
Using a pulled prompt
The prompts.create function generates the same Prompt object as the loadPrompt function. This means you can use a pulled prompt in the same way you would use a normal prompt, e.g. by running prompt.build() and passing the result to a client.chat.completions.create() call.
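A sketch, assuming the hypothetical pulled file from above:

```typescript
import OpenAI from "openai";
// Hypothetical pulled file like the summary.ts sketch above.
import { summarizer } from "./summary";

const client = new OpenAI();

const completion = await client.chat.completions.create(
  summarizer.build({ text: "A long article..." }),
);
```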
Pushing prompts
Just like with tools, you can push prompts to Braintrust using the push command. Simply change the prompt definition, and then run braintrust push from the command line. Braintrust automatically generates a new version for each pushed prompt.
When you run braintrust push, you can specify one or more files or directories to push. If you specify a directory, all .ts files under that directory are pushed.
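For example (the directory name is illustrative):

```bash
npx braintrust push ./prompts  # pushes every .ts file under ./prompts
```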
Deployment strategies
It is often useful to use different versions of a prompt in different environments. For example, you might want to use the latest version locally and in staging, but pin a specific version in production. This is simple to set up by conditionally passing a version to loadPrompt()/load_prompt() based on the environment.
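A sketch, reusing the version hash from earlier (the environment check is illustrative):

```typescript
import { loadPrompt } from "braintrust";

// Pin production to a known-good version; staging and local development
// pick up the latest version automatically.
const PROD_VERSION = "5878bd218351fb8e";

const prompt = await loadPrompt({
  projectName: "my-project",
  slug: "summarizer",
  version: process.env.NODE_ENV === "production" ? PROD_VERSION : undefined,
});
```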
Chat vs. completion format
In Python, prompt.build() returns a dictionary with chat or completion parameters, depending on the prompt type. In TypeScript, however, prompt.build() accepts an additional parameter (flavor) to specify the format. This allows prompt.build to be used in a more type-safe manner. When you specify a flavor, the SDK also validates that the parameters are correct for that format.
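A sketch in TypeScript, assuming the prompt object from the earlier examples:

```typescript
// Chat-style parameters (messages, model, etc.).
const chatParams = prompt.build({ text: "..." }, { flavor: "chat" });

// Completion-style parameters (a single prompt string), validated accordingly.
const completionParams = prompt.build(
  { text: "..." },
  { flavor: "completion" },
);
```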
Opening from traces
When you use a prompt in your code, Braintrust automatically links spans to the prompt used to generate them. This allows you to click to open a span in the playground, and see the prompt that generated it alongside the input variables. You can even test and save a new version of the prompt directly from the playground.
This workflow is very powerful. It effectively allows you to debug, iterate, and publish changes to your prompts directly within Braintrust. And because Braintrust flexibly allows you to load the latest prompt, a specific version, or even a version controlled artifact, you have a lot of control over how these updates propagate into your production systems.
Using the API
The full lifecycle of prompts - creating, retrieving, modifying, etc. - can be managed through the REST API. See the API docs for more details.