Call a prompt
Use invoke() to call a deployed prompt by its slug.
Values in the input parameter map to the template variables in your prompt. For example, {{text}} in your prompt is replaced with the text value from input.
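The SDK call itself is not shown here, so as an illustrative sketch, the substitution that invoke() performs on template variables can be modeled with a small hypothetical render helper (not part of the SDK):

```python
import re

# Hypothetical helper modeling only the template-substitution step;
# the real invoke() also executes the prompt against the deployed model.
def render(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with the matching value
    from the input mapping."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(variables[m.group(1)]),
                  template)

prompt_template = "Summarize the following text:\n{{text}}"
message = render(prompt_template, {"text": "The quick brown fox."})
```

Here {{text}} is filled in from the mapping, just as input values are filled into a deployed prompt's template.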
Use within a trace
When you call prompts from instrumented code, the calls automatically nest within your parent trace.
Handle tool calls
When a prompt includes tools, the response contains tool calls that your code must handle.
Version prompts
Every save of a prompt creates a new version with a unique ID. Pin specific versions in production code; by default, invoke() uses the latest version.
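As an illustration of that versioning model, here is an in-memory stand-in (not the real service; the version-ID format is an assumption):

```python
# Hypothetical in-memory stand-in for prompt versioning; the real
# service assigns its own version IDs and stores versions remotely.
class PromptRegistry:
    def __init__(self):
        self._versions = {}  # slug -> list of (version_id, template)

    def save(self, slug, template):
        """Each save appends a new immutable version and returns its ID."""
        history = self._versions.setdefault(slug, [])
        version_id = f"{slug}-v{len(history) + 1}"
        history.append((version_id, template))
        return version_id

    def get(self, slug, version=None):
        """Return the pinned version if one is given, otherwise the latest."""
        history = self._versions[slug]
        if version is None:
            return history[-1]
        return next(v for v in history if v[0] == version)

registry = PromptRegistry()
pinned = registry.save("summarize", "Summarize: {{text}}")
registry.save("summarize", "Summarize concisely: {{text}}")
```

Requesting the pinned ID keeps production on a known-good version even as newer versions are saved.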
Use environments
Environments separate dev, staging, and production configurations. Set the environment when calling prompts.
Build prompts locally
Use build() to compile a prompt’s template without making an API call. This is useful for testing, or for generating messages to pass to your own LLM client.
The build() method returns the compiled messages, model, and parameters without executing the prompt.
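A local stand-in sketch of that behavior (the single-user-message shape, default model name, and parameters below are assumptions for illustration, not the SDK's actual output):

```python
from typing import Optional

# Hypothetical local stand-in for build(): fill in template variables
# and return messages, model, and parameters without any API call.
def build(template: str, variables: dict,
          model: str = "gpt-4o", params: Optional[dict] = None) -> dict:
    content = template
    for name, value in variables.items():
        content = content.replace("{{" + name + "}}", str(value))
    return {
        "messages": [{"role": "user", "content": content}],
        "model": model,
        "params": params or {"temperature": 0.7},
    }

compiled = build("Translate to French: {{text}}", {"text": "hello"})
# compiled["messages"] can now be passed to your own LLM client.
```

Because nothing is executed, this kind of compilation is cheap to run in tests.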
Stream responses
Enable streaming to receive responses incrementally.
Use the REST API
Call prompts directly via HTTP.
Next steps
- Deploy functions to deploy tools and agents alongside prompts
- Manage environments to separate dev and production prompts
- Monitor deployments to track prompt performance in production
- Write prompts to create and test prompts before deployment