) next to any chat-based prompt to get code snippets in TypeScript, Python, or cURL.
The generated code includes your full prompt configuration: the model, messages, and any additional parameters you've set.
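As a rough sketch, a generated TypeScript snippet might look something like the following, shown here with the OpenAI SDK pointed at the Braintrust AI proxy. The model, messages, and parameters are placeholders; the exact code the playground generates depends on your prompt configuration.

```typescript
// Illustrative sketch only -- copy the real snippet from the playground.
// The model, messages, and parameters below stand in for whatever your prompt
// is configured with.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.braintrust.dev/v1/proxy",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "gpt-4o", // the model selected in the playground
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "{{input}}" }, // prompt variables appear as template placeholders
    ],
    temperature: 0.2, // any additional parameters you've set
  });
  console.log(completion.choices[0].message.content);
}

main();
```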
## Custom models
To configure custom models, see the [Custom models](/docs/guides/proxy#custom-models) section of the proxy docs.
Endpoint configurations, like custom models, are automatically picked up by the playground.
## Advanced options
### Appended dataset messages
You may sometimes have additional messages in a dataset that you want to append to a prompt. This option lets you specify the path to a messages array within each dataset row. For example, if `input` is specified as the appended messages path and a dataset row's `input` contains the following, every prompt in the playground will run with these messages appended.
```json
[
  {
    "role": "assistant",
    "content": "Is there anything else I can help you with?"
  },
  {
    "role": "user",
    "content": "Yes, I have another question."
  }
]
```
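For reference, here is a sketch of a full dataset row in which the messages above live under `input`. The `expected` field is just an illustrative extra column, not something this option requires.

```json
{
  "input": [
    {
      "role": "assistant",
      "content": "Is there anything else I can help you with?"
    },
    {
      "role": "user",
      "content": "Yes, I have another question."
    }
  ],
  "expected": "A helpful answer to the follow-up question."
}
```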
### Max concurrency
The maximum number of tasks and scorers that run concurrently in the playground. Lowering this is useful for avoiding rate limits (HTTP 429 Too Many Requests) from AI providers.
### Strict variables
When this option is enabled, evaluations will fail if the dataset row does not include all of the variables referenced in prompts.
---
file: ./content/docs/guides/projects.mdx
meta: {
  "title": "Projects",
  "description": "Create and configure projects"
}
# Projects
A project is analogous to an AI feature in your application. Some customers create separate projects for development and production to help track workflows. Projects contain all [experiments](/docs/guides/evals), [logs](/docs/guides/logging), [datasets](/docs/guides/datasets) and [playgrounds](/docs/guides/playground) for the feature.
For example, a project might contain:
* An experiment that tests the performance of a new version of a chatbot
* A dataset of customer support conversations
* A prompt that guides the chatbot's responses
* A tool that helps the chatbot answer customer questions
* A scorer that evaluates the chatbot's responses
* Logs that capture the chatbot's interactions with customers
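In the SDK, experiments and logs land in a project simply by referencing its name. Here is a minimal sketch; the project name, task, and scorer are hypothetical placeholders.

```typescript
// Sketch: the experiment and the logs below both land in the "Support Chatbot" project.
import { Eval, initLogger } from "braintrust";
import { Levenshtein } from "autoevals";

// An experiment that tests the chatbot against a small inline dataset.
Eval("Support Chatbot", {
  data: () => [{ input: "Where is my order?", expected: "Let me look that up for you." }],
  task: async (input) => `You asked: ${input}`, // stand-in for the real chatbot
  scores: [Levenshtein],
});

// Production logging for the same feature goes to the same project.
const logger = initLogger({ projectName: "Support Chatbot" });
logger.log({
  input: "Where is my order?",
  output: "Let me look that up for you.",
});
```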
## Project configuration
Projects can also house configuration settings that are shared across everything they contain.
### Tags
Braintrust supports tags that you can use throughout your project to curate logs, datasets, and even experiments. You can filter by tags in the UI to track different kinds of data across your application and how they change over time. To create a tag, go to the **Configuration** tab, select **Add tag**, and enter a name, a color, and an optional description.
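Once a tag exists, it can also be attached programmatically. The sketch below assumes the SDK's `log` call accepts a `tags` field, as described in the logging docs; the tag name and project name are hypothetical.

```typescript
// Sketch: attach an existing tag to a log so it can be filtered in the UI.
import { initLogger } from "braintrust";

const logger = initLogger({ projectName: "Support Chatbot" });

logger.log({
  input: "Where is my order?",
  output: "Let me look that up for you.",
  tags: ["triage"], // assumes a "triage" tag has been created in the Configuration tab
});
```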