Setup Prompts and Datasets
Creating a prompt and a dataset is essential for running evaluations on Confident AI. By editing prompts on Confident AI, we’ll be able to tell you which version performs best, while annotating datasets on Confident AI lets you use them for evaluation far more effectively.
This page will walk you through how to quickly create one of each before moving onto running an evaluation.
Prompts
A full walkthrough of the Prompt Studio is available here.
How to create and version a prompt
Create a Prompt
Navigate to your project and go to the Prompt Studio tab in the left navigation drawer. Click Create a Prompt and give your prompt an alias. For example, the alias of your first prompt could be something like "System Prompt", which is where you’ll store your system prompt instead of using something like CSV files or GitHub .txt files.
Create a Prompt Version
Now that you have a prompt, create its first version. Paste in the prompt you are currently using in your LLM application. For example, you can paste in your entire system prompt and click Save.
If this is a bit too much information, don’t worry. You’ll have the opportunity to dig deeper into all the prompt management features Confident AI offers in later sections.
Using Variables in Prompts
When your prompt needs to include dynamic content, you can use variables by wrapping them in double curly braces. Here’s how to format them correctly:
```
# ✅ Correct usage:
"Hi, my name is {{ name }}."
"The temperature is {{ temperature }} degrees."
"User input: {{ user_input }}"

# ❌ Incorrect usage:
"Hi, my name is {{ variable name }}."   # Spaces in the variable name
"Hi, my name is {{  variable_name  }}." # Extra spaces around the variable name
```
Variables must follow these rules:
1. No spaces in the variable name
2. Exactly one space between the braces and the variable name
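To make the rules concrete, here is a minimal, hypothetical sketch of how templates in this format can be rendered in plain Python (the `interpolate` helper is illustrative only and not part of any SDK):

```python
import re

# Matches "{{ name }}": exactly one space on each side, no spaces in the name.
VARIABLE_PATTERN = re.compile(r"\{\{ (\w+) \}\}")

def interpolate(template: str, **variables: str) -> str:
    """Substitute {{ variable }} placeholders with the supplied values."""
    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"No value supplied for variable '{name}'")
        return str(variables[name])
    return VARIABLE_PATTERN.sub(replace, template)

print(interpolate("Hi, my name is {{ name }}.", name="Ada"))
# Hi, my name is Ada.
```

Note that a malformed placeholder like `{{ variable name }}` never matches the pattern, so it is left untouched rather than substituted.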
You can also create a list of prompt messages, which is a list of prompts with the "role" and "content" fields present. If you create prompt messages instead, you can supply them directly to your LLM provider once pulled from Confident AI (more info on this later). More on prompt messages can be found here.
Let’s create a dataset next.
Datasets
A full walkthrough of the datasets feature is available here.
How to create and annotate a dataset
Create a Dataset
Navigate to your project and go to the Datasets tab in the left navigation drawer. Click Create a dataset and give your dataset an alias. For example, the alias of your first dataset could be something like "My Evals Dataset", which is where you’ll store the dataset you’ll use for evaluation.
Create or Upload Golden(s)
Once you’ve created a dataset, you’ll be directed to the Dataset Editor page for your newly created dataset. Depending on whether you already have goldens, you should either:
- Upload a CSV file of your goldens to your dataset, if you already have a dataset prepared
- Create a golden, with the input as the text you typically prompt your LLM application with for testing
If you don’t already have a dataset, quickly creating a golden with the input text you usually test your LLM application with is the best way to get LLM evaluation set up.
You can always add more goldens later, once you’ve gone through this quickstart guide.
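If you go the CSV route, the file can be as simple as a single column of inputs. A minimal sketch is below; the column name `input` is an assumption, so match it to whatever columns the dataset editor asks you to map during upload:

```python
import csv

# Hypothetical goldens CSV: one "input" column holding the prompts you
# typically test your LLM application with.
rows = [
    {"input": "How do I reset my password?"},
    {"input": "What plans do you offer?"},
]

with open("goldens.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["input"])
    writer.writeheader()
    writer.writerows(rows)
```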
What’s Next?
In the next section, we’ll use the prompt and dataset we’ve set up to run our first LLM evaluation.