Iterate On Your LLM Application
There are many ways to improve your LLM application, but the most straightforward is to modify the existing "System Prompt" you’ve already created on Confident AI.
Even if all your test cases passed in the previous section, you should still work through this page.
Creating a New Prompt Version
On Confident AI, you CANNOT edit prompt versions that have already been used in evaluations. This ensures data integrity by maintaining the exact prompt version associated with each evaluation result. Instead of editing an existing version, you’ll need to create a new version of your prompt template to make changes.
Go to your "System Prompt" in Prompt Studio, and click Create new version. Paste in the template of your new version, and click Save.
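If you prefer to manage prompt versions in code rather than in Prompt Studio, deepeval also supports pushing a new version programmatically. The snippet below is a minimal sketch that assumes your deepeval version exposes a push() method on Prompt and uses a hypothetical template string; check the Prompt Studio docs for the exact API:
from deepeval.prompt import Prompt
# Replace this with your alias if required
prompt = Prompt(alias="System Prompt")
# Pushing creates a NEW version on Confident AI; versions already
# used in evaluations remain untouched (hypothetical template text)
prompt.push(text="You are a helpful, concise customer support assistant.")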
It’s important to note that although the intention is to improve our "System Prompt", you might end up with a regression instead. A regression happens when changes to your LLM application cause a decrease in performance, and while a regression can be introduced intentionally, it is much more commonly unintentional.
Pulling Your New Version
To see if your new prompt version works as expected, try pulling it from Confident AI and printing its contents:
from deepeval.prompt import Prompt
# Replace this with your alias if required
prompt = Prompt(alias="System Prompt")
# Pull latest version
prompt.pull()
# Adjust interpolate() to your dynamic variables
prompt_to_llm = prompt.interpolate(...)
print(prompt_to_llm)
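For instance, suppose your template contains a {user_question} placeholder (a hypothetical variable name for illustration); interpolation would then look like this:
# Hypothetical template: "Answer the following question: {user_question}"
prompt_to_llm = prompt.interpolate(user_question="What is your refund policy?")
print(prompt_to_llm)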
By default, the pull() method will pull the latest version of your prompt. If you wish to pull an older version, simply provide the version number you wish to pull:
...
prompt.pull(version="00.00.01")
For more information on pulling prompts, click here.
What’s Next?
Now that we’ve changed our "System Prompt", it’s time to run another evaluation to see how the performance has changed.
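As a preview, a re-run typically looks something like the sketch below. The test case inputs are hypothetical placeholders, and the sketch assumes that passing the Prompt object under hyperparameters is what associates this prompt version with the test run on Confident AI:
from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase
from deepeval.prompt import Prompt
# Pull the latest version of your prompt
prompt = Prompt(alias="System Prompt")
prompt.pull()
# Hypothetical test case; in practice, actual_output should come
# from calling your LLM application with the interpolated prompt
test_case = LLMTestCase(
    input="What is your refund policy?",
    actual_output="Refunds are available within 30 days of purchase.",
)
# Passing the Prompt object as a hyperparameter ties this
# prompt version to the resulting test run on Confident AI
evaluate(
    test_cases=[test_case],
    metrics=[AnswerRelevancyMetric()],
    hyperparameters={"System Prompt": prompt},
)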