Testing LLM application features - before & after production

Tips for experimenting with your LLM-related prompts


Introduction

This post discusses the importance of experimentation and testing for LLM features, both before and after they are put into production. LLMs are probabilistic and therefore need to be tested against a variety of prompts and scenarios to ensure reliability and performance. Creating a unit test bank, regression testing, and A/B testing are some of the methods that can be used to evaluate the quality of LLM features. Once in production, you can improve cost and latency by collecting high-quality input-output pairs and using them to fine-tune models.

Pre-production

Tracking prompt templates

Tracking variations while iterating on prompts before sending them to production is essential for maintaining control over your LLM feature development process. By keeping a record of prompt templates, choice of foundation model and model parameters, you can easily revert to a previous version if needed, as even minor changes in wording or parameters can significantly impact the model's performance on your test cases.

This is particularly helpful when multiple people are collaborating on prompt development. A well-documented history of prompt iterations ensures that everyone stays informed about the changes made and their effects on the application's performance. It also facilitates effective communication among team members, enabling them to understand the rationale behind each modification and learn from past experiences. We’ve usually seen companies do this in Excel spreadsheets and Notion documents.
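
Spreadsheets work, but the same record can live in code alongside your feature. Below is a minimal sketch of an append-only version log; all class and field names (PromptVersion, PromptHistory, and so on) are illustrative, not a reference to any particular tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PromptVersion:
    """One saved iteration: prompt template, foundation model, and parameters."""
    template: str
    model: str
    params: dict
    note: str = ""
    saved_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class PromptHistory:
    """Append-only log of prompt versions, so any change can be reverted."""

    def __init__(self):
        self._versions = []

    def save(self, version):
        """Record a version and return its index for later reference."""
        self._versions.append(version)
        return len(self._versions) - 1

    def revert_to(self, index):
        """Fetch an earlier version back out of the log."""
        return self._versions[index]

history = PromptHistory()
v0 = history.save(PromptVersion(
    template="Summarize the following ticket:\n{ticket}",
    model="gpt-4",
    params={"temperature": 0.2},
    note="baseline",
))
v1 = history.save(PromptVersion(
    template="Summarize this support ticket in two sentences:\n{ticket}",
    model="gpt-4",
    params={"temperature": 0.0},
    note="tightened wording, lowered temperature",
))
baseline = history.revert_to(v0)  # minor wording changes are easy to roll back
```

Because the log is append-only, every change in wording or parameters stays reviewable by the whole team, along with the note explaining why it was made.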

Unit test bank

Creating a unit test bank before deploying LLMs to production is a proactive way to ensure prompt reliability. The test bank should comprise the scenarios you anticipate in production; think of this as QAing your feature before it ships. Your prompts should "pass" these test cases based on your evaluation criteria. We wrote a blog about how to evaluate the quality of LLM features a few weeks ago, but in summary, the evaluation approach depends on the type of use case:

  • Classification: accuracy, recall, precision, F1 score, and confusion matrices for deeper evaluation
  • Data extraction: validate that the output is syntactically valid (e.g., well-formed JSON) and that the expected keys are present in the generated response
  • SQL/code generation: validate that the output is syntactically valid and that running it returns the expected values
  • Creative output: semantic similarity between the model-generated response and a target response, measured using cross-encoders
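
As an illustration of the middle two criteria, here is a minimal sketch of a data-extraction check and a SQL-validity check using only the Python standard library. The function names, the invoice example, the expected keys, and the `users` schema are all made up for the example.

```python
import json
import sqlite3

def passes_extraction_test(model_output, expected_keys):
    """Data extraction: output must parse as JSON and contain the expected keys."""
    try:
        parsed = json.loads(model_output)
    except json.JSONDecodeError:
        return False
    return isinstance(parsed, dict) and expected_keys <= parsed.keys()

def sql_is_valid(query):
    """SQL generation: EXPLAIN parses the statement against a known schema
    without fully executing it, so syntax errors surface immediately."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    try:
        conn.execute("EXPLAIN " + query)
        return True
    except sqlite3.Error:
        return False

# One test case from a hypothetical bank: extract fields from an invoice.
expected = {"invoice_number", "vendor", "total"}
good = '{"invoice_number": "123", "vendor": "Acme", "total": 450}'
bad = "The invoice number is 123 and the vendor is Acme."
```

Each case in the bank pairs an input with a check like these; a prompt "passes" when every check returns true.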

Post-production

Regression testing

As you modify prompts, it's essential to verify that existing functionality remains intact. One approach is to replay historical requests against the updated prompt or model: run a script that takes the inputs sent to the original prompt/model, passes them to your updated prompt/model, and does a side-by-side comparison. By comparing the outcomes, you can ensure that your changes haven't introduced any unexpected behavior, preserving the overall performance and stability of your LLM applications.
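
A sketch of that replay script, assuming the historical inputs have been exported from your logs and that the old and new prompt/model calls are callables (both are stand-ins here):

```python
def replay_regression_test(historical_requests, old_fn, new_fn):
    """Replay logged inputs through both versions and compare side by side."""
    rows = []
    for inputs in historical_requests:
        rows.append({
            "inputs": inputs,
            "old_output": old_fn(inputs),
            "new_output": new_fn(inputs),
        })
    # The rows where behavior diverged are the ones worth a human look.
    changed = [r for r in rows if r["old_output"] != r["new_output"]]
    return rows, changed

# Stand-ins for calls to the original and updated prompt/model.
old_prompt = lambda x: f"Summary: {x}"
new_prompt = lambda x: f"Summary: {x}" if "refund" not in x else "REFUSED"

logged = ["order arrived late", "customer wants a refund"]
all_rows, changed = replay_regression_test(logged, old_prompt, new_prompt)
```

Reviewing only `changed` keeps the comparison tractable even when you replay thousands of historical requests.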

A/B testing

If you’re not sure which of your final prompts to put in production, A/B testing them might be a good idea. You could A/B test prompts from different model providers too! By running multiple prompts side by side, you can gather valuable user feedback to determine which prompt performs better in real-world scenarios. This feedback can be collected implicitly, by observing user interactions and engagement, or explicitly, by directly asking users for their input via thumbs up/thumbs down. When the test runs for a sufficient amount of time, it should become clear which prompt performs better. It’s important to set up the A/B testing infrastructure correctly: make sure you track which users get which prompts and tie their feedback back to the right variant.
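
Hashing the user ID is one simple way to get the sticky, trackable assignment described above: the same user always sees the same prompt, and feedback lands on the right variant. A minimal sketch, with made-up variant names and an in-memory counter standing in for real analytics storage:

```python
import hashlib
from collections import defaultdict

def assign_variant(user_id, variants=("prompt_a", "prompt_b")):
    """Deterministic bucketing: the same user always gets the same prompt."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % len(variants)
    return variants[bucket]

# Thumbs up/down counts per variant (stand-in for a real metrics store).
feedback = defaultdict(lambda: {"up": 0, "down": 0})

def record_feedback(user_id, thumbs_up):
    """Attribute explicit feedback to whichever variant the user saw."""
    variant = assign_variant(user_id)
    feedback[variant]["up" if thumbs_up else "down"] += 1

def approval_rate(variant):
    counts = feedback[variant]
    total = counts["up"] + counts["down"]
    return counts["up"] / total if total else 0.0

for uid, vote in [("u1", True), ("u2", True), ("u3", False)]:
    record_feedback(uid, vote)
```

Comparing `approval_rate("prompt_a")` against `approval_rate("prompt_b")` after enough traffic is the simplest version of the readout; a real test would also check statistical significance before declaring a winner.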

Keep building your unit test bank

You already built a unit test bank before your features went to production. As prompts encounter unexpected inputs or scenarios that produce undesirable behavior, these new cases should be documented and added to the unit test bank. Future prompt iterations must then pass these additional tests and edge cases before deploying to production. This approach ensures your features remain robust, avoid regressions, and get better at handling real-world edge cases.
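
That loop can be sketched as a plain list of cases plus a gate that must pass before deploy; the substring check and the refund example are purely illustrative:

```python
# The bank you built pre-production, grown with production failures over time.
test_bank = [
    {"input": "What is your refund policy?", "must_contain": "30 days"},
]

def add_production_failure(bank, failing_input, expected_substring):
    """Document a case that misbehaved in production as a new regression test."""
    bank.append({"input": failing_input, "must_contain": expected_substring})

def run_bank(bank, prompt_fn):
    """Return the cases a prompt iteration fails; deploy only when empty."""
    return [case for case in bank
            if case["must_contain"] not in prompt_fn(case["input"])]

add_production_failure(test_bank, "Can I return digital goods?", "non-refundable")

# Stand-in for the updated prompt/model call.
new_prompt = lambda q: ("Refunds are accepted within 30 days; "
                        "digital goods are non-refundable.")
failures = run_bank(test_bank, new_prompt)
```

Every production surprise becomes a permanent test, so the same failure cannot silently reappear in a later iteration.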

Experiment with fine-tuning once you have enough training data

As open-source models continue to improve, fine-tuning them becomes a viable option once your LLM application has accumulated enough training data in production. This technique is called model distillation: once you have enough ground-truth data generated by closed-source models (like GPT-4 and Claude), you can use it to train your own models. Open-source models can be hugely beneficial: they can be cheaper, faster, more customizable, and more privacy compliant, since your data is not sent to external model providers. While experimenting with open-source models, it’s important to maintain quality first, and only then look for cost and latency improvements.
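
Concretely, distillation starts with filtering your production logs down to high-quality pairs and exporting them as training data. A minimal sketch; the log schema, the rating field, and the prompt/completion JSONL layout are assumptions, so match them to your own logging and to the format your fine-tuning framework expects:

```python
import json

# Hypothetical production log of input-output pairs captured from a
# closed-source model, with explicit user feedback attached.
production_log = [
    {"input": "Summarize: order arrived damaged",
     "output": "Customer reports a damaged order.", "user_rating": 1},
    {"input": "Summarize: wrong item shipped",
     "output": "I can't help with that.", "user_rating": -1},
]

def to_finetune_jsonl(log, min_rating=1):
    """Keep only well-rated pairs and emit one JSON object per line,
    in a common prompt/completion fine-tuning layout."""
    lines = []
    for record in log:
        if record["user_rating"] >= min_rating:
            lines.append(json.dumps({
                "prompt": record["input"],
                "completion": record["output"],
            }))
    return "\n".join(lines)

jsonl = to_finetune_jsonl(production_log)
```

Filtering on explicit feedback is what keeps the distilled model learning from your best outputs rather than from every response the closed-source model ever produced.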

Want to experiment with your LLM powered features?

Building the infrastructure for unit testing, regression testing, A/B testing, and open-source model fine-tuning takes a lot of engineering capacity for internal tooling; that time could instead be spent building your end-user features.

We provide the tooling layer to experiment with prompts and models, evaluate their quality, and make changes with confidence once in production. Request a demo here, join our Discord or reach out to us at support@vellum.ai if you have any questions!

ABOUT THE AUTHOR
Akash Sharma
Co-founder & CEO

Akash Sharma, CEO and co-founder at Vellum (YC W23) is enabling developers to easily start, develop and evaluate LLM powered apps. By talking to over 1,500 people at varying maturities of using LLMs in production, he has acquired a very unique understanding of the landscape, and is actively distilling his learnings with the broader LLM community. Before starting Vellum, Akash completed his undergrad at the University of California, Berkeley, then spent 5 years at McKinsey's Silicon Valley Office.

LAST UPDATED
Jun 12, 2023