
Testing LLM application features - before & after production

Tips for experimenting with your LLM-related prompts


Introduction

This blog discusses the importance of experimentation and testing for LLM features, both before and after they are put into production. LLMs are probabilistic and therefore need to be tested with various prompts and scenarios/test cases to ensure their reliability and performance. Creating a unit test bank, regression testing, and A/B testing are some of the methods that can be used to evaluate the quality of LLM features. Once in production, you can improve cost and latency by collecting high-quality input-output pairs and using them to fine-tune models.

Pre-production

Tracking prompt templates

Tracking variations while iterating on prompts before sending them to production is essential for maintaining control over your LLM feature development process. By keeping a record of prompt templates, choice of foundation model, and model parameters, you can easily revert to a previous version if needed; even minor changes in wording or parameters can significantly impact the model's performance on your test cases.

This is particularly helpful when multiple people are collaborating on prompt development. A well-documented history of prompt iterations ensures that everyone stays informed about the changes made and their effects on the application's performance. It also facilitates effective communication among team members, enabling them to understand the rationale behind each modification and learn from past experiences. We’ve usually seen companies do this in Excel spreadsheets and Notion documents.
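If you'd rather track versions in code than in a spreadsheet, here's a minimal sketch of what that record could look like; the `PromptVersion` fields and `record_version` helper are illustrative, not any particular tool's API:

```python
# A minimal sketch of prompt version tracking (hypothetical helper, not a
# specific tool's API). Each iteration records the template, model, parameters,
# and the rationale behind the change, so any version can be reverted to later.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PromptVersion:
    template: str      # e.g. "Summarize the following support ticket: {ticket}"
    model: str         # choice of foundation model, e.g. "gpt-3.5-turbo"
    temperature: float # model parameter that strongly affects output variance
    notes: str         # why this change was made
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

history: list[PromptVersion] = []

def record_version(template: str, model: str, temperature: float, notes: str) -> PromptVersion:
    """Append a new prompt iteration to the shared history."""
    version = PromptVersion(template, model, temperature, notes)
    history.append(version)
    return version

record_version(
    template="Summarize the following support ticket: {ticket}",
    model="gpt-3.5-turbo",
    temperature=0.2,
    notes="Lowered temperature to reduce variance on test cases",
)
```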

Unit test bank

Creating a unit test bank before deploying LLMs to production is a proactive way to ensure prompt reliability. The test bank should comprise the scenarios you anticipate in production; think of this as QAing your feature before it ships. The prompts should "pass" these test cases based on your evaluation criteria. We wrote a blog about how to evaluate the quality of LLM features a few weeks ago, but in summary, the evaluation approach depends on the type of use case (a test-bank sketch follows this list):

  • Classification: accuracy, recall, precision, F1 score, and confusion matrices for a deeper evaluation
  • Data extraction: validate that the output is syntactically valid and that the expected keys are present in the generated response
  • SQL/code generation: validate that the output is syntactically valid and that running it returns the expected values
  • Creative output: semantic similarity between the model-generated response and a target response, computed with cross-encoders
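To make this concrete, below is a minimal test-bank sketch for a classification use case; `call_llm` is a hypothetical wrapper around whichever model provider you use, and the cases and 90% pass threshold are just examples:

```python
# A minimal sketch of a unit test bank for a classification prompt.
# The cases and threshold are illustrative; `call_llm` is a hypothetical
# wrapper you would replace with your model provider's completion call.
TEST_BANK = [
    {"input": "I was double-charged this month.", "expected": "billing"},
    {"input": "The app crashes when I upload a file.", "expected": "bug"},
    {"input": "How do I invite a teammate?", "expected": "how-to"},
]

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with your model provider's API call")

def run_test_bank(prompt_template: str, threshold: float = 0.9) -> bool:
    """Return True if the prompt passes enough cases to ship."""
    passed = 0
    for case in TEST_BANK:
        output = call_llm(prompt_template.format(input=case["input"]))
        if output.strip().lower() == case["expected"]:  # exact-match criterion
            passed += 1
    accuracy = passed / len(TEST_BANK)
    print(f"accuracy: {accuracy:.0%}")
    return accuracy >= threshold
```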

Post-production

Regression testing

As you modify prompts, it's essential to verify that existing functionality remains intact. One approach is to replay historical requests against the updated prompt or model: run a script that takes the inputs sent to the original prompt/model, passes them to the updated one, and presents the results side by side. By comparing the outcomes, you can ensure that your changes haven't introduced any unexpected behavior, preserving the overall performance and stability of your LLM applications.
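Here's a minimal replay sketch, assuming your logs store one JSON-encoded request per line and reusing the hypothetical `call_llm` wrapper from the test-bank sketch above:

```python
# A minimal sketch of regression testing by replaying logged requests.
# The log format (one JSON object per line, with an "inputs" dict) is an
# assumption; adapt it to however your application records requests.
import json

def replay(log_path: str, old_template: str, new_template: str) -> None:
    """Run historical inputs through both prompts for side-by-side review."""
    with open(log_path) as f:
        requests = [json.loads(line) for line in f]
    for req in requests:
        old_out = call_llm(old_template.format(**req["inputs"]))
        new_out = call_llm(new_template.format(**req["inputs"]))
        flag = "" if old_out == new_out else "  <-- changed"
        print(f"inputs: {req['inputs']}\n  old: {old_out}\n  new: {new_out}{flag}")
```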

A/B testing

If you’re not sure which of your final prompts to put in production, A/B testing them might be a good idea. You could A/B test prompts from different model providers too! By running multiple prompts side by side, you can gather valuable user feedback to determine which prompt performs better in real-world scenarios. This feedback can be collected implicitly, by observing user interactions and engagement, or explicitly, by directly asking users for their input via thumbs up/thumbs down. Run for a sufficient amount of time, the test should make clear which prompt performs better. It’s important to set up the A/B testing infrastructure correctly: make sure you track which users get which prompts and tie their feedback back to the right variant.
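One common setup is sketched below: deterministic assignment by hashing the user ID, so the same user always sees the same variant, plus a simple explicit-feedback log. The variant prompts are placeholders:

```python
# A minimal A/B testing sketch. Hashing the user ID makes assignment stable
# across sessions; every feedback event records which variant the user saw.
import hashlib

VARIANTS = {
    "A": "Summarize this support ticket briefly: {ticket}",
    "B": "Summarize this support ticket in one sentence: {ticket}",
}

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into variant A or B."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

feedback_log: list[dict] = []

def record_feedback(user_id: str, thumbs_up: bool) -> None:
    """Log explicit feedback alongside the variant the user was served."""
    feedback_log.append({
        "user": user_id,
        "variant": assign_variant(user_id),
        "thumbs_up": thumbs_up,
    })
```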

Keep building your unit test bank

You already built a unit test bank before your features went to production. As prompts encounter unexpected inputs or scenarios that produce undesirable behavior, these new cases should be documented and added to the unit test bank. Future prompt iterations must then pass these additional tests/edge cases before being deployed. This approach ensures your features remain robust, avoid regressions, and keep improving at handling real-world edge cases.
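Continuing the earlier test-bank sketch, folding a production failure back in can be as simple as appending the case so every future iteration must pass it:

```python
# Extending the hypothetical TEST_BANK from the earlier sketch with a case
# that produced undesirable behavior in production.
def add_edge_case(input_text: str, expected: str) -> None:
    TEST_BANK.append({"input": input_text, "expected": expected})

# e.g. a sarcastic message the current prompt misclassified:
add_edge_case("Great, charged twice this month. Love that.", "billing")
```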

Experiment with fine-tuning once you have enough training data

As open-source models continue to improve, fine-tuning them becomes a viable option once your LLM application has accumulated enough training data in production. This technique is called model distillation: once you have enough ground-truth data generated by closed-source models (like GPT-4 and Claude), you can use it to train your own models. Open-source models can be hugely beneficial: they can be cheaper, faster, more customizable, and more privacy-compliant, since your data is not sent to external model providers. While experimenting with open-source models, it’s important to maintain quality first and only then pursue other improvements.
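As an example, here's a minimal sketch of exporting approved production input-output pairs as JSONL, a common format for fine-tuning pipelines; filtering on explicit thumbs-up feedback is an assumption, so substitute whatever quality signal you collect:

```python
# A minimal sketch of building a fine-tuning dataset from production logs.
# Each record is assumed to hold the original input, the model's output, and
# a user-feedback flag; only approved pairs are kept as training examples.
import json

def export_training_data(records: list[dict], out_path: str) -> None:
    with open(out_path, "w") as f:
        for rec in records:
            if rec.get("thumbs_up"):  # keep only outputs users approved of
                f.write(json.dumps({"prompt": rec["input"],
                                    "completion": rec["output"]}) + "\n")
```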

Want to experiment with your LLM-powered features?

Building the infrastructure for unit testing, regression testing, A/B testing, and open-source model fine-tuning takes a lot of engineering capacity for internal tooling; that time could instead be spent building your end-user features.

We provide the tooling layer to experiment with prompts and models, evaluate their quality, and make changes with confidence once in production. Request a demo here, join our Discord or reach out to us at support@vellum.ai if you have any questions!

ABOUT THE AUTHOR
Akash Sharma
Co-founder & CEO

Akash Sharma, CEO and co-founder at Vellum (YC W23), is enabling developers to easily start, develop, and evaluate LLM-powered apps. By talking to over 1,500 people at varying maturities of using LLMs in production, he has acquired a unique understanding of the landscape, and is actively distilling his learnings with the broader LLM community. Before starting Vellum, Akash completed his undergrad at the University of California, Berkeley, then spent 5 years at McKinsey's Silicon Valley office.

Last updated: Jun 12, 2023