
My prompt is in production: now what should I do?

Tips on how to monitor your in-production LLM traffic


After extensive unit testing and finding the right prompt across model providers, you’ve finally brought your prompts into production. Your users are interacting with them and experiencing the magical UX you hoped to create. However, given the non-deterministic nature of responses generated by LLMs, it’s a good idea to keep an eye on how your prompts are performing in production.

Vellum has seen more than a million LLM requests made through our endpoints, and in this blog post we’ll share successful approaches we’ve seen our customers take to monitor and improve their prompts in production.

Monitoring production traffic

The observability layer for LLMs is still being defined in the industry as we speak, but here are some metrics our customers have found valuable:

  • Completion quality: Was the model-generated response of good quality?
  • Completion latency: How long did the response take to generate?
  • Number of provider errors: How frequently was there an error because of the model provider?
  • Cost: How much did you end up spending with model providers?
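The four metrics above can live in one per-request log record. Here’s a minimal sketch of what that record and a simple aggregate might look like; the field names are illustrative, not any particular vendor’s API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CompletionLog:
    """One row per LLM request, covering the four metrics above."""
    prompt_id: str
    latency_ms: float               # time to full completion
    provider_error: bool            # True if the provider call failed
    cost_usd: float                 # tokens * per-token price
    quality: Optional[int] = None   # 1 = good, 0 = bad, None = unlabeled

def error_rate(logs: list[CompletionLog]) -> float:
    """Fraction of requests that failed at the provider."""
    if not logs:
        return 0.0
    return sum(log.provider_error for log in logs) / len(logs)
```

With a table like this in your warehouse, each metric below becomes a simple aggregation over a time window.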

LLM completion quality

As outlined in our blog here, user feedback is the ultimate source of truth for model quality. Explicit feedback is collected through 👍 or 👎 in your UI, but may not have enough volume to measure overall quality. Implicit feedback is based on users’ reactions to the LLM-generated output and can be a powerful gauge of quality. If explicit feedback collection rates are low, we suggest using implicit feedback where possible. You can also manually label completions generated by the LLM as good quality or not.
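One way to combine the two signals is to prefer explicit 👍/👎 votes when there are enough of them, and fall back to implicit signals otherwise. A small sketch, with a hypothetical `min_explicit` threshold you’d tune to your traffic:

```python
def quality_score(
    explicit: list[int],   # 1 for 👍, 0 for 👎
    implicit: list[int],   # 1/0 derived from user behavior (e.g. copy, retry)
    min_explicit: int = 50,
) -> float:
    """Average quality in [0, 1], preferring explicit votes when
    volume is sufficient, else pooling explicit + implicit signals."""
    votes = explicit if len(explicit) >= min_explicit else explicit + implicit
    if not votes:
        return float("nan")
    return sum(votes) / len(votes)
```

Tracking this score over time (per prompt version) is what makes the questions below answerable.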

If your average quality over time is low, here are some questions you should ask:

  • Have your user expectations from the prompt changed?
  • Has your model been hallucinating? Does it need more examples to provide good responses?
  • Is the unit test bank robust enough to cover edge cases in production?

You should examine the low-quality completions and modify your prompts or apply fine-tuning to increase quality in production.

LLM completion latency

Maintaining low latency for your completions is important for a good user experience. As we’ve written in our blog with Activeloop here, latency is determined at the generated-token level. When evaluating model latency, you should look at both the time taken to generate the first token and the time taken for the full completion. If latency is a challenge, you can start by streaming your responses in production, which gives the perception of low latency. If it continues to be a problem, we recommend exploring other providers or open source models.
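Measuring both numbers is straightforward if your provider supports streaming: record the clock when the first token arrives and again when the stream ends. A sketch using a stand-in generator in place of a real provider’s streaming API:

```python
import time
from typing import Iterator, Tuple

def fake_stream() -> Iterator[str]:
    """Stand-in for a provider's streaming endpoint (hypothetical)."""
    for token in ["Hello", ",", " world"]:
        time.sleep(0.01)  # simulate per-token generation delay
        yield token

def measure_latency(stream: Iterator[str]) -> Tuple[float, float, str]:
    """Return (time_to_first_token, total_time, full_text) in seconds."""
    start = time.perf_counter()
    first_token_time = None
    parts = []
    for token in stream:
        if first_token_time is None:
            first_token_time = time.perf_counter() - start
        parts.append(token)
    total_time = time.perf_counter() - start
    return first_token_time, total_time, "".join(parts)
```

Time-to-first-token is what users perceive when you stream; total time is what matters if you need the full completion before acting on it.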

LLM provider errors

Tracking provider errors is another practice we’ve seen among our customers. Model providers can have downtime, which results in failed completions and ultimately a poor user experience. If you see too many provider errors in your application, it’s a good idea to implement provider retry logic, where a request that fails with one provider is routed to another.
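The fallback logic itself can be very small: try providers in order and route to the next on failure. A minimal sketch, assuming each provider is wrapped in a callable that raises on outage or rate limit:

```python
from typing import Callable, List

def complete_with_fallback(
    prompt: str,
    providers: List[Callable[[str], str]],
) -> str:
    """Try each provider in order; fall through to the next on failure."""
    last_err: Exception | None = None
    for call in providers:
        try:
            return call(prompt)
        except Exception as err:  # outage, rate limit, timeout, etc.
            last_err = err
    raise RuntimeError("all providers failed") from last_err
```

In practice you’d also want per-provider timeouts and backoff, and you’d log which provider served each request so the error-rate metric stays attributable.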

LLM costs

Finally, cost is important to measure and potentially control. We’ve seen costs for LLMs rise quickly, especially if some user segments use your LLM features heavily. Providers usually charge per token, so your total cost is a product of the number of completions and the tokens per prompt/completion. If costs need to be controlled, consider moving to cheaper models or fine-tuning small open source models to get similar results.

Tips to consider when changing prompts in production

Now that you have observability on your production traffic, you may occasionally want to make changes to improve quality or reliability, or to lower cost and latency. There are several factors to consider while making changes; here’s a quick guide.

  • Regression testing to validate updates to prompts once in production: Make sure to run backtesting scripts when changing prompts in production; you don’t want to break any existing behavior! Back-testing is possible if you’re capturing the inputs/outputs of production requests. You can “replay” those inputs through your new prompt to see a before and after.
  • Add edge cases noticed in production to your test bank: Your unit test bank shouldn’t be a static list. If you notice an input that produced an undesirable result in production, you should add it to your test bank! Each time you edit a prompt you should be able to make changes with confidence.
  • When upgrading models (from GPT 3.5 to GPT 4, for example) or changing providers, you will likely need to change your prompt, because the same prompt behaves differently on different foundation models. It’s best practice to pass all test cases each time a prompt is sent to production.
  • Versioning prompts is important because if you modify a prompt and observe worse performance (in quality, latency, cost, or provider errors), you should be able to revert to an old version immediately, ideally without many code changes.
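The replay idea in the first bullet can be sketched in a few lines: run each captured production input through the new prompt and diff the result against the output you served at the time. Names here are illustrative:

```python
from typing import Callable, List, Dict, Any

def backtest(
    captured_inputs: List[str],
    old_outputs: List[str],
    new_prompt_fn: Callable[[str], str],
) -> List[Dict[str, Any]]:
    """Replay captured production inputs through a new prompt version
    and flag every case where the output changed."""
    report = []
    for inputs, old in zip(captured_inputs, old_outputs):
        new = new_prompt_fn(inputs)
        report.append(
            {"inputs": inputs, "old": old, "new": new, "changed": old != new}
        )
    return report
```

The `changed` rows are the ones to review by hand (or score with your quality labels) before promoting the new prompt version.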

Need help with observability & changes in production?

As you can see, the internal tooling needed to observe the performance of your LLM-powered features and make changes with confidence is non-trivial. Vellum’s platform for building production LLM applications aims to solve just that. We provide the tooling layer to experiment with prompts and models, monitor them in production, and make changes with confidence when needed. If you’re interested, you can sign up here! You can also subscribe to our blog and stay tuned for updates from us.

ABOUT THE AUTHOR
Akash Sharma
Co-founder & CEO

Akash Sharma, CEO and co-founder at Vellum (YC W23) is enabling developers to easily start, develop and evaluate LLM powered apps. By talking to over 1,500 people at varying maturities of using LLMs in production, he has acquired a very unique understanding of the landscape, and is actively distilling his learnings with the broader LLM community. Before starting Vellum, Akash completed his undergrad at the University of California, Berkeley, then spent 5 years at McKinsey's Silicon Valley Office.

Last updated
Jun 19, 2023