
How to evaluate your AI product if you don’t have ground truth data

Ground truths help build confidence, but they shouldn’t block progress.

5 min read
Written by Aaron Levin

Ground truths are often seen as the backbone of AI model validation, but they can also be a roadblock.

What happens if you don't thoroughly test your AI before shipping it? Usually, nothing good.

Companies that avoid the headlines for AI mishaps typically share one key trait: rigorous AI testing practices. The gold standard for this is using "ground truths": high-quality examples demonstrating ideal outputs. However, not all companies have access to these datasets, and creating them can be expensive and time-consuming.

In this article, we'll cover practical methods for evaluating your AI models without initial ground truth data, while gathering valuable ground truths over time.

By adopting smarter testing and validation practices, you'll move faster and with greater confidence.

Let’s dive in and explore the trade-offs.

The Three Stages of Ground Truths

When teams begin AI validation, they often ask:

  • "How do we quickly create ground truths without spending months or a fortune?"
  • "What’s good enough to move forward?"
  • "Is synthetic data trustworthy, or are we fooling ourselves?"
  • "Can we skip ground truths entirely and still have confidence?"

There's no universal answer, but understanding the ground truth spectrum helps clarify your validation options:

1/ Hand-Made Ground Truths: Ideal but Costly

This is the gold standard. You take the time to manually curate high-quality ground truths to evaluate model performance. These are painstakingly crafted, validated, and expensive in both time and resources. If you have infinite budget and patience, great. But most teams don’t.

2/ No Ground Truths: Pre-Prod Limbo

This is where many teams get stuck. They don’t have ground truths, so they can’t confidently test outputs, and without confident testing, they don’t want to go into production. It’s a Catch-22. This is also where synthetic data often gets introduced as a workaround—more on that in a second.

3/ Actuals: Real-World Feedback from Production

The best validation is actual usage. By deploying your model, even if only internally, you'll gather realistic, actionable data. That data will be imperfect at first, but even small-scale human reviews significantly improve validation quality and help you understand how users actually want to use your product in the first place.

The Synthetic Data Trap

Synthetic data sounds like a great compromise. Instead of real user inputs, you generate examples to simulate ground truths. But this can backfire.

A customer once told me our model's outputs were bad. It turned out they were mistakenly reviewing their own synthetic data instead of our model's outputs. It was a bit awkward (and pretty funny), and it highlights the risk and extra effort you take on when using synthetic data.

This kind of mix-up happens more than people realize, and it points to a deeper problem: you can spend as much time refining prompts to generate good synthetic ground truths as you would have spent just shipping the app and collecting real data.

Practical Strategies to Reach Production Faster

Ground truths don't have to block your progress. These two strategies can help you rapidly validate your AI model and move towards confident deployment:

1/ Deploy Internally and Collect Feedback

Instead of stalling in pre-prod, release a version internally and start collecting actual user interactions. Even if it’s just a small batch labeled manually, real-world data will always be more useful than over-engineered synthetic examples.
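If you want something concrete to start with, here is a minimal sketch of what that collection loop can look like. It assumes plain Python and a local JSONL file; the file path, label values, and helper names are illustrative assumptions, not a prescribed setup.

```python
# A lightweight feedback log: append every internal interaction to a JSONL file,
# then have a human label a small sample each week. (Sketch only; the path and
# label scheme are placeholder assumptions, not a required format.)
import json
import time

LOG_PATH = "interactions.jsonl"  # hypothetical location for the log

def log_interaction(user_input: str, model_output: str, label: str | None = None) -> None:
    """Record one real interaction; `label` is filled in later by a reviewer."""
    record = {
        "timestamp": time.time(),
        "input": user_input,
        "output": model_output,
        "label": label,  # e.g. "good", "bad", or a corrected output
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

def labeled_examples(path: str = LOG_PATH) -> list[dict]:
    """Labeled records double as your first real ground-truth dataset."""
    with open(path) as f:
        records = [json.loads(line) for line in f]
    return [r for r in records if r.get("label") is not None]
```

The point isn't the file format; it's that every labeled interaction becomes a ground-truth example you didn't have to invent.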

2/ Use an LLM as a Judge

If you don’t have actuals or ground truths, an LLM can evaluate the quality of outputs based on predefined criteria. It’s not perfect, but it’s better than nothing, and it gives you a scalable way to check output quality without hand-labeling everything.
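As a rough illustration (not a prescription), here's what a minimal LLM-as-judge check can look like in Python. It assumes the OpenAI Python SDK; the model name, criteria, and 1-5 scale are placeholders you'd swap for your own.

```python
# Minimal LLM-as-judge sketch: score each output against predefined criteria.
# (Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model, rubric, and scale are illustrative choices, not requirements.)
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are grading an AI assistant's answer.
Criteria: factual accuracy, relevance to the question, and clarity.

Question: {question}
Answer: {answer}

Reply with a single integer from 1 (poor) to 5 (excellent) and nothing else."""

def judge(question: str, answer: str) -> int:
    """Ask a (usually stronger) model to score an output with no ground truth needed."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder judge model
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        temperature=0,
    )
    return int(response.choices[0].message.content.strip())

# Example: average score over a handful of outputs you want to sanity-check.
samples = [("What is our refund window?", "Refunds are accepted within 30 days of delivery.")]
scores = [judge(q, a) for q, a in samples]
print(f"Average judge score: {sum(scores) / len(scores):.2f}")
```

Treat the judge's scores as a directional signal, not ground truth: spot-check a sample by hand so you know how well the rubric matches your own judgment.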

Ground truths help build confidence, but they shouldn’t block progress

There’s always a balance—enough validation to catch major issues, but not so much that you’re stuck in pre-prod forever.

The key is to know your product, your users, and your risk tolerance.

Shipping an internal tool isn’t the same as shipping an AI feature to end users. And the bar for quality is way higher in legal, medical, or education settings than it is in marketing or creative tools.

You don’t need perfect ground truths to start. Use lightweight checks—internal deployments, evals, or small-scale reviews—to unblock.

Ship something.

Then use real feedback to make it better.

ABOUT THE AUTHOR
Aaron Levin
Founding Solutions Architect

Aaron Levin works closely with Vellum’s customers to understand their unique challenges and then designs custom AI systems that fit their needs perfectly. He bridges the gap between complex technology and real-world applications, ensuring that the solutions he builds are not only innovative but also easy to implement and scale. Before stepping into this role, Aaron was an engineer, developing advanced AI systems that tackled tough problems. Now, he’s focused on taking these skills to the next level, helping businesses unlock the full potential of AI in their operations.

Last updated: Mar 28, 2025