
Announcing Vellum

We’re excited to publicly announce the start of our new adventure: Vellum


Hi everyone 👋

We’re excited to publicly announce the start of our new adventure: Vellum. We’re in Y Combinator’s current batch (W23) and our mission is to help companies get the best results from Large Language Models like GPT-3. Our product helps developers evaluate, manage and A/B test AI models/prompts to increase quality and reduce cost.

What problems are we trying to solve?

Since GPT-3 launched in 2020, we've watched companies like Jasper find compelling sales & marketing use cases. In the last two years, the rate of improvement of these foundation models has been staggering, as clearly evidenced by OpenAI’s ChatGPT and models from Cohere and AI21.

With all these advances, companies around the world are looking to incorporate Large Language Models (LLMs) for generation and classification use cases, both in internal applications and in their core products. However, we’ve seen three challenges when companies try to bring these models into production. These obstacles result in slower iteration cycles and suboptimal LLM configurations:

  • Initial setup and deployment is difficult
  • Monitoring and other best practices require engineering teams to write lots of custom code
  • Ongoing model optimization and evaluation is time consuming and requires deep technical knowledge

Going from 0 -> 1

When coming up with initial prompts, we’ve seen firsthand the challenges developers face when choosing between model providers [1], foundation models [2], and model parameters [3]. Experiments sprawl across several browser tabs, and results are stored in long spreadsheets for side-by-side comparison. There’s no good way to collaborate with colleagues while iterating on prompts.

Choosing the right prompts often comes down to a time-boxed guessing game and you are never sure if a better outcome is possible – forget about spending the time to try fine-tuning!
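To make the guessing game concrete, here is a minimal sketch of the kind of side-by-side comparison described above: sweep every provider/model/parameter combination and collect the outputs in one table. The `complete` function is a hypothetical stand-in for a real API call (e.g. a completion request), so this sketch runs offline; the provider, model, and parameter names are illustrative.

```python
from itertools import product

# Hypothetical stand-in for a real model call (e.g. an API completion
# request); it only echoes its inputs so the sketch runs offline.
def complete(provider, model, prompt, temperature, top_p):
    return f"[{provider}/{model} t={temperature} p={top_p}] {prompt}"

# Illustrative choices; real experiments would list actual providers/models.
providers_models = [("openai", "davinci"), ("cohere", "command")]
temperatures = [0.2, 0.7]
top_ps = [0.9, 1.0]

def sweep(prompt):
    """Run every provider/model/parameter combination side by side."""
    rows = []
    for (provider, model), t, p in product(providers_models, temperatures, top_ps):
        rows.append({
            "provider": provider,
            "model": model,
            "temperature": t,
            "top_p": p,
            "output": complete(provider, model, prompt, t, p),
        })
    return rows

results = sweep("Write a tagline for a hiring platform.")
print(len(results))  # 2 provider/model pairs x 2 temperatures x 2 top_p values = 8 rows
```

Even this toy grid produces eight outputs to compare, which is exactly the spreadsheet sprawl described above.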

Managing Once in Production

Once the right prompt/model is deployed, a lot of custom internal code is needed to track model/prompt version history and keep an audit log of model inputs, outputs, and ground-truth results from end users. This infrastructure is important for measuring performance, experimenting with new prompts, and reverting to older model versions if a change isn’t ideal. LLMs are so sensitive that changing a single word in your prompt can produce dramatically different results. Because of this, most developers are reluctant to iterate and try to improve the model for fear of breaking existing behavior.

The time spent building and maintaining monitoring and testing infrastructure is non-trivial and could instead go towards building your core product.
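As an illustration of the bookkeeping involved, here is a minimal sketch (all names are ours, not a real library) of a prompt registry that records version history, keeps an audit log of model inputs and outputs, and can revert to a previous version without losing history:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class PromptVersion:
    version: int
    template: str
    created_at: str

@dataclass
class AuditEntry:
    version: int
    model_input: str
    model_output: str
    ground_truth: Optional[str] = None  # filled in later from end users

class PromptRegistry:
    """Hypothetical sketch of prompt version tracking and audit logging."""

    def __init__(self):
        self.versions: list = []
        self.audit_log: list = []

    def deploy(self, template: str) -> PromptVersion:
        # Every change is a new immutable version.
        v = PromptVersion(
            version=len(self.versions) + 1,
            template=template,
            created_at=datetime.now(timezone.utc).isoformat(),
        )
        self.versions.append(v)
        return v

    def record(self, model_input: str, model_output: str) -> None:
        # Log each call against the currently deployed version.
        self.audit_log.append(
            AuditEntry(self.versions[-1].version, model_input, model_output)
        )

    def revert(self) -> PromptVersion:
        # Re-deploy the previous template as a fresh version, keeping history.
        return self.deploy(self.versions[-2].template)

registry = PromptRegistry()
registry.deploy("Summarize: {text}")
registry.deploy("Summarize in one sentence: {text}")
registry.record("Long report...", "A one-line summary.")
reverted = registry.revert()  # back to "Summarize: {text}", as version 3
```

Note that reverting creates a new version rather than deleting one, so the audit log always maps each logged call to the exact prompt that produced it.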

Optimizing to Get the Very Best

Once models have been running in production and the right tooling is set up, there's usually enough data available to fine-tune the models for better quality at a lower cost. However, getting the right fine-tuned model into production has its own challenges: getting training data into the right format, trial and error across hyperparameter combinations, and retraining as new training data is collected.
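For the data-formatting step, here is a small sketch of turning logged production results into JSONL prompt/completion pairs, the format OpenAI's fine-tuning endpoint expected at the time of writing; field names and file formats vary by provider, and the `audit_log` below is illustrative:

```python
import json

# Illustrative audit log entries collected from production traffic;
# only labeled examples can be used for fine-tuning.
audit_log = [
    {"input": "Classify: great product!", "ground_truth": "positive"},
    {"input": "Classify: never again.", "ground_truth": "negative"},
    {"input": "Classify: meh", "ground_truth": None},  # not yet labeled
]

def to_jsonl(entries):
    """Keep only labeled examples and serialize one JSON object per line."""
    lines = [
        # A leading space on the completion was a common convention for
        # GPT-3-era fine-tuning data.
        json.dumps({"prompt": e["input"], "completion": " " + e["ground_truth"]})
        for e in entries
        if e["ground_truth"] is not None
    ]
    return "\n".join(lines)

training_file = to_jsonl(audit_log)
```

The unlabeled third entry is silently dropped, which is why collecting ground truth in the audit log (previous section) feeds directly into this step.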

To add to the complexity, this problem will only grow over time as new model providers and foundation models appear, each with its own cost and quality tradeoffs. To keep up with the cutting edge, you have to constantly spend time evaluating new models as they’re released.

Why we chose this problem

We worked together at Dover (YC S19) for 2+ years, where we built production use cases of LLMs (both generation and classification). Noa and Sidd are MIT engineers who worked on DataRobot’s MLOps team and Quora’s ML Platform team, respectively.

We realized that the ops tooling we had built for traditional ML didn’t exist for LLMs. We’d build reasonable production use cases of AI, only to hesitate to change and improve our setup because of a lack of observability. We ended up building custom internal tooling to solve this.

We’ve come to deeply feel the pains and requirements of using LLMs in production, user-facing applications. We’ve decided to productize our learnings and share them with other companies, so more people can make use of Generative AI without climbing the steep learning curve we went through.

What's next for Vellum

We’re at the beginning of an exciting journey and will be releasing several products and sharing best practices on how to work with LLMs. Stay tuned for updates on our blog!


- Akash, Sidd & Noa

[1] Model provider examples: OpenAI, Cohere, AI21
[2] Foundation model examples: GPT-3’s Ada, Babbage, Curie and Davinci
[3] Parameter examples: Temperature, Top-P

ABOUT THE AUTHOR
Akash Sharma
Co-founder & CEO

Akash Sharma, CEO and co-founder of Vellum (YC W23), is enabling developers to easily start, develop, and evaluate LLM-powered apps. Having spoken with over 1,500 people at varying stages of using LLMs in production, he has acquired a unique understanding of the landscape and actively shares his learnings with the broader LLM community. Before starting Vellum, Akash completed his undergrad at the University of California, Berkeley, then spent 5 years at McKinsey's Silicon Valley office.

Last updated: Feb 2, 2023