
When to use fine-tuning?

Fine-tuning can provide significant benefits in cost, quality & latency when compared to prompting

TL;DR: Fine-tuning can provide significant benefits in cost, quality & latency when compared to prompting — we helped one of our customers increase model accuracy while decreasing cost by 94% after fine-tuning. This post provides details on how you can get started.

On our blog, we share best practices we've learned over the years for working with Large Language Models. Today's topic is fine-tuning. It's one of the first optimization strategies we suggest because most companies we work with quickly hit challenges when using LLMs with few-shot prompts in production.

Why you should fine-tune

Prompts are an excellent way to get started with LLMs — with just a few examples, you can harness the power of Generative AI via an API call. Relying on prompts long-term, however, can lead to the following problems:

  • High cost: If your prompts have a large context and require multiple examples for good results, the tokens (and therefore costs) quickly add up!
  • Poor handling of edge cases (classification use cases): Due to token limits in the context window, there is a limit to how accurately your deployed model can classify inputs
  • Limited personalization (generation use cases): Unless you’re using advanced strategies to carefully craft prompts for each request at runtime, it’s unlikely you can provide a personalized experience for each user / company / industry with a few-shot prompt because of the limited context window
  • High latency: Long prompts, particularly those that are chained, could take 2-3 seconds to run and result in a poor UX. You can often fine-tune a faster model and get output with the same or better quality
  • Hallucination: A prompt-based approach without chaining can be more prone to hallucination because there often isn’t enough context to teach the model to provide concise, truthful answers
  • Undifferentiated results: Over time, the competitive advantage your prompt provides will go to zero as foundation models continue to improve. A fine-tuned model trained on proprietary data is needed to provide proprietary results

If these problems sound familiar, you might consider fine-tuning. Other techniques can help with a subset of these problems (like vector search, caching, and prompt chaining), each with its own pros and cons (we'll cover them in future posts!), but we've found fine-tuning to be the most impactful and widely applicable.

As an example, one of our customers saw great results on a use case where we moved from prompts to fine-tuning: costs went down by 94% while accuracy improved and latency decreased significantly.

But wait, what is fine-tuning?

Fine-tuning a language model involves training the model on a smaller, task-specific dataset to adapt it to a particular task or domain. The pre-trained foundation model acts as a starting point, with the weights of the network being further optimized based on the task-specific data. Fine-tuning helps the model better understand the specific context and language patterns of the task it is being fine-tuned for. This is just a short summary – we’re happy to chat more about fine-tuning in detail (mathematical formulae included by request 😁) at any time!
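To make the "further optimized" part concrete, here is a toy sketch (plain Python, with a one-parameter linear model standing in for a network) of what fine-tuning does mechanically: gradient descent simply continues from the pretrained weights, now on a small task-specific dataset. All numbers are made up for illustration.

```python
# Toy illustration of the idea behind fine-tuning (not a real LLM):
# start from "pretrained" weights and keep optimizing them on a small,
# task-specific dataset.

def mse(w, data):
    """Mean squared error of the linear model y = w * x on (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def fine_tune(w, task_data, lr=0.05, epochs=50):
    """Continue gradient descent from the pretrained weight w."""
    for _ in range(epochs):
        # Gradient of MSE with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in task_data) / len(task_data)
        w -= lr * grad
    return w

# Pretraining on a broad corpus left us with w = 1.0 ...
pretrained_w = 1.0
# ... but our task-specific data follows y ≈ 3x.
task_data = [(1, 3.1), (2, 5.9), (3, 9.2), (4, 11.8)]

before = mse(pretrained_w, task_data)
tuned_w = fine_tune(pretrained_w, task_data)
after = mse(tuned_w, task_data)
print(f"loss before: {before:.2f}, after: {after:.2f}")
```

The key point: the model isn't trained from scratch; the pretrained weights are the starting point, and only a small amount of task data is needed to pull them toward the task.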

How can I get started with fine-tuning?

Alright, so you’re sold on trying fine-tuning – great! Here’s how we recommend going about it (using OpenAI’s fine-tuning API as the example):

  • Collect a large number of high quality prompt/completion pairs: Ideally you already have this data from when your prompt-based model was in production (if you don’t, Vellum can track this data for you). We’ve seen great results with even just 100 rows of training data, but it depends on the use-case.
  • Clean the prompts: Remove the instructions and keep only the inputs.
  • Convert the data to JSONL file format: Each line should be a single JSON object containing one prompt/completion pair.
  • If you have a classification use case, split off training & validation sets (we suggest an 80/20 split) so you can test the fine-tuned model’s performance.
  • Try various combinations of hyperparameters: Test different foundation models. For each foundation model, try different hyperparameters like learning rate, number of epochs until you find the best cost, quality & latency tradeoff for your use case. If you want to be thorough, you will likely be comparing at least 8-10 options in this step. Remember, each fine-tuning task can take 20+ minutes to run, so be prepared!
  • Once you decide on your new model, remember not to send your original prompt's instructions; pass only the inputs to the fine-tuned model.
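The data-preparation steps above (clean the prompts, convert to JSONL, split 80/20) can be sketched as follows. The `prompt`/`completion` field names, the `###` separator, and the leading space in the completion follow OpenAI's legacy fine-tuning format; the records themselves are made-up placeholders, so adapt this to your own logs and your provider's current schema.

```python
# Sketch: strip prompts down to raw inputs, write JSONL, split 80/20.
import json
import random

# In practice these come from your production prompt/completion logs.
records = [
    {"input": f"customer message {i}", "label": "refund" if i % 2 else "billing"}
    for i in range(100)
]

# Keep only the raw input (no few-shot instructions) plus a fixed
# separator, and the completion the fine-tuned model should produce.
examples = [
    {"prompt": r["input"] + "\n\n###\n\n", "completion": " " + r["label"]}
    for r in records
]

random.seed(0)
random.shuffle(examples)
split = int(len(examples) * 0.8)
train, valid = examples[:split], examples[split:]

for path, rows in [("train.jsonl", train), ("valid.jsonl", valid)]:
    with open(path, "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

print(len(train), len(valid))  # 80 20
```

From here, the two files are what you upload when creating the fine-tuning job, and the validation set is what you score the resulting model against.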

To protect against data drift and ensure your model is getting better over time, we recommend repeating this process regularly as your dataset grows.
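The hyperparameter sweep in the steps above is easiest to manage as a small grid. A minimal sketch, assuming two hypothetical base models and two values each for learning-rate multiplier and epoch count (8 runs, in line with the 8-10 options mentioned above):

```python
# Enumerate the fine-tuning runs to launch. Model names and value
# ranges are illustrative; substitute whatever your provider supports.
# Each combination is one fine-tuning job to launch, evaluate, and
# compare on cost, quality & latency.
from itertools import product

base_models = ["small-base-model", "large-base-model"]  # hypothetical names
learning_rate_multipliers = [0.05, 0.2]
n_epochs = [2, 4]

runs = [
    {"model": m, "learning_rate_multiplier": lr, "n_epochs": e}
    for m, lr, e in product(base_models, learning_rate_multipliers, n_epochs)
]
print(len(runs))  # 8 candidate fine-tunes
```

Keeping the grid explicit like this also makes the periodic retraining easy: as your dataset grows, rerun the same sweep and compare against the currently deployed model.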

Looking for an easier way?

Fine-tuning LLMs is an incredibly powerful strategy, but as you may have noticed, the steps to do it well are time-consuming. The process requires collecting and labeling high-quality training data, trying different model and hyperparameter combinations, evaluating the quality of outputs, and retraining as new data comes in. This takes time and, usually, lots of custom code!

At Vellum, we’re firm believers in the power of fine-tuning and want to help make it incredibly easy to manage. We love this stuff and are always happy to chat to provide tailored advice on your fine-tuning approach. We also offer Vellum Optimize, an LLM Ops platform that simplifies and automates much of the fine-tuning busywork for you.

ABOUT THE AUTHOR
Akash Sharma
Co-founder & CEO

Akash Sharma, CEO and co-founder at Vellum (YC W23), is enabling developers to easily start, develop, and evaluate LLM-powered apps. By talking to over 1,500 people at varying stages of using LLMs in production, he has acquired a unique understanding of the landscape and is actively sharing his learnings with the broader LLM community. Before starting Vellum, Akash completed his undergrad at the University of California, Berkeley, then spent 5 years at McKinsey's Silicon Valley office.

Last updated: Feb 7, 2023