Introducing Vellum Search

Vellum Search, the latest addition to our platform, helps companies use proprietary data in LLM applications.


TLDR: We’re launching Vellum Search, a document retrieval system that enables LLMs to use your company-specific data in production. Companies spend weeks building this infrastructure today because of the token window limitations of model providers. Search is tightly integrated with the rest of our platform, comes with smart defaults, and also supports maximal configuration at each step of the process.

This is an exciting announcement! Since we first announced Vellum, we've had the opportunity to work with thousands of people using LLMs in production. The concepts we shared in our original blog post still resonate with most people, but over time we realized that our users face a different problem before they can even consider using LLMs: information retrieval of company-specific data. Based on this feedback, we're launching a new part of our platform: Vellum Search. This post shares more about Vellum Search; please reach out if it resonates with you!

When your LLM use-case requires factually accurate responses based on a proprietary corpus of text (i.e., company-specific information usually not present in foundation models), it’s best practice to set up a pipeline that:

  1. Ingests each “document” from your knowledge base
  2. Splits each document into smaller chunks
  3. Runs each chunk through an embedding model
  4. Stores the resulting embeddings in a vector database (like Pinecone or Weaviate); then finally
  5. Given a user-submitted query, performs a hybrid search at run-time and includes the results in your LLM prompt so that it can synthesize an answer
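To make the DIY version concrete, here’s a minimal sketch of steps 1–5 in Python. It uses OpenAI’s (pre-1.0) embeddings endpoint and a plain in-memory cosine-similarity store in place of a real vector database; the chunk size, overlap, model choice, and sample document are illustrative assumptions, and a production setup would also add keyword scoring to get true hybrid search.

```python
# Minimal DIY sketch of steps 1-5: chunk -> embed -> store -> retrieve.
# An in-memory matrix with cosine similarity stands in for a real vector
# database (Pinecone, Weaviate, etc.). Chunk size, overlap, and model
# are illustrative choices, not recommendations.
import numpy as np
import openai  # pip install openai (pre-1.0 style API, circa this post)

EMBEDDING_MODEL = "text-embedding-ada-002"

def chunk_document(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split a document into overlapping character-based chunks."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def embed(texts: list[str]) -> np.ndarray:
    """Run each chunk (or query) through the embedding model."""
    response = openai.Embedding.create(model=EMBEDDING_MODEL, input=texts)
    return np.array([item["embedding"] for item in response["data"]])

# 1-2. Ingest each document from the knowledge base and split it into chunks.
documents = {"checkout-policy.txt": "Checkout time is 11am. Late checkout ..."}
chunks, chunk_vectors = [], []
for doc_name, text in documents.items():
    doc_chunks = chunk_document(text)
    chunks.extend(doc_chunks)
    # 3-4. Embed the chunks and "store" them (here: an in-memory matrix).
    chunk_vectors.append(embed(doc_chunks))
chunk_matrix = np.vstack(chunk_vectors)

# 5. At run-time, embed the user query and retrieve the most similar chunks
#    to include in the LLM prompt.
def search(query: str, k: int = 3) -> list[str]:
    query_vec = embed([query])[0]
    scores = chunk_matrix @ query_vec / (
        np.linalg.norm(chunk_matrix, axis=1) * np.linalg.norm(query_vec)
    )
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

print(search("What time is checkout?"))
```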

You can either spend days to weeks setting up a naïve implementation of this infrastructure yourself, or use Vellum’s managed Search product, which takes just a few minutes to set up, instills best practices at each step along the way, and is tightly integrated with the rest of our AI developer tools. Here’s a comment from our Hacker News launch that summarizes the common pains of going with the DIY approach:

Why Search is a Critical Piece of the LLM Stack

When LLMs need to answer questions factually, without hallucinations, it’s best to provide them the relevant context in the prompt and instruct them to answer just from this context. Easy enough, except the challenge comes when the corpus of text is larger than the token limit of the model. OpenAI is launching a 32k token window (50 pages) version of GPT-4 soon, but filling out those 32k tokens will cost a hefty $1.92 per request 😅 (not to mention, the more tokens you include, the slower the request!).
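For reference, the $1.92 figure is just the prompt-side arithmetic at GPT-4-32k’s announced pricing of $0.06 per 1K prompt tokens:

```python
# Prompt-side cost of completely filling the 32k context window,
# assuming GPT-4-32k's announced prompt pricing of $0.06 per 1K tokens.
prompt_tokens = 32_000
price_per_1k_tokens = 0.06  # dollars
print(prompt_tokens / 1_000 * price_per_1k_tokens)  # 1.92 dollars per request
```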

The solution here is document retrieval via embeddings. Embedding models allow for retrieval based on semantic similarity, which enables the inclusion of only the most relevant chunks of a document into the prompt at run time. This opens up a large number of potential LLM use-cases — here are just a few examples of how our Search product has helped customers in production already:

  • Support chatbot to answer product questions for a cosmetic brand based on detailed product documentation
  • Internal chatbot to answer questions based on legal documents, with citations to specific cases
  • Agent assist for support agents at hotel chains to answer guest questions (e.g., where is the fitness center? what time is checkout? can you make this reservation for me?)
  • Sales / customer support reps at an insurance company can ask a chatbot about coverage-related questions instead of making a ticket for internal underwriters

Introducing Vellum Search

LLM use-cases that require document retrieval can be set up within 10 minutes using Vellum Search, Playground, and Manage. Vellum offers tried-and-true defaults to get started quickly, but also exposes advanced configuration for those who want to get in the weeds and experiment. Here’s a step-by-step guide to how it works:

Step 1 (1 minute): Create a document index (a collection of documents that will be queried together at run-time), then upload documents either through our API endpoint or our UI.
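For illustration, an API upload might look roughly like the sketch below. The endpoint path, auth header, and field names are assumptions made for the example, not Vellum’s documented API; consult the API docs or official client for the real request shape.

```python
# Illustrative only: the endpoint path, header, and field names below are
# assumptions for the sketch, not Vellum's documented API.
import requests

resp = requests.post(
    "https://api.vellum.ai/v1/documents/upload",    # hypothetical endpoint
    headers={"X-API-Key": "YOUR_VELLUM_API_KEY"},   # hypothetical auth header
    data={"index_name": "support-knowledge-base"},  # hypothetical index field
    files={"contents": open("product_faq.pdf", "rb")},
)
resp.raise_for_status()
print(resp.json())  # metadata for the newly uploaded document
```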

Step 2 (2 minutes): Once the documents are indexed using your chosen embedding model and chunking strategy, they are stored in a vector database and can be queried through our search API. Choose the number of chunks you want returned.
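A run-time query might then look roughly like this; again, the endpoint path and payload keys are illustrative assumptions rather than the documented API.

```python
# Illustrative only: endpoint path and payload keys are assumptions for the
# sketch, not Vellum's documented API.
import requests

resp = requests.post(
    "https://api.vellum.ai/v1/search",              # hypothetical endpoint
    headers={"X-API-Key": "YOUR_VELLUM_API_KEY"},   # hypothetical auth header
    json={
        "index_name": "support-knowledge-base",     # the document index from Step 1
        "query": "What time is checkout?",
        "num_results": 3,                           # number of chunks to return
    },
)
resp.raise_for_status()
chunks = resp.json()  # relevant chunks to include in your LLM prompt
```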

Step 3 (5 minutes): Go to Vellum Playground, start with our predefined prompt templates, do some prompt engineering, add the relevant chunks to your test cases, and confirm the LLM is providing reasonable results.
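To make the prompt-engineering step concrete, one common pattern is to paste the retrieved chunks into a context block and instruct the model to answer only from it; the template wording and sample chunks below are just an example, not a prescribed format.

```python
# One common prompt shape for grounded answers: put the retrieved chunks in a
# CONTEXT block and instruct the model to answer only from that context.
retrieved_chunks = [
    "Checkout time is 11am.",
    "Late checkout until 1pm can be requested at the front desk.",
]

prompt = (
    "Answer the question using only the context below. "
    "If the answer is not in the context, say you don't know.\n\n"
    "CONTEXT:\n"
    + "\n---\n".join(retrieved_chunks)
    + "\n\nQUESTION: What time is checkout?\nANSWER:"
)
print(prompt)
```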

You can see an interactive walkthrough of these steps here.

Why use Vellum for Document Retrieval?

Our philosophy for document retrieval is to abstract away complex infrastructure, provide smart defaults, and support maximal configuration at each step of the process.

We’ve seen hundreds of people sweat the details of which embedding model to try, what chunking strategy to use, which vector DB to implement, etc. Some of these questions matter a lot (choice of embedding model); others, less so (choice of vector DB). Even learning which decisions you should be making can be burdensome!

At the end of the day, document retrieval is just another (albeit critical!) piece of the AI tech stack. With Vellum, document retrieval is tightly integrated into the rest of our AI developer platform so that you can quickly see the holistic impact of how changes to your search and prompts affect your end-user experience.

Our goal is to provide product builders with the tooling needed to create great AI applications in production and Search is a big step towards delivering on that mission!

Our asks

  1. If you’re interested in using Vellum for any of your LLM use-cases, please reach out to me at akash@vellum.ai or request early access here
  2. Subscribe to our newsletter and stay tuned for updates from us. We will soon share more technical content about how we built our Search product (e.g., what chunking strategies we tested and built).

ABOUT THE AUTHOR
Noa Flaherty
Co-founder & CTO

Noa Flaherty, CTO and co-founder at Vellum (YC W23), helps developers build, deploy, and evaluate LLM-powered apps. His diverse background in mechanical and software engineering, as well as marketing and business operations, gives him the technical know-how and business acumen needed to bring value to nearly any aspect of startup life. Prior to founding Vellum, Noa completed his undergrad at MIT and worked at three tech startups, including roles in MLOps at DataRobot and Product Engineering at Dover.

LAST UPDATED
Apr 12, 2023