
Vellum Product Update | November 2023

November: major Test Suite improvements, arbitrary code execution, and new models!


Welcome to another Vellum Product Update! This one’s a big one 😎 We pushed hard to bring major improvements to LLM Eval (aka Test Suites), Workflows (aka Prompt Chaining), and support for a variety of new models.

Evaluations

Workflow Test Suites

Quantitative end-to-end testing of prompt chains has always been a nightmare at best, impossible at worst. But now, you can run Test Suites and perform evaluations against Vellum Workflows! This powerful functionality helps ensure that your prompt chains meet certain evaluation criteria. You can see a full demo of this in action here.

A screenshot from Vellum's Test Suites

Code Eval Metric

We’ve seen our customers use Webhook Eval metrics to define all sorts of cool custom eval criteria. This works great and provides ultimate flexibility, but has the added overhead of needing to stand up an API endpoint. To alleviate this, we’ve added the ability for you to write your own custom Python code directly from within Vellum to perform bespoke assertions in a Test Suite. When the Test Suite is run, the code is securely executed on Vellum’s backend and the metrics your code produces are shown in the UI.

A screenshot of Vellum's code eval metric
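
For illustration, here's a minimal sketch of the kind of bespoke assertion you might express as a Code Eval Metric. The function name, parameters, and return shape are assumptions made for this example, not Vellum's actual interface; check the docs for the exact contract.

```python
import json


def evaluate(output: str, target: str) -> dict:
    """Hypothetical code eval metric: score 1.0 if the model output is valid
    JSON and its 'answer' field matches the target's, else 0.0."""
    try:
        parsed = json.loads(output)
        expected = json.loads(target)
    except json.JSONDecodeError:
        return {"score": 0.0, "reason": "Output is not valid JSON"}

    matched = parsed.get("answer") == expected.get("answer")
    return {"score": 1.0 if matched else 0.0, "reason": "Compared 'answer' fields"}
```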

Workflow Eval Metric

Now we’re gonna get meta… You can now use a Vellum Workflow as an evaluator for another Prompt/Workflow. This means you can construct a Workflow that calls an LLM and use it to score another LLM. This is particularly useful if you want to evaluate against subjective metrics like “helpfulness” or “politeness.” LLM-based eval is something we’re very bullish on – we’ve already seen some amazing uses of this and are excited to see what you come up with! You can learn more about this powerful feature here.

A screenshot from Vellum's workflow eval metric setup within a specific workflow
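
To make the idea concrete, here's a rough sketch of the LLM-as-judge pattern an evaluator Workflow implements: ask a second model to grade the first model's output against a rubric, then normalize the grade into a score. The `call_llm` helper and the rubric are placeholders for illustration, not part of Vellum's API.

```python
JUDGE_PROMPT = """You are grading an assistant's reply for politeness.

Reply to grade:
{output}

Respond with a single number from 1 (rude) to 5 (very polite)."""


def politeness_score(output: str, call_llm) -> float:
    """LLM-as-judge: have one model grade another model's output."""
    raw = call_llm(JUDGE_PROMPT.format(output=output))
    try:
        return float(raw.strip()) / 5.0  # normalize the 1-5 grade to 0-1
    except ValueError:
        return 0.0  # an unparseable grade counts as a failure
```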

Multi-Metric Eval

Now that you can define custom evaluators via Workflow Metrics and Code Metrics, it’s likely that you’ll want to judge the output of a Prompt/Workflow across multiple dimensions. For example, maybe you want to check the output for a specific substring AND confirm it conforms to a JSON spec AND use an LLM to grade the output based on “politeness.” To achieve this, you can now configure a Test Suite to evaluate the output of a prompt across multiple evaluation metrics. Learn how to set this up here.

A screenshot from Vellum's metric configuration, where two metrics are set (semantic similarity and regex match)
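
As an illustration of judging one output along several dimensions, here are two of the checks from the example above written as standalone Python functions. In Vellum you'd attach each as its own metric on the Test Suite; the substring, field names, and function signatures below are made up for the example.

```python
import json
import re


def contains_disclaimer(output: str) -> float:
    """Dimension 1: the output must include a required phrase (hypothetical substring check)."""
    return 1.0 if re.search(r"not financial advice", output, re.IGNORECASE) else 0.0


def conforms_to_spec(output: str) -> float:
    """Dimension 2: the output must be JSON containing the fields our app expects."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return 0.0
    required = {"summary", "sentiment"}
    return 1.0 if required.issubset(data) else 0.0
```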

Metric Names & Descriptions

With the introduction of multiple metrics and new custom evaluators, it’s become more important to make clear what each metric represents. To address this, we’ve added the ability to provide custom names and descriptions for the metrics you define in a Test Suite. These names are then shown wherever the Test Suite is run, and the description is used as help text.

Screenshot from a metric configuration of the Regex Match metric

Upload Test Cases Via .tsv Files

Hate commas but love tabs? You can now define Test Cases within a Test Suite by uploading tab-separated (.tsv) files.

A screenshot from Vellum showing how to upload Test Cases from a .tsv file
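
If your Test Cases already live in code, Python's standard csv module can produce the tab-separated file for you. The column names below are made up for the example; they should match whatever input and output variables your Test Suite defines.

```python
import csv

# Hypothetical Test Cases: input variable "question", expected output "answer"
test_cases = [
    {"question": "What is Vellum?", "answer": "A platform for building LLM apps."},
    {"question": "What does a Test Suite do?", "answer": "It evaluates prompt and workflow outputs."},
]

with open("test_cases.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["question", "answer"], delimiter="\t")
    writer.writeheader()
    writer.writerows(test_cases)
```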

Workflows

Code Nodes

Templating Nodes have allowed you to perform basic data transformations via Jinja2 templating syntax for some time now. While flexible, nothing beats good ol' fashioned code. For those who want ultimate flexibility, you can now run arbitrary Python code securely from within a Vellum Workflow as part of the new "Code Execution Node."

A screenshot of the Code Execution Node from Vellum's Workflow module
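
As a rough sketch, this is the kind of transformation that's awkward to express in Jinja but trivial in code. How the node actually receives its inputs and returns its output is simplified here into a plain function; the field names are assumptions for the example, so refer to the docs for the node's real contract.

```python
from collections import Counter


def run(search_results: list[dict]) -> dict:
    """Hypothetical Code Execution Node body: dedupe search results,
    keep the top 3 by score, and count which sources they came from."""
    seen = set()
    unique = []
    for result in search_results:
        if result["id"] not in seen:
            seen.add(result["id"])
            unique.append(result)

    top = sorted(unique, key=lambda r: r["score"], reverse=True)[:3]
    sources = Counter(r["source"] for r in top)
    return {"top_results": top, "source_counts": dict(sources)}
```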

Chat Widget

There’s a whole new panel dedicated to testing chat-based Workflows. It’s now far smoother to simulate the interaction between an end-user and your AI-powered Workflow. You can see a full demo of this in action here.

A screenshot from the Chat Panel where you can test chat-based Workflows

Support for Numerical & JSON Inputs

It’s now possible to provide numerical and JSON values as inputs to a Workflow.

A screenshot from the input variable panel, showcasing that you can now provide string, numerical and JSON values as inputs to a Workflow

Looping Support

You can now perform loops in a Workflow. Looping is often used in conjunction with a Conditional Node that checks whether a prompt has been executed a specific number of times and, if so, exits the loop.
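
Expressed as plain Python for illustration, that pattern looks roughly like this: a counter tracked alongside the Prompt Node, and a conditional check that exits once the output passes or the retry budget is spent. The `execute_prompt` and `is_acceptable` helpers are placeholders, not Vellum APIs.

```python
MAX_ATTEMPTS = 3  # the "specific number of times" the Conditional Node checks for


def run_with_retries(execute_prompt, is_acceptable) -> str:
    """Illustrative loop: re-run a prompt until its output passes a check
    or the attempt limit is reached."""
    attempts = 0
    output = ""
    while attempts < MAX_ATTEMPTS:
        output = execute_prompt()
        attempts += 1
        if is_acceptable(output):
            break  # Conditional Node: success, exit the loop
    return output
```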

Search Across Workflow Deployment Executions

You can now search and filter across the executions of a Workflow Deployment, making it much easier to track down a specific run.

Screenshot of a Workflow Deployment's executions and the filtering options

Archiving/Unarchiving Workflow Sandboxes

Don’t need a Workflow anymore but want to keep it around just in case? You can now archive/unarchive Workflow Sandboxes (and Prompt Sandboxes too!).

Screenshot from a prompt sandbox and the settings associated with it

First-Class Model Support

OpenChat 3.5 on Replicate

You can now use the open-source OpenChat 3.5 model directly within Vellum, hosted by Replicate. This is an exceptional model and is on-par with ChatGPT for many chat-based use cases. Give it a try!

Claude 2.1

We now support the use of Anthropic’s new Claude 2.1 model. This model features a 200k context window and a 2x decrease in hallucinations. With the release of this model, Anthropic now supports System messages (already supported within Vellum) and the use of tools/function-calling in beta (support will soon be added to Vellum).

New OpenAI Models

OpenAI released three exciting new models, all of which are available within Vellum: gpt-3.5-turbo-1106, gpt-4-1106-preview, and gpt-4-vision-preview. Note that models in preview are subject to change and should not yet be used in production! First-class support for new OpenAI features such as JSON Mode and image inputs is coming soon.

Custom Model Support

You can add your own custom models to Vellum directly from within the UI.

A screenshot of two custom model options: self-managed OpenAI on Azure and a fine-tuned OpenAI model

OpenAI on Azure

You can now use OpenAI models hosted within your own Microsoft Azure account securely from within Vellum. You can go to the Models page to configure the integration.

Fine-Tuned OpenAI Models

You can now add fine-tuned OpenAI models to Vellum directly through the UI and then use them throughout the platform. You can go to the Models page to configure the integration.

Claude on Bedrock

You can now use Claude models hosted within your own AWS account via Amazon Bedrock, securely from within Vellum. You can go to the Models page for instructions and configuration.

And That’s a Wrap

It’s been a busy November but we have no intentions of slowing down going into December. It’s likely you’ll see even deeper improvements to Workflows, Test Suites, and more! Thanks to all of our customers who have pushed us to move fast and ship the tools they need to productionize their AI use-cases. Keep the feedback coming!

Join us on Discord: https://discord.gg/6NqSBUxF78

See you next month!

ABOUT THE AUTHOR
Noa Flaherty
Co-founder & CTO

Noa Flaherty, CTO and co-founder at Vellum (YC W23), is helping developers develop, deploy, and evaluate LLM-powered apps. His diverse background in mechanical and software engineering, as well as marketing and business operations, gives him the technical know-how and business acumen needed to bring value to nearly any aspect of startup life. Prior to founding Vellum, Noa completed his undergrad at MIT and worked at three tech startups, including roles in MLOps at DataRobot and Product Engineering at Dover.
