Product Updates
July 9, 2024

Vellum Product Update | June 2024

Noa Flaherty

Welcome to another exciting Vellum Product Update!

June brought some eagerly awaited features to Vellum that'll help you build even more powerful AI systems!

Let’s start with one of the more exciting updates: Map Nodes.



Map Nodes

Until now, iterating over an array of dynamic length and running a Subworkflow for each item required custom scripting or complex configurations, and each iteration ran serially. You had to manually set up a loop by connecting Nodes in a tedious layout that looked something like this:

Calling a Prompt for each item in an array without a Map Node in Vellum.

Now, you can use Map Nodes to iterate over an array and run a Subworkflow for each item in parallel.

Map Nodes work the same way that array map functions do in many common programming languages (like JavaScript’s Array.prototype.map). They take a JSON array as input and iterate over it, running a Subworkflow for each item. Map Nodes support up to 12 concurrent iterations, making them highly efficient for batch processing tasks.
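Conceptually, a Map Node behaves like a bounded-concurrency map over an array. As a rough analogy in Python (this is not Vellum's implementation; `run_subworkflow` stands in for whatever Subworkflow you attach to the node):

```python
from concurrent.futures import ThreadPoolExecutor

def run_subworkflow(item):
    # Placeholder for the Subworkflow executed once per array item.
    return item * 2

def map_node(items, max_concurrency=12):
    # Run the subworkflow for each item with up to 12 running at once,
    # returning results in input order (like Array.prototype.map).
    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
        return list(pool.map(run_subworkflow, items))
```

So `map_node([1, 2, 3])` yields one result per input item, with iterations overlapping in time rather than running one after another.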

Watch this demo to understand how to set it up, so you can switch from the tedious layout above to the more elegant workflow shown here:

Preview of a Map Node that iterates over a JSON array, and runs a Subworkflow in Vellum.

Inline Subworkflow Nodes

Subworkflows in Vellum are a great way to create reusable units of node logic and compose/organize more sophisticated Workflows.

Previously, you had to create and deploy these units in a separate Workflow before using them as a child of another.

Now, you can create and group modular units of nodes directly within an existing Workflow using Inline Subworkflow Nodes. Editing an Inline Subworkflow works just like editing the parent Workflow, ensuring consistency and ease of use.

Preview of a Subworkflow Node in Vellum.

This update is especially useful when developing your AI apps, as it lets you encapsulate complex logic in subunits without losing the context of the main Workflow!

Workflow Notes and Comments

Before, documenting your Workflow logic wasn’t possible, and collaborating required communicating outside of Vellum.

Not anymore! Now, you have two options to document your work: Notes and Comments.

Use Notes, with customizable colors and font sizes, to add high-level documentation about your Workflow. Use Comments, a property of each Node, to document that specific Node’s purpose.

Preview of Notes and Comments in Vellum Workflows.

By adding notes and comments in your Workflows, you can provide context, instructions, or explanations, making it easier for you and your team to understand and manage complex AI systems.

Other Workflows Updates

Undo and Redo for Workflow Sandboxes

Making changes within Workflow Sandboxes was a one-way street, with no easy way to undo or redo actions. You can now use undo and redo functionality within Workflow Sandboxes using keyboard shortcuts or the UI buttons.

Support for Multiple Outputs in Workflow Metrics

Using Vellum Workflows to create LLM-based evaluators (i.e. have one AI grade another AI) is super powerful, but to date, you’ve only been able to use Workflows that produce a single score output.

We now have official support for Workflow Evaluators that produce multiple outputs!

As long as your Workflow has at least one Final Output Node named score with type NUMBER, you can add additional Final Output Nodes with any names and types you want.
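The rule above can be thought of as a simple check over an evaluator Workflow's Final Output Nodes. Here's an illustrative sketch (the dict shape is an assumption for clarity, not Vellum's actual data model):

```python
def validate_evaluator_outputs(final_outputs):
    # final_outputs: list of {"name": ..., "type": ...} dicts describing
    # a Workflow's Final Output Nodes. At least one must be a NUMBER
    # output named "score"; any additional outputs are allowed.
    return any(o["name"] == "score" and o["type"] == "NUMBER"
               for o in final_outputs)
```

For example, an evaluator that outputs both a numeric score and a textual rationale satisfies the rule, while one whose score is a string does not.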

New APIs

API for Updating a Test Suite’s Test Cases in Bulk

For a while now, you’ve been able to programmatically upsert and delete Test Cases in a Test Suite individually. While useful, this was cumbersome if you wanted to perform the same action on many Test Cases at once.

To solve this, we’ve added an API to create, replace, and delete Test Cases in bulk.

Check out the new Bulk Test Case Operations API in our docs here.

Note: this API is available in our SDKs beginning version 0.6.4.
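As a rough sketch of how a bulk request body might be assembled, here's one way to batch create and delete operations into a single list (the "type"/"data" envelope is illustrative only; consult the linked docs for the exact schema):

```python
def build_bulk_operations(to_create, to_delete):
    # Combine many Test Case changes into one bulk payload:
    # one CREATE entry per new test case dict, and one DELETE
    # entry per existing test case ID. Field names here are
    # assumptions for illustration, not the real API schema.
    ops = [{"type": "CREATE", "data": case} for case in to_create]
    ops += [{"type": "DELETE", "id": case_id} for case_id in to_delete]
    return ops
```

Sending one such list replaces what previously required a separate API call per Test Case.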

APIs for Programmatically Deploying Prompts/Workflows

Thanks to requests from a few forward-thinking customers, we now have APIs to support programmatically deploying Prompts and Workflows.

These APIs can be used as the basis for CI/CD pipelines for Vellum-managed entities.

We’re super bullish on integrating Vellum with existing release management systems (think GitHub Actions), and you can expect to see more from us here soon!

To deploy a Prompt, you’ll need the IDs of the Prompt Sandbox and the Prompt Variant shown here:

You can then hit the Deploy Prompt endpoint found here.

Similarly, to deploy a Workflow, you’ll need the IDs of the Workflow Sandbox and the Workflow shown here:

You can then hit the Deploy Workflow endpoint found here.

Note: these APIs are available in our SDKs beginning version 0.6.3.
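In a CI/CD script, the pieces of such a deploy call might be assembled like this. The endpoint path and field names below are illustrative assumptions, not the real schema; refer to the linked API docs for the actual request format:

```python
def build_deploy_request(base_url, sandbox_id, variant_id, api_key):
    # Assemble a hypothetical "deploy prompt" request from the two IDs
    # mentioned above (Prompt Sandbox ID and Prompt Variant ID).
    # The URL path and header name are assumptions for illustration.
    return {
        "url": f"{base_url}/v1/sandboxes/{sandbox_id}/prompts/{variant_id}/deploy",
        "headers": {"X_API_KEY": api_key},
        "body": {},
    }
```

A pipeline step would build this request after tests pass and POST it, making the deploy a repeatable, scripted action rather than a manual click.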

APIs for Programmatically Moving Release Tags

We’re also excited to announce APIs for programmatically moving Release Tags.

With these APIs, you can create a CI/CD pipeline that automatically moves a Release Tag for one environment from one version of a Prompt/Workflow to another. For example, you might run certain tests or QA processes before promoting STAGING to PRODUCTION.
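The promotion gate described above can be sketched as a small function: run your QA checks, and only move the tag if they pass. `run_qa_checks` and `move_tag` are hypothetical callables standing in for your test harness and the Release Tag API call:

```python
def promote_release(tag_env, release_id, run_qa_checks, move_tag):
    # Move the given environment's Release Tag to a new release
    # only if QA checks pass; otherwise leave the tag where it is.
    # Both callables are illustrative stand-ins, not real APIs.
    if run_qa_checks(release_id):
        move_tag(tag_env, release_id)
        return True
    return False
```

A CI job might call this with `tag_env="PRODUCTION"` after the STAGING tag has soaked, so promotion only happens when the checks succeed.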

To move a Prompt Deployment Release Tag, check out the API docs here.

To move a Workflow Deployment Release Tag, see the API docs here.

Note: these APIs are available in our SDKs beginning version 0.6.3.

Quality of Life Improvements

Breadcrumb Context Menus

Navigating and managing folder structures was challenging, so we made some changes to improve navigation:

  • You’ll now see breadcrumbs that show the folder path when visiting the details of an entity in Vellum. This helps you see the file structure and easily navigate up to a parent folder.
  • You can rename a parent folder by right-clicking its breadcrumb, without having to navigate to it.
  • You can now access all of an entity’s “More Menu” options by right-clicking its card in the grid view.

Override Vellum Provided API Keys

You can now provide your own API keys for models that Vellum supplies keys for, such as Fireworks-hosted models. To do so, click the three-dot menu on a Model card and select the “Set API Key” option.

Image Support in Claude 3 and Gemini Models

Previously, the Claude 3 and Gemini models were limited to text-only processing.

Vellum now also supports multi-modality for Claude 3 and Gemini models, allowing them to accept images as input and return text.

For more on how to work with images in Vellum, see our help docs here.

Claude 3.5 Sonnet Support

We now support the new Claude 3.5 Sonnet model. It has already been automatically added to all workspaces.

We also support the model hosted through AWS Bedrock. You can add it to your workspace from the models page.

Looking ahead

That's a wrap for June, with many updates for Workflows and CI/CD pipelines. We have a bunch of improvements for Workflows and Evaluations planned for July as well.


Until next month!

Noa Flaherty

Co-founder & CTO at Vellum

Noa Flaherty, CTO and co-founder at Vellum (YC W23), is helping developers build, deploy, and evaluate LLM-powered apps. His diverse background in mechanical and software engineering, as well as marketing and business operations, gives him the technical know-how and business acumen needed to bring value to nearly any aspect of startup life. Prior to founding Vellum, Noa completed his undergrad at MIT and worked at three tech startups, including roles in MLOps at DataRobot and Product Engineering at Dover.
