December 31, 2023

Vellum Product Update | December 2023

Noa Flaherty

Happy New Year from the Vellum team! We’re excited to usher in the new year with another product update for December.

This one features fine-grained control over your prompt release process, powerful new APIs for executing Prompts, and self-serve export for data in Vellum. Let’s dive in!

Release Management of Prompts and Workflows

Up until now, whenever you updated a Prompt or Workflow Deployment, the changes would go live immediately to your end-users. However, you might want more precise release management so that you can decide via code when changes to a Prompt or Workflow go live in a given environment.

You can now achieve fine-grained release management via Release Tags! Release Tags are identifiers you can reference in your codebase to pin to a specific version of a Prompt/Workflow. You can then increment that version in code, similar to how you might bump a pinned node module or pip package.
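To make the pinning analogy concrete, here's a minimal sketch of what this can look like in application code. The `release_tag` field and the deployment name below are illustrative assumptions rather than the exact SDK signature, so check the docs for the precise field names.

```python
# config.py -- pin the Prompt version your app uses, much like pinning a pip package.
# Bump this constant when you're ready to roll a new version out to production.
PROMPT_RELEASE_TAG = "v3"  # hypothetical Release Tag created in Vellum


# app.py -- reference the pinned tag when executing the Prompt (see the Execute
# Prompt sketch later in this post). The "release_tag" field name is an assumption.
def build_execute_request(inputs: list[dict]) -> dict:
    return {
        "prompt_deployment_name": "customer-support-reply",  # hypothetical deployment
        "release_tag": PROMPT_RELEASE_TAG,
        "inputs": inputs,
    }
```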

You can watch a full demo of this feature here.

To further improve the process of releasing new versions of Prompts/Workflows, we also now display a warning if you’re about to change the input/output interface of the deployment that’s being updated.

New Execute Prompt APIs

We launched two new production-grade APIs for invoking your prompts in Vellum - Execute Prompt and Execute Prompt Stream!

For a complete overview, read the official announcement.

Here's a quick summary of what these new APIs enable:

  • An improved interface that’s more consistent with Vellum’s other APIs
  • The ability to store arbitrary JSON metadata alongside each execution
  • Future-proofing by default (useful as LLM providers regularly add new features to their APIs)
  • Runtime overrides for any part of the API request that Vellum makes to the LLM provider
  • The option to return raw data directly from the LLM provider’s response

Here's a look at the new Execute Prompt API:
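Below is a minimal sketch of invoking it over HTTP with Python's `requests`. The endpoint URL, auth header name, and body fields are assumptions for illustration; consult the official announcement and API reference for the exact contract. Execute Prompt Stream follows the same request shape but returns results incrementally.

```python
# Sketch of calling the Execute Prompt API. Endpoint URL, header name, and body
# fields are assumptions -- check the API reference for the exact contract.
import os

import requests

VELLUM_API_KEY = os.environ["VELLUM_API_KEY"]

response = requests.post(
    "https://predict.vellum.ai/v1/execute-prompt",  # assumed endpoint
    headers={"X_API_KEY": VELLUM_API_KEY},          # assumed auth header name
    json={
        "prompt_deployment_name": "customer-support-reply",  # hypothetical deployment
        "release_tag": "v3",  # pin to the Release Tag from the section above
        "inputs": [
            {"name": "question", "type": "STRING", "value": "How do I reset my password?"}
        ],
        "metadata": {"session_id": "abc-123"},  # arbitrary JSON stored with the execution
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```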

New Models

Some powerful new models launched in December, and we’re excited to share that you now have the option to use them in Vellum.

Here are the latest models added in Vellum, complete with links for more details on their performance and use-cases:

Block Vellum Staff Access

Vellum employees have been trained to handle customer data with the utmost care and have access to customer workspaces in order to help debug issues and provide support. However, those that work with especially sensitive data may want to limit this access.

You now have a new option to control whether Vellum staff has access to your workspace(s). Simply go to the Organization module, find Settings, and switch off "Allow Vellum Staff Access."

In situations where you need us to debug an issue, you can give us temporary access by turning the toggle on again.

Workflow Deployment Monitoring

Using Workflow Deployments, you can see the individual API requests made to a Workflow. This is great for seeing granular details, but not so great for visualizing trends in how your Workflow is performing.

You’ll now see a new “Monitoring” tab in Workflow Deployments which includes charts for visualizing data such as the number of requests, average latency, errors, and more.

If you want to see other charts in this view, please reach out to support@vellum.ai and let us know!

Download Test Suite Test Cases as CSV

As with many of our other tables, you can now download the Test Cases within a Test Suite as a CSV.

Also, we now display a column count indicator that shows the number of visible columns. Remember, only visible columns are included in the exported CSV, so make sure to display any columns you want in the export.

Export Workflow Deployment Executions

You can now download raw data about all API requests that were made against a Workflow Deployment.

This is useful if you want to perform your own bespoke analysis or use it as training data for a custom fine-tuned model.
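As a hedged sketch of the training-data use case, here's how you might convert an exported executions CSV into a JSONL file of prompt/completion pairs. The `input` and `output` column names are assumptions; rename them to match the headers in your actual export.

```python
# Sketch: convert an exported Workflow Deployment executions CSV into JSONL for
# fine-tuning. Column names ("input", "output") are assumptions -- adjust them
# to match the headers in your actual export.
import csv
import json

with open("workflow_executions.csv", newline="", encoding="utf-8") as src, \
        open("fine_tune_data.jsonl", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        record = {"prompt": row["input"], "completion": row["output"]}
        dst.write(json.dumps(record) + "\n")
```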

Quality of Life Improvements

Here are a few honorable mentions that we snuck in too:

  • Invite Email Notifications: Now, when you invite someone to your organization, they'll receive an email letting them know.
  • Links from Prompt Sandboxes to Deployments: You can navigate directly from a Prompt Sandbox to its Deployment using the “View Deployment” button in the top right corner.
  • Filtered Prompt Deployments: Under the hood, we auto-create one Prompt Deployment per Prompt Node in a Workflow. These would clog up the Prompt Deployments view without providing much value. Now, they’re filtered out so you only see the Prompts that you explicitly deployed.

Looking Ahead

We have grand plans for the new year and can’t wait to share more of what we’re working on. Thanks so much to our customers who have helped us get to this point! 2024 is going to be even more exciting 😀


Noa Flaherty

Co-founder & CTO at Vellum

Noa Flaherty, CTO and co-founder at Vellum (YC W23), is helping developers develop, deploy, and evaluate LLM-powered apps. His diverse background in mechanical and software engineering, as well as marketing and business operations, gives him the technical know-how and business acumen needed to bring value to nearly any aspect of startup life. Prior to founding Vellum, Noa completed his undergrad at MIT and worked at three tech startups, including roles in MLOps at DataRobot and Product Engineering at Dover.
