Announcing Vellum VPC

Vellum now offers VPC installations for secure AI development in your cloud, keeping data private and compliant.


TL;DR

We're excited to announce that Vellum now supports Virtual Private Cloud (VPC) installations, allowing enterprises to deploy our AI development platform within their own cloud environment (AWS, Azure and GCP). With VPC deployments, all sensitive data stays in your infrastructure, supported by our partnership with Replicated for seamless updates and remote debugging.

As a development platform that enables companies to build AI systems on top of Large Language Models, we knew from the start that we would be handling sensitive data, ranging from PHI at healthcare companies to personal financial information at banks and insurance companies. We designed the product from day one with enterprise-grade security in mind and have held SOC 2 Type II certification and HIPAA compliance for some time now. However, we kept hearing requests for more:

  • “Our company’s data shouldn’t leave our four walls”
  • “We need better alignment with our security policies to meet industry best practices”
  • “We would like to find more ways to use our pre-committed cloud computing spend”

Today we’re excited to announce support for Virtual Private Cloud installations of our software in our customers’ cloud environments. For enterprises operating in regulated industries or looking to keep their data secure, a VPC deployment of Vellum is available on all major cloud providers: AWS, Azure, and GCP.

How Vellum’s VPC offering works

Vellum’s Virtual Private Cloud installation gives you complete control over your data. If you use Language Models hosted in your private cloud, you can also leverage our platform tooling so that all testing and production data remains private and secure.

We’ve partnered with Replicated to allow you to easily self-host Vellum in your VPC and keep it up to date without our team needing any access to your infrastructure. Data and compute live on your cloud, and compute usage can be charged against any credits offered by cloud providers. We leverage support bundles from Replicated to provide application updates & debug issues remotely.

Vellum VPC vs Vellum Managed

When it comes to choosing between Vellum VPC and Vellum Managed, the decision often hinges on your company’s specific needs around data residency, compliance, and security. Vellum VPC is designed for organizations that operate in highly regulated environments or have stringent internal policies that require complete control over their data infrastructure. It provides a dedicated environment within your own virtual private cloud, giving you full visibility and control over your data and workflows.

On the other hand, Vellum Managed is ideal for companies looking for a more hands-off approach. It offers the convenience of a fully managed service, where Vellum takes care of all the operational aspects, including scaling, security updates, and maintenance. This option is perfect for teams that want to focus on building and deploying AI solutions without worrying about the underlying infrastructure.

Below, you’ll find a comparison of the key features and benefits of Vellum VPC versus Vellum Managed to help you determine which option is the best fit for your organization:


How Vellum's VPC offering compares to the Managed offering.

If you’re cautious about where your company’s data lives but would like to use AI in production, we’d love to support you! We provide the tooling and best practices while adhering to your privacy and security requirements. You can now deploy Vellum in your own cloud, keeping data within your four walls while drawing down your pre-existing cloud commitments.

If you’d like to learn more, get in touch!

ABOUT THE AUTHOR
Akash Sharma
Co-founder & CEO

Akash Sharma, CEO and co-founder of Vellum (YC W23), is enabling developers to easily start, develop, and evaluate LLM-powered apps. Having talked to over 1,500 people at varying stages of using LLMs in production, he has acquired a unique understanding of the landscape and is actively sharing his learnings with the broader LLM community. Before starting Vellum, Akash completed his undergrad at the University of California, Berkeley, then spent 5 years at McKinsey's Silicon Valley office.

Last updated: Aug 27, 2024