
Announcing Native Support for Cerebras Inference in Vellum

Starting today, you can unlock 2,100 t/s with Llama 3.1 70B in Vellum for real-time AI apps.


TL;DR

We're excited to announce that Vellum now has a native integration with Cerebras, the fastest AI inference solution in the world. Customers can now run Llama 3.1 70B at 2,100 tokens per second (3x faster than the previous best on the market, or roughly one book per minute) with flexible rate limits. This sets an industry record for inference speed, and starting October 24, 2024, Vellum users can take advantage of this performance boost to build faster, real-time AI applications.

As a development platform that enables companies around the world to build reliable AI systems with LLMs, we know that striking the right balance between accuracy, speed, and cost is a top priority for many companies today.

But with the rise of more sophisticated AI applications, from traditional routing systems to dynamic, agent-driven workflows, fast response times are essential to handle the intricate logic involved.

Today, we’re excited to announce our native integration with Cerebras, the fastest AI inference solution that delivers 2,100 tokens/second for the Llama 3.1 70B model, using the original 16-bit weights released by Meta. This solution is 16x faster than any known GPU and 68x faster than hyperscale clouds, according to third-party benchmarks. Even more impressive, Cerebras Inference serves Llama 70B models over 8x faster than GPUs serve Llama 3B.

“Our customers are blown away with the results! Time to completion on Cerebras is hands down faster than any other inference provider, and I’m excited to see the production applications we’ll power via the Cerebras inference platform.”

- Akash Sharma, CEO of Vellum

How the native integration works

All public models on Cerebras are now available to add to your workspace.

For example, to enable the Llama 3.1 70B model hosted on Cerebras in your workspace, get your API key from your Cerebras profile and add it as a Secret named CEREBRAS on each of the model pages:

Then, in your prompts and workflow nodes, select the model you just enabled:

What you get with Cerebras Inference

Cerebras Inference solves the memory bandwidth bottleneck by building the largest chip in the world and storing the entire model on-chip, without sacrificing weight precision. They currently support only Llama 3.1 70B, and it delivers the best combination of speed, accuracy, and cost.
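To see why keeping weights on-chip matters, here is a rough back-of-envelope calculation. Each generated token must read every weight from memory once, so memory bandwidth caps single-stream decode speed. The GPU bandwidth figure below is an illustrative assumption, not an official spec:

```python
# Back-of-envelope: why memory bandwidth caps single-stream decode speed.
# Each generated token reads every weight once, so:
#   tokens/sec ≈ memory_bandwidth / model_size_in_bytes
# The bandwidth figure is illustrative, not a vendor spec.
params = 70e9            # Llama 3.1 70B parameter count
bytes_per_weight = 2     # original 16-bit precision
model_bytes = params * bytes_per_weight          # 140 GB of weights

gpu_bw = 3.35e12         # ~HBM bandwidth of a modern data-center GPU, bytes/s
print(f"GPU upper bound: {gpu_bw / model_bytes:.0f} tokens/s")  # roughly 24

# Holding all 140 GB of weights in on-chip SRAM raises the effective
# bandwidth by orders of magnitude, which is how wafer-scale hardware
# reaches thousands of tokens per second on a single stream.
```

Under these assumed numbers, a bandwidth-bound GPU tops out in the tens of tokens per second for a single 16-bit 70B stream, which is why 2,100 tokens/second requires a fundamentally different memory architecture.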

High speed

For Llama 3.1 70B, Cerebras generates instant responses at 2,100 tokens per second, which is 16x faster than any known GPU solution and 68x faster than hyperscale clouds, as measured by third-party benchmarking organizations.

The most interesting part is that Cerebras Inference serves Llama 70B more than 8x faster than GPUs serve Llama 3B.

Highest accuracy

Regarding accuracy, Cerebras doesn’t reduce weight precision from 16-bit to 8-bit to overcome the memory bandwidth bottleneck. It uses the original 16-bit weights released by Meta, ensuring the most accurate and reliable model output. Evaluations and third-party benchmarks show that 16-bit models can score up to 5% higher than their 8-bit counterparts.

The Llama 3.1 70B model is already climbing the ranks in fields like math, reasoning, and coding, and being able to run it 68 times faster unlocks many new use cases.

Check out how the Llama models compare on our LLM Leaderboard.

Lowest cost

Developers can easily access the Cerebras Inference API, which is fully compatible with the OpenAI Chat Completions API, making migration seamless with just a few lines of code.

Cerebras Inference offers three pricing tiers for its AI inference service: Free, Developer, and Enterprise:

  • Free: API access and generous usage limits (1 million free tokens daily).
  • Developer: An API endpoint at a fraction of the cost of alternatives, with models priced at 10 cents and 60 cents per million tokens.
  • Enterprise: Provisioned throughput, production-grade service level agreements, fine-tuned models, and dedicated support.

If you want to test the inference speed with Cerebras, get in touch! We provide the tooling and best practices for building and evaluating AI systems that you can trust in production.

ABOUT THE AUTHOR
Anita Kirkovska
Founding Growth Lead

An AI expert with a strong ML background, specializing in GenAI and LLM education. A former Fulbright scholar, she leads Growth and Education at Vellum, helping companies build and scale AI products. She conducts LLM evaluations and writes extensively on AI best practices, empowering business leaders to drive effective AI adoption.

LAST UPDATED
Oct 24, 2024