
OpenAI vs. Anthropic vs. Google: A latency comparison

We did an analysis comparing the latency of OpenAI, Anthropic and Google. Here are the results!


At Vellum, we provide a platform and share best practices to help companies build production use cases on top of large language models. When building applications, we’ve seen people get stuck choosing the right model for their use case, given the large number of options across OpenAI, Anthropic, Cohere, Google, and open-source models.

The best model for a use case is usually the one that sits on the efficient frontier of your quality, cost, and latency criteria. A common question people ask is: “Can I take a 10% performance decrease for 50% lower latency and 80% lower cost?” There are other criteria too, like privacy and reliability, but for this article we’ll simplify a little.

We’ve written about how to measure LLM quality and our Test Suites product helps with exactly that. Given that latency often comes up as a reason to choose a certain model, we decided to do an analysis across model providers and share our findings.

Why is latency important and does it really vary across models?

Latency is important for the UX of an LLM-powered application because it directly impacts the speed and responsiveness of the application. Lower latency means faster response times, leading to a smoother, more efficient user interaction. In contrast, high latency can result in delays, disrupting the flow of conversation and potentially frustrating users.

Here’s a quick side-by-side example in Vellum that shows how similar prompts have vastly different response times when run across OpenAI’s GPT-4, Anthropic’s Claude 2, and Google’s Chat Bison.

How do LLMs create responses? Why does latency vary across providers?

LLMs predict one token at a time by taking into account all the previous tokens in the text. They use this context to generate the most probable next token, and this process is repeated until a complete response is formed.
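To make this concrete, here’s a minimal sketch of that loop using a small open-source model (gpt2 via Hugging Face transformers). The commercial models in this comparison are far larger, but the generation pattern is the same, which is why longer completions take longer to produce:

```python
# Minimal sketch of autoregressive decoding with a small open-source model (gpt2).
# Each iteration feeds all previous tokens back in and picks the most probable
# next token; total latency grows with the number of tokens generated.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

for _ in range(10):  # generate 10 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits
    next_token = logits[0, -1].argmax()  # most probable next token (greedy)
    input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```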

The amount of context provided directly impacts latency: the more tokens in the prompt, the more the model has to process before it can emit the first token. The length of the response then determines how long it takes to reach the full completion. Together, these affect both time to first token and time to last token.

Latency can vary across models and model providers due to several factors. Firstly, the size of the model plays a significant role: larger models (in number of parameters) typically require more processing time, leading to higher latency. Secondly, the degree of parallelism in the model’s architecture and deployment can affect speed, with more parallelism resulting in faster response times. Lastly, the amount of traffic a provider is handling at any given time can also impact latency, as higher traffic can lead to slower response times.

So we ran a test to compare these models side by side

We are sharing results from a one-time analysis which was conducted in a 2 hour window in early August 2023. The results will change over time as traffic fluctuates and the model providers make changes under the hood.

Metrics measured

Given that prompt length affects latency, we tested latency across short and long prompts. For each prompt size, we measured time to first token and time to last token.

Prompts used

  • Short prompt: {{text}} variable had 10 tokens
  • Long prompt: {{text}} variable had 600 tokens

Models evaluated

GPT-4, GPT-3.5, Claude 2, Chat Bison

Number of completions per model

100
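For illustration, here’s a hedged sketch of how time to first token and time to last token can be captured with a streaming API. It uses OpenAI’s current Python client with a placeholder model and prompt; it is not the exact harness we used for this analysis:

```python
# Illustrative sketch: timing a streaming chat completion to get time to
# first token and time to last token. Model name and prompt are placeholders.
import time
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

start = time.perf_counter()
first_token_at = None

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize the following text: ..."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content is not None:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # first content token arrived

last_token_at = time.perf_counter()
print(f"time to first token: {first_token_at - start:.2f}s")
print(f"time to last token:  {last_token_at - start:.2f}s")
```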

With the experiment details out of the way, time for the main takeaways!

OpenAI had the lowest time to first token and the highest time to completion

All the models came in at a similar range for time to first token (ranging from 0.9s for GPT-3.5 to 1.1s for Claude 2); however, there were significant differences in time to completion:

  • Chat Bison, at 2.8s for time to completion, was way faster than the other models (note: Chat Bison doesn’t support streaming so only time to last token is shown)
  • Claude 2, Anthropic’s top-of-the-line model, had lower last-token latency than GPT-3.5
  • GPT-4 had much higher last token latency than the other models

GPT-4 had the highest variability in latency

The longest GPT-4 responses took over 20s.
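If you want to quantify that variability in your own tests, a simple percentile summary over the per-request latencies works well. A minimal sketch, with placeholder numbers rather than our actual measurements:

```python
# Hypothetical summary of per-model latency spread. The values below are
# placeholders for illustration, not the measurements from this analysis.
import statistics

gpt4_latencies = [8.4, 9.1, 11.6, 12.7, 20.3, 7.9, 10.2, 14.8]  # seconds

p50 = statistics.median(gpt4_latencies)
p95 = statistics.quantiles(gpt4_latencies, n=20)[18]  # 95th percentile cut point
spread = statistics.stdev(gpt4_latencies)
print(f"p50={p50:.1f}s  p95={p95:.1f}s  stdev={spread:.1f}s  max={max(gpt4_latencies):.1f}s")
```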

One miscellaneous finding: long prompts typically have higher latency than short prompts for the same model, but the difference isn’t very meaningful. The only noticeable gap was with Claude 2, where the long prompt took 1.9s to first token versus 1.1s for the short prompt.

How to best leverage this knowledge while building your LLM application

Latency is just one criterion for deciding on the right prompt/model combination for your use case; quality and cost are very important too! Ultimately it comes down to comparing various models and running them across test cases to see which one is best for a given use case. If you’d like to try these models out side by side in Vellum, sign up for a 14-day free trial here. We’re excited to see what you end up building with LLMs!

ABOUT THE AUTHOR
Akash Sharma
Co-founder & CEO

Akash Sharma, CEO and co-founder at Vellum (YC W23), is enabling developers to easily start, develop, and evaluate LLM-powered apps. By talking to over 1,500 people at varying stages of using LLMs in production, he has acquired a unique understanding of the landscape, and is actively sharing his learnings with the broader LLM community. Before starting Vellum, Akash completed his undergrad at the University of California, Berkeley, then spent 5 years at McKinsey’s Silicon Valley office.

Last updated: Aug 24, 2023