OpenAI vs. Anthropic vs. Google: A latency comparison

We did an analysis comparing the latency of OpenAI, Anthropic and Google. Here are the results!


At Vellum, we provide a platform and share best practices to help companies build production use cases with large language models. When building applications, we’ve seen people get stuck choosing the right model for their use case, given the large number of options across OpenAI, Anthropic, Cohere, Google, and open-source models.

The best model for a use case is usually the one that sits at the efficient frontier of your quality, cost, and latency criteria. A common question people ask is: “Can I take a 10% performance decrease for 50% lower latency and 80% lower cost?” There are other criteria too, such as privacy and reliability, but for this article we’ll simplify a little.

We’ve written about how to measure LLM quality and our Test Suites product helps with exactly that. Given that latency often comes up as a reason to choose a certain model, we decided to do an analysis across model providers and share our findings.

Why is latency important and does it really vary across models?

Latency is important for the UX of an LLM powered application because it directly impacts the speed and responsiveness of the application. A lower latency means faster response times, leading to a smoother, more efficient user interaction. In contrast, high latency can result in delays, disrupting the flow of conversation and potentially frustrating users.

Here’s a quick side-by-side example in Vellum showing how similar prompts have vastly different response times when run across OpenAI’s GPT-4, Anthropic’s Claude 2, and Google’s Chat Bison.

How do LLMs create responses? Why does latency vary across providers?

LLMs predict one token at a time by taking into account all the previous tokens in the text. They use this context to generate the most probable next token, and this process is repeated until a complete response is formed.
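
To make this concrete, here’s a minimal sketch of that decoding loop in Python. The model object and its next_token_logits method are hypothetical stand-ins for a provider’s real inference stack:

    def generate(model, prompt_tokens, max_new_tokens=256, eos_token=0):
        """Toy autoregressive decoding loop (hypothetical model interface)."""
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            # Score every vocabulary token given the full context so far...
            logits = model.next_token_logits(tokens)
            # ...and pick the most probable one (greedy decoding, for simplicity).
            next_token = max(range(len(logits)), key=lambda t: logits[t])
            if next_token == eos_token:
                break
            tokens.append(next_token)
        # Return only the newly generated tokens.
        return tokens[len(prompt_tokens):]

Each new token requires another pass over the whole context, which is why longer responses take proportionally longer to finish.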

The context provided in the context window directly impacts latency. The more tokens in the prompt, the more the LLM has to process before it can respond, which increases the time to the first token; the size of the response then determines how long the full completion takes.
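
As a rough mental model (our simplification, with made-up throughput numbers rather than figures from any provider), you can decompose latency into a prompt-processing term and a per-token generation term:

    def estimated_latency(prompt_tokens, output_tokens,
                          prefill_tokens_per_s=2000.0, decode_tokens_per_s=30.0):
        """Back-of-the-envelope latency model; both rates are illustrative guesses."""
        # Time to first token grows with the size of the prompt (prefill)...
        time_to_first_token = prompt_tokens / prefill_tokens_per_s
        # ...and every generated token adds one decode step after that.
        time_to_last_token = time_to_first_token + output_tokens / decode_tokens_per_s
        return time_to_first_token, time_to_last_token

    # e.g. a 600-token prompt with a 200-token answer:
    # estimated_latency(600, 200) -> (~0.3s to first token, ~7.0s to last token)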

Latency can vary across models and model providers due to several factors. Firstly, the size of the model plays a significant role: larger models (in number of parameters) typically require more processing time, leading to higher latency. Secondly, the degree of parallelism in the model's architecture and deployment can affect speed, with more parallelism resulting in faster response times. Lastly, the amount of traffic a provider is handling at any given time can also impact latency, as higher traffic can lead to slower response times.

So we ran a test to compare these models side by side

We are sharing results from a one-time analysis conducted in a 2-hour window in early August 2023. The results will change over time as traffic fluctuates and the model providers make changes under the hood.

Metrics measured

Given that prompt length affects latency, we tested latency across short prompts & long prompts. For each prompt size, we measured time to first token and time to last token.
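
For reference, here’s a minimal sketch of how you could capture both metrics with a streaming request. It uses the current OpenAI Python SDK; the model name and prompt are placeholders, and this is not the exact harness we ran:

    import time
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def measure_latency(model="gpt-3.5-turbo", prompt="Tell me a short story."):
        """Returns (time to first token, time to last token) in seconds."""
        start = time.perf_counter()
        first_token_at = None
        stream = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            stream=True,
        )
        for chunk in stream:
            # The first chunk carrying actual content marks time to first token.
            if first_token_at is None and chunk.choices and chunk.choices[0].delta.content:
                first_token_at = time.perf_counter()
        last_token_at = time.perf_counter()
        return first_token_at - start, last_token_at - start

Averaging over many completions per model smooths out run-to-run variance from provider traffic.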

Experiment details:

  • Prompt: a template with a {{text}} variable
  • Short prompt: {{text}} had 10 tokens
  • Long prompt: {{text}} had 600 tokens
  • Models evaluated: GPT-4, GPT-3.5, Claude 2, Chat Bison
  • Completions per model: 100

With the experiment details out of the way, time for the main takeaways!

OpenAI had the lowest time to first token and the highest time to completion

All the models came in at a similar range for time to first token (from 0.9s for GPT-3.5 to 1.1s for Claude 2); however, there were significant differences in time to completion:

  • Chat Bison, at 2.8s to completion, was much faster than the other models (note: Chat Bison doesn’t support streaming, so only time to last token is shown)
  • Claude 2, Anthropic’s top-of-the-line model, had lower last-token latency than GPT-3.5
  • GPT-4 had much higher last-token latency than the other models

GPT-4 had the highest variability in latency

The longest GPT-4 responses took over 20s.

One miscellaneous finding: long prompts typically have higher latency than short prompts for the same model, but the difference isn’t very meaningful. The only noticeable gap was for Claude 2, where the long prompt took 1.9s to first token vs. 1.1s for the short prompt.

How to best leverage this knowledge while building your LLM application

Latency is just one criterion for deciding on the right prompt/model combination for your use case; quality and cost are very important too! Ultimately it comes down to comparing various models, running them across test cases to see which one is best for a given use case. If you’d like to try these models out side by side in Vellum, sign up for a 14-day free trial of Vellum here. We’re excited to see what you end up building with LLMs!

ABOUT THE AUTHOR
Akash Sharma
Co-founder & CEO

Akash Sharma, CEO and co-founder at Vellum (YC W23), is enabling developers to easily start, develop, and evaluate LLM-powered apps. By talking to over 1,500 people at varying stages of using LLMs in production, he has acquired a unique understanding of the landscape, and is actively distilling his learnings with the broader LLM community. Before starting Vellum, Akash completed his undergrad at the University of California, Berkeley, then spent 5 years at McKinsey's Silicon Valley office.

Last updated: Aug 24, 2023