
Classifying Customer Tickets using Gemini Pro

Comparing the performance of Gemini Pro with zero-shot and few-shot prompting when classifying customer support tickets

Written by Anita Kirkovska

To get a better sense of how Gemini Pro handles zero-shot versus few-shot prompting on classification tasks, we ran an experiment.

We used Gemini Pro to classify whether a customer support ticket has been resolved or not.

Below, we share all our findings and observations.

The classification task

We picked this task because customer conversations are hard to categorize: speaking styles vary, meaning can be subtle, and topics and tone often shift mid-conversation.

For instance, if a vendor replies to a bug report by acknowledging that it’s a known issue, the ticket is considered resolved even though the bug isn’t. However, without the proper guidance, the language model may not always recognize this.

Here’s an example of such a conversation, marked as resolved:

[Image: a preview of a customer support ticket between a vendor and a customer]

Zero-shot vs few-shot

To assess the model's capabilities, we employed both zero-shot and few-shot techniques. Our goal was to analyze how few-shot prompting influences the outputs produced by Gemini Pro.

Evaluating Gemini Pro

We evaluated Gemini Pro, focusing on its accuracy, recall, and precision. This assessment involved 200 test cases and utilized both zero-shot and few-shot prompting techniques.

Results

  • Gemini Pro with zero-shot prompting had the best F1 score (77.94%) and recall (94.64%)
  • Gemini Pro with few-shot prompting had the best accuracy (74%) and precision (76.79%)

For this particular task, we wanted our model to be more conservative and capture all unresolved tickets, even if it meant tagging some resolved tickets as not resolved. To achieve this, we needed to choose a model with higher precision. In this case, Gemini Pro with few-shot prompting accomplished that for us.

By adding four examples to the few-shot prompt, we nearly halved the false positives, cutting them by 48% and lifting precision to 76.79%.
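As a sanity check (our own back-of-the-envelope derivation, not a figure reported by the experiment), zero-shot precision can be recovered from the two scores above, since F1 is the harmonic mean of precision and recall:

$$F_1 = \frac{2PR}{P + R} \;\Rightarrow\; P = \frac{F_1 \cdot R}{2R - F_1} = \frac{0.7794 \times 0.9464}{2(0.9464) - 0.7794} \approx 0.663$$

That puts zero-shot precision around 66.3%, so few-shot prompting's 76.79% is roughly a ten-point precision gain, in line with the near-halving of false positives.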

Keep reading for the details of our methodology and the experiment.

Methodology

Technical setup

For this comparison, we used Vellum’s suite of features to manage the various stages of the experiment:

  • Prompt Sandbox: To compare zero-shot and few-shot prompts on the same model
  • Test Suites: To evaluate hundreds of test cases in bulk and measure which cases fail

The dataset we used had 200 test cases. Here is an example:

[Image: preview of a conversation between a vendor and a customer]
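Since the example above is an image, here's a hypothetical test case in the same shape. The field names and conversation are illustrative, not the dataset's actual schema or contents:

```python
# Illustrative only - not an actual case from the dataset.
test_case = {
    "messages": [
        {"role": "customer", "text": "The CSV export is dropping the header row."},
        {"role": "vendor", "text": "Thanks for reporting this - it's a known issue and a fix is scheduled."},
    ],
    "expected_output": "true",  # resolved: the vendor gave a final answer
}
```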

Prompt engineering & techniques

Before running the full evaluation, we experimented with different prompts to make sure the model would output only "true" or "false", with no additional explanation.

Here’s a snapshot of how that looked within Vellum:

Once we were happy with the results, we were prepared to test the model on a larger set of cases.

In the zero-shot prompt we used the last N messages from a customer chat, instructions on what constitutes a resolved conversation, and a description of the expected answer format. The few-shot prompt had the same components, plus the four labeled example conversations mentioned earlier.

Note that the {{ messages }} tag is a variable that dynamically passes data within Vellum Prompt Sandboxes.
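To make that structure concrete, here's a minimal Python sketch of the two prompt variants. The wording is paraphrased from the description above rather than the exact prompt text, and the example conversations are invented:

```python
# Paraphrased prompt templates - not the exact prompts from the experiment.
# "{{ messages }}" is the Vellum variable that injects the chat transcript.

INSTRUCTIONS = """\
You are reviewing a customer support conversation between a vendor and a customer.
A ticket counts as resolved when the vendor has given a final answer, even if
the underlying issue is not fixed (e.g. "this is a known issue").
Respond with exactly one word: "true" if the ticket is resolved, otherwise "false".
"""

CONVERSATION = """\
Conversation:
{{ messages }}

Answer:"""

# Hypothetical labeled examples - the experiment used four; two shown here.
EXAMPLES = """\
Example 1:
customer: The export button crashes the app.
vendor: Thanks for flagging - this is a known issue and a fix is on our roadmap.
Answer: true

Example 2:
customer: I still can't log in. Any update?
vendor: Let me check with the team and get back to you.
Answer: false
"""

ZERO_SHOT_PROMPT = INSTRUCTIONS + "\n" + CONVERSATION
FEW_SHOT_PROMPT = INSTRUCTIONS + "\n" + EXAMPLES + "\n" + CONVERSATION
```

Keeping the instruction and answer-format blocks identical across both variants isolates the effect of the examples, which is exactly the comparison the experiment is after.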

Model Information

We ran Gemini Pro with a temperature of 0 and a 10-token output limit, because we wanted a single, well-defined answer.
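Outside of Vellum, the equivalent configuration with Google's google-generativeai Python SDK would look roughly like this. The client setup is our assumption; the temperature and token limit match the settings above:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key
model = genai.GenerativeModel("gemini-pro")

def classify(prompt: str) -> str:
    """Run one classification call with the experiment's sampling settings."""
    response = model.generate_content(
        prompt,
        generation_config=genai.types.GenerationConfig(
            temperature=0.0,       # deterministic: always pick the top token
            max_output_tokens=10,  # just enough room for "true" or "false"
        ),
    )
    return response.text.strip()
```

With temperature 0 and only 10 output tokens, reruns on the same prompt should return identical labels, which keeps an exact-match evaluation stable.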

[Image: the parameter configuration for the model within Vellum]

Setting up the test cases

To evaluate the models, we uploaded Pylon's dataset into a Test Suite (which is our unit testing product for LLMs) and selected "Exact Match" as the metric for evaluation.

This evaluation metric verifies that the LLM's output perfectly matches the expected dataset output, allowing for any extra whitespace the model might generate. Given that we anticipated either a "true" or "false" output, the evaluation process was straightforward.
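In plain Python, the check reduces to something like this (our reading of the metric: whitespace is trimmed, everything else must match exactly):

```python
def exact_match(model_output: str, expected: str) -> bool:
    # Trim stray whitespace/newlines the model may emit, then compare exactly.
    return model_output.strip() == expected.strip()

assert exact_match("true\n", "true")    # trailing newline still passes
assert not exact_match("True", "true")  # case difference fails exact match
```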

Using Test Suites we were able to run all of these test cases at scale.

Running the evaluation

At this point we had our prompt configurations and were ready to run them across our test cases.

We connected the Test Suite with our prompts and initiated the model runs. Here's what the setup looked like during the evaluation process:
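Outside of Vellum, the same run is a plain loop. This sketch reuses the classify() helper and prompt templates from the sketches above, and assumes each test case stores its transcript as a list of role/text messages:

```python
def render_prompt(template: str, messages: list[dict]) -> str:
    # Substitute the {{ messages }} variable with a plain-text transcript.
    transcript = "\n".join(f"{m['role']}: {m['text']}" for m in messages)
    return template.replace("{{ messages }}", transcript)

results = []
for case in test_cases:  # the 200-case dataset loaded earlier
    output = classify(render_prompt(FEW_SHOT_PROMPT, case["messages"]))
    results.append((output, case["expected_output"]))
```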

Measuring the results

Since this is a classification task, we compared the two prompting setups on three metrics: accuracy, recall, and precision.
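With the predictions collected, the three metrics (plus F1) fall out of scikit-learn directly. Treating "true" (resolved) as the positive class is an assumption on our part, but it is consistent with the reported numbers:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# results: list of (model_output, expected_output) pairs from the run above
y_pred = [out.strip().lower() == "true" for out, _ in results]
y_true = [exp == "true" for _, exp in results]

print(f"accuracy:  {accuracy_score(y_true, y_pred):.2%}")
print(f"precision: {precision_score(y_true, y_pred):.2%}")
print(f"recall:    {recall_score(y_true, y_pred):.2%}")
print(f"f1:        {f1_score(y_true, y_pred):.2%}")
```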

Here are the final results again: zero-shot prompting led on F1 (77.94%) and recall (94.64%), while few-shot prompting led on accuracy (74%) and precision (76.79%).

Conclusions

With this experiment, we learned that few-shot prompting can improve Gemini Pro's precision and accuracy, which is especially important for classification tasks.

If you’re looking to scale your customer support operations using LLMs and want to evaluate different models and prompt techniques, we can help.

Vellum has the tooling layer to experiment with prompts and models, evaluate their quality, and make changes with confidence once in production.

You can take a look at some of our other use cases, or book a call to talk with someone from our team, and we’d be happy to assist you.

ABOUT THE AUTHOR
Anita Kirkovska
Founding Growth Lead

An AI expert with a strong ML background, specializing in GenAI and LLM education. A former Fulbright scholar, she leads Growth and Education at Vellum, helping companies build and scale AI products. She conducts LLM evaluations and writes extensively on AI best practices, empowering business leaders to drive effective AI adoption.

LAST UPDATED
Dec 20, 2023