Akash Sharma

Co-founder & CEO

28 posts

Articles by Akash Sharma

Introducing Vellum for Agents

Today we're introducing Vellum for Agents. All you do is chat, and Vellum builds reliable agents for you.

Product Updates · Jan 13, 2026 · 6 min
Introducing Environments in Vellum: Isolate, Promote, and Deploy with Confidence

A first-class way to manage your work across Development, Staging, and Production.

Product Updates · Jul 17, 2025 · 3 min
Vellum Workflows SDK is Generally Available

Full control in code and real-time visibility in UI, built for teams shipping reliable AI.

Product Updates · Jul 14, 2025 · 6 min
Announcing our $20m Series A

AI development needs a standard & we’re building it at Vellum.

All · Jul 10, 2025 · 7 min
How GravityStack Cut Credit Agreement Review Time by 200% with Agentic AI

Helping a leading financial institution speed up legal reviews, without compromising quality.

Customer Stories · May 30, 2025 · 4 min
Reintroducing Vellum for 2025

We’re simplifying the complex world of AI development for teams of all sizes.

Guides · Oct 10, 2024 · 5 min
LLM Evaluation: Key Metrics and Strategies for Every Use Case

Learn how to use guardrails and online/offline evaluation metrics for various LLM use cases.

Guides · Sep 17, 2024 · 12 min
When should I use function calling, structured outputs or JSON mode?

Learn how and when to use JSON mode, structured outputs, and function calling in your AI application.

Guides · Sep 6, 2024 · 6 min
Announcing Vellum VPC

Vellum now offers VPC installations for secure AI development in your cloud, keeping data private and compliant.

Guides · Aug 27, 2024 · 3 min
What is Required for a Reliable AI System?

Learn the key strategies and tools for building production-ready AI systems.

Guides · Jun 4, 2024 · 11 min
How Drata Collaborates on AI Workflows with Vellum

Learn how Drata used Vellum to quickly validate AI ideas and speed up AI development.

Customer Stories · Apr 16, 2024 · 5 min
How Should I Manage Memory for my LLM Chatbot?

Tips for using memory effectively in your LLM chatbot.

Guides · Feb 14, 2024 · 6 min
The Four Pillars of Building LLM Applications for Production

Learn how successful companies develop reliable AI products by following a proven approach.

Guides · Jan 29, 2024 · 7 min
Miri: Collaboratively building a chatbot in production with Vellum

How Miri built a powerful chat experience using Vellum's platform.

Customer Stories · Oct 13, 2023 · 5 min
Why is collaborating on Prompt Engineering so difficult?

Collaborating with colleagues to test prompts yields good results, but it's challenging.

Guides · Sep 27, 2023 · 6 min
Should I use Prompting, RAG or Fine-tuning?

RAG vs. fine-tuning vs. prompt engineering: learn how to pick the best option for your use case.

Guides · Aug 31, 2023 · 6 min
OpenAI vs. Anthropic vs. Google: A latency comparison

We ran an analysis comparing the latency of OpenAI, Anthropic, and Google. Here are the results!

Model Comparisons · Aug 24, 2023 · 5 min
How Left Field Labs was able to prototype fast and improve collaboration

Learn how Left Field Labs used Vellum for LLM prompt versioning, evaluation, and monitoring in production.

Customer Stories · Aug 9, 2023 · 4 min
Fine-tuning open source models: why is it relevant now?

Why fine-tuning is now relevant with open-source models.

Guides · Jul 20, 2023 · 7 min
Announcing our seed round

We've raised $5m to double down on our mission to help companies build production use cases of LLMs.

Product Updates · Jul 13, 2023 · 2 min
Encore increased eng productivity 3x when working with LLMs

If you’re versioning prompts in Jupyter notebooks or Google Docs and running custom scripts for testing, you need to read this.

Customer Stories · Jul 6, 2023 · 4 min
My prompt is in production: now what should I do?

Tips on how to monitor your in-production LLM traffic.

Guides · Jun 19, 2023 · 5 min
Testing LLM application features - before & after production

Tips for experimenting with your LLM-related prompts.

Guides · Jun 12, 2023 · 5 min
Vellum <> LlamaIndex Integration

Details about how to best leverage the Vellum <> LlamaIndex integration.

Product Updates · Jun 5, 2023 · 6 min
Our thoughts on working with Google's LLM: PaLM

Compare model quality across OpenAI's GPT-4, Anthropic's Claude, and now Google's PaLM in our platform.

Product Updates · May 10, 2023 · 3 min
Great (and not so great) use cases of Large Language Models

Despite high potential, LLMs are not a one-size-fits-all solution. Choosing the right use case for LLMs is important.

Guides · Feb 27, 2023 · 4 min
When to use fine-tuning?

Fine-tuning can provide significant benefits in cost, quality & latency when compared to prompting.

Guides · Feb 7, 2023 · 5 min
Announcing Vellum

We’re excited to publicly announce the start of our new adventure: Vellum.

Product Updates · Feb 2, 2023 · 4 min