Case Studies
July 6, 2023

Encore increased eng productivity 3x when working with LLMs

Akash Sharma


If you’re version-controlling prompts in Jupyter notebooks or Google Docs, running custom scripts for testing, and wishing you had more time, you need to read this.

The challenge

Encore is an interactive music app whose mission is to empower artists to make a living from their music. Generative AI plays a key role in their product. Here are some example use cases:

Song lyrics to album cover art

  • Start with song lyrics
  • Run them through text-summarization and idea-generation prompts
  • Run the output through another LLM prompt to create an image prompt
  • Send the image prompt to Stable Diffusion to create cover art for the album (a sketch of this chain follows the list)
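To make the chain concrete, here is a minimal sketch of what such a pipeline can look like in Python. The helper names (run_llm, generate_image) and the prompt wording are illustrative assumptions, not Encore’s or Vellum’s actual code; each placeholder stands in for a single LLM or Stable Diffusion call.

```python
# Hypothetical sketch of a lyrics-to-cover-art prompt chain.
# run_llm() and generate_image() are placeholders for calls to an
# LLM provider and a Stable Diffusion endpoint, respectively.

def run_llm(prompt: str) -> str:
    """Placeholder: call your LLM provider of choice."""
    raise NotImplementedError

def generate_image(image_prompt: str) -> bytes:
    """Placeholder: call a Stable Diffusion endpoint."""
    raise NotImplementedError

def lyrics_to_cover_art(lyrics: str) -> bytes:
    # 1. Summarize the lyrics and extract their themes and imagery.
    summary = run_llm(f"Summarize the themes and imagery in these lyrics:\n{lyrics}")
    # 2. Generate album cover ideas from the summary.
    ideas = run_llm(f"Suggest three album cover concepts based on:\n{summary}")
    # 3. Turn the concepts into a txt2img prompt.
    image_prompt = run_llm(f"Write a Stable Diffusion prompt for:\n{ideas}")
    # 4. Render the cover art.
    return generate_image(image_prompt)
```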

Generative AR with voice control

  • Build custom, immersive AR worlds on iOS devices
  • Use LLMs to enable voice control and integrate multiple txt2image generators (see the sketch after this list)
  • Integrate with lyric prompts to enable complete AR scene generation
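Here, too, is a minimal sketch of the voice-control idea under stated assumptions: transcribe() and run_llm() are placeholders, and the JSON command schema is invented for illustration rather than Encore’s actual format.

```python
import json

# Hypothetical sketch of LLM-based voice control for an AR scene.

def transcribe(audio: bytes) -> str:
    """Placeholder: call a speech-to-text service."""
    raise NotImplementedError

def run_llm(prompt: str) -> str:
    """Placeholder: call your LLM provider of choice."""
    raise NotImplementedError

def voice_to_scene_command(audio: bytes) -> dict:
    utterance = transcribe(audio)
    # Ask the LLM to map free-form speech onto a structured command
    # that the AR engine can execute.
    response = run_llm(
        "Convert this request into JSON with keys 'action', 'object', "
        f"and 'style', and return only the JSON:\n{utterance}"
    )
    return json.loads(response)
```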

Encore has several more AI use cases like these, which make their product so exciting. Artists using Encore thoroughly enjoy these features, but building and iterating on the underlying prompts was extremely time-consuming for the engineering team. The workflow was incredibly manual, spread across Google Docs, Colab notebooks, and GitHub. Given limited engineering capacity (the story of all startups!), Encore’s CEO Jon Gray had to be the primary person responsible for building, iterating on, and testing these long prompt chains.

LLM prompt development was bottlenecked on Jon, and the team couldn’t iterate fast enough. The status quo was untenable, and Encore needed to find a solution. After searching online, Jon signed up for a demo with Vellum.

A snapshot of the workflow that Jon Gray, CEO of Encore, had to follow while building prompts

The solution

After seeing an initial demo, Jon was hooked by how quickly both he and his team would be able to iterate on prompts, both while testing before production and when changing them once in production.

Jon onboarded the team at Encore to Vellum, and within a couple of days the non-engineering team members were able to contribute significantly to the prompt testing and experimentation process. Multiple team members at Encore immediately started using Vellum’s Playground, which Jon called the “killer feature” of Vellum’s platform.

When coming up with a new prompt, the team at Encore now starts with a new sandbox, builds a small 3x3 grid, and starts iterating. They find it an easy, simple, and powerful workflow for iterating on prompts. The team can collaborate on prompts, compare model providers, and run prompts across many test cases, all without writing a single line of code.

Once the prompts are ready for production, Encore’s engineers are only needed for a small integration with Vellum’s endpoints (sketched below). Anyone can go in, see the completions being made, and change the prompts in production without having to involve engineering. The regression testing feature helps them ensure that no existing behavior breaks while changing a prompt.
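As a rough illustration, the integration can be as small as one HTTP call per deployed prompt. The URL, payload shape, auth header, and response field below are assumptions made up for this sketch, not Vellum’s documented API; consult their API reference for the real endpoints.

```python
import requests

# Hypothetical sketch of calling a deployed prompt over HTTP.
# The URL and payload/response shapes are illustrative placeholders.

def generate_completion(deployment: str, inputs: dict, api_key: str) -> str:
    response = requests.post(
        f"https://api.example.com/v1/deployments/{deployment}/execute",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"inputs": inputs},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["completion"]  # assumed response field
```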

An example of how Encore uses Vellum’s platform to iterate on prompts

What’s next?

Encore has been a great partner and has helped push Vellum’s platform forward. For instance, they requested roles and permissions to limit what users can do in Vellum: they wanted some users to be able to edit prompts but not deploy them. Since the Role-Based Access Control features shipped, Encore has enjoyed using the platform even more.

Prompt chaining has been a common request from Encore, and we at Vellum have worked closely with them to design how it might work in the platform. We’re thrilled to have Encore as a partner on this journey.

Want to try it out for yourself?

Vellum has helped dozens of companies improve their internal AI development processes. Sign up here to start exploring the platform for yourself. You’ll also be able to schedule an onboarding call with one of Vellum’s founders, who can provide tailored advice for your use case. We’re excited to see what you and your team build with Vellum next!

About the author

Akash Sharma
Co-founder & CEO at Vellum.ai

Akash Sharma, CEO and co-founder of Vellum (YC W23), is enabling developers to easily start, develop, and evaluate LLM-powered apps. Having talked to over 1,500 people at varying stages of maturity in using LLMs in production, he has acquired a unique understanding of the landscape and is actively sharing his learnings with the broader LLM community. Before starting Vellum, Akash completed his undergrad at the University of California, Berkeley, then spent 5 years at McKinsey’s Silicon Valley office.