LLM Prompting and the State of the GenAI Union (as told by Andy B.)

If your LLM-powered solution isn’t working, odds are you can’t fix it quickly or cheaply enough by training, retraining, or fine-tuning the model. Messing with the prompts is the fastest, cheapest, and easiest way to see a performance lift in your use case.

In any case, all prompting techniques should be exhausted before you head down the long and sticky road of model training.

If that sounds overwhelming, don’t worry - we have an upcoming feature called Auto-Prompting, where you give us your lovingly crafted baby prompt, and we hand you back full-grown prompt templates with cutting-edge techniques.

That's right, folks: we write your prompts for you. To be first in line for access, make sure you have a project in our workbench with at least 3 models and 3 prompts being tested.

How familiar are you with prompting techniques?

Have you tried?

  • Chain of Thought (see this prompting paper from 2022; a minimal sketch follows this list)
  • Chain of Density (see this prompting-for-summarization paper from September 2023)
  • Skeleton of Thought (see this paper from May 2023)
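
To make the first of these concrete, here's a minimal Chain-of-Thought sketch in Python: you show the model one worked example with its intermediate reasoning, then ask it to reason the same way on your real question. The cafe and ticket questions below are invented purely for illustration, and the resulting string can go to whatever model you're benchmarking.

```python
# Minimal Chain-of-Thought prompt (one-shot): the worked example demonstrates
# step-by-step reasoning, nudging the model to "show its work" on the new question.
# The questions are made up for illustration only.
cot_prompt = """Q: A cafe sells coffee for $3 and muffins for $2. Alex buys 2 coffees and 3 muffins. How much does Alex spend?
A: Let's think step by step. 2 coffees cost 2 x $3 = $6. 3 muffins cost 3 x $2 = $6. So Alex spends $6 + $6 = $12. The answer is $12.

Q: A train ticket costs $14 and a bus ticket costs $5. Sam buys 1 train ticket and 4 bus tickets. How much does Sam spend?
A: Let's think step by step."""

print(cot_prompt)  # paste into the workbench, or send via any LLM client you already use
```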

Industry Hype and Hot Takes

  • Did you catch this paper from Apple earlier this month? TL;DR: multi-billion-parameter LLMs (which used to count as large but are now small) are coming to mobile devices. Will they replace the low-latency “tiny” models that do things like autocomplete? TBD. P.S. Apple also built its own LLM, called Ferret, and we're integrating it!
  • Remember the scene in Interstellar where they adjust comedy settings on TARS? Weird flex, but someone came up with a technique to train your LLM to reflect a specific Myers-Briggs Type. Like the movie but way less cute. Read the paper.
  • Understanding depth is a big problem in computer vision (listen to my silly podcast about why calorie-estimation computer vision apps are awful), and this trending paper seems to have made incredible progress!

Verta Workbench Updates and Sneak Peeks

  • Did you know that you can turn any model/prompt combination on your leaderboard into an endpoint and serve it to your app with Verta? We are, after all, an MLOps company. Our founder built the ML platform at Twitter and Nvidia - now he can build yours too! Instructions.
  • GPT-4 is back, with limited free credits per user. Add it to your leaderboard and see if it’s worth the $$$$.
  • Gemini Pro is available to test and add to your leaderboard - warning: this model returns some bizarre outputs when it doesn’t like the prompt. Reach out to support if it’s giving you gibberish back, and we’ll help you tweak your prompts.
  • New Models Coming Soon:
    • Mixtral by Mistral, oh la la
    • Eagle 7B - “the strongest multi-lingual model in open source today”; we love that it speaks Vietnamese
  • Some features on our roadmap that will be available to trusted partners first:
    • LLM Model Distillation - we help you hit an open-source LLM with a shrink ray, and you get the same performance on your task for 1/100th of the cost and time (a rough sketch of the idea follows this list).
    • Auto-Prompting - because writing good prompts is hard
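
For the curious: here’s an illustrative (and heavily simplified) sketch of what knowledge distillation does under the hood - not our implementation. A small student model is trained to mimic a large teacher’s output distribution; the `teacher`, `student`, `batch`, and `optimizer` names are placeholders for whatever models and data you’re working with.

```python
# Illustrative knowledge-distillation training step (generic, not Verta-specific).
# Assumes `teacher` and `student` are torch.nn.Module models that return logits,
# and `batch` is a tensor of already-tokenized inputs.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, batch, optimizer, temperature=2.0):
    # Soft targets from the big, frozen teacher
    with torch.no_grad():
        teacher_logits = teacher(batch)

    student_logits = student(batch)

    # Standard distillation loss: KL divergence between temperature-softened distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice you’d also blend in the regular task loss and run this over your own data; the point is that the student never needs the teacher’s size, only its behavior on your task.
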
Want to give Verta a try? You can sign up for free today.
