What even is a Prompt Engineer?


My first attempts at prompting were laughable. I took a real interview transcript, fed it into ChatGPT, and said: "Summarize the key insights." The result? A garbled mess that missed the point entirely. After lots and lots of iterations, I eventually built, with some friends, an AI-powered tool to extract “aha” moments from customer interviews for a hackathon.

It was this hackathon project that caught the attention of the founders of Relevance AI.

Aha Moment Extractor Project: Built by Henry Badgery, Kevin Zhou, Richard Eve, and James Entwisle

I've now spent thousands of hours 'prompt engineering'—crafting instructions to get the AI to do what I want.

Turns out, prompting AI is less like coding and more like cross-cultural communication. Great prompts set the stage with motivation and clear direction, not just commands.
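
To make that concrete, here's a minimal sketch (the wording is my own illustration, not lifted from the hackathon project) of the difference between a bare command and a prompt that gives the model a role, a motivation, and clear direction:

```python
# Illustrative only: a bare command versus a prompt that sets the stage.
# The wording is hypothetical, not from the actual hackathon project.

bare_prompt = "Summarize the key insights."

framed_prompt = """You are a product researcher reviewing a customer interview.
Your goal is to surface the 'aha' moments: points where the customer reveals
a real pain, an unmet need, or a surprising workaround.

For each moment, quote the relevant line from the transcript and explain in
one sentence why it matters to the product team.

Interview transcript:
{transcript}
"""
```

The second prompt tells the model who it is, why the task matters, and what a good answer looks like. In my experience, that stage-setting is what turns a garbled summary into something useful.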

That said, writing prompts for a consistent output for a commercial-grade LLM-powered agent, like our Business Development Rep (BDR) agent, is a completely different beast. It requires a more nuanced and tailored approach.

The Art and Science of Prompt Engineering: Building the BDR Agent

The first flagship agent we set out to build was the BDR agent—an agent that could research, prospect, qualify, handle objections, discuss options with a prospect, and actually book meetings for sales teams.

Our first attempt was a step-by-step chained workflow: tons of prompts chained together and governed by IF/ELSE conditions. We felt this was the only way to ensure steerability, which we knew was non-negotiable for our future customers.
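
As a rough illustration (the function and conditions here are hypothetical, not our actual workflow), that approach looked something like this:

```python
# Hypothetical sketch of the chained, IF/ELSE-style workflow (not our real code).
# Each step is its own prompt, and hard-coded branches decide what happens next.

def run_bdr_chain(lead: dict, llm) -> str:
    research = llm(f"Research this company and summarize what they do: {lead['company']}")

    verdict = llm(
        "Based on this research, is the lead qualified for our product? "
        f"Answer YES or NO.\n\n{research}"
    )

    if "YES" in verdict.upper():
        return llm(
            f"Write a short, personalized outreach email to {lead['name']} "
            f"using this research:\n\n{research}"
        )
    else:
        return llm(
            f"Write a polite note letting {lead['name']} know we're probably not a fit."
        )
```

Every new branch meant another prompt and another condition to maintain, which is exactly where the iteration speed broke down.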

Ultimately, it became pretty clear that this approach wasn't going to work. From an R&D perspective, the iteration cycle was too slow. While the outcome would likely be consistent, it would also be distinctly average and not aligned with our vision.

At the time, we had a single agent builder, and the idea was that we could equip this agent with AI tools. AI tools can be thought of as abilities that the agent can use. For example, an AI tool could be the ability to access a list of case studies to choose the right one for the prospect or the ability to extract information from a CRM.
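
A minimal sketch of that idea (tool names, descriptions, and data here are hypothetical, not Relevance's actual tool definitions) might look like:

```python
# Hypothetical sketch of AI tools: small, well-described abilities the agent
# can call on. The docstrings matter, because the agent reads them when
# deciding which tool to use.

def find_case_study(industry: str) -> str:
    """Return the most relevant case study for a prospect in this industry."""
    case_studies = {
        "fintech": "Case study: faster onboarding for a digital bank",
        "retail": "Case study: lifting repeat purchases for an online retailer",
    }
    return case_studies.get(industry, "General product overview")


def lookup_crm_contact(email: str) -> dict:
    """Fetch what we already know about a contact from the CRM."""
    # In practice this would call the CRM's API; here it's stubbed out.
    return {"email": email, "stage": "new", "last_contacted": None}


tools = [find_case_study, lookup_crm_contact]
```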

We made the tools very granular, so the agent had a specific tool for everything it needed to be able to do: its abilities. Although this approach was brilliant in helping us break down and troubleshoot each ability to make sure it worked, it created a new problem. We now had over 20 tools the agent could choose from, and it struggled to pick the right tool for the job, leading to inconsistencies and mistakes.

The key was reminding ourselves that an AI agent's context memory is tiny compared to a human's. We had really detailed system prompts describing the agent's specialism and goal, but it couldn't prioritize well or consistently because the task was too large for its limited context memory.

We needed to break it down further; we needed a multi-agent system. This way, we could split the sales development process into distinct tasks that an AI agent could reliably perform. Fortunately, our colleagues were already building this feature in Relevance as part of our overarching vision, which let us focus on the breakdown itself.

After lots and lots of first-principles thinking, iterations, edge-case testing, and stress testing it ourselves, we finally ended up with the following team structure:

  • BDR Manager: Receives the lead or task and delegates it to the right team.
  • Sales Strategy Team: Conducts prospect research, crafts messages and value propositions, and checks calendar availability.
  • Inbox Manager Team: Manages the inbox, replies to inquiries, and books meetings.
  • Follow-up Team: Schedules follow-up tasks and ensures timely responses.
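
A minimal sketch of that delegation pattern (the prompts and routing logic are simplified and hypothetical, not our production setup) looks something like this:

```python
# Hypothetical sketch of the multi-agent structure: a manager that delegates
# each incoming lead or task to one specialist team with a narrow goal.

TEAMS = {
    "sales_strategy": "Research the prospect, craft the message and value prop, check calendar availability.",
    "inbox_manager": "Manage the inbox, reply to inquiries, and book meetings.",
    "follow_up": "Schedule follow-up tasks and make sure nothing goes unanswered.",
}

def bdr_manager(task: str, llm) -> str:
    """Decide which specialist team should own an incoming lead or task."""
    routing_prompt = (
        "You are a BDR manager. Assign this task to exactly one team.\n"
        + "\n".join(f"- {name}: {goal}" for name, goal in TEAMS.items())
        + f"\n\nTask: {task}\nAnswer with the team name only."
    )
    team = llm(routing_prompt).strip()
    return team if team in TEAMS else "sales_strategy"  # fall back to a safe default
```

Each team then only has to worry about its own slice of the process, which keeps every prompt comfortably inside the model's limited context.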


Learnings and reminders

  • Breaking down the process into very discrete chunks is the only way to get something that works consistently with an LLM. This approach is also essential to the iterative development process, as it allows you to easily identify areas that need improvement and avoid getting stuck playing whack-a-mole.
  • Every agent needs a specific goal with clear instructions. Agents work differently than we do and have a very limited context memory compared to a human's. Multi-agent systems are really the only way to get meaningful, human-like tasks done on autopilot.
  • Working with multi-agent systems requires a completely different way of thinking for people who are used to working with LLM chains. You have to consider that the output is going to another agent, which takes some getting used to. But once you're comfortable with this approach, it eliminates the mental load of having to understand what happens in a very long chain. So you can do way more complex things way faster.
  • The above is true, except when it’s not. A core skill of a prompt engineer is to test assumptions constantly. Every new model behaves differently, and some processes that seem complicated can turn out to be quite simple for LLMs. It really is the wild wild west out here.

A Day in the Life

At Relevance, my day is exceptionally varied. I work across all teams:

  • Some days I’m working on product development
  • Some days I’m helping out some customers who need a hand
  • Some days I’m working on internal projects where we’re looking to use agents and LLMs to help us out.
  • Some days I’m writing articles like this to share knowledge with the rest of the team.

It’s a bit of a theme, but communication is probably the most important skill. You end up working with a lot of different people from different backgrounds.

The Path to Becoming a Prompt Engineer

If conversational AI is the future (and I believe it is), then prompt engineering is a ticket to the front row. Demand for this skill is skyrocketing, with folks from all backgrounds hustling to get up to speed. The key traits for success are curiosity, creativity, tenacity, and a love of experimentation.

The best way to start is to dive in headfirst. Get your hands dirty with tools like ChatGPT. Join communities to learn from other prompt engineers (find them on Twitter and Discord). Study how different prompts lead to radically different outputs. Seek out problems in your own work that could be turbocharged with AI—and start hacking.

Most importantly, embrace the role of teacher. Our AIs don't truly understand yet; we must break down our knowledge to a level they can grasp. In the process, we make that knowledge clearer to ourselves. The feedback loop between human and machine will be the story of the coming decades. And prompt engineers will be the ones steering it toward breakthroughs.

This journey won't always be smooth. Teaching AIs, and being changed by them in return, brings thrilling highs but also valid fears and ethical quandaries.

March 28, 2024
Henry Badgery