Prompt engineering — the practice of improving your prompts to AI models such as ChatGPT — may sound complicated, but it's easier than you think. Here are five tips to get you started from Advisory Board's Thomas Seay.
A confession: I hate the term “prompt engineering.”
In case you aren't familiar with it, the phrase refers to techniques used to improve the outputs of AI models such as ChatGPT.
My problem is that “prompt engineering” sounds really hard — like “biomedical engineering” or “computer engineering,” an obscure wizardry that requires a graduate degree to master.
In reality, though, prompt engineering just means “clearly asking an AI for what you want.” Here's how to do it.
The most important thing to remember in writing effective prompts is that when you fire up a large language model (LLM) such as ChatGPT, your conversation begins with a blank slate.
The AI knows nothing about you. It doesn't remember your past conversations. It can't use Google to learn more about your request. You might envision it as a cheerful, well-educated, capable intern who lives on Mars and is receiving your message via interplanetary fax.
Let's imagine, then, that I fax my new Martian intern a healthcare news article and say, “Summarize this for me.” Will their response be what I want?
Probably not!
Maybe the intern will imagine I'm a healthcare CEO looking for a dense, executive-level summary. Maybe they'll think I'm a doctor who needs a patient-facing explanation. Or maybe they'll figure I'm a middle-school student working on a class project.
The reply from Mars might not be bad, exactly, but it's unlikely to be just what I'm looking for.
Viewed from this perspective, “prompt engineering” just means reframing your requests so your Martian intern understands you more clearly and responds more effectively.
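If you reach an LLM through an API rather than a chat window, the blank slate is literal: each request contains only the messages you include in it, and nothing else. Here's a minimal sketch of that idea, assuming OpenAI's Python client and a placeholder model name; the same pattern applies to Writer.com, Anthropic, and other providers' APIs.

```python
# Minimal sketch of a stateless LLM call (OpenAI's Python client used as an
# illustration; the model name below is an assumption, not a recommendation).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; swap in whichever model you use
    messages=[
        # Everything the model knows about you and your task must appear here.
        # There is no memory of earlier requests or conversations.
        {"role": "user", "content": "Summarize this healthcare news article for me: ..."},
    ],
)
print(response.choices[0].message.content)
```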
There are few hard-and-fast rules for prompt engineering, just as there are few strict rules for framing your requests to human interns. But here are a few practices that often work well. (The sample outputs below come from Writer.com's LLM, but these approaches work equally well with ChatGPT, Bard, Claude, and other LLMs.)
1. Give examples of your desired output. This is often the easiest way to improve an AI's outputs, since even one well-curated example carries a great deal of implicit information about your goals.
Let's say you're reading a report on healthcare reform, and you run across an unfamiliar term: “MU.” You might ask:
Basic prompt:
Define “MU.”
AI output:
MU is an abbreviation that stands for “mutual understanding.” It is a term that is often used to describe a situation in which two or more parties have a common understanding of each other's points of view and intention.
That's a fine answer … for somebody. But it's not how “MU” is used in health reform.
To cue the AI to think about your problem in the right way, you might give it an example definition of another healthcare term (this one comes from Advisory Board's Healthcare Jargon Dictionary):
Better prompt:
Example Term: MIPS
Stands for: Merit-Based Incentive Payment System
Definition: One of two new payment tracks under MACRA that scores providers on four categories — Quality, Resource Use, EHR Use, and Clinical Practice Improvement — which will be used to calculate provider bonuses or penalties.
Term: MU
AI output:
Stands for: Meaningful Use
Definition: A program created by the Centers for Medicare and Medicaid Services (CMS) to encourage healthcare providers to adopt and use certified Electronic Health Records (EHRs) in order to improve patient care. Providers must meet certain standards in order to qualify for incentive payments.
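If you're prompting through an API, this example-first technique (often called few-shot prompting) works the same way: you paste the worked example into the prompt ahead of the term you actually want defined. A rough sketch, again assuming OpenAI's Python client and a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()

# Few-shot prompting: show the model one worked example (MIPS), then ask
# for the term you actually care about (MU) in the same format.
few_shot_prompt = """\
Example Term: MIPS
Stands for: Merit-Based Incentive Payment System
Definition: One of two new payment tracks under MACRA that scores providers on
four categories (Quality, Resource Use, EHR Use, and Clinical Practice
Improvement), which will be used to calculate provider bonuses or penalties.

Term: MU
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)
```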
2. Ask the AI to adopt the role of a specific expert. You might ask your LLM to act as a marketing executive at a Fortune 500 company, a skilled and patient high school teacher, or a Buzzfeed listicle author, depending on your use case.
Imagine I'm planning for Advisory Board's 2023 Clinical Innovation Summit and want an LLM to help brainstorm other names for the event. A straightforward request might look like this:
Basic prompt:
Suggest a different title for this event: 2023 Clinical Innovation Summit
AI output:
2023 Clinical Innovations Forum
That scarcely qualifies as a new idea at all. Here's a way to get better results:
Better prompt:
Act as a skilled, creative event marketer. Suggest a different title for this event: 2023 Clinical Innovation Summit
AI output:
2023 Clinical Innovation Summit: Unlocking the Future of Healthcare
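If you're working through an API, the conventional place for this kind of role instruction is the system message, which sets the model's persona before your actual request arrives. A hedged sketch, again using OpenAI's Python client as the example:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        # The system message assigns the expert role; the user message carries the task.
        {"role": "system", "content": "Act as a skilled, creative event marketer."},
        {"role": "user", "content": "Suggest a different title for this event: "
                                    "2023 Clinical Innovation Summit"},
    ],
)
print(response.choices[0].message.content)
```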
3. Explain how the AI's reply will be used. For instance, you might say that you're writing an email marketing newsletter, a social media post, an executive briefing, or a job posting.
Basic prompt:
Explain value-based care in one sentence.
AI output:
Value-based care is a healthcare delivery model that focuses on providing quality care while controlling costs.
Better prompt:
Explain value-based care in one sentence for a Twitter post.
AI output:
Value-based care focuses on providing quality patient care while managing costs and improving population health. #ValueBasedCare
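If you find yourself typing the same context over and over, it can help to fold the intended use into a small helper so every request carries it automatically. The helper below is purely illustrative; the function name and prompt wording are mine, not a standard API.

```python
from openai import OpenAI

client = OpenAI()

def explain_for(use_case: str, request: str) -> str:
    """Send a request that states how the reply will be used.

    Illustrative only: the function and prompt wording are assumptions,
    not a standard API. Adjust them to fit your own workflow.
    """
    prompt = f"{request} The reply will be used in {use_case}."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(explain_for("a Twitter post", "Explain value-based care in one sentence."))
```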
4. Ask the AI to think step-by-step. This one's less intuitive, but it can be surprisingly powerful.
To understand why it works, recall that LLMs essentially work by generating their output one word (or, more precisely, one token) at a time. If you ask them a complicated question and demand an immediate answer, they're forced to jam a lot of “thinking” into just a few words — often leading to bad results.
If you instead ask an LLM to think step by step, it tends to break problems into component parts and tackle them one at a time.
Basic prompt:
What is 5,899 + (3,244 x 7)?
AI output:
40,127
Better prompt:
Think step by step. What is 5,899 + (3,244 x 7)?
AI output:
5,899 + (3,244 x 7) =
5,899 + 22,708 =
28,607
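In API terms, “think step by step” is just another instruction prepended to the prompt; there's no special parameter for it. A sketch, with the same assumed client and model as above:

```python
from openai import OpenAI

client = OpenAI()

question = "What is 5,899 + (3,244 x 7)?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        # Prepending the instruction gives the model room to show its
        # intermediate work before committing to a final answer.
        {"role": "user", "content": f"Think step by step. {question}"},
    ],
)
print(response.choices[0].message.content)
```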
5. Ask for multiple responses. There's no magic here, just simple probability: The more answers the AI produces, the more likely one of them will be great.
Basic prompt:
What is a funny name for a pet dog?
AI output:
Fuzzy Wuzzy
Better prompt:
What are 5 funny names for a pet dog?
AI output:
1. Woofy McFuzzles
2. Sir Barks-a-lot
3. Fido McFluff
4. Fluffy McSnuggles
5. Spotty Wagglebottom
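You can ask for several answers in the prompt itself, as above, or, if you're calling an API, request several independent completions in one shot. OpenAI's chat completions endpoint, for example, exposes this through the n parameter; a sketch:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": "What is a funny name for a pet dog?"}],
    n=5,              # generate five independent answers to pick from
    temperature=1.0,  # some randomness keeps the answers from repeating
)
for i, choice in enumerate(response.choices, start=1):
    print(f"{i}. {choice.message.content}")
```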
If you'd like to dive deeper into prompt engineering, you can find plenty of guides online (here's one solid primer). But here's my biggest advice: Experiment, experiment, and experiment some more.
If you give an AI a prompt that produces a poor output, try asking again a different way. Give additional context, examples, and references. Keep trying until the AI “gets” what you're asking for.
Put another way: Have some sympathy for your new Martian intern. It's doing the best it can — but sometimes, it needs a little hand-holding to serve you as effectively as possible.