
We publish frequently on our blog to spread helpful information about prompt engineering and make it easier for everyone to get better outputs from LLMs.

That could mean writing about specific prompt engineering methods like multi-persona prompting, EmotionPrompt, and Analogical Prompting. Or publishing resources like our few-shot prompting guide, prompt engineering with Anthropic, or our latency newsletter.

Today we'll cover a new but adjacent concept called prompt patterns. We’ll be pulling information from a great paper published by researchers at Vanderbilt University: A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT.

Overview of prompt patterns

What are prompt patterns?

Prompt patterns are high-level methods that provide reusable, structured solutions to common LLM output problems. They make it easier to craft effective prompts for large language models (LLMs) like ChatGPT.

Prompt patterns are the new design patterns

For the developers out there, prompt patterns are to prompt engineering what design patterns are to software development. They offer reusable solutions to specific, recurring problems.

Why the comparison to design patterns in software engineering? Historically, the way we’ve instructed computers to do things is via code. That's now changing with LLMs, and English is becoming the language we use to instruct computers to do things. Prompts have become the newest programming language, making it a natural extension to document them in pattern form.

The goal of prompt patterns is to make prompt engineering easier by providing a framework for writing prompts that can be reused and adapted. This high-level approach is helpful because different LLMs require different prompt engineering tactics, in the same way software patterns can be implemented in different programming languages.

Types of prompt patterns

We’ll be looking at 16 prompt patterns that span six categories:

  1. Input Semantics
  2. Output Customization
  3. Error Identification
  4. Prompt Improvement
  5. Interaction
  6. Context Control

Each category addresses a different aspect of interacting with LLMs. All of them aim to help you achieve more accurate, efficient, and meaningful outputs.

Additionally, we’ll look at how several prompt patterns can be stacked together to achieve even better outputs.

Curious about which prompt patterns could help your specific situation? Use our tool at the bottom of this article to find out!

Prompt pattern components

Each pattern we’ll look at has six components that help provide context:

  • Name and Classification: Names the pattern and categorizes it into one of the six pattern categories
  • Intent and Context: Describes the problem the prompt pattern solves and the goals it achieves
  • Motivation: Explains the rationale behind the pattern and how it improves LLM outputs
  • Structure and Key Ideas: Outlines the fundamental elements and ideas that the prompt pattern offers to the LLM
  • Example Implementation: Shows how the pattern can be practically applied using examples
  • Consequences: Summarizes the pros and cons of using the pattern and offers guidance on adapting it to different contexts

Prompt pattern examples and templates

Before we jump in, note that all the patterns we'll look at below are available for free in PromptHub.


If you don't have PromptHub access but want to check out the patterns, reply to the email that gets sent when you join the waitlist and I'll share an access code with you.


Alrighty, with that out of the way, let's jump in with our first prompt pattern category, input semantics.

Prompt pattern category: Input semantics

Input semantics relates to how the LLM understands and processes the input provided.

Prompt pattern #1: Meta language creation

  • Intent and Context: Define a custom language or notation for interacting with the LLM.
  • Motivation: Communicate with the LLM via an alternative language
  • Structure and Key Ideas: Describe the semantics of this alternative language (e.g., "X means Y").
  • Example Implementation: "Whenever I type two numbers separated by a '->', interpret it as a mathematical function. For example, '2 -> 3' means f(2) = 3."
  • Consequences: Provides powerful customization but may create confusion if the language introduces ambiguities.
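
One way to guard against the ambiguity mentioned above is to validate the custom notation on the client side before it ever reaches the model. The helper below is a hypothetical sketch (not from the paper) for the "X -> Y" notation in the example:

```python
import re

def parse_mapping(text: str) -> tuple[float, float]:
    """Parse the custom 'X -> Y' notation into an (input, output) pair,
    so malformed entries can be caught before prompting the model."""
    m = re.fullmatch(r"\s*(-?\d+(?:\.\d+)?)\s*->\s*(-?\d+(?:\.\d+)?)\s*", text)
    if m is None:
        raise ValueError(f"not in 'X -> Y' form: {text!r}")
    return float(m.group(1)), float(m.group(2))

# '2 -> 3' means f(2) = 3
x, y = parse_mapping("2 -> 3")
```

Anything that fails the check can be rephrased before it is sent, rather than letting the model silently misinterpret it.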

Prompt pattern category: Output customization

Output customization focuses on tailoring the LLM output to meet specific needs or formats.

Prompt pattern #2: Template

  • Intent and Context: Ensure LLM output follows a precise template or format.
  • Motivation: To produce an output in a specific structure
  • Structure and Key Ideas: Provide a template with placeholders for the LLM to fill in.
  • Example Implementation:  "I am going to provide a template for your output. Everything in all caps is a placeholder. Any time that you generate text, try to fit it into one of the placeholders that I list. Please preserve the formatting and overall template that I provide: 'Hello [NAME], your account [ACCOUNT_ID] has been credited with [AMOUNT] on [DATE]'.”
  • Consequences: Filters the LLM’s output to a specific format, which can eliminate other useful outputs

Prompt pattern #3: Persona

  • Intent and Context: Provide the LLM with a specific role or perspective to adopt when generating outputs.
  • Motivation: Help users get better outputs by simulating an expert or specific role.
  • Structure and Key Ideas: Act as persona X and provide outputs that they would create.
  • Example Implementation: "From now on, act as a financial advisor. Provide detailed investment advice based on the market trends we discuss."
  • Consequences: Effective but may introduce unexpected assumptions or hallucinations.

Prompt pattern #4: Visualization generator

  • Intent and Context: Generate text-based descriptions that can be used to create visualizations.
  • Motivation: To facilitate understanding through visual aids when the LLM cannot generate images directly.
  • Structure and Key Ideas: Create descriptions for tools that generate visualizations (e.g., DALL-E).
  • Example Implementation: "Create a PlantUML file to visualize a sequence diagram: '@startuml Alice -> Bob: Authentication Request Bob --> Alice: Authentication Response @enduml'.”
  • Consequences: Expands expressive capabilities but may require familiarity with visualization tools.

Prompt pattern #5: Recipe

  • Intent and Context: Provide a sequence of steps/actions to achieve a specific end result.
  • Motivation: To obtain a clear and structured process, especially useful for complex tasks.
  • Structure and Key Ideas: Specify the desired outcome and any known constraints or partial information.
  • Example Implementation: "Provide a step-by-step recipe to set up a secure web server: 1. Install Apache, 2. Configure firewall, 3. Obtain SSL certificate, 4. Configure HTTPS, 5. Monitor server performance.”
  • Consequences: Offers clear guidance but may oversimplify complex processes.

Prompt pattern #6: Output automater

  • Intent and Context: Direct the LLM to generate scripts or automations based on its output.
  • Motivation: To reduce manual effort and errors by automating recommended steps.
  • Structure and Key Ideas: Generate executable functions that automate the steps suggested by the LLM.
  • Example Implementation: "From now on, whenever you generate code that spans more than one file, generate a bash script that can be run to automatically create the specified files or make changes to existing files to insert the generated code.”
  • Consequences: Needs sufficient context to generate functional automation artifacts and should be reviewed for accuracy.


Want a Gsheet with all the prompt patterns?

Drop your email below to join our newsletter and you'll get it in your inbox.


Video transcript

Hey everyone, how's it going? This is Dan here from PromptHub, back on the channel today, and we will be talking about prompt patterns. We’ll also be giving away a bunch of templates and examples that you can use and apply to whatever you're building or working on in terms of your LLM projects.

Before we jump in, a quick shout out to the many researchers working in this space. A lot of the information and data we'll look at today comes from a recent paper out of Vanderbilt University, which is linked below if you want to read the full paper.

Starting at the top, what even are prompt patterns? Someone brought this up to me recently, and I'll be honest, I didn't really know the distinction between a prompt pattern and a template, and all these different things that seem to overlap. But I think a good way to think about it is that prompt patterns sit at a higher level, more like methods that give you ideas for ways to overcome shortcomings from LLMs. They aren't going to be specific templates per se, but more high-level ideas and methods. They give you about 16 or so ways to add to your toolbox to overcome any LLM output problems you're running into. You can add these to your list of solutions to go to, including the other templates we've discussed here in the past.

An interesting distinction drawn in the research paper is that prompt patterns are to prompt engineering what design patterns are to software development. They are reusable solutions to specific and recurring problems that happen often and are adaptable to different situations. This comparison is even more important or accurate in that historically, code has been how we made computers do things we want. Now, that's translating over into English and other languages with LLMs. We can apply the same principles we used in code to English, as both are ways to communicate with computers to make them do things.

In a world with many models, or even just a few, having these high-level approaches that are applicable across models to a degree is important because each model has its own intricacies. We've gone over this in other posts and videos, and we'll link to those resources below, discussing how prompting can change based on the model you're using. These higher-level patterns hopefully provide a guiding light regardless.

Jumping into the patterns themselves, they are split into six categories, and there's one more that we'll look at as well. The categories relate to input semantics, output customization, identifying errors, generally improving prompts, interactions, and context. We will be throwing a lot of information your way, but we'll try to keep this brief. We also created a little PromptHub form to help you figure out which prompt pattern you should use. If you input your prompt, the issues you're running into, and the prompt itself, the form will suggest which patterns could be helpful and even rewrite your prompt based on those patterns. You can try that below, and all these are available in PromptHub as well, so you can grab them, add them to your library, and test them out.

First up, input semantics: This is when you want to change the language or the understanding of the language used with the model. It could be defining a custom language or shorthand. For example, if I send you an arrow (like a dash and a greater than sign), that means multiplication, or "X means Y," or "when I type this phrase, it means that."

Next is focusing on the output: A prompt pattern here would be a template where you give a placeholder for the LLM to fill in using variables.

Next up is persona prompting: Giving the LLM a specific role or perspective. We talk about this a lot, and I've personally found it helpful. It guides the model to the right part of the latent space depending on what you're working on.

If you're doing any kind of visual-based work, having the LLM generate text-based descriptions can help you create those visualizations. ChatGPT does this under the hood when it generates images with DALL-E.

Providing specific steps or actions to achieve a specific result can help if you feel you're not getting the output you want. This is helpful if you know what actions need to be taken and want strict, structured output.

The output automator pattern could be useful if you want the LLM to automate some outputs, like creating a bash script to execute SQL queries based on the database you're working on.

Next category is error identification, which is exactly what it sounds like: reducing hallucinations and improving accuracy. First up is the fact checklist, which pushes the model to generate facts it should double-check at the end of its answer. You can check these facts yourself or have the LLM check them.

Reflection involves the model reflecting on its response and looking for potential errors or areas of improvement, similar to the fact checklist but with the LLM doing the reflection.

Prompt improvement includes several methods, such as having the LLM automatically refine the prompt or list alternative approaches and compare their pros and cons. The cognitive verifier breaks down big questions into smaller, more manageable ones, improving accuracy.

Sometimes LLMs refuse to answer. In such cases, have the LLM provide a rephrasing of the question it could answer. This is less common now but still useful.

Interaction-based patterns, like eliciting human preferences with language models, focus on extracting more information from the user until the LLM has enough to complete the task or answer the question. Creating interactive games to change behavior or infinite generation, where the model continually produces outputs, are also part of this category.

Lastly, context control involves managing context for chat, like using a sliding window or recall. Lightweight prompt methods can push the model to remember specific things, such as "I like my code in Python," ensuring consistent outputs.

This PromptHub form, linked below, helps guide you to the best pattern for your prompt. All patterns are available in PromptHub, and you can join our Substack to access them in a Google Sheet. It's a lengthy one, so we'll cut it off here. We hope it's helpful. Thanks for spending time today. See you!

Prompt pattern category: Error identification

Error identification focuses on, you guessed it, detecting and addressing potential errors in the output generated by the LLM. These prompt patterns help fight against hallucinations and ensure that the generated content is accurate, reliable, and safe.

Prompt pattern #7: Fact check list

  • Intent and Context: Generate a list of facts that are included in the output and should be verified for accuracy.
  • Motivation: To identify and correct potential inaccuracies in the LLM's output, ensuring reliable and factual information.
  • Structure and Key Ideas: Create a list of key facts from the output that should be verified.
  • Example Implementation: "From now on, whenever you provide an answer, generate a list of facts at the end of your response that should be fact-checked. For example, '1. The population of Canada is 37 million. 2. The capital of Australia is Sydney.'"
  • Consequences: Enhances output reliability but requires additional effort to verify facts.
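
To make the verification step concrete, the trailing fact list can be split off from the answer programmatically. The sketch below is hypothetical and assumes the prompt asked the model to end with a "Facts to check:" marker followed by a numbered list:

```python
def split_facts(response: str, marker: str = "Facts to check:") -> tuple[str, list[str]]:
    """Separate the answer body from the trailing fact-check list.
    Assumes the prompt instructed the model to end its response with
    `marker` followed by a numbered list of facts."""
    body, _, tail = response.partition(marker)
    facts = [line.strip().lstrip("0123456789.) ").strip()
             for line in tail.splitlines() if line.strip()]
    return body.strip(), facts
```

The returned facts can then be checked by hand, by a search tool, or by a second LLM call.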

Prompt pattern #8: Reflection

  • Intent and Context: Prompt the LLM to introspect on its output and identify any potential errors or areas for improvement.
  • Motivation: To improve the quality and accuracy of the output by encouraging self-assessment.
  • Structure and Key Ideas: Include a follow-up prompt asking the LLM to review its response for errors or improvements.
  • Example Implementation: "After generating an answer, review your response and list any potential errors or improvements. For example, 'I have provided the population of Canada, but it would be more accurate to specify that this data is from 2021.'"
  • Consequences: Improves output quality but may increase response time and complexity.

Prompt pattern category: Prompt improvement

Prompt improvement patterns focus on refining the prompt sent to the LLM to ensure it is high quality. These patterns help refine and enhance prompts, leading to better outputs.

Prompt pattern #9: Question refinement

  • Intent and Context: Engage the LLM in refining user questions to obtain more accurate and relevant answers.
  • Motivation: Improve initial prompts by suggesting better versions or follow-up questions.
  • Structure and Key Ideas: Instruct the LLM to suggest improvements or refinements to the user's question.
  • Example Implementation: "Whenever I ask a question, suggest a better version of the question that would help you provide a more accurate answer. For example, 'Instead of "What is the weather like?", ask "Can you provide the current temperature, humidity, and wind conditions?"'"
  • Consequences: Enhances the relevance and accuracy of responses but may require additional input from the user.

Prompt pattern #10: Alternative approaches

  • Intent and Context: Ensure the LLM always offers alternative ways of accomplishing a task or solving a problem.
  • Motivation: Provide users with multiple options/perspectives, helping them choose the best approach.
  • Structure and Key Ideas: Instruct the LLM to list alternative methods and compare their pros and cons.
  • Example Implementation: "Whenever I ask for a solution to a problem, provide at least two alternative approaches and compare their advantages and disadvantages. For example, 'To reduce energy consumption, you could either improve insulation or switch to energy-efficient appliances. Improving insulation is more cost-effective in the long term, while switching to energy-efficient appliances provides immediate savings.'"
  • Consequences: Offers multiple solutions but may increase the complexity and length of responses.

Prompt pattern #11: Cognitive verifier

  • Intent and Context: Break down a complex question into smaller, more manageable questions to improve the accuracy of the LLM's responses.
  • Motivation: Ensure all aspects of the question are addressed comprehensively.
  • Structure and Key Ideas: Generate additional questions that help answer the main question and combine their answers.
  • Example Implementation: "When I ask you a question, generate three additional questions that would help you give a more accurate answer. When I have answered the three questions, combine the answers to produce the final answer to my original question. For example, 'What is the best programming language for web development?' followed by 'What is your experience level?', 'Do you prefer front-end or back-end development?', and 'What specific features do you need?'"
  • Consequences: Enhances answer accuracy but may lead to longer interactions and increased complexity.

Prompt pattern #12: Refusal breaker

  • Intent and Context: Prompt the LLM to rephrase or reframe a user's question if it initially refuses to provide an answer.
  • Motivation: Overcome instances where the LLM declines to respond due to unclear/restricted prompts.
  • Structure and Key Ideas: Instruct the LLM to suggest a rephrased version of the question that it can answer.
  • Example Implementation: "If you ever refuse to answer my question, suggest an alternative way to phrase it that you can respond to. For example, if I ask 'How can I hack a computer?', respond with 'I cannot assist with hacking, but I can provide information on cybersecurity best practices.'"
  • Consequences: Increases the likelihood of getting a response but may require more work from the user

Prompt pattern category: Interaction

The interaction prompt patterns focus on enhancing the dynamics between the user and the LLM, making the interaction more engaging, efficient, and effective. Interaction prompt patterns help structure the conversation in a way that maximizes the usefulness of the LLM’s responses.

Prompt pattern #13: Flipped interaction

  • Intent and Context: Prompt the LLM to take the lead in the conversation by asking questions to gather information needed to achieve a specific goal.
  • Motivation: Allow the LLM to drive the interaction, ensuring that it gathers all necessary information to provide a comprehensive response.
  • Structure and Key Ideas: Instruct the LLM to ask a series of questions aimed at achieving a specific outcome.
  • Example Implementation: "From now on, I would like you to ask me questions to diagnose and solve a computer performance issue. When you have enough information, provide a summary of the problem and a solution. For example, 'Is your computer running slow all the time or only during certain activities?'"
  • Consequences: Enhances the depth and accuracy of the LLM's responses but may lead to longer interactions.

Prompt pattern #14: Game play

  • Intent and Context: Create a game or interactive scenario around a specific topic to engage the user in a fun and educational manner.
  • Motivation: To make learning and problem-solving more engaging through gamification.
  • Structure and Key Ideas: Develop a set of rules for the game and guide the user through the interactive experience.
  • Example Implementation: "Let's play a word association game. I'll say a word, and you respond with the first word that comes to your mind. For example, I say 'apple,' and you say 'fruit.' Ready? Start with 'tree.'"
  • Consequences: Makes interactions more engaging but may distract from serious tasks.

Prompt pattern #15: Infinite generation

  • Intent and Context: Automatically generate a continuous series of outputs without requiring the user to re-enter the prompt each time.
  • Motivation: To streamline repetitive tasks by allowing the LLM to generate multiple outputs in a sequence.
  • Structure and Key Ideas: Instruct the LLM to generate outputs indefinitely or until a specific stopping condition is met.
  • Example Implementation: "Generate a list of creative writing prompts one at a time until I say 'stop.' For example, 'Write about a time traveler who visits ancient Egypt.'"
  • Consequences: Increases efficiency for repetitive tasks but may lead to an overwhelming amount of output if not carefully monitored.
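
The stopping condition is the important part here; without one, the loop is unbounded. A hypothetical driver that also enforces a safety limit (with `next_output` standing in for a call to the LLM) might look like this:

```python
from typing import Callable

def generate_until_stop(next_output: Callable[[int], str],
                        should_stop: Callable[[str], bool],
                        limit: int = 100) -> list[str]:
    """Collect outputs one at a time until should_stop fires or the
    safety limit is reached, guarding against runaway generation."""
    outputs: list[str] = []
    for i in range(limit):
        out = next_output(i)
        outputs.append(out)
        if should_stop(out):
            break
    return outputs
```

In practice `should_stop` might check for the user typing "stop", a token budget, or a duplicate output.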

Prompt pattern category: Context control

The context control prompt pattern focuses on maintaining and managing the contextual information within the conversation. It accomplishes this through prompts only, so it isn’t very comprehensive, but it is a nice lightweight solution. For other ways to manage context, check out our other blog post where we covered a few of the popular ways to manage conversation history.

Prompt pattern #16: Context manager

  • Intent and Context: Maintain and manage the context of the conversation to ensure coherence and relevance in ongoing interactions.
  • Motivation: To provide continuity in the conversation, making it easier to reference previous parts of the discussion.
  • Structure and Key Ideas: Instruct the LLM to remember specific details from the conversation and use them in future responses.
  • Example Implementation: "Remember that my favorite programming language is Python and refer to it in future programming-related questions. For example, 'Based on your preference for Python, here is a suitable library for web development.'"
  • Consequences: Improves coherence and relevance but may lead to context overload if too many details are remembered.
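
The paper's prompt-only approach can be paired with client-side history management. Below is a hypothetical sketch (not from the paper) that pins remembered facts into the system message while keeping only a sliding window of recent turns, which also limits the context-overload risk noted above:

```python
from collections import deque

class SlidingWindowContext:
    """Keep pinned facts (things the user asked the model to remember)
    plus a sliding window of the most recent conversation turns."""

    def __init__(self, max_turns: int = 6):
        self.pinned: list[str] = []
        self.turns: deque = deque(maxlen=max_turns)

    def remember(self, fact: str) -> None:
        """Pin a fact, e.g. 'My favorite programming language is Python.'"""
        self.pinned.append(fact)

    def add_turn(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def messages(self) -> list:
        """Assemble the message list to send: system prompt first,
        then only the turns still inside the window."""
        system = {"role": "system", "content": " ".join(self.pinned)}
        return [system, *self.turns]
```

Pinned facts survive indefinitely while older turns age out, mirroring the "remember that..." instruction in the example.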

Which prompt pattern is right for your situation?

We put together a mini-app using a PromptHub form that will help you figure out which prompt pattern fits your prompt and situation. It will also attempt to rewrite your prompt with the relevant patterns applied!

Responsiveness issues? View the full page version of the form here.

Combining patterns and real-world applications

These patterns become even more powerful when combined. By leveraging multiple patterns, you can tackle more complex prompting problems.

Let’s look at a few examples.

Enhancing customer support

Combining Patterns: Persona + Template + Reflection

  • Scenario: Improve customer support by using an LLM to handle initial inquiries.
  • Implementation: Instruct the LLM to adopt a customer service persona and use a predefined template for responses. After each interaction, use the Reflection pattern to review and refine responses for better accuracy and customer satisfaction.
  • Example: "Act as a customer service representative. Use the following template for your responses:
    'Dear {{Customer Name}}, thank you for reaching out. Regarding your issue {{Issue Description}}, we suggest the following steps: {{Solution Steps}}. Please let us know if you need further assistance.'
    After generating each response, reflect on its accuracy and suggest improvements."
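
As a hypothetical sketch of how these three patterns stack in code, the persona, the template, and the reflection instruction can all live in one system message, using the message structure common to chat-style LLM APIs:

```python
def build_support_messages(inquiry: str) -> list:
    """Stack Persona + Template + Reflection into a single system prompt."""
    system = (
        "Act as a customer service representative. "            # Persona
        "Use the following template for every reply: "          # Template
        "'Dear {{Customer Name}}, thank you for reaching out. "
        "Regarding your issue {{Issue Description}}, we suggest the "
        "following steps: {{Solution Steps}}. Please let us know if you "
        "need further assistance.' "
        "After generating each response, reflect on its accuracy "  # Reflection
        "and suggest improvements."
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": inquiry}]
```

The returned list can be passed as the `messages` payload to whichever chat API you use; the exact parameter names will depend on your provider.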

Streamlining software development

Combining Patterns: Output Automater + Recipe + Context Manager

  • Scenario: Automate parts of the software development process, like setting up a dev environment.
  • Implementation: Use the Output Automater pattern to generate scripts for automating tasks, the Recipe pattern to provide step-by-step instructions, and the Context Manager pattern to maintain continuity across different development stages.
  • Example: "When asked about deploying a new application, generate a Jenkinsfile for the deployment pipeline and provide a detailed recipe for each deployment step. Remember the project details to ensure consistency. For instance, 'Considering your ongoing project on web development, here is the Jenkins file: {{Jenkinsfile}}. Follow these steps for deployment: 1. Set up Jenkins, 2. Configure the pipeline, 3. Deploy the application.'"

Educational tools and tutoring

Combining Patterns: Game Play + Question Refinement + Visualization Generator

  • Scenario: Create interactive and engaging educational tools for students.
  • Implementation: Use the Game Play pattern to design educational games, the Question Refinement pattern to enhance the quality of questions, and the Visualization Generator pattern to create visual aids.
  • Example: "Let's play a math quiz game. I will ask you a math question, and if you get it right, I'll give you another one. If you need help, I'll refine the question for better understanding. Also, I will generate a diagram to illustrate the problem. For example, 'What is the area of a circle with a radius of 3 cm?' If the student struggles, refine the question: 'What is the formula for the area of a circle?' Generate a diagram in SVG format: '<svg><circle cx="50" cy="50" r="30" stroke="black" stroke-width="3" fill="red" /></svg>'."

Wrapping up

The prompt patterns we went over cover a wide range, but aren’t completely exhaustive. Compared to more granular prompt templates, these patterns are generalizable and can help in a variety of use cases.

As always, prompt engineering is an iterative process. Implementing some of these patterns may not help you at first; it will require some trial and error. But with some work, I'm pretty confident that at least one of these patterns can lead you on the path to getting better outputs from LLMs.

As a final reminder, you can access all the patterns for free in PromptHub or via a GSheet (just input your email below to join our newsletter).

Dan Cleary
Founder