Hallucinations are more present in Large Language Model (LLM) outputs than anyone in the industry would like to acknowledge.

We’ve discussed a few methods that aim to reduce hallucinations (like "According to..." prompting), and we’re adding another one to the mix today: AutoHint.

The AutoHint method

AutoHint is a framework designed to automate the process of enhancing prompts to account for previous incorrect outputs. It automatically generates a "hint" from those mistakes and adds it to the prompt to steer the model away from similar errors.

It’s like reminding a friend right before an Algebra test about the order of operations (PEMDAS). Except with AutoHint, that hint would be generated automatically, based on your friend’s history of getting questions wrong by forgetting which operation comes first. With that additional hint, your friend will probably score better on the exam.

How AutoHint works in practice

AutoHint is broken down into 4 steps.

  1. Initial Classification: First, the model is guided to produce a bunch of outputs based on a given prompt and a series of inputs. Here's one example:

    Prompt: "Based on the following movie plot description, determine its genre. {{Input}}"
    Input: "A detective is on a mission to save the city from a criminal mastermind."
    Output: "Sounds like a Romance."

    The model will cycle through many inputs and produce many outputs. Some will be right, some will be wrong (see above).
  2. Identification of Incorrect Outputs: Once the outputs are generated, they are evaluated against known correct answers to identify which ones were incorrect.
  3. Hint Generation: From the pool of incorrect outputs, a sample is selected (more on this below). The LLM is then prompted to create a single, general hint that addresses those errors. The goal is to produce a hint that can guide the model toward better classifications in the future. For example:

    Prompt: "Based on the following incorrect classifications, deduce a general hint that can help in future classifications:
    -Plot: 'A detective is on a mission to save the city from a criminal mastermind.'
    Classified as: Romance.
    -Plot: 'A group of astronauts embark on a mission to a distant planet.'
    Classified as: Horror.
    -Plot: 'A young woman discovers she has magical powers and must save her kingdom.'
    Classified as: Documentary."
  4. Refining the Original Prompt: The hint is then appended to the end of the original prompt (the sketch below shows how the four steps fit together).
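
To make the loop concrete, here's a minimal Python sketch of the four steps, assuming the OpenAI Python SDK and GPT-4 (the model used in the paper's experiments). The helper names, labels, prompt wording, and sample size are my own illustrative choices, not taken from the paper.

```python
import random
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # expects OPENAI_API_KEY in the environment

BASE_PROMPT = "Based on the following movie plot description, determine its genre. {plot}"

def ask(prompt: str) -> str:
    """One LLM call; swap in whatever client and model you actually use."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

# Step 1: run the current prompt over a labeled set of inputs.
def classify_all(dataset: list[tuple[str, str]], prompt: str) -> list[tuple[str, str, str]]:
    return [(plot, label, ask(prompt.format(plot=plot))) for plot, label in dataset]

# Step 2: evaluate against the known answers and keep the misses.
# (Simple containment check as a stand-in for real evaluation.)
def find_errors(results: list[tuple[str, str, str]]) -> list[tuple[str, str]]:
    return [(plot, output) for plot, label, output in results
            if label.lower() not in output.lower()]

# Step 3: sample the misses and ask the model for one general hint.
def generate_hint(errors: list[tuple[str, str]], k: int = 3) -> str:
    sample = random.sample(errors, min(k, len(errors)))
    examples = "\n".join(
        f"- Plot: '{plot}'\n  Classified as: {output}" for plot, output in sample
    )
    return ask(
        "Based on the following incorrect classifications, deduce a general "
        "hint that can help in future classifications:\n" + examples
    )

# Step 4: append the hint to the original prompt and re-run.
dataset = [  # toy labeled examples; real runs use a benchmark dataset
    ("A detective is on a mission to save the city from a criminal mastermind.", "Action"),
    ("A group of astronauts embark on a mission to a distant planet.", "Sci-Fi"),
]
errors = find_errors(classify_all(dataset, BASE_PROMPT))
if errors:
    refined_prompt = BASE_PROMPT + "\nHint: " + generate_hint(errors)
```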

I found it interesting that only a sample of the incorrect answers is selected to generate the hint (see the short sketch after this list). The main reasons were:

  • Generalization: Since we are looking to create a broad hint, a randomly selected sample should capture many of the edge cases we need to account for.
  • Avoiding Overfitting: Analyzing every incorrect output might produce overly specific hints.
  • Practicality: Sampling provides insights into the broader dataset without needing to process every data point, making it more manageable and cost-effective.
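
In code, that design choice is essentially one line. Here's a small sketch (the default sample size of 3 is an illustrative choice, not a value from the paper):

```python
import random

def sample_errors(errors: list[tuple[str, str]], k: int = 3) -> list[tuple[str, str]]:
    """Randomly sample the misclassifications before generating a hint.

    A random subset tends to span the common failure modes (generalization),
    keeps any single odd example from dominating the hint (avoiding
    overfitting), and bounds the size and cost of the meta-prompt
    (practicality).
    """
    return random.sample(errors, min(k, len(errors)))
```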

AutoHint template

The framework involves multiple LLM calls, so a single prompt can't encapsulate everything. But we put together a prompt for the Hint Generation step.

You can access the prompt template in PromptHub (here) and try it out right away. If you don't have PromptHub access but want to try it out, reply to the email that gets sent when you join the waitlist and I'll share an access code with you.

[Image: AutoHint prompt template in the PromptHub platform]


Experiments: Setup

The researchers tested AutoHint across 6 tasks, ranging from logical reasoning to linguistic challenges.

They compared its effectiveness against zero-shot and few-shot prompting.

Dataset: BIG-Bench Instruction Induction (BBII)

Model: GPT-4 (via the Azure OpenAI API service)

Baseline: the basic initial prompt, in zero-shot and few-shot form

Experiments: Results

AutoHint increased accuracy in 5 of 6 tasks when compared against both zero-shot and few-shot prompting.

Zero-shot accuracy increase: 6.24%

Few-shot accuracy increase: 8.21%

[Table: AutoHint's performance compared to zero-shot and few-shot prompting]

Addressing Hallucinations

In addition to increasing accuracy, AutoHint reduced hallucinations and non-factual outputs. The generated hint helped the model get correct answers more often and reduced the number of times it went off the rails.

Wrapping up

The AutoHint framework offers a new perspective on how to further optimize LLM outputs and reduce hallucinations. It is slightly involved, but the results speak for themselves.

Its power lies in its ability to automate the entire process, reducing the time required to optimize prompts by hand. It is also versatile, working across a variety of tasks (unlike Skeleton of Thought).

If you’re interested in other prompt engineering methods I would recommend checking out our other articles, or trying the prompts directly in PromptHub.

Happy prompting!

Dan Cleary
Founder