Continuing from the last post, here are some more prompt engineering techniques.
Tree of Thought
Rather than a way to enhance a single prompt, Tree of Thought is more of a prompting framework. You break your task into intermediate steps and, at each step, generate several candidate continuations. Any candidate that seems to be heading in the right direction, or at least plausibly so, becomes a new base, and you move on to the next step, once again generating multiple candidates from it. This turns generation into a search over partial solutions, with the reasoning validated at every step.
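Below is a minimal sketch of that search loop, written against a hypothetical call_model helper (a stand-in for whatever LLM client you use); the breadth-first expansion, the branching and keep counts, and the 0-to-10 scoring prompt are illustrative assumptions, not a fixed recipe.

```python
def call_model(prompt: str) -> str:
    # Hypothetical stand-in; replace with a real LLM client call.
    return "<model response>"


def score_step(task: str, partial_solution: str) -> float:
    """Ask the model to rate how promising a partial solution looks (0-10)."""
    rating = call_model(
        f"Task: {task}\nPartial solution so far:\n{partial_solution}\n"
        "On a scale of 0 to 10, how promising is this direction? Reply with a number only."
    )
    try:
        return float(rating.strip())
    except ValueError:
        return 0.0  # unparseable ratings are treated as unpromising


def tree_of_thought(task: str, steps: int = 3, branches: int = 3, keep: int = 2) -> list[str]:
    """Breadth-first Tree of Thought: expand each kept partial solution into
    several candidate next steps, score them, and keep only the best few."""
    frontier = [""]  # partial solutions; start from an empty one
    for _ in range(steps):
        candidates = []
        for state in frontier:
            for _ in range(branches):
                next_step = call_model(
                    f"Task: {task}\nSolution so far:\n{state}\n"
                    "Propose the next intermediate step."
                )
                candidates.append(f"{state}\n{next_step}".strip())
        # The most promising partial solutions become the new bases.
        candidates.sort(key=lambda s: score_step(task, s), reverse=True)
        frontier = candidates[:keep]
    return frontier
```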
Prompt chaining
Prompt chaining is simply breaking a task down into steps and prompting the generative model with them one by one, feeding each response back in as context for the next prompt (a minimal sketch of this flow follows the example). For example, if you wanted to make a presentation:
Prompt 1: “I want to make a presentation on XYZ. I’d like to cover these points: … . Please create an outline for a 10-slide presentation.”
Prompt 2: “This outline looks good. Give me a good title for my presentation.”
Prompt 3: “Now plan out an effective introduction for a general audience.”
And so on.
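Here is a sketch of that flow, again assuming a hypothetical call_model helper; the only moving part is that each prompt is sent along with the conversation so far, so every response becomes context for the next step.

```python
def call_model(prompt: str) -> str:
    # Hypothetical stand-in; replace with a real LLM client call.
    return "<model response>"


def prompt_chain(prompts: list[str]) -> str:
    """Send prompts in order, carrying the conversation so far into each step."""
    context = ""
    response = ""
    for prompt in prompts:
        full_prompt = f"{context}\n\n{prompt}".strip()
        response = call_model(full_prompt)
        context = f"{full_prompt}\n\n{response}"  # accumulate the dialogue
    return response  # the last response, e.g. the planned introduction


if __name__ == "__main__":
    prompt_chain([
        "I want to make a presentation on XYZ. I'd like to cover these points: ... . "
        "Please create an outline for a 10-slide presentation.",
        "This outline looks good. Give me a good title for my presentation.",
        "Now plan out an effective introduction for a general audience.",
    ])
```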
Self-consistency
A single LLM response may be incorrect for various reasons: hallucinated details, miscalculations, and so on. To improve on this, self-consistency samples multiple independent responses to the same question, typically at a nonzero temperature so the reasoning paths differ, and then takes a majority vote over the final answers.
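A rough sketch, assuming a hypothetical call_model helper that accepts a temperature parameter and an "Answer: ..." convention for the final line (both are illustrative choices, not part of any specific API):

```python
from collections import Counter


def call_model(prompt: str, temperature: float = 0.7) -> str:
    # Hypothetical stand-in; replace with a real LLM client call.
    return "Answer: <model answer>"


def self_consistency(question: str, samples: int = 5) -> str:
    """Sample several independent answers, then majority-vote on the result."""
    answers = []
    for _ in range(samples):
        response = call_model(
            f"{question}\nThink step by step, then give your final answer "
            "on the last line in the form 'Answer: <answer>'.",
            temperature=0.7,  # randomness encourages diverse reasoning paths
        )
        # Keep only the final answer line for voting.
        last_line = response.strip().splitlines()[-1]
        answers.append(last_line.removeprefix("Answer:").strip())
    return Counter(answers).most_common(1)[0][0]
```

The intermediate reasoning is discarded before voting, which is what lets differently worded chains of thought still agree on the same final answer.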