Welcome to the last chapter of Advanced Techniques in Prompt Engineering, titled "Chain of Thought with Backtracking". This is another advanced prompt engineering method that enables us to help LLMs find solutions in highly complex situations. Let's start by understanding the core idea behind it.
The core idea behind "Chain of Thought with Backtracking" is to help the LLM think backward after analyzing the situation, rather than reasoning in only one direction. As you might remember, LLMs are designed to be next-word-prediction machines, so when a problem requires looking back at earlier steps, they naturally tend to become less effective. Our goal as Prompt Engineers is to help them navigate this challenge by writing explicit instructions on how to backtrack once they reach a certain point.
Let's take a look at a simple example prompt that includes Chain of Thought and nothing more.
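The course's original example prompt isn't reproduced here, so the snippet below is only a minimal sketch of what a plain Chain of Thought request could look like, using a hypothetical work-backwards puzzle and the OpenAI Python SDK. The puzzle, the model name, and the exact wording are assumptions for illustration, not the lesson's original prompt.

```python
# Minimal sketch of a plain Chain of Thought prompt (no backtracking).
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
# and the puzzle below is a hypothetical stand-in for the course's example.
from openai import OpenAI

client = OpenAI()

puzzle = (
    "A trader doubles her money at a market, then spends 30 coins. "
    "At a second market she doubles what is left, then spends 30 coins again. "
    "She leaves with exactly 10 coins. How many coins did she start with?"
)

cot_prompt = (
    f"{puzzle}\n\n"
    "Think through the problem step by step, showing your reasoning, "
    "and then state the final answer."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use any chat model you have access to
    messages=[{"role": "user", "content": cot_prompt}],
)
print(response.choices[0].message.content)
```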
Sample output:
As you can see, the LLM gives it a good attempt but gets the result completely wrong. The reason is that this is the kind of problem where working backward from the end is much easier than working forward from the start.
While the standard Chain of Thought method is effective for progressing linearly through a problem, there are instances where you need to revisit and revise earlier steps. This is where "Backtracking" comes into play: after the initial forward analysis, the model goes back over its reasoning before producing the final answer. Let's enhance the prompt above to see an example:
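The lesson's enhanced prompt isn't shown here either, so the following is a hedged sketch of what the backtracking instructions might look like, continuing the hypothetical puzzle from the previous snippet; the three-step wording is an assumption, not the course's exact prompt.

```python
# Sketch of the enhanced prompt with explicit Backtracking instructions.
# Reuses the hypothetical puzzle from the previous snippet; the instruction
# wording is illustrative only.
from openai import OpenAI

client = OpenAI()

puzzle = (
    "A trader doubles her money at a market, then spends 30 coins. "
    "At a second market she doubles what is left, then spends 30 coins again. "
    "She leaves with exactly 10 coins. How many coins did she start with?"
)

backtracking_prompt = (
    f"{puzzle}\n\n"
    "Step 1: Analyze the problem forward, step by step, writing down every fact "
    "and relationship you find. Do not suggest an answer during this step.\n"
    "Step 2: Backtrack: start from the final state given in the problem and work "
    "backward through your Step 1 analysis, verifying or correcting each step.\n"
    "Step 3: Only after the backward pass is complete, state the final answer."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": backtracking_prompt}],
)
print(response.choices[0].message.content)
```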
Notice a key component here: the instruction not to suggest an answer during the forward analysis. This is done to avoid biasing the LLM. If the LLM commits to an answer while doing the forward analysis (the Chain of Thought part), it is more likely to stick to that answer even after Backtracking, even though that answer is likely to be incorrect.
Sample output:
This time the answer is correct, and you can see that the overall depth of the analysis is much stronger.
When solving complex problems like the one above, or dealing with other complicated scenarios, applying a single technique like Chain of Thought with Backtracking often won't be enough. Even in this scenario, if you ran the prompt above multiple times, you'd see that less capable LLMs don't always get the right answer.
This is where combining advanced techniques comes to the rescue. You might remember the Brainstorming method that we learned earlier in this course. Instead of running the prompt above once and trusting the output, you could run it 5-10 times and then write a consolidator prompt that reviews all the outputs and determines which solution is correct. That dramatically increases the chances that your prompt arrives at the right answer, as sketched below.
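Here is a minimal sketch of that combination, under the same assumptions as the earlier snippets (OpenAI Python SDK, an assumed model name, and the `backtracking_prompt` built above): sample the prompt several times at a non-zero temperature, then pass all the candidate outputs to a consolidator prompt.

```python
# Sketch: combine Backtracking with Brainstorming by sampling several runs
# and consolidating them. Assumes `backtracking_prompt` from the previous
# snippet; the run count, model name, and consolidator wording are illustrative.
from openai import OpenAI

client = OpenAI()

NUM_RUNS = 5
candidates = []
for _ in range(NUM_RUNS):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        temperature=1.0,      # non-zero temperature encourages varied reasoning paths
        messages=[{"role": "user", "content": backtracking_prompt}],
    )
    candidates.append(response.choices[0].message.content)

numbered = "\n\n".join(
    f"Candidate {i + 1}:\n{text}" for i, text in enumerate(candidates)
)
consolidator_prompt = (
    "Below are several independent attempts at the same problem. Review them, "
    "compare their reasoning, and determine which final answer is correct. "
    "Briefly explain why, then state that answer.\n\n"
    f"{numbered}"
)

final = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": consolidator_prompt}],
)
print(final.choices[0].message.content)
```

The consolidator here is simply another LLM call acting as a reviewer of the candidate outputs.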
Now that you have learned how to use Chain of Thought with Backtracking in prompt engineering, it's time to put your knowledge to the test. It's so exciting to see you reach the end of this course. We are all proud of what you've accomplished here.
Happy Prompt Engineering!
