Contrastive Chain of Thought Prompting

Check out our Prompt Research Page

For now, I am cataloging peer-reviewed, evidence-based prompting techniques only

I’ve always been intrigued by innovative prompting techniques, and discovering “Contrastive Chain-of-Thought (CCOT) Prompting” was a true highlight. This method isn’t just a step forward in prompting GPT models; it’s a leap! CCOT stands out by presenting both correct and incorrect reasoning paths to enhance the model’s reasoning ability.

The research on CCOT involves two key strategies:

  1. Contrastive Reasoning: The language model is exposed to both right and wrong reasoning processes. This helps it to understand not just what is correct, but also why certain answers are incorrect.
  2. Enhanced Problem-Solving: By learning from contrastive examples, the model develops a more robust problem-solving mechanism, leading to more accurate and reliable outputs.

CCOT is a game-changer for improving the quality of responses from language models, especially on complex reasoning tasks, and it slots neatly into existing prompting patterns such as few-shot and role-based prompting. This approach is a testament to the power of learning from mistakes, a principle that’s as valuable in AI as it is in human learning.

Before we get into CCOT, make sure you have a solid understanding of how to do role-based prompting. In the examples below, a math tutor or a history tutor serves as the “role” in the prompt pattern.


Original Prompt:

“If a baker divides 20 cookies among 4 children, how many cookies does each child get?”

CCOT Prompt:

  • Example Question: “A person has 15 apples and gives away 8. How many are left?”
  • Correct Reasoning: “Start with 15 apples. Subtract 8 apples given away. 15 – 8 = 7 apples left.”
  • Incorrect Reasoning: “The person gives away 8, so they must have 8 apples left.”
  • Question: “If a baker divides 20 cookies among 4 children, how many cookies does each child get?”
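If you assemble these prompts programmatically, the pattern templates easily. Here is a minimal Python sketch; the `build_ccot_prompt` helper and its exact field layout are my own convention for illustration, not something defined by the CCOT paper.

```python
def build_ccot_prompt(example_q, correct, incorrect, question):
    """Assemble a contrastive chain-of-thought prompt: one worked
    example with both correct and incorrect reasoning, followed by
    the new, independent question we actually want answered."""
    return (
        f"Example Question: {example_q}\n"
        f"Correct Reasoning: {correct}\n"
        f"Incorrect Reasoning: {incorrect}\n"
        f"Question: {question}"
    )

# The math example from above, rendered through the template.
prompt = build_ccot_prompt(
    example_q="A person has 15 apples and gives away 8. How many are left?",
    correct="Start with 15 apples. Subtract 8 apples given away. 15 - 8 = 7 apples left.",
    incorrect="The person gives away 8, so they must have 8 apples left.",
    question="If a baker divides 20 cookies among 4 children, how many cookies does each child get?",
)
print(prompt)
```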

Remember, in the Contrastive Chain-of-Thought approach, the follow-up question should be a new, independent question within the same category and subcategory, rather than a continuation of the initial problem. It should test similar skills or knowledge, but in a different context.

Original Prompt:

“Was George Washington a general during the American Revolution?”

CCOT Prompt:

  • Example Question: “Did Queen Elizabeth I rule during the Victorian Era?”
  • Correct Reasoning: “No, Queen Elizabeth I reigned from 1558 to 1603, which is much earlier than the Victorian Era, which was from 1837 to 1901.”
  • Incorrect Reasoning: “Yes, she did, because she was a famous queen of England.”
  • Question: “Was George Washington a general during the American Revolution?”
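Continuing the sketch above, the same helper covers this history example. The role line here is an assumed wording of my own, showing how CCOT can be combined with the role-based prompting mentioned earlier.

```python
# Reuses build_ccot_prompt from the earlier sketch; the role wording
# below is an assumption, pairing CCOT with role-based prompting.
role = "You are a history tutor. Think through the question step by step."

prompt = role + "\n\n" + build_ccot_prompt(
    example_q="Did Queen Elizabeth I rule during the Victorian Era?",
    correct=(
        "No, Queen Elizabeth I reigned from 1558 to 1603, which is much "
        "earlier than the Victorian Era, which was from 1837 to 1901."
    ),
    incorrect="Yes, she did, because she was a famous queen of England.",
    question="Was George Washington a general during the American Revolution?",
)
print(prompt)
```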

I know it feels a little strange to spell out basic common sense for the model, but the evidence shows a marked improvement in accuracy. This structure works by testing the model’s ability to apply the demonstrated reasoning to a different yet related scenario within the same domain.

The takeaway: provide both a good example AND a bad example of the kind of output you seek. Thanks, guys!

