Accepted to Insights @ NAACL 2025 (Oral Presentation)

Error Reflection Prompting: Can Large Language Models Successfully Understand Errors?

Jason Li, Lauren Yraola

Abstract

Prompting methods for language models, such as Chain-of-Thought (CoT), present intuitive step-by-step processes for problem solving. These methods aim to equip models with a better understanding of the correct procedures for addressing a given task. Despite these advancements, CoT lacks the ability to reflect on and correct errors, potentially causing a model to perpetuate mistakes. We propose Error Reflection Prompting (ERP) to further enhance reasoning in language models. Building upon CoT, ERP is a method comprising an incorrect answer, error recognition, and a correct answer. This process enables the model to recognize the types of errors and the steps that lead to incorrect answers, allowing it to better discern which steps to take and which to avoid. Our results demonstrate that ERP serves as a versatile supplement to conventional CoT, contributing to more robust and capable reasoning along with increased interpretability.
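
For concreteness, here is a minimal sketch of how an ERP-style few-shot exemplar might be assembled, following the incorrect-answer, error-recognition, correct-answer structure described in the abstract. The exemplar text, the `build_erp_prompt` helper, and the pen-pricing example are illustrative assumptions, not the authors' actual prompts.

```python
# A minimal sketch of an ERP-style few-shot prompt. The exemplar below is a
# hypothetical illustration of the three-part structure (incorrect answer ->
# error recognition -> correct answer), not the paper's exact wording.

ERP_EXEMPLAR = """Question: A store sells pens at 3 for $2. How much do 12 pens cost?

Incorrect answer: 12 pens cost 12 * 2 = $24.
Error recognition: The mistake is treating $2 as the price of a single pen.
The price is $2 for a group of 3 pens, so we must first count the groups.
Correct answer: 12 pens form 12 / 3 = 4 groups, so they cost 4 * 2 = $8.
"""


def build_erp_prompt(question: str) -> str:
    """Prepend an ERP exemplar to a new question so the model imitates the
    incorrect-answer / error-recognition / correct-answer pattern."""
    return (
        ERP_EXEMPLAR
        + "\nQuestion: " + question
        + "\n\nIncorrect answer:"
    )


if __name__ == "__main__":
    print(build_erp_prompt(
        "A train travels 60 miles in 1.5 hours. What is its average speed?"
    ))
```

In this sketch, ending the prompt at "Incorrect answer:" invites the model to first surface a plausible mistake and name the error before producing the final answer, which is the reflection behavior ERP is designed to elicit.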

Citation

Jason Li, Lauren Yraola. "Error Reflection Prompting: Can Large Language Models Successfully Understand Errors?" Accepted to Insights @ NAACL 2025 (Oral Presentation).

Details

Conference: Insights @ NAACL 2025 (Oral Presentation)
Authors: Jason Li, Lauren Yraola
