LLMs cannot spot math errors, even when allowed to peek into the solution

KV Aditya Srivatsa, Kaushal Kumar Maurya, Ekaterina Kochmar

Published: 2025/9/1

Abstract

Large language models (LLMs) demonstrate remarkable performance on math word problems, yet they have been shown to struggle with meta-reasoning tasks such as identifying errors in student solutions. In this work, we investigate the challenge of locating the first error step in stepwise solutions using two error reasoning datasets: VtG and PRM800K. Our experiments show that state-of-the-art LLMs struggle to locate the first error step in student solutions even when given access to the reference solution. To address this, we propose an approach that generates an intermediate corrected student solution that aligns more closely with the student's original solution, which improves performance.
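The two-stage approach described above can be sketched as a prompting pipeline: first ask a model to rewrite the student's solution into a corrected version that stays close to the student's own steps, then compare the two step by step to find the first divergence. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation; `call_llm` is a hypothetical stand-in for any chat-completion API and is stubbed here so the example runs end-to-end.

```python
# Hedged sketch of a two-stage error-localization pipeline.
# Assumption: `call_llm` is a placeholder, not a real API; swap in an
# actual model call in practice. The canned replies below only make
# the sketch self-contained and runnable.

def call_llm(prompt: str) -> str:
    """Placeholder LLM: returns canned responses for this sketch."""
    if "Rewrite the student's solution" in prompt:
        return "Step 1: set up the equation. Step 2: solve (corrected)."
    return "2"  # index of the first incorrect step

def correct_student_solution(problem: str, student: str, reference: str) -> str:
    """Stage 1: produce a corrected solution close to the student's steps."""
    prompt = (
        f"Problem: {problem}\n"
        f"Reference solution: {reference}\n"
        f"Student solution: {student}\n"
        "Rewrite the student's solution so it is correct while staying "
        "as close as possible to the student's original steps."
    )
    return call_llm(prompt)

def locate_first_error(problem: str, student: str, corrected: str) -> int:
    """Stage 2: compare student vs. corrected solution step by step."""
    prompt = (
        f"Problem: {problem}\n"
        f"Corrected solution: {corrected}\n"
        f"Student solution: {student}\n"
        "Compare the solutions step by step and return the 1-based index "
        "of the first incorrect step in the student's solution."
    )
    return int(call_llm(prompt))

if __name__ == "__main__":
    corrected = correct_student_solution("p", "student steps", "reference steps")
    first_error = locate_first_error("p", "student steps", corrected)
    print(first_error)  # first error step index from the (stubbed) model
```

The intuition is that a corrected solution aligned with the student's own step structure gives the model a step-by-step reference to diff against, rather than a reference solution whose steps may not correspond to the student's at all.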