Towards a Progress Bar for Reasoning: Progress Prediction in Large Reasoning Models

Hans Peter Lynsgøe Raaschou-jensen, Constanza Fierro, Anders Søgaard

Published: 2025/6/29

Abstract

Reasoning models that produce long, hidden chains of thought have emerged as powerful tools for reasoning-intensive and agentic tasks. However, as the time horizons at which these models can operate grow exponentially, it becomes increasingly difficult to know how much progress a model is making on a task, making it challenging for users to set appropriate expectations about completion time. By probing the internal representations of Large Language Models (LLMs), we find evidence that their reasoning progress can be quantified, with simple linear probes achieving 30% accuracy over 10 progress classes and a Mean Absolute Error (MAE) of 1.75. Building on this insight, we propose a two-stage fine-tuning method that trains existing reasoning models to explicitly generate progress estimates (0-100%) during their reasoning process. We find that the predictions of our best fine-tuned model on sequences below 16K tokens differ from the true label by 10 percentage points on average.
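To make the probing setup concrete, the following is a minimal sketch of a linear probe of the kind the abstract describes: a linear classifier trained on hidden states to predict which decile (one of 10 progress classes) of a reasoning trace a token position falls in. The random stand-in features, the dataset shapes, and the use of scikit-learn are illustrative assumptions, not the authors' released code; in the paper's setting the features would be LLM hidden states with labels derived from relative position in the completed trace.

```python
# Hypothetical sketch of a linear progress probe (assumptions: random
# stand-in features, scikit-learn; not the authors' implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, mean_absolute_error

rng = np.random.default_rng(0)

# Stand-in data: one feature vector per sampled token position, paired
# with the true fraction of the reasoning trace completed at that point.
n_samples, hidden_dim = 2000, 768
X = rng.normal(size=(n_samples, hidden_dim))      # hidden-state vectors
progress = rng.uniform(size=n_samples)            # true fraction completed
y = np.minimum((progress * 10).astype(int), 9)    # bucket into 10 classes

# Simple linear probe: multinomial logistic regression on frozen features.
split = int(0.8 * n_samples)
probe = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
pred = probe.predict(X[split:])

print("accuracy:", accuracy_score(y[split:], pred))
# MAE over class indices, the same style of metric as the abstract's
# reported 1.75-class error.
print("MAE (classes):", mean_absolute_error(y[split:], pred))
```

On the paper's real features, the abstract reports roughly 30% accuracy and an MAE of 1.75 classes for such a probe; on the random features above the numbers are, of course, at chance.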
