More Bang for the Buck: Process Reward Modeling with Entropy-Driven Uncertainty
Lang Cao, Renhong Chen, Yingtian Zou, Chao Peng, Wu Ning, Huacong Xu, Qian Chen, Yuxian Wang, Peishuo Su, Mofan Peng, Zijie Chen, Yitong Li
Published: 2025/3/28
Abstract
We introduce the Entropy-Driven Uncertainty Process Reward Model (EDU-PRM), a novel entropy-driven training framework for process reward modeling that enables dynamic, uncertainty-aligned segmentation of complex reasoning steps, eliminating the need for costly manual step annotations. Unlike previous Process Reward Models (PRMs) that rely on static partitioning and human labeling, EDU-PRM automatically anchors step boundaries at tokens with high predictive entropy. On the MATH test set, EDU-PRM achieves 65.5% accuracy, surpassing strong public PRM baselines such as Math-Shepherd PRM (61.7%) and Omega PRM (62.4%) under the High Temperature (HT) Sample + BON setting. Furthermore, when HT sampling is replaced with EDU sampling, EDU-PRM improves both accuracy and efficiency: at N=64, accuracy increases from 64.7% (HT Sample + BON) to 67.3% (EDU Sample + BON), while the number of generated tokens is reduced by 47%, demonstrating a superior accuracy-cost balance. On the ProcessBench test set, EDU-PRM achieves a new state-of-the-art accuracy of 88.4% using less than 1.5% of the Qwen2.5-Math-PRM-72B training data, surpassing the previous best of 87.8%. In summary, EDU-PRM provides a scalable and annotation-efficient paradigm for process supervision in mathematical reasoning, opening new avenues for efficient complex mathematical reasoning.
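To make the core idea concrete, the sketch below shows one plausible way to place step boundaries at tokens with high predictive entropy, as the abstract describes. This is an illustrative assumption, not the authors' released implementation: the softmax-entropy criterion, the function name, and the threshold `tau` are all hypothetical choices for demonstration.

```python
# Illustrative sketch (assumed, not the authors' code): mark reasoning-step
# boundaries at tokens whose next-token predictive entropy exceeds a threshold.
import torch
import torch.nn.functional as F

def entropy_step_boundaries(logits: torch.Tensor, tau: float = 2.0) -> list:
    """Return indices of tokens whose predictive entropy (in nats) exceeds `tau`.

    logits: (seq_len, vocab_size) next-token logits from a causal language model.
    tau: hypothetical entropy threshold chosen for illustration.
    """
    log_probs = F.log_softmax(logits, dim=-1)      # (seq_len, vocab_size)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1)     # per-token entropy, (seq_len,)
    return torch.nonzero(entropy > tau).flatten().tolist()

# Usage idea: split a generated solution at the returned indices and score each
# resulting segment with the process reward model, instead of relying on
# manually annotated step boundaries.
```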