Adaptive Group Policy Optimization: Towards Stable Training and Token-Efficient Reasoning

Chen Li, Nazhou Liu, Kai Yang

Published: 2025/3/20

Abstract

Since DeepSeek-R1 popularized it, Group Relative Policy Optimization (GRPO) has become a core component of training reasoning LLMs. However, we identify deficiencies that affect RL stability and inference efficiency, such as zero variance in advantage estimation. We therefore propose Adaptive Group Policy Optimization (AGPO), which uses a simple but effective technique, an adaptive loss function, to mitigate training fluctuation and token inefficiency. Experiments demonstrate that our method achieves more stable training and superior performance with significantly fewer tokens in its reasoning steps.
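To make the zero-variance issue concrete, below is a minimal Python sketch of GRPO's group-relative advantage estimate. The function name and the epsilon smoothing term are illustrative choices, not from the paper, and the sketch shows only the degenerate case the abstract points to; the adaptive loss itself is not specified in this abstract.

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages in the GRPO style: normalize each
    sampled response's reward by the group's mean and std."""
    rewards = np.asarray(rewards, dtype=np.float64)
    mean, std = rewards.mean(), rewards.std()
    return (rewards - mean) / (std + eps)

# Degenerate case: when every response in a group receives the same
# reward (e.g., all correct or all wrong), the group has zero variance
# and every advantage collapses to zero, so the group contributes no
# learning signal and can destabilize training.
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))  # informative signal
print(grpo_advantages([1.0, 1.0, 1.0, 1.0]))  # all zeros: no signal
```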
