Towards Better Code Generation: Adaptive Decoding with Uncertainty Guidance

Kaifeng He, Mingwei Liu, Chong Wang, Zike Li, Yanlin Wang, Xin Peng, Zibin Zheng

Published: 2025/6/10

Abstract

Code generation with large language models (LLMs) is highly sensitive to token selection during decoding, particularly at decision points where uncertainty strongly affects program correctness. Conventional strategies such as greedy decoding treat all decoding positions uniformly and fail to account for the uncertainty characteristics specific to code, often producing suboptimal outputs. In this work, we conduct an empirical analysis showing that a large fraction of generation errors arises from token misranking at high-uncertainty positions, where the correct token is among the model's candidates but is not ranked first. To address this, we introduce AdaDec, an adaptive decoding framework built around a lookahead-based, uncertainty-aware pause-and-rerank mechanism. AdaDec automatically learns a model-specific uncertainty threshold and invokes reranking only when high uncertainty is detected, using lookahead to refine the token choice. Across the HumanEval+, MBPP+, and DevEval benchmarks, AdaDec improves Pass@1 accuracy by up to 20.9 percentage points over greedy decoding and consistently outperforms prior adaptive decoding approaches such as AdapT. Moreover, because reranking is applied only when necessary, AdaDec keeps computational overhead and latency low, improving efficiency as well as reliability. These findings underscore the value of uncertainty-guided decoding strategies for making LLM-based code generation more robust and practical.
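
The pause-and-rerank mechanism summarized above can be sketched roughly as follows. This is a minimal illustration under assumed details, not the paper's implementation: it uses Shannon entropy as the uncertainty signal, a fixed hand-set threshold in place of AdaDec's learned model-specific one, greedy lookahead scored by cumulative log-probability, and a toy `next_token_probs` stand-in for an LLM; names such as `adaptive_decode` and all parameter defaults are hypothetical.

```python
import math
from typing import Callable, List, Sequence

def entropy(probs: Sequence[float]) -> float:
    """Shannon entropy of a next-token distribution (uncertainty signal)."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def adaptive_decode(
    next_token_probs: Callable[[List[int]], List[float]],  # toy stand-in for an LLM
    prompt: List[int],
    max_new_tokens: int = 20,
    uncertainty_threshold: float = 1.0,  # assumed fixed; AdaDec learns this per model
    top_k: int = 3,                      # candidates to rerank at a paused step
    lookahead: int = 2,                  # greedy lookahead length per candidate
    eos_id: int = 0,
) -> List[int]:
    """Greedy decoding that pauses and reranks at high-uncertainty steps."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)
        if entropy(probs) < uncertainty_threshold:
            # Low uncertainty: take the ordinary greedy step.
            nxt = max(range(len(probs)), key=probs.__getitem__)
        else:
            # High uncertainty: pause, then score each top-k candidate by the
            # cumulative log-probability of a short greedy continuation.
            candidates = sorted(range(len(probs)), key=probs.__getitem__,
                                reverse=True)[:top_k]

            def lookahead_score(cand: int) -> float:
                seq = tokens + [cand]
                score = math.log(max(probs[cand], 1e-12))
                for _ in range(lookahead):
                    p = next_token_probs(seq)
                    step = max(range(len(p)), key=p.__getitem__)
                    score += math.log(max(p[step], 1e-12))
                    seq.append(step)
                return score

            nxt = max(candidates, key=lookahead_score)
        tokens.append(nxt)
        if nxt == eos_id:
            break
    return tokens

if __name__ == "__main__":
    # Toy 4-token "model": confident after most tokens, uncertain after token 1.
    def toy_model(seq: List[int]) -> List[float]:
        return [0.1, 0.3, 0.3, 0.3] if seq and seq[-1] == 1 else [0.05, 0.8, 0.1, 0.05]

    print(adaptive_decode(toy_model, prompt=[1], max_new_tokens=5))
```

In the actual framework, the uncertainty threshold is learned per model rather than set by hand, and the reranking operates on a real LLM's next-token distribution; the sketch only conveys the control flow of pausing and reranking at uncertain steps.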