A Preliminary Study on the Robustness of Code Generation by Large Language Models
Zike Li, Mingwei Liu, Anji Li, Kaifeng He, Yanlin Wang, Xin Peng, Zibin Zheng
Published: 2025/3/26
Abstract
Robustness is a critical factor for reliable code generation by large language models, yet most evaluations focus on correctness and overlook key issues such as missing input validation and inadequate error handling. In this work, we present the first empirical study of the robustness of LLM-generated code, using the CoderEval benchmark. Evaluating four state-of-the-art code LLMs, we find that 35.2% of their outputs are less robust than the corresponding human-written code, with over 90% of the deficiencies caused by missing conditional checks, 70% of which occur in the first line. Interestingly, in 63% of the cases where a conditional statement is needed but absent, the "if" token still ranks among the model's top three predictions, suggesting implicit recognition of the required control flow. To address these issues, we propose RobGen, a model-agnostic framework that improves robustness without retraining. RobGen combines a line-level intervention checker, which decides whether to adjust the logits for each generated line, with token-level conditional logit adjustments that promote essential control structures. Experiments show that RobGen reduces the proportion of less robust code by 10%, achieves the highest average Pass@1 (43.57), and adds minimal overhead (+33.4%). As a lightweight and adaptable solution, RobGen effectively enhances the reliability of LLM-generated code across diverse tasks.
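
To make the mechanism concrete, below is a minimal Python sketch of what a conditional logit adjustment at the start of a line could look like. The function names, the toy intervention heuristic, the bias value, and the toy vocabulary are illustrative assumptions for exposition only, not the paper's actual checker or adjustment rule.

    # Illustrative sketch of line-gated, token-level logit adjustment.
    # All names (should_intervene, adjust_logits, the bias of 2.0) are
    # hypothetical stand-ins, not RobGen's real implementation.
    import numpy as np

    def softmax(logits: np.ndarray) -> np.ndarray:
        """Numerically stable softmax over a 1-D logit vector."""
        z = logits - logits.max()
        e = np.exp(z)
        return e / e.sum()

    def should_intervene(current_line: str) -> bool:
        """Toy stand-in for the line-level intervention checker:
        flag a fresh line that does not already start with a guard."""
        stripped = current_line.strip()
        return stripped == "" or not stripped.startswith(("if", "assert", "try"))

    def adjust_logits(logits: np.ndarray,
                      control_token_ids: list[int],
                      bias: float = 2.0) -> np.ndarray:
        """Add a fixed bias to control-structure tokens (e.g. 'if')
        so they become more likely at the start of the line."""
        adjusted = logits.copy()
        adjusted[control_token_ids] += bias
        return adjusted

    # Toy vocabulary: index 3 is the 'if' token.
    vocab = ["return", "x", "=", "if", "raise"]
    logits = np.array([2.1, 0.4, 0.3, 1.8, 0.2])  # 'if' ranks 2nd, behind 'return'

    current_line = ""  # about to emit the first token of a new line
    if should_intervene(current_line):
        logits = adjust_logits(logits, control_token_ids=[3])

    probs = softmax(logits)
    print(vocab[int(probs.argmax())])  # with the bias applied, 'if' now wins

In practice such an adjustment would be applied inside the model's decoding loop, and the checker would decide per line whether to leave the logits untouched, which keeps the intervention lightweight and model-agnostic.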