Ensembling Large Language Models for Code Vulnerability Detection: An Empirical Evaluation
Zhihong Sun, Jia Li, Yao Wan, Chuanyi Li, Hongyu Zhang, Zhi Jin, Ge Li, Hong Liu, Chen Lyu, Songlin Hu
Published: 2025/9/16
Abstract
Code vulnerability detection is crucial for ensuring the security and reliability of modern software systems. Recently, Large Language Models (LLMs) have shown promising capabilities in this domain. However, notable discrepancies in detection results often arise when analyzing identical code segments across different training stages of the same model or among architecturally distinct LLMs. While such inconsistencies may compromise detection stability, they also highlight a key opportunity: the latent complementarity among models can be harnessed through ensemble learning to create more robust vulnerability detection systems. In this study, we explore the potential of ensemble learning to enhance the performance of LLMs in source code vulnerability detection. We conduct comprehensive experiments involving five LLMs (i.e., DeepSeek-Coder-6.7B, CodeLlama-7B, CodeLlama-13B, CodeQwen1.5-7B, and StarCoder2-15B), using three ensemble strategies (i.e., Bagging, Boosting, and Stacking). These experiments are carried out across three widely adopted datasets (i.e., Devign, ReVeal, and BigVul). Inspired by Mixture of Experts (MoE) techniques, we further propose Dynamic Gated Stacking (DGS), a Stacking variant tailored for vulnerability detection. Our results demonstrate that ensemble approaches can significantly improve detection performance, with Boosting excelling in scenarios involving imbalanced datasets. Moreover, DGS consistently outperforms traditional Stacking, particularly in handling class imbalance and multi-class classification tasks. These findings offer valuable insights into building more reliable and effective LLM-based vulnerability detection systems through ensemble learning.
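To make the Stacking-with-gating idea concrete, below is a minimal, illustrative sketch of an MoE-style gated stacking combiner over the per-model vulnerability probabilities. It is an assumption of how such a component might look, not the paper's actual DGS implementation; all names (GatedStacking, num_models, hidden_dim) are hypothetical.

```python
# Minimal sketch of an MoE-style gated stacking ensemble (assumed design,
# not the paper's DGS code). Base LLM detectors each emit a probability
# that a code snippet is vulnerable; a small gating network produces
# input-dependent weights used to mix those probabilities.
import torch
import torch.nn as nn


class GatedStacking(nn.Module):
    """Combine per-model vulnerability probabilities with learned, input-dependent gates."""

    def __init__(self, num_models: int, hidden_dim: int = 32):
        super().__init__()
        # Gating network: maps the base models' probabilities to mixture weights.
        self.gate = nn.Sequential(
            nn.Linear(num_models, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_models),
            nn.Softmax(dim=-1),
        )

    def forward(self, base_probs: torch.Tensor) -> torch.Tensor:
        # base_probs: (batch, num_models) probabilities from the base detectors.
        weights = self.gate(base_probs)            # (batch, num_models) mixture weights
        return (weights * base_probs).sum(dim=-1)  # (batch,) ensembled vulnerability score


# Example: ensemble three hypothetical base detectors on two code samples.
probs = torch.tensor([[0.9, 0.4, 0.7],
                      [0.1, 0.2, 0.6]])
model = GatedStacking(num_models=3)
print(model(probs))  # per-sample ensembled vulnerability probabilities
```

In this sketch the gate conditions only on the base models' outputs; a richer variant could also feed code embeddings to the gate, but that choice is not specified in the abstract.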