VerilogMonkey: Exploring Parallel Scaling for Automated Verilog Code Generation with LLMs
Juxin Niu, Yuxin Du, Dan Niu, Xi Wang, Zhe Jiang, Nan Guan
Published: 2025/9/17
Abstract
We present VerilogMonkey, an empirical study of parallel scaling for the under-explored task of automated Verilog generation. Parallel scaling improves LLM performance by sampling many outputs in parallel. Across multiple benchmarks and mainstream LLMs, we find that scaling to hundreds of samples is cost-effective in both time and money. Even without additional enhancements such as post-training or agentic methods, it surpasses prior results on LLM-based Verilog generation. We further dissect why parallel scaling delivers these gains and show how output randomness in LLMs affects its effectiveness.
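Parallel scaling of this kind is conventionally measured with the pass@k metric: the probability that at least one of k sampled outputs passes the task's tests. The abstract does not spell out the estimator, but a minimal sketch of the standard unbiased pass@k estimator (the function name and parameters here are illustrative) looks like this:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total samples generated per problem
    c: number of those samples that pass the tests
    k: budget being evaluated (k <= n)

    Returns the probability that at least one of k samples
    drawn without replacement from the n generations is correct,
    computed as 1 - C(n - c, k) / C(n, k).
    """
    if n - c < k:
        # Fewer than k failing samples exist, so any draw of k
        # must include at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, `pass_at_k(n=200, c=20, k=1)` estimates single-sample accuracy from 200 parallel generations, while increasing `k` shows how much headroom remains when sampling hundreds of outputs, as the study does.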