Comparative Evaluation of Large Language Models for Test-Skeleton Generation

Subhang Boorlagadda, Nitya Naga Sai Atluri, Muhammet Mustafa Olmez, Edward F. Gehringer

Published: 2025/9/4

Abstract

This paper explores the use of Large Language Models (LLMs) to automate the generation of test skeletons -- structural templates that outline unit test coverage without implementing full test logic. Test skeletons are especially important in test-driven development (TDD), where they provide an early framework for systematic verification. Traditionally, these skeletons are authored manually, a process that can be time-consuming and error-prone, particularly in educational or large-scale development settings. We evaluate four LLMs -- GPT-4, DeepSeek-Chat, Llama4-Maverick, and Gemma2-9B -- on their ability to generate RSpec skeletons for a real-world Ruby class developed in a university software engineering course. Each model's output is assessed using static analysis and a blind expert review to measure structural correctness, clarity, maintainability, and conformance to testing best practices. The evaluation reveals key differences in how models interpret code structure and testing conventions, offering insights into the practical challenges of using LLMs for automated test scaffolding. Our results show that DeepSeek-Chat generated the most maintainable and well-structured skeletons, while GPT-4 produced more complete output that was less consistent with RSpec conventions. The study also identifies prompt design and contextual input as key factors in output quality.
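To make the artifact under evaluation concrete, the following is a minimal sketch of what an RSpec test skeleton might look like. The class name ShoppingCart, the method names, and the example descriptions are illustrative placeholders, not taken from the course project; RSpec treats an `it` call without a block as a pending example, which is what makes this a skeleton rather than a full test suite.

# spec/shopping_cart_spec.rb -- illustrative skeleton; ShoppingCart is a hypothetical class
require 'rails_helper' # assumes a Rails project; plain RSpec would use spec_helper

RSpec.describe ShoppingCart do
  describe '#add_item' do
    it 'adds the item to the cart'            # pending: body to be implemented
    it 'raises an error when the item is nil' # pending: body to be implemented
  end

  describe '#total_price' do
    it 'returns 0 for an empty cart'
    it 'sums the prices of all items'
  end
end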
