Interleaving Natural Language Prompting with Code Editing for Solving Programming Tasks with Generative AI Models

Victor-Alexandru Pădurean, Paul Denny, Andrew Luxton-Reilly, Alkis Gotovos, Adish Singla

Published: September 17, 2025

Abstract

Nowadays, computing students often rely on both natural-language prompting and manual code editing to solve programming tasks. Yet we still lack a clear understanding of how these two modes are combined in practice, and how their usage varies with task complexity and student ability. In this paper, we investigate this through a large-scale study in an introductory programming course, collecting 13,305 interactions from 355 students during a three-day laboratory activity. Our analysis shows that students primarily use prompting to generate initial solutions, and then often enter short edit-run loops to refine their code after a failed execution. We find that manual editing becomes more frequent as task complexity increases, but most edits remain concise, with many affecting a single line of code. Higher-performing students tend to succeed using prompting alone, while lower-performing students rely more on edits. Student reflections confirm that prompting is helpful for structuring solutions, that editing is effective for making targeted corrections, and that both are useful for learning. These findings highlight the role of manual editing as a deliberate last-mile repair strategy, complementing prompting in AI-assisted programming workflows.