Evaluating Large Language Models for Code Translation: Effects of Prompt Language and Prompt Design

Aamer Aljagthami, Mohammed Banabila, Musab Alshehri, Mohammed Kabini, Mohammad D. Alahmadi

Published: 2025/9/16

Abstract

Large language models (LLMs) have shown promise for automated source-code translation, a capability critical to software migration, maintenance, and interoperability. Yet comparative evidence on how model choice, prompt design, and prompt language shape translation quality across multiple programming languages remains limited. This study presents a systematic empirical assessment of state-of-the-art LLMs for code translation among C++, Java, Python, and C#, alongside a traditional baseline (TransCoder). Using BLEU and CodeBLEU, we quantify syntactic fidelity and structural correctness under two prompt styles (concise instruction and detailed specification) and two prompt languages (English and Arabic), with direction-aware evaluation across language pairs. Experiments show that detailed prompts yield consistent gains across models and translation directions, and that English prompts outperform Arabic prompts by 13–15%. The top-performing model attains the highest CodeBLEU on challenging pairs such as Java to C# and Python to C++, and every evaluated LLM surpasses TransCoder across the benchmark. These results demonstrate the value of careful prompt engineering and prompt-language choice, and provide practical guidance for software modernization and cross-language interoperability.
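For illustration, the sketch below shows how the two prompt styles and two prompt languages described above might be instantiated for one translation direction and scored with a BLEU-style metric. The prompt wording, the code tokenizer, and the example snippets are assumptions made for illustration, not the paper's exact templates or evaluation pipeline; CodeBLEU additionally weights AST and data-flow matches on top of the n-gram overlap computed here.

```python
# Minimal sketch (assumptions): hypothetical prompt templates and BLEU scoring
# for LLM code translation. Not the authors' exact prompts or metric setup.
import re
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Two prompt styles x two prompt languages (illustrative wording only).
PROMPTS = {
    ("concise", "en"): "Translate the following {src} code to {tgt}:\n{code}",
    ("detailed", "en"): (
        "You are an expert software engineer. Translate the {src} program "
        "below into idiomatic {tgt}. Preserve behavior, identifiers, and "
        "control flow; return only the translated code.\n{code}"
    ),
    # Arabic counterpart of the concise instruction
    # ("Translate the following code from {src} to {tgt}").
    ("concise", "ar"): "ترجم الكود التالي من {src} إلى {tgt}:\n{code}",
}

def build_prompt(style: str, lang: str, src: str, tgt: str, code: str) -> str:
    """Instantiate a prompt template for one translation direction."""
    return PROMPTS[(style, lang)].format(src=src, tgt=tgt, code=code)

def code_tokens(code: str) -> list[str]:
    """Crude code tokenizer: identifiers, numbers, and punctuation marks."""
    return re.findall(r"[A-Za-z_]\w*|\d+|[^\sA-Za-z0-9_]", code)

def bleu(reference: str, candidate: str) -> float:
    """Sentence-level BLEU on code tokens; CodeBLEU adds AST/data-flow terms."""
    smooth = SmoothingFunction().method1
    return sentence_bleu([code_tokens(reference)], code_tokens(candidate),
                         smoothing_function=smooth)

if __name__ == "__main__":
    src_code = "int add(int a, int b) { return a + b; }"                 # C++ source
    reference = "public static int add(int a, int b) { return a + b; }"  # Java reference
    candidate = "static int add(int a, int b) { return a + b; }"         # hypothetical model output

    print(build_prompt("detailed", "en", "C++", "Java", src_code))
    print(f"BLEU = {bleu(reference, candidate):.3f}")
```

In a full evaluation loop, the candidate string would come from the model under test for each prompt style, prompt language, and translation direction, and the scores would be aggregated per language pair as in the study.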