The Effect of Language Diversity When Fine-Tuning Large Language Models for Translation

David Stap, Christof Monz

Published: 2025/5/19

Abstract

Prior research diverges on the role of language diversity in LLM fine-tuning: some studies report benefits, while others find no advantages. Through controlled fine-tuning experiments across 132 translation directions, we systematically resolve these disparities. We find that expanding language diversity during fine-tuning improves translation quality for both unsupervised and, surprisingly, supervised pairs, even though the less diverse models are fine-tuned exclusively on those supervised pairs. However, benefits plateau or decrease beyond a certain diversity threshold. We show that increased language diversity creates more language-agnostic representations, and these representational adaptations help explain the improved performance of models fine-tuned with greater diversity.
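The following is a minimal sketch, not the authors' code, of one common way to probe how "language-agnostic" a model's representations are: embed parallel sentences from different languages and compare their mean-pooled hidden states. The model name, layer choice, and example sentences are illustrative assumptions; higher cross-lingual similarity after fine-tuning would be consistent with the paper's claim.

```python
# Hedged sketch: probe cross-lingual representation similarity.
# Model checkpoint below is a placeholder assumption, not the paper's setup.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "facebook/xglm-564M"  # illustrative; swap in the fine-tuned model under study
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def embed(sentence: str) -> torch.Tensor:
    """Mean-pool the last hidden layer over non-padding tokens."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state      # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)        # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (1, dim)

# Parallel sentences (same meaning, different languages).
en = embed("The cat sleeps on the sofa.")
de = embed("Die Katze schläft auf dem Sofa.")
nl = embed("De kat slaapt op de bank.")

# Higher cross-lingual cosine similarity suggests more language-agnostic
# representations; the paper reports this increases with fine-tuning diversity.
cos = torch.nn.CosineSimilarity(dim=-1)
print("en-de:", cos(en, de).item())
print("en-nl:", cos(en, nl).item())
```

Comparing these similarity scores before and after fine-tuning runs with different numbers of languages is one way to connect the representational claim to measurable quantities.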