Improving Low-Resource Machine Translation via Cross-Linguistic Transfer from Typologically Similar High-Resource Languages

Saughmon Boujkian

Published: 2024/12/27

Abstract

This study examines the cross-linguistic effectiveness of transfer learning for low-resource machine translation by fine-tuning models initially trained on typologically similar high-resource languages, using limited data from the target low-resource language. We hypothesize that linguistic similarity enables efficient adaptation, reducing the need for extensive training data. To test this, we conduct experiments on five typologically diverse language pairs spanning distinct families: Semitic (Modern Standard Arabic to Levantine Arabic), Chadic to Bantu (Hausa to Zulu), Romance (Spanish to Catalan), Slavic (Slovak to Macedonian), and an independent Indo-European branch (Eastern Armenian to Western Armenian). Results show that transfer learning consistently improves translation quality across all pairs, confirming its applicability beyond closely related languages. As a secondary analysis, we vary key hyperparameters (learning rate, batch size, number of epochs, and weight decay) to ensure the results do not depend on a single configuration. We find that moderate batch sizes (e.g., 32) are often optimal for closely related pairs, smaller batch sizes benefit less similar pairs, and excessively high learning rates can destabilize training. These findings provide empirical evidence for the generalizability of transfer learning across language families and offer practical guidance for building machine translation systems in low-resource settings with minimal tuning effort.
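
The abstract describes the training recipe only at a high level. As a rough illustration, the sketch below shows one way such a fine-tuning run could be set up with the Hugging Face transformers and datasets libraries: start from a high-resource parent checkpoint and adapt it on a small parallel corpus for the related low-resource language. The parent checkpoint (Helsinki-NLP/opus-mt-en-es), the translation direction, the two-sentence toy corpus, and the specific hyperparameter values are assumptions for illustration, not the authors' released configuration; only the set of varied hyperparameters (learning rate, batch size, epochs, weight decay) and the example batch size of 32 come from the abstract.

```python
# A minimal sketch (not the authors' code) of the fine-tuning setup the
# abstract describes: adapt a high-resource translation checkpoint to a
# related low-resource language using a small parallel corpus.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Hypothetical parent model trained on the high-resource language (Spanish),
# to be adapted toward the related low-resource language (Catalan).
checkpoint = "Helsinki-NLP/opus-mt-en-es"  # illustrative parent checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Tiny stand-in for the limited low-resource parallel data (English -> Catalan).
pairs = {
    "src": ["Good morning", "How are you?"],
    "tgt": ["Bon dia", "Com estàs?"],
}
raw = Dataset.from_dict(pairs)

def preprocess(batch):
    # Tokenize the source side and build labels from the target side.
    return tokenizer(batch["src"], text_target=batch["tgt"], truncation=True)

train_ds = raw.map(preprocess, batched=True, remove_columns=["src", "tgt"])

# Hyperparameters mirror the ones the abstract varies: a moderate batch size
# of 32, a conservative learning rate (the abstract notes that excessively
# high rates destabilize training), a small number of epochs, and weight decay.
args = Seq2SeqTrainingArguments(
    output_dir="ft-low-resource",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    num_train_epochs=3,
    weight_decay=0.01,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

In practice, the toy corpus would be replaced by the actual low-resource parallel data, and the four hyperparameters above would be swept (as in the paper's secondary analysis) rather than fixed to a single configuration.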