Best-of-L: Cross-Lingual Reward Modeling for Mathematical Reasoning

Sara Rajaee, Rochelle Choenni, Ekaterina Shutova, Christof Monz

Published: 2025/9/19

Abstract

While the reasoning abilities of large language models (LLMs) continue to advance, it remains unclear how such ability varies across languages in multilingual LLMs and whether different languages produce reasoning paths that complement each other. To investigate this question, we train a reward model to rank generated responses for a given question across languages. Our results show that our cross-lingual reward model substantially improves mathematical reasoning performance compared to using reward modeling within a single language, benefiting even high-resource languages. While English often exhibits the highest performance in multilingual models, we find that cross-lingual sampling particularly benefits English under low sampling budgets. Our findings reveal new opportunities to improve multilingual reasoning by leveraging the complementary strengths of diverse languages.
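The core selection procedure described in the abstract can be illustrated with a small sketch: candidates are sampled in several languages, each is scored by a reward model, and the single best response is kept across the whole multilingual pool rather than within one language. The sketch below is not the authors' implementation; the names Candidate, reward_score, and best_of_l are hypothetical, and reward_score is a toy stand-in for the trained cross-lingual reward model.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    language: str   # language the response was generated in
    response: str   # model-generated reasoning path and final answer


def reward_score(question: str, candidate: Candidate) -> float:
    # Toy stand-in so the sketch runs end to end; a real system would call the
    # trained cross-lingual reward model on the (question, response) pair.
    return float(len(set(candidate.response.split())))


def best_of_l(question: str, candidates_by_lang: dict[str, list[Candidate]]) -> Candidate:
    # Pool candidates from all languages and keep the highest-scoring one,
    # in contrast to best-of-n selection restricted to a single language.
    pool = [c for cands in candidates_by_lang.values() for c in cands]
    return max(pool, key=lambda c: reward_score(question, c))


if __name__ == "__main__":
    # Illustrative usage with dummy candidates in two languages.
    question = "What is 12 * 7?"
    candidates = {
        "en": [Candidate("en", "12 * 7 = 84, so the answer is 84.")],
        "de": [Candidate("de", "12 mal 7 ergibt 84; die Antwort lautet 84.")],
    }
    print(best_of_l(question, candidates))
```

Under this framing, the paper's single-language baseline corresponds to calling the same selection over one language's pool only, while Best-of-L lets a stronger reasoning path from any language win.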
