Multi-Objective Reinforcement Learning for Large Language Model Optimization: Visionary Perspective

Lingxiao Kong, Cong Yang, Oya Deniz Beyan, Zeyd Boukhers

Published: 2025/9/25

Abstract

Multi-Objective Reinforcement Learning (MORL) presents significant challenges and opportunities for optimizing multiple objectives in Large Language Models (LLMs). We introduce a MORL taxonomy and examine the advantages and limitations of various MORL methods when applied to LLM optimization, identifying the need for efficient and flexible approaches that accommodate personalization and the inherent complexities of LLMs and RL. We propose a vision for a MORL benchmarking framework that addresses the effects of different methods on diverse objective relationships. As a future research direction, we focus on meta-policy MORL development, which can improve efficiency and flexibility through its bi-level learning paradigm, and we highlight key research questions and potential solutions for improving LLM performance.
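To make the multi-objective setting concrete, a common baseline in MORL is linear scalarization: a vector of per-objective rewards is collapsed into a single scalar via a preference (weight) vector, which a standard RL algorithm can then optimize. The sketch below is illustrative only and is not the paper's proposed method; the objective names (helpfulness, safety) and the `scalarize` function are hypothetical examples.

```python
import numpy as np

def scalarize(rewards, weights):
    """Linear scalarization: collapse a multi-objective reward
    vector into one scalar using a preference weighting.

    rewards: per-objective rewards for one LLM response,
             e.g. [helpfulness, safety] (names are illustrative).
    weights: non-negative preference vector summing to 1;
             varying it personalizes the trade-off.
    """
    rewards = np.asarray(rewards, dtype=float)
    weights = np.asarray(weights, dtype=float)
    assert weights.min() >= 0.0 and np.isclose(weights.sum(), 1.0)
    return float(rewards @ weights)

# Equal preference between two hypothetical objectives.
r = [0.8, 0.4]          # per-objective rewards for one response
w = [0.5, 0.5]          # preference vector
print(scalarize(r, w))  # 0.6
```

Linear scalarization is simple but fixes one trade-off per training run; the flexibility concerns raised in the abstract stem from needing to retrain (or condition the policy on the weights) whenever the preference vector changes.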