Simple Optimizers for Convex Aligned Multi-Objective Optimization

Ben Kretzu, Karen Ullrich, Yonathan Efroni

Published: 2025/9/6

Abstract

It is widely recognized in modern machine learning practice that access to a diverse set of tasks can enhance performance across those tasks. This observation suggests that, unlike in general multi-objective optimization, the objectives in many real-world settings may not be inherently conflicting. To address this, prior work introduced the Aligned Multi-Objective Optimization (AMOO) framework and proposed gradient-based algorithms with provable convergence guarantees. However, existing analysis relies on strong assumptions, particularly strong convexity, which implies the existence of a unique optimal solution. In this work, we relax this assumption and study gradient-descent algorithms for convex AMOO under standard smoothness or Lipschitz continuity conditions, assumptions more consistent with those used in deep learning practice. This generalization requires new analytical tools and metrics to characterize convergence in the convex AMOO setting. We develop such tools, propose scalable algorithms for convex AMOO, and establish their convergence guarantees. Additionally, we prove a novel lower bound that demonstrates the suboptimality of naive equal-weight approaches compared to our methods.
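For intuition, the following Python sketch shows the naive equal-weight baseline the abstract refers to: gradient descent using the uniform average of the gradients of m aligned convex objectives. This is an illustrative assumption of what "equal-weight" means here, not the paper's proposed algorithm, and all names in it are hypothetical.

    # Minimal sketch (illustrative, not the paper's method): equal-weight
    # gradient descent on m aligned convex objectives f_1, ..., f_m.
    import numpy as np

    def equal_weight_gd(grads, x0, eta=0.05, steps=200):
        """grads : list of callables; grads[i](x) returns the gradient of f_i at x.
        x0    : initial point (np.ndarray); eta : step size; steps : iterations."""
        x = x0.copy()
        for _ in range(steps):
            # Uniform weights w_i = 1/m on every objective's gradient.
            g = sum(gi(x) for gi in grads) / len(grads)
            x = x - eta * g
        return x

    # Example: two aligned quadratics sharing the minimizer x* = 0,
    # so the objectives agree on the optimum (the AMOO setting's premise).
    grads = [lambda x: 2 * x, lambda x: 6 * x]  # f1(x) = ||x||^2, f2(x) = 3||x||^2
    x = equal_weight_gd(grads, x0=np.ones(3))
    print(x)  # approaches the shared minimizer 0

The paper's lower bound shows that this uniform weighting can converge suboptimally relative to methods that choose the per-objective weights adaptively.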
