Two Innovations in Inexact Augmented Lagrangian Methods for Convex Optimization
Jonathan Eckstein, Chang Yu
Published: 2025/3/14
Abstract
This paper presents two new techniques relating to the inexact solution of subproblems in augmented Lagrangian methods for convex programming. The first combines a relative error criterion for solution of the subproblems with over- or under-relaxation of the multiplier update step. The resulting analysis enables a new kind of inexact augmented Lagrangian method that adapts the amount of multiplier step relaxation to the accuracy of the subproblem solution, subject to a viability test employing the discriminant of a certain quadratic function. The second innovation involves the solution of augmented Lagrangian subproblems for problems posed in standard Fenchel-Rockafellar form. We show that applying alternating minimization to this subproblem, as in the first two steps of the ADMM, is equivalent to executing the classical proximal gradient method on a dual formulation of the subproblem. By substituting more sophisticated variants of the proximal gradient method for the classical one, it is possible to construct new ADMM-like methods with better empirical performance than ordinary alternating minimization within an inexact augmented Lagrangian framework. The paper concludes by describing computational experiments that explore the use of these two innovations, both separately and jointly, to solve LASSO problems.
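To make the setting concrete, the following is a minimal sketch of the kind of scheme the abstract describes: a relaxed ADMM for the LASSO problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, in which two alternating-minimization steps on the augmented Lagrangian are followed by an over- or under-relaxed multiplier update. The penalty parameter c, the fixed relaxation factor rho, and the fixed iteration count are illustrative assumptions; the paper's adaptive relaxation rule, relative error criterion, discriminant-based viability test, and accelerated inner solvers are not reproduced here.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def relaxed_admm_lasso(A, b, lam, c=1.0, rho=1.5, iters=200):
    """Illustrative relaxed ADMM for the LASSO problem
        min_x  0.5*||Ax - b||^2 + lam*||x||_1,
    split as  min  0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.

    The x- and z-steps are the alternating minimization of the augmented
    Lagrangian; rho in (0, 2) over- or under-relaxes the multiplier
    update.  This is a schematic sketch, not the paper's algorithm.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    p = np.zeros(n)                       # multiplier for x - z = 0
    # Factor (A^T A + c I) once; it is reused in every x-step.
    L = np.linalg.cholesky(A.T @ A + c * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        # x-step: minimize 0.5*||Ax - b||^2 + p^T x + (c/2)*||x - z||^2,
        # i.e. solve (A^T A + c I) x = A^T b + c z - p.
        rhs = Atb + c * z - p
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-step: minimize lam*||z||_1 - p^T z + (c/2)*||x - z||^2.
        z = soft_threshold(x + p / c, lam / c)
        # Relaxed multiplier update (rho = 1 recovers standard ADMM).
        p = p + rho * c * (x - z)
    return x
```

With rho = 1 this reduces to the standard multiplier update; the abstract's first innovation concerns how far the relaxation may safely deviate from 1 when the subproblems are solved only approximately.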