Learning Acceleration Algorithms for Fast Parametric Convex Optimization with Certified Robustness
Rajiv Sambharya, Jinho Bok, Nikolai Matni, George Pappas
Published: 2025/7/22
Abstract
We develop a machine-learning framework to learn hyperparameter sequences for accelerated first-order methods (e.g., the step-size and momentum sequences in accelerated gradient descent) to quickly solve parametric convex optimization problems with certified robustness. We obtain a strong form of robustness guarantee -- certification of worst-case performance over all parameters within a set after a given number of iterations -- through regularization-based training. The regularization term is derived from the semidefinite-programming-based performance estimation problem (PEP) framework, in which the hyperparameters appear as problem data. We show how to use gradient-based training to learn the hyperparameters for several first-order methods: accelerated versions of gradient descent, proximal gradient descent, and the alternating direction method of multipliers. Through numerical examples from signal processing, control, and statistics, we demonstrate that the quality of the solution can be dramatically improved within a fixed budget of iterations, while strong robustness guarantees are maintained. Notably, our approach is highly data-efficient: we use only ten training instances in all of the numerical examples.
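To make the high-level idea concrete, the sketch below illustrates the core training loop described in the abstract: unrolling a fixed budget of accelerated-gradient-descent iterations with learnable per-iteration step sizes and momentum terms, and training them by gradient descent on the average final objective over a small set of parametric instances. This is a minimal illustration under assumed problem data (a least-squares family where the right-hand side is the varying parameter), not the authors' implementation; in particular, the PEP-based robustness regularizer is omitted, and all sizes and names (K, num_train, etc.) are illustrative.

```python
# Minimal sketch: learning step-size and momentum sequences for accelerated
# gradient descent on a parametric family  min_x 0.5 * ||A x - b||^2, where b
# is the varying parameter. The certified-robustness (PEP) regularizer from
# the paper is NOT included here; this only shows the unrolled training idea.
import jax
import jax.numpy as jnp

n, d, K, num_train = 20, 10, 15, 10           # problem size, iteration budget, training instances
A = jax.random.normal(jax.random.PRNGKey(0), (n, d))
bs = jax.random.normal(jax.random.PRNGKey(1), (num_train, n))  # parametric data (ten instances)

def f(x, b):
    r = A @ x - b
    return 0.5 * jnp.dot(r, r)

grad_f = jax.grad(f)

def run_agd(hypers, b):
    """Unroll K steps of accelerated gradient descent with learned hyperparameters."""
    alphas, betas = hypers                    # per-iteration step sizes and momentum terms
    x_prev = x = jnp.zeros(d)
    for k in range(K):
        y = x + betas[k] * (x - x_prev)       # momentum (extrapolation) step
        x_prev, x = x, y - alphas[k] * grad_f(y, b)  # gradient step
    return f(x, b)                            # objective value after the iteration budget

def training_loss(hypers):
    # Average final objective over the small training set of parameters.
    return jnp.mean(jax.vmap(lambda b: run_agd(hypers, b))(bs))

# Gradient-based training of the hyperparameter sequences.
L = jnp.linalg.norm(A, ord=2) ** 2            # smoothness constant, used only for initialization
hypers = (jnp.full(K, 1.0 / L), jnp.full(K, 0.5))
lr = 1e-3
loss_grad = jax.jit(jax.grad(training_loss))
for step in range(200):
    g = loss_grad(hypers)
    hypers = jax.tree_util.tree_map(lambda h, gh: h - lr * gh, hypers, g)

print("trained average loss:", training_loss(hypers))
```

In the paper's approach, a regularization term derived from the PEP semidefinite program would be added to `training_loss` so that the learned sequences also carry a certified worst-case guarantee over the parameter set.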