Asymptotic stability properties and a priori bounds for Adam and other gradient descent optimization methods

Steffen Dereich, Robin Graeber, Arnulf Jentzen, Adrian Riekert

Published: 2025/8/27

Abstract

Gradient descent (GD) based optimization methods are nowadays the standard tools to train deep neural networks in artificial intelligence systems. In deep learning optimization procedures, the employed optimizer is often not the standard GD method; instead, suitable adaptive and accelerated variants of standard GD (including the momentum and the root mean square propagation (RMSprop) optimizers) are considered. The adaptive moment estimation (Adam) optimizer proposed in 2014 by Kingma & Ba is presumably the most popular variant of such adaptive and accelerated GD based optimization methods. Despite the popularity of such sophisticated optimization methods, it remains a fundamental open research problem to provide a rigorous mathematical analysis for such accelerated and adaptive optimization methods. In particular, it remains open to establish boundedness of the Adam optimizer. In this work we solve this problem in the case of a simple class of quadratic strongly convex stochastic optimization problems. Specifically, for the considered class of stochastic optimization problems we reveal a priori bounds for momentum, RMSprop, and Adam. In particular, we prove for the considered class of strongly convex stochastic optimization problems, for the first time, that Adam does not explode but stays bounded for any choice of the learning rates. In this work we also introduce certain stability concepts - such as the notion of the stability region - for deep learning optimizers, and we discover that among standard GD, momentum, RMSprop, and Adam, the Adam optimizer is the only one that achieves the optimal higher order convergence speed and also has the maximal stability region.
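For reference, the abstract's statements concern the standard Adam recursion of Kingma & Ba (2014); a minimal sketch of that update rule is recalled below in our own notation (learning rate $\alpha$, momentum parameters $\beta_1, \beta_2$, regularization constant $\varepsilon$, and stochastic gradient $g_n$), which is not part of the original abstract:

$$
\begin{aligned}
m_n &= \beta_1 m_{n-1} + (1-\beta_1)\, g_n, \\
v_n &= \beta_2 v_{n-1} + (1-\beta_2)\, g_n^{\,2}, \\
\hat m_n &= \frac{m_n}{1-\beta_1^{\,n}}, \qquad \hat v_n = \frac{v_n}{1-\beta_2^{\,n}}, \\
\theta_n &= \theta_{n-1} - \alpha\, \frac{\hat m_n}{\sqrt{\hat v_n} + \varepsilon},
\end{aligned}
$$

where $g_n^{\,2}$ and the quotient are understood componentwise. The boundedness result announced in the abstract refers to the iterates $(\theta_n)_{n \in \mathbb{N}}$ of this scheme on the considered class of quadratic strongly convex stochastic optimization problems.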
