On Global Rates for Regularization Methods based on Secant Derivative Approximations

Coralia Cartis, Sadok Jerad

Published: 2025/9/9

Abstract

An inexact framework for high-order adaptive regularization methods is presented, in which approximations of the $p$th-order tensor based on lower-order derivatives may be used. Between recalculations of the $p$th-order derivative approximation, a high-order secant equation can be used to update the $p$th-order tensor, as proposed in (Welzel 2024), or the approximation can be kept constant in a lazy manner. When refreshing the $p$th-order tensor approximation after $m$ steps, either an exact evaluation of the tensor or a finite-difference approximation with an explicit discretization stepsize can be used. For all the new adaptive regularization variants, we prove an $\mathcal{O}\left( \max[ \epsilon_1^{-(p+1)/p}, \, \epsilon_2^{-(p+1)/(p-1)} ] \right)$ bound on the number of iterations needed to reach an $(\epsilon_1, \epsilon_2)$ second-order stationary point. The number of oracle calls required by each introduced variant is also discussed. When $p=2$, we obtain a second-order method that uses quasi-Newton approximations, with an $\mathcal{O}\left(\max[\epsilon_1^{-3/2}, \, \epsilon_2^{-3}]\right)$ iteration bound for achieving approximate second-order stationarity.
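To illustrate the secant idea in the $p=2$ case, the sketch below shows a classical symmetric rank-one (SR1) quasi-Newton update, which enforces the secant equation $B_{k+1} s_k = y_k$ with $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$. This is a generic quasi-Newton example for intuition only, not the paper's specific update rule; the function name and safeguard tolerance are illustrative choices.

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one secant update: returns B_new with B_new @ s = y.
    Generic quasi-Newton illustration (not the paper's exact scheme)."""
    r = y - B @ s
    denom = r @ s
    # Standard SR1 safeguard: skip the update when the denominator is tiny,
    # since the rank-one correction would be numerically unstable.
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom

# Usage: for the quadratic f(x) = 0.5 x^T A x, the true Hessian is A
# and the gradient difference along a step s is exactly y = A s.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.eye(2)                 # initial Hessian approximation
s = np.array([1.0, -0.5])     # step taken
y = A @ s                     # gradient difference
B_new = sr1_update(B, s, y)
```

By construction, the updated matrix satisfies the secant equation along the most recent step while remaining symmetric; repeating such updates between full Hessian refreshes is the kind of cheap approximation the framework accommodates.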