Simple linesearch-free first-order methods for nonconvex optimization
Shotaro Yagishita, Masaru Ito
Published: 2025/9/18
Abstract
This paper presents an auto-conditioned proximal gradient method for nonconvex optimization. The method determines the stepsize from an estimate of the local curvature and requires neither prior knowledge of problem parameters nor any linesearch procedure. Its convergence analysis is carried out in a simple manner without assuming convexity, unlike previous studies. We also provide a convergence analysis under the Kurdyka--\L ojasiewicz property, establish adaptivity to weak smoothness, and extend the method to the Bregman proximal gradient setting. Furthermore, the auto-conditioned stepsize strategy is applied to the conditional gradient (Frank--Wolfe) method and the Riemannian gradient method.
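To make the idea of a linesearch-free, curvature-driven stepsize concrete, here is a minimal Python sketch of a proximal gradient loop in that spirit. The secant-type curvature estimate, the function names (`grad_f`, `prox_g`), and all parameter choices are illustrative assumptions, not the paper's exact auto-conditioned rule.

```python
import numpy as np

def prox_grad_autocond(grad_f, prox_g, x0, L0=1.0, max_iter=500, tol=1e-8):
    """Illustrative sketch (not the paper's method): a proximal gradient loop
    whose stepsize 1/L is driven by a local curvature estimate instead of a
    linesearch or a known Lipschitz constant."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad_f(x_prev)
    L = L0                                        # running curvature estimate
    x = prox_g(x_prev - g_prev / L, 1.0 / L)      # first step with the initial guess L0
    for _ in range(max_iter):
        g = grad_f(x)
        dx, dg = x - x_prev, g - g_prev
        if np.linalg.norm(dx) > 0:
            # secant-type local curvature along the last step (assumed update rule)
            L = max(L, np.linalg.norm(dg) / np.linalg.norm(dx))
        x_prev, g_prev = x, g
        x = prox_g(x - g / L, 1.0 / L)            # proximal gradient step with stepsize 1/L
        if np.linalg.norm(x - x_prev) <= tol:
            break
    return x

# Example use on a toy l1-regularized least squares problem (hypothetical data):
# A, b, lam = np.random.randn(20, 10), np.random.randn(20), 0.1
# grad_f = lambda x: A.T @ (A @ x - b)
# prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
# x_star = prox_grad_autocond(grad_f, prox_g, np.zeros(10))
```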