A Riemannian AdaGrad-Norm Method
Glaydston de C. Bento, Geovani N. Grapiglia, Mauricio S. Louzeiro, Daoping Zhang
Published: 2025/9/24
Abstract
We propose a manifold AdaGrad-Norm method (\textsc{MAdaGrad}), which extends the norm version of AdaGrad (AdaGrad-Norm) to Riemannian optimization. In contrast to line-search schemes, which may require several exponential map computations per iteration, \textsc{MAdaGrad} requires only one. Assuming the objective function $f$ has a Lipschitz continuous Riemannian gradient, we show that the method requires at most $\mathcal{O}(\varepsilon^{-2})$ iterations to compute a point $x$ such that $\|\operatorname{grad} f(x)\|\leq \varepsilon$. Under the additional assumptions that $f$ is geodesically convex and the manifold has sectional curvature bounded from below, we show that the method takes at most $\mathcal{O}(\varepsilon^{-1})$ iterations to find $x$ such that $f(x)-f_{\mathrm{low}}\leq\varepsilon$, where $f_{\mathrm{low}}$ is the optimal value. Moreover, if $f$ satisfies the Polyak--\L{}ojasiewicz (PL) condition globally on the manifold, we establish a complexity bound of $\mathcal{O}(\log(\varepsilon^{-1}))$, provided that the norm of the initial Riemannian gradient is sufficiently large. For the manifold of symmetric positive definite matrices, we construct a family of nonconvex functions satisfying the PL condition. Numerical experiments illustrate the remarkable performance of \textsc{MAdaGrad} compared with Riemannian Steepest Descent equipped with Armijo line search.
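The abstract does not spell out the update rule, but the combination it describes is the natural one: an AdaGrad-Norm step size, driven by the accumulated squared norms of the Riemannian gradients, applied through a single exponential map per iteration. Below is a minimal sketch under that assumption, illustrated on the unit sphere with a Rayleigh-quotient objective; the parameter names (`eta`, `b0`), the exact placement of the accumulator update, and the test problem are illustrative choices, not the paper's definitive algorithm.

```python
import numpy as np

# Hypothetical sketch of a Riemannian AdaGrad-Norm step: accumulate squared
# Riemannian gradient norms, scale the step size by their inverse square root,
# and take one exponential-map step per iteration.  Illustrated on the unit
# sphere with f(x) = x^T A x; all names and defaults are assumptions.

def sphere_exp(x, v):
    """Exponential map on the unit sphere: Exp_x(v)."""
    nv = np.linalg.norm(v)
    if nv < 1e-16:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def rayleigh_grad(A, x):
    """Riemannian gradient of f(x) = x^T A x on the sphere (tangent projection)."""
    ax = A @ x
    return 2.0 * (ax - (x @ ax) * x)

def riemannian_adagrad_norm(A, x0, eta=1.0, b0=1e-2, max_iter=500, tol=1e-6):
    x = x0 / np.linalg.norm(x0)
    acc = b0 ** 2                      # running sum b0^2 + sum_j ||grad f(x_j)||^2
    for _ in range(max_iter):
        g = rayleigh_grad(A, x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break
        acc += gnorm ** 2
        alpha = eta / np.sqrt(acc)     # AdaGrad-Norm step size
        x = sphere_exp(x, -alpha * g)  # single exponential map per iteration
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((20, 20))
    A = (M + M.T) / 2.0
    x = riemannian_adagrad_norm(A, rng.standard_normal(20))
    print("f(x) =", x @ A @ x, "  smallest eigenvalue =", np.linalg.eigvalsh(A)[0])
```

The sketch mirrors the contrast drawn in the abstract: each iteration evaluates one Riemannian gradient and one exponential map, with no inner line-search loop, since the step size adapts automatically through the accumulated gradient norms.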