Adaptive Gradient Normalization and Independent Sampling for (Stochastic) Generalized-Smooth Optimization

Yufeng Yang, Erin Tripp, Yifan Sun, Shaofeng Zou, Yi Zhou

Published: 2024/10/17

Abstract

Recent studies have shown that many nonconvex machine learning problems satisfy a generalized-smooth condition that extends beyond traditional smooth nonconvex optimization. However, existing algorithms are not fully adapted to such generalized-smooth nonconvex geometry and encounter significant technical limitations in their convergence analysis. In this work, we first analyze the convergence of adaptively normalized gradient descent under function geometries characterized by generalized-smoothness and the generalized Polyak-{\L}ojasiewicz (P{\L}) condition, revealing the advantage of adaptive gradient normalization. Our results provide theoretical insights into adaptive normalization across various scenarios. For stochastic generalized-smooth nonconvex optimization, we propose the \textbf{I}ndependent-\textbf{A}daptively \textbf{N}ormalized \textbf{S}tochastic \textbf{G}radient \textbf{D}escent algorithm, which leverages adaptive gradient normalization, independent sampling, and gradient clipping to achieve an $\mathcal{O}(\epsilon^{-4})$ sample complexity under relaxed noise assumptions. Experiments on large-scale nonconvex generalized-smooth problems demonstrate the fast convergence of our algorithm.
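The abstract describes the algorithm only at the level of its ingredients (adaptive normalization, independent sampling, gradient clipping), so the following is a minimal sketch of how such an update loop could look, not the authors' exact method. The normalization form $g/(\|g\| + \lambda)$, the clipping rule, and all names and hyperparameters (`grad_batch`, `lam`, `clip_level`, the quartic toy objective) are illustrative assumptions; the key structural point shown is that the update direction and its normalizer come from two independently sampled mini-batch gradients.

```python
import numpy as np

def clip_by_norm(g, clip_level):
    """Scale g so that its Euclidean norm is at most clip_level."""
    norm = np.linalg.norm(g)
    return g if norm <= clip_level else g * (clip_level / norm)

def independent_normalized_sgd(grad_batch, x0, lr=0.1, lam=1e-3,
                               clip_level=10.0, n_iters=1000):
    """Sketch of an independently-normalized, clipped stochastic gradient loop.

    grad_batch(x) should return a fresh stochastic gradient estimate at x
    (e.g., from a newly sampled mini-batch); calling it twice per iteration
    decouples the update direction from its normalization factor.
    """
    x = x0.copy()
    for _ in range(n_iters):
        g_dir = clip_by_norm(grad_batch(x), clip_level)   # batch 1: direction
        g_nrm = clip_by_norm(grad_batch(x), clip_level)   # batch 2: normalizer
        # Adaptive normalization: the effective step size shrinks where the
        # (independently estimated) gradient is large, which is the regime
        # where generalized-smooth objectives have rapidly growing curvature.
        x = x - lr * g_dir / (np.linalg.norm(g_nrm) + lam)
    return x

# Toy usage on a generalized-smooth objective: f(x) = ||x||^4 is not globally
# Lipschitz-smooth (its Hessian grows with ||x||) but fits the generalized-
# smooth setting; we feed noisy gradients to mimic the stochastic case.
if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def grad_batch(x):
        return 4.0 * np.linalg.norm(x) ** 2 * x + 0.1 * rng.standard_normal(x.shape)

    x_final = independent_normalized_sgd(grad_batch, x0=2.0 * np.ones(10))
    print("final iterate norm:", np.linalg.norm(x_final))
```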
