On Tackling High-Dimensional Nonconvex Stochastic Optimization via Stochastic First-Order Methods with Non-smooth Proximal Terms and Variance Reduction

Yue Xie, Jiawen Bi, Hongcheng Liu

Published: 2025/9/17

Abstract

When a nonconvex problem is complicated by stochasticity, the sample complexity of stochastic first-order methods may depend linearly on the problem dimension, which is undesirable for large-scale problems. To alleviate this linear dependence, we adopt non-Euclidean settings and propose two choices of nonsmooth proximal terms for the stochastic gradient steps. This approach yields a stronger convergence metric, only incremental computational overhead, and potentially dimension-insensitive sample complexity. We also consider acceleration through variance reduction, which achieves near-optimal sample complexity and is, to our knowledge, state-of-the-art in the non-Euclidean ($\ell_1/\ell_\infty$) setting. Since the use of nonsmooth proximal terms is unconventional, the convergence analysis deviates substantially from approaches in Euclidean settings or those employing Bregman divergence, providing tools for analyzing other non-Euclidean choices of distance functions. Efficient resolution of the subproblem in various scenarios is also discussed and simulated. We illustrate the dimension-insensitive property of the proposed methods via preliminary numerical experiments.
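To give a concrete (if simplified) flavor of a nonsmooth proximal term in the $\ell_1$ setting, consider the subproblem $\min_d \langle g, d\rangle + \frac{1}{2\eta}\|d\|_1^2$, where $g$ is a stochastic gradient and $\eta$ a step size. This specific formulation and the function name below are illustrative assumptions, not necessarily the paper's exact construction; the point is that such a subproblem admits a closed-form solution: all the movement concentrates on a coordinate attaining $\|g\|_\infty$, yielding a greedy, coordinate-sparse update.

```python
import numpy as np

def l1_prox_step(x, g, eta):
    """Illustrative sketch (not the paper's exact update): one step
    x+ = x + argmin_d <g, d> + ||d||_1^2 / (2*eta).
    For fixed ||d||_1 = t, the linear term is minimized by putting all
    mass on a coordinate with the largest |g_i|; optimizing over t then
    gives t = eta * ||g||_inf, so the minimizer is a single-coordinate
    step of size eta * ||g||_inf against the sign of that entry."""
    j = int(np.argmax(np.abs(g)))        # coordinate attaining ||g||_inf
    d = np.zeros_like(x, dtype=float)
    d[j] = -eta * abs(g[j]) * np.sign(g[j])
    return x + d
```

One consequence of this choice, under the assumptions above, is that each iteration touches a single coordinate, so the per-step cost beyond the gradient evaluation is modest, consistent with the incremental computational overhead mentioned in the abstract.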