Stochastic versus Deterministic in Stochastic Gradient Descent
Runze Li, Jintao Xu, Wenxun Xing
Published: 2025/9/3
Abstract
This paper considers mini-batch stochastic gradient descent (SGD) for a structured minimization problem involving a finite-sum function, whose gradient is stochastically approximated, and an independent term, whose gradient is computed deterministically. We focus on the stochastic versus deterministic behavior of mini-batch SGD in this setting. A convergence analysis is provided that captures the different roles of these two parts. Linear convergence of the algorithm to a neighborhood of the minimizer is established under suitable smoothness and convexity assumptions. The step size, the convergence rate, and the radius of the convergence region depend asymmetrically on the characteristics of the two components, revealing the distinct impacts of stochastic approximation versus deterministic computation in mini-batch SGD. Moreover, a faster convergence rate can be obtained when the independent term endows the objective function with sufficient strong convexity. In addition, the convergence rate of the algorithm in expectation approaches that of classical gradient descent as the batch size increases. Numerical experiments are reported to support the theoretical analysis.
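The following is a minimal sketch, not taken from the paper, of the kind of update the abstract describes: the objective is assumed to have the form (1/n) Σ_i f_i(x) + r(x), the finite-sum gradient is estimated from a sampled mini-batch, and the gradient of the independent term r is evaluated exactly. The names grad_f_i and grad_r, the constant step size, and sampling without replacement are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def minibatch_sgd_with_deterministic_term(
    grad_f_i,    # grad_f_i(i, x): gradient of the i-th finite-sum component (assumed interface)
    grad_r,      # grad_r(x): exact gradient of the independent term (assumed interface)
    x0,          # initial point
    n,           # number of finite-sum components
    batch_size,  # mini-batch size
    step_size,   # constant step size (the paper's step-size choice may differ)
    num_iters,
    rng=None,
):
    """Sketch of mini-batch SGD where only the finite-sum part is
    stochastically approximated and the independent term is exact."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(num_iters):
        # Stochastic part: average component gradients over a random mini-batch.
        batch = rng.choice(n, size=batch_size, replace=False)
        g_stoch = np.mean([grad_f_i(i, x) for i in batch], axis=0)
        # Deterministic part: exact gradient of the independent term.
        g_det = grad_r(x)
        x -= step_size * (g_stoch + g_det)
    return x
```

As a usage illustration, one could take f_i(x) = (a_i^T x - b_i)^2 / 2 for data pairs (a_i, b_i) and r(x) = (mu/2)||x||^2, so that r supplies the strong convexity discussed in the abstract while the data-fitting term is handled stochastically.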