Inexact and Stochastic Gradient Optimization Algorithms with Inertia and Hessian Driven Damping

Harsh Choudhary, Jalal Fadili, Vyacheslav Kungurtsev

Published: 2025/09/23

Abstract

In a real Hilbert space setting, we study the convergence properties of an inexact gradient algorithm featuring both viscous and Hessian-driven damping for convex differentiable optimization. In this algorithm, the gradient evaluation can be subject to deterministic or stochastic perturbations. In the deterministic case, we show that under appropriate summability assumptions on the perturbation, our algorithm enjoys fast convergence of the objective values and of the gradients, as well as weak convergence of the iterates toward a minimizer of the objective. In the stochastic case, assuming the perturbation is zero-mean, we can weaken our summability assumptions on the error variance and establish fast convergence of the values both in expectation and almost surely. We also improve the convergence rates from $\mathcal{O}(\cdot)$ to $o(\cdot)$ in the almost sure sense, and prove an almost sure summability property of the gradients, which implies their almost sure fast convergence toward zero. We highlight the trade-off between fast convergence and the admissible regime for the sequence of errors in the gradient computations. We finally report numerical results supporting our findings.
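As a concrete illustration, here is a minimal Python sketch of an inexact inertial gradient method with viscous and Hessian-driven damping in the spirit of the algorithm described above. The update rule follows a standard discretization of inertial dynamics with Hessian-driven damping; the function name `inexact_inertial_hessian_damping`, the parameters `alpha`, `beta`, `s`, and the noise model are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np


def inexact_inertial_hessian_damping(grad, x0, steps=500, s=1e-2,
                                     alpha=3.0, beta=1.0, noise=None):
    # Minimal sketch (not the authors' exact scheme): inertial gradient
    # iterations with viscous damping ~ k/(k+alpha), Hessian-driven damping
    # via differences of successive gradients, and perturbed (inexact)
    # gradient evaluations.
    x_prev = np.asarray(x0, dtype=float).copy()
    x = x_prev.copy()
    g_prev = grad(x_prev) + (noise(x_prev) if noise is not None else 0.0)
    for k in range(1, steps + 1):
        g = grad(x) + (noise(x) if noise is not None else 0.0)
        # extrapolation: viscous term k/(k+alpha), Hessian-driven term via
        # the gradient difference scaled by beta * sqrt(s)
        y = (x + (k / (k + alpha)) * (x - x_prev)
             - beta * np.sqrt(s) * (g - g_prev)
             - (beta * np.sqrt(s) / k) * g_prev)
        x_prev, g_prev = x, g
        # gradient step at the extrapolated point, again with perturbation
        x = y - s * (grad(y) + (noise(y) if noise is not None else 0.0))
    return x


# usage: a simple convex quadratic with zero-mean stochastic gradient errors
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.diag([1.0, 10.0])
    grad_f = lambda x: A @ x  # gradient of f(x) = 0.5 * x^T A x
    x_out = inexact_inertial_hessian_damping(
        grad_f, np.array([5.0, -3.0]),
        noise=lambda x: 1e-3 * rng.standard_normal(x.shape))
    print("approximate minimizer:", x_out)
```

Note that the Hessian-driven damping enters only through differences of successive (inexact) gradients, so no second-order information is ever evaluated and the perturbations affect only first-order oracle calls.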
