An Optimistic Gradient Tracking Method for Distributed Minimax Optimization
Yan Huang, Jinming Xu, Jiming Chen, Karl Henrik Johansson
Published: 2025/8/29
Abstract
This paper studies the distributed minimax optimization problem over networks. To enhance convergence performance, we propose a distributed optimistic gradient tracking method, termed DOGT, in which each node solves a local surrogate problem that exploits the similarity among the local objective functions, thereby approximating a centralized optimistic gradient method. Leveraging a Lyapunov-based analysis, we prove that DOGT achieves linear convergence to the optimal solution for strongly convex-strongly concave objective functions while remaining robust to heterogeneity among the local objectives. Moreover, by integrating an accelerated consensus protocol, the accelerated DOGT (ADOGT) algorithm achieves an optimal convergence rate of $\mathcal{O} \left( \kappa \log \left( \epsilon ^{-1} \right) \right)$ and communication complexity of $\mathcal{O} \left( \kappa \log \left( \epsilon ^{-1} \right) /\sqrt{1-\sqrt{\rho _W}} \right)$ for a suboptimality level of $\epsilon>0$, where $\kappa$ is the condition number of the objective function and $\rho_W$ characterizes the spectral gap of the network. Numerical experiments illustrate the effectiveness of the proposed algorithms.
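For context, a minimal sketch of the centralized optimistic gradient (OGDA) update that DOGT emulates in a distributed fashion; the step size $\eta$ and notation here are illustrative rather than the exact DOGT iteration given in the paper:
\begin{align*}
x_{t+1} &= x_t - \eta \left( 2\nabla_x f(x_t, y_t) - \nabla_x f(x_{t-1}, y_{t-1}) \right), \\
y_{t+1} &= y_t + \eta \left( 2\nabla_y f(x_t, y_t) - \nabla_y f(x_{t-1}, y_{t-1}) \right),
\end{align*}
where the "optimistic" correction term $\nabla f(x_t,y_t) - \nabla f(x_{t-1},y_{t-1})$ anticipates the next gradient and is what enables linear convergence in the strongly convex-strongly concave setting.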