Graph-Aware Learning Rates for Decentralized Optimization

Aaron Fainman, Stefan Vlaski

Published: 2025/9/18

Abstract

We propose an adaptive step-size rule for decentralized optimization. Choosing a step-size that balances convergence speed and stability is challenging, and the challenge is amplified in the decentralized setting, where agents observe only local (possibly stochastic) gradients and global information (such as smoothness constants) is unavailable. We derive a step-size rule from first principles. The resulting formulation reduces to the well-known Polyak's rule in the single-agent setting and is suitable for use with stochastic gradients. The method is parameter-free, apart from requiring the optimal objective value, which is readily available in many applications. Numerical simulations demonstrate that its performance is comparable to that of an optimally fine-tuned step-size.
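For context, the classical single-agent Polyak step-size that the proposed rule reduces to chooses, at each iterate x_k, the step alpha_k = (f(x_k) - f*) / ||∇f(x_k)||², using the known optimal value f*. The sketch below illustrates this baseline rule on a simple quadratic; it is not the paper's decentralized method, and the function names and problem data are illustrative only.

```python
import numpy as np

def polyak_gd(grad, f, f_star, x0, iters=100, eps=1e-12):
    """Gradient descent with the classical Polyak step-size:
    alpha_k = (f(x_k) - f_star) / ||grad f(x_k)||^2,
    assuming the optimal objective value f_star is known."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        gnorm2 = float(g @ g)
        if gnorm2 < eps:  # (near-)stationary point reached
            break
        alpha = (f(x) - f_star) / gnorm2  # Polyak step-size
        x = x - alpha * g
    return x

# Illustrative problem: f(x) = 0.5 * ||A x - b||^2, whose minimum value
# is f_star = 0 since A is invertible (so f_star is known in advance).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)

x_hat = polyak_gd(grad, f, f_star=0.0, x0=np.zeros(2))
```

Knowing f* (here, zero) is what makes the rule parameter-free: no Lipschitz constant or tuned step-size schedule is required, which is the property the proposed decentralized rule extends to the multi-agent setting.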