LSAM: Asynchronous Distributed Training with Landscape-Smoothed Sharpness-Aware Minimization

Yunfei Teng, Sixin Zhang

Published: 2025/9/3

Abstract

While Sharpness-Aware Minimization (SAM) improves generalization in deep neural networks by minimizing both loss and sharpness, it suffers from inefficiency in distributed large-batch training. We present Landscape-Smoothed SAM (LSAM), a novel optimizer that preserves SAM's generalization advantages while offering superior efficiency. LSAM integrates SAM's adversarial steps with an asynchronous distributed sampling strategy, producing a smoothed sharpness-aware loss landscape for optimization. This design eliminates synchronization bottlenecks, accelerates large-batch convergence, and delivers higher final accuracy compared to data-parallel SAM.
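For context, below is a minimal sketch of the vanilla SAM update that LSAM builds on: an adversarial ascent step along the normalized gradient direction, followed by a descent step using the gradient at the perturbed weights. This reflects only the standard SAM procedure; the asynchronous distributed sampling and landscape smoothing are specific to LSAM and not reproduced here. The function name `sam_step` and the perturbation radius `rho` are illustrative, not taken from the paper.

```python
import torch

def sam_step(model, loss_fn, data, target, optimizer, rho=0.05):
    # First forward/backward: gradients at the current weights.
    loss = loss_fn(model(data), target)
    loss.backward()

    # Adversarial step (SAM's inner maximization): perturb each weight
    # along the normalized gradient direction to reach a nearby
    # high-loss point within an L2 ball of radius rho.
    with torch.no_grad():
        grad_norm = torch.norm(torch.stack([
            p.grad.norm() for p in model.parameters() if p.grad is not None
        ]))
        perturbations = {}
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbations[p] = e

    # Second forward/backward: gradients at the perturbed weights.
    optimizer.zero_grad()
    loss_fn(model(data), target).backward()

    # Restore the original weights, then descend with the SAM gradient.
    with torch.no_grad():
        for p, e in perturbations.items():
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```

In a synchronous data-parallel setting, both backward passes require gradient all-reduces per step, which is the synchronization cost the abstract identifies; LSAM's asynchronous sampling scheme is presented as a way to avoid that bottleneck.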
