Subgradient Methods for Nonsmooth Convex Functions with Adversarial Errors
Martijn Gösgens, Bart P. G. Van Parys
Published: 2025/10/3
Abstract
We consider minimizing nonsmooth convex functions with bounded subgradients. However, instead of observing a subgradient directly at each step $k \in \{0, \dots, N-1\}$, the optimizer receives an adversarially corrupted subgradient. The adversary's power is limited by a finite corruption budget, but the adversary may strategically time its perturbations. We show that the classical averaged subgradient descent method, which is optimal in the noiseless case, has worst-case performance that deteriorates quadratically with the corruption budget. Using performance optimization programming, we (i) construct three novel subgradient descent methods and analyze their performance, and (ii) propose a novel lower bound on the worst-case suboptimality gap of any first-order method satisfying a mild cone condition proposed by Fatkhullin et al. (2025). The worst-case performance of each of our methods degrades only linearly with the corruption budget. Furthermore, we show that the relative difference between their worst-case suboptimality gap and our lower bound decays as $\mathcal{O}(\log(N)/N)$, so that all three proposed subgradient descent methods are near-optimal. Our methods achieve this near-optimal performance without the need for momentum or averaging. This suggests that these techniques are not necessary in this context, in line with recent results by Zamani and Glineur (2025).
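To make the problem setting concrete, the following is a minimal sketch of the classical averaged subgradient method (the baseline discussed in the abstract, not one of the paper's proposed methods) run against a corrupted subgradient oracle. The step size $R/(L\sqrt{N})$, the oracle interface, and the sign-flipping adversary are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def averaged_subgradient_descent(subgradient_oracle, x0, N, L, R):
    """Classical averaged subgradient method: run N fixed-step-size
    subgradient steps and return the average of the iterates.

    subgradient_oracle(k, x) returns the (possibly corrupted) subgradient
    reported to the optimizer at step k; L bounds the true subgradient
    norms and R bounds the distance from x0 to a minimizer.
    """
    x = np.array(x0, dtype=float)
    iterates = [x.copy()]
    step = R / (L * np.sqrt(N))  # standard fixed-horizon step size (noiseless case)
    for k in range(N):
        g = np.asarray(subgradient_oracle(k, x), dtype=float)
        x = x - step * g
        iterates.append(x.copy())
    return np.mean(iterates, axis=0)

# Hypothetical corruption model for f(x) = |x|: the adversary flips the
# sign of the reported subgradient on a few early steps, spending its
# (finite) corruption budget all at once.
def corrupted_oracle(k, x, corrupted_steps=3):
    g = np.sign(x)                 # a valid subgradient of |x| (0 at the origin)
    if k < corrupted_steps:        # adversary strategically times its perturbations
        return -g
    return g

x_hat = averaged_subgradient_descent(corrupted_oracle, x0=[1.0], N=100, L=1.0, R=1.0)
print(x_hat)
```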