Data-Driven Performance Guarantees for Parametric Optimization Problems
Jingyi Huang, Paul Goulart, Kostas Margellos
Published: 2025/6/30
Abstract
We propose a data-driven method to establish probabilistic performance guarantees for parametric optimization problems solved via iterative algorithms. Our approach addresses two key challenges: providing convergence guarantees that characterize the worst-case number of iterations required to achieve a predefined tolerance, and upper bounding a performance metric after a fixed number of iterations. These guarantees are particularly useful for online optimization problems with limited computational time, where existing performance guarantees are often unavailable or unduly conservative. We formulate the convergence analysis problem as a scenario optimization program based on a finite set of sampled parameter instances. Leveraging tools from scenario optimization theory enables us to derive probabilistic guarantees on the number of iterations needed to meet a given tolerance level. Using recent advancements in scenario optimization, we further introduce a relaxation approach to trade off the number of iterations against the risk of violating the convergence criterion threshold. Additionally, we analyze the trade-off between solution accuracy and time efficiency for fixed-iteration optimization problems by casting them as scenario optimization programs. Numerical simulations demonstrate the efficacy of our approach in providing reliable probabilistic convergence guarantees and in evaluating the trade-off between solution accuracy and computational cost.
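The core idea can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's implementation): we sample N parameter instances, record the number of iterations a simple iterative solver needs to reach a tolerance on each instance, take the worst case as the scenario solution, and attach the standard scenario-theory violation bound for a single decision variable, eps = 1 - beta**(1/N). The toy problem, step size, and sampling range are all assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def iterations_to_tol(theta, tol=1e-6, max_iter=10_000):
    """Iterations of gradient descent on the toy parametric problem
    f(x) = 0.5 * theta * x**2 until |x| <= tol (hypothetical stand-in
    for the parametric problems considered in the paper)."""
    x, step = 1.0, 0.5 / theta  # step chosen so the iterate halves each step
    for k in range(max_iter):
        if abs(x) <= tol:
            return k
        x -= step * (theta * x)  # gradient step: grad f(x) = theta * x
    return max_iter

# Scenario program: draw N parameter samples and take the worst-case
# iteration count K* over the sampled instances.
N = 200
thetas = rng.uniform(1.0, 10.0, size=N)
K_star = max(iterations_to_tol(t) for t in thetas)

# Scenario bound with one decision variable (K itself): with confidence
# 1 - beta, the probability that a freshly drawn parameter requires more
# than K_star iterations is at most eps = 1 - beta**(1/N).
beta = 1e-3
eps = 1.0 - beta ** (1.0 / N)
print(f"K* = {K_star}, violation level eps = {eps:.4f}")
```

Increasing N tightens eps at the cost of more offline solves, which is exactly the accuracy/computation trade-off the paper quantifies; the relaxation approach mentioned in the abstract would additionally discard some worst-case samples to reduce K* while increasing the certified risk level.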