Two-point Random Gradient-free Methods for Model-free Feedback Optimization
Amir Mehrnoosh, Gianluca Bianchin
Published: 2025/9/15
Abstract
Feedback optimization has emerged as a promising approach for optimizing the steady-state operation of dynamical systems while requiring minimal modeling effort. Unfortunately, most existing feedback optimization methods rely on knowledge of the plant dynamics, which may be difficult to obtain or estimate in practice. In this paper, we introduce a novel randomized two-point gradient-free feedback optimization method, inspired by zeroth-order optimization techniques. Our method relies on function evaluations at two points to estimate the gradient and update the control input in real time. We provide convergence guarantees and show that our method computes an $\epsilon$-stationary point for smooth, nonconvex functions at a rate of $\mathcal{O}(\epsilon^{-1})$, in line with existing results for two-point gradient-free methods in static optimization. Simulation results validate our findings.
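To illustrate the idea described in the abstract, the following is a minimal sketch of a two-point randomized zeroth-order gradient estimate driving a feedback-optimization-style input update. The plant map, cost function, step size, and smoothing radius are placeholder assumptions for illustration only; they are not the paper's actual algorithm, analysis, or tuning.

```python
import numpy as np

def plant_steady_state(u):
    # Placeholder steady-state map y = h(u); in a model-free setting this map
    # is unknown to the controller and only accessible through measurements.
    A = np.array([[0.5, 0.1], [0.0, 0.3]])
    return A @ u

def cost(u, y):
    # Placeholder steady-state cost to be minimized (assumed, smooth).
    return 0.5 * np.dot(u, u) + 0.5 * np.dot(y, y)

rng = np.random.default_rng(0)
n = 2               # input dimension (assumed)
u = np.ones(n)      # initial control input
eta = 0.05          # step size (assumed)
delta = 1e-2        # smoothing radius (assumed)

for k in range(2000):
    # Draw a random direction on the unit sphere.
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)

    # Two function evaluations at perturbed inputs, obtained through the plant.
    f_plus = cost(u + delta * v, plant_steady_state(u + delta * v))
    f_minus = cost(u - delta * v, plant_steady_state(u - delta * v))

    # Two-point randomized gradient estimate.
    g = (n / (2.0 * delta)) * (f_plus - f_minus) * v

    # Gradient-descent-style update of the control input.
    u = u - eta * g

print("final input:", u)
```

Running this sketch drives the input toward a stationary point of the composed cost using only the two cost evaluations per iteration, which is the mechanism the abstract refers to, here shown under simplified static-plant assumptions.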