Robust Mean Field Social Control: A Unified Reinforcement Learning Framework
Zhenhui Xu, Jiayu Chen, Bing-Chang Wang, Yuhu Wu, Tielong Shen
Published: 2025/2/27
Abstract
This paper studies linear quadratic Gaussian robust mean field social control problems in the presence of multiplicative noise. The objective is to compute asymptotically optimal decentralized strategies without requiring full prior knowledge of the agents' dynamics. The primary challenges lie in solving an indefinite stochastic algebraic Riccati equation for the feedback gains and an indefinite algebraic Riccati equation for the feedforward gains. To overcome these challenges, we first propose a unified dual-loop iterative framework that handles both indefinite Riccati-type equations, and we provide rigorous convergence proofs for both the outer-loop and inner-loop iterations. Second, since estimation and modeling errors can bias the iterations, we verify the robustness of the proposed algorithm using the small-disturbance input-to-state stability technique; convergence to a neighborhood of the optimal solution is thus guaranteed even in the presence of disturbances. Finally, to remove the requirement of precise knowledge of the agents' dynamics, we employ the integral reinforcement learning technique to develop a data-driven method within the dual-loop iterative framework. A numerical example is provided to demonstrate the effectiveness of the proposed algorithm.
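To make the dual-loop idea concrete, the following is a minimal numerical sketch, not the paper's algorithm: it assumes agent dynamics of the form dx = (Ax + Bu)dt + (Cx + Du)dW with cost weights Q and R, an initial mean-square stabilizing gain K0, and a definite (standard) stochastic algebraic Riccati equation. The outer loop performs a Kleinman-style policy update of the feedback gain, while the inner loop handles the multiplicative-noise term by fixed-point iteration over standard Lyapunov solves. All names (dual_loop_sare, K0) are hypothetical; the paper's indefinite setting, feedforward-gain equation, and robustness analysis are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def dual_loop_sare(A, B, C, D, Q, R, K0, outer_tol=1e-9, inner_tol=1e-11,
                   max_outer=50, max_inner=500):
    """Hypothetical dual-loop solver sketch for the stochastic ARE
        A'P + PA + C'PC + Q - (PB + C'PD)(R + D'PD)^{-1}(B'P + D'PC) = 0,
    arising for dynamics dx = (Ax + Bu)dt + (Cx + Du)dW.
    K0 is assumed mean-square stabilizing."""
    n = A.shape[0]
    K = np.asarray(K0, dtype=float)
    P = np.zeros((n, n))
    for _ in range(max_outer):
        AK, CK = A - B @ K, C - D @ K   # closed-loop drift and diffusion
        QK = Q + K.T @ R @ K            # stage cost under the gain K
        # Inner loop: solve AK'P + P AK + CK'P CK + QK = 0 by iterating
        # standard Lyapunov solves over the multiplicative-noise term.
        for _ in range(max_inner):
            P_next = solve_continuous_lyapunov(AK.T, -(CK.T @ P @ CK + QK))
            done = np.linalg.norm(P_next - P) < inner_tol
            P = P_next
            if done:
                break
        # Outer loop: Kleinman-style policy-improvement step for the gain.
        K_next = np.linalg.solve(R + D.T @ P @ D, B.T @ P + D.T @ P @ C)
        if np.linalg.norm(K_next - K) < outer_tol:
            return P, K_next
        K = K_next
    return P, K
```

In a data-driven variant along the lines the abstract describes, each model-based Lyapunov solve would be replaced by a least-squares problem constructed from measured state trajectories via integral reinforcement learning; that replacement is beyond this sketch.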