Coordinated Car-following Using Distributed MPC
Di Shen, Qi Dai, Suzhou Huang
Published: 2025/10/2
Abstract
Within the modeling framework of Markov games, we propose a series of algorithms for coordinated car-following using distributed model predictive control (DMPC). Rather than tracking prescribed feasible trajectories, the driving policies are solved directly as outcomes of the DMPC optimization, given each driver's perceivable states. The coordinated solutions are derived using best-response dynamics via iterated self-play and are facilitated by direct negotiation using inter-agent or agent-infrastructure communication. These solutions closely approximate either the Nash equilibrium or the centralized optimum. By re-parameterizing the action sequence in DMPC as a curve along the planning horizon, we systematically reduce the original DMPC to highly efficient grid searches, so that the optimal solution to the original DMPC can be well approximated and executed in real time. Within our modeling framework, it is natural to cast traffic control problems as mechanism design problems, in which all agents are endogenized on an equal footing with full incentive compatibility. We show how traffic efficiency can be dramatically improved while keeping stop-and-go phantom waves tamed at high vehicle densities. Our approach can be viewed as an alternative way to formulate coordinated adaptive cruise control (CACC) without explicit platooning (or, equivalently, with all vehicles in the traffic system treated as a single extended platoon). We also address the linear stability of the associated discrete-time traffic dynamics and demonstrate why it does not always tell the full story about traffic stability.
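To make the reduction concrete, here is a minimal, self-contained sketch of the core idea: re-parameterizing each agent's action sequence as a one-parameter curve (here, constant acceleration) collapses the per-agent DMPC step to a 1-D grid search, and coordination emerges from best-response dynamics via iterated self-play. This is not the paper's implementation; the curve family, three-vehicle chain, cost weights, safety margin, and cooperative follower term are all illustrative assumptions.

```python
import numpy as np

# Illustrative constants (assumptions, not the paper's values).
DT, H = 0.2, 25                       # time step [s], planning horizon [steps]
GRID = np.linspace(-3.0, 2.0, 51)     # candidate curve parameters [m/s^2]
D_DES, W_GAP, W_ACC, W_COOP = 20.0, 1.0, 0.5, 0.3

def rollout(x0, v0, a):
    """Predicted positions under a constant-acceleration curve."""
    t = DT * np.arange(1, H + 1)
    v = np.maximum(v0 + a * t, 0.0)   # no driving in reverse
    return x0 + np.cumsum(v) * DT

def cost(a, x0, v0, ahead_x, behind_x=None):
    """Gap-tracking + comfort cost; optional cooperative term for the follower."""
    x = rollout(x0, v0, a)
    gap = ahead_x - x
    if np.any(gap < 2.0):             # hard safety margin -> infeasible curve
        return np.inf
    c = W_GAP * np.mean((gap - D_DES) ** 2) + W_ACC * a ** 2
    if behind_x is not None:          # also penalize the follower's gap error
        c += W_COOP * np.mean((x - behind_x - D_DES) ** 2)
    return c

def best_response(x0, v0, ahead_x, behind_x=None):
    """The reduced DMPC step: a 1-D grid search over the curve parameter."""
    return GRID[np.argmin([cost(a, x0, v0, ahead_x, behind_x) for a in GRID])]

# Three-vehicle chain: vehicle 0 is an uncontrolled leader at constant speed;
# vehicles 1 and 2 exchange planned trajectories and best-respond until the
# plans stop changing, approximating a Nash equilibrium of the horizon game.
x, v = np.array([60.0, 35.0, 10.0]), np.full(3, 12.0)
plans = np.zeros(2)                   # one curve parameter per controlled vehicle
lead_x = rollout(x[0], v[0], 0.0)
for _ in range(50):
    new = plans.copy()
    new[0] = best_response(x[1], v[1], lead_x, rollout(x[2], v[2], plans[1]))
    new[1] = best_response(x[2], v[2], rollout(x[1], v[1], new[0]))
    if np.allclose(new, plans):
        break
    plans = new
print("converged accelerations [m/s^2]:", plans)
```

In this sketch the exchanged planned trajectories stand in for the inter-agent or agent-infrastructure communication described in the abstract; richer curve families (e.g., piecewise-linear acceleration profiles) would simply turn the 1-D grid search into a low-dimensional one.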