Improved Trial and Error Learning for Random Games

Jérôme Taupin, Xavier Leturc, Christophe J. Le Martret

Published: 2025/9/23

Abstract

When a game involves many agents, or when communication between agents is not possible, it is useful to resort to distributed learning, where each agent acts in complete autonomy without any information on the other agents' situations. Perturbation-based algorithms have already been used for such tasks. We propose several modifications, motivated by practical observations, that improve the performance of these algorithms. We show that these changes preserve the algorithms' theoretical convergence properties towards states that maximize the average reward, and strengthen them in the case where optimal states exist. Moreover, we show that these algorithms can be made robust to the addition of randomness to the rewards, achieving similar convergence guarantees. Finally, we discuss the possibility of decreasing the perturbation factor during the learning process, akin to simulated annealing.
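To make the setting concrete, the following is a minimal sketch of perturbation-based trial-and-error learning for two agents, under simplifying assumptions not taken from the paper: each agent observes only its own reward, keeps a benchmark action, experiments with probability `eps`, adopts an experiment that beats its benchmark, and `eps` decays geometrically over time in the spirit of the simulated-annealing variant mentioned in the abstract. All function and parameter names here are illustrative.

```python
import random

def trial_and_error(payoff, n_actions, steps=20000, eps0=0.2,
                    decay=0.9995, seed=0):
    """Simplified two-agent trial-and-error learning (illustrative sketch).

    payoff(a0, a1) -> (r0, r1): each agent sees only its own reward.
    eps is the perturbation (experimentation) factor; it shrinks
    geometrically, akin to a simulated annealing schedule.
    """
    rng = random.Random(seed)
    actions = [rng.randrange(n_actions), rng.randrange(n_actions)]
    bench = [float("-inf"), float("-inf")]  # best reward seen so far
    eps = eps0
    for _ in range(steps):
        # Each agent independently perturbs its action with probability eps.
        trial = [rng.randrange(n_actions) if rng.random() < eps else actions[i]
                 for i in range(2)]
        rewards = payoff(trial[0], trial[1])
        for i in range(2):
            # Adopt the tried action only if it improves on the benchmark.
            if rewards[i] > bench[i]:
                actions[i] = trial[i]
                bench[i] = rewards[i]
        eps *= decay  # decrease the perturbation factor over time
    return tuple(actions)

# Illustrative usage: a 3-action pure coordination game, where the
# reward-maximizing states are exactly the matching action profiles.
coord = lambda a, b: (1.0, 1.0) if a == b else (0.0, 0.0)
final = trial_and_error(coord, n_actions=3)
```

In this toy coordination game, once the two agents happen to experiment onto matching actions they both receive the maximal reward, their benchmarks saturate, and no further experiment can displace them, so the dynamics settle on a reward-maximizing state.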