Evolutionary dynamics under coordinated reciprocity

Feipeng Zhang, Bingxin Lin, Lei Zhou, Long Wang

Published: 2025/8/20

Abstract

Using past behaviors to guide future actions is essential for fostering cooperation in repeated social dilemmas. Traditional memory-based strategies that focus on recent interactions have yielded valuable insights into the evolution of cooperative behavior. However, as memory length increases, the complexity of analysis grows exponentially, since these strategies must map every possible action sequence of a given length to a subsequent response. Because of this reliance on exhaustive mapping and the absence of explicit information processing, it remains unclear how individuals with cognitive constraints can draw on extensive interaction histories to make decisions. To fill this gap, we introduce coordinated reciprocity strategies ($CORE$), which incrementally evaluate the entire game history by tallying instances of consistent actions between individuals, without storing round-by-round details. Once this consistency index surpasses a threshold, $CORE$ prescribes cooperation. Through equilibrium analysis, we derive an analytical condition under which $CORE$ constitutes an equilibrium. Moreover, our numerical results show that $CORE$ effectively promotes cooperation among variants of itself, and in evolutionary dynamics it outperforms a range of existing strategies, including memory-$1$ and memory-$2$ strategies as well as those from a documented strategy library. Our work thus underscores the pivotal role of cumulative action consistency in enhancing cooperation, developing robust strategies, and providing a cognitively low-burden information-processing mechanism for repeated social dilemmas.
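The following is a minimal sketch, in Python, of how a $CORE$-style decision rule could be implemented based solely on the abstract's description: the player keeps a single running count of rounds in which both individuals chose the same action, and cooperates once that consistency index passes a threshold. The class name, parameter names, and the tie-breaking choice of the first move are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a coordinated-reciprocity (CORE) style strategy as described in the abstract:
# tally rounds with matching actions and cooperate once the tally exceeds a threshold.
# Names (CoreStrategy, threshold, update, choose) are hypothetical, for illustration only.

COOPERATE, DEFECT = "C", "D"

class CoreStrategy:
    def __init__(self, threshold: int, first_move: str = COOPERATE):
        self.threshold = threshold      # consistency level required before cooperating (assumed)
        self.consistency = 0            # cumulative count of rounds with matching actions
        self.rounds_played = 0
        self.first_move = first_move    # opening action is an assumption, not specified here

    def update(self, my_action: str, opponent_action: str) -> None:
        """Incrementally update the consistency index; no round-by-round history is stored."""
        if my_action == opponent_action:
            self.consistency += 1
        self.rounds_played += 1

    def choose(self) -> str:
        """Prescribe cooperation once the consistency index surpasses the threshold."""
        if self.rounds_played == 0:
            return self.first_move
        return COOPERATE if self.consistency >= self.threshold else DEFECT
```

Note that the strategy's state is just two integers, regardless of how long the interaction history grows, which is the cognitively low-burden property the abstract emphasizes in contrast to memory-$n$ strategies that must enumerate all action sequences of length $n$.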
