Will anyone review this paper? Screening, sorting, and the feedback cycles that imperil peer review
Carl T. Bergstrom, Kevin Gross
Published: 2025/7/14
Abstract
Scholarly publishing relies on peer review to identify the best science. Yet finding willing and qualified reviewers to evaluate manuscripts has become an increasingly challenging task, possibly even threatening the long-term viability of peer review as an institution. What can or should be done to salvage it? Here, we develop mathematical models to reveal the intricate interactions among incentives faced by authors, reviewers, and readers in their endeavors to identify the best science. Two facets are particularly salient. First, peer review partially reveals authors' private sense of their work's quality through their decisions of where to send their manuscripts. Second, journals' reliance on traditionally unpaid and largely unrewarded review labor deprives them of a standard market mechanism -- wages -- to recruit additional reviewers when review labor is in short supply. We highlight a resulting feedback loop that threatens to overwhelm the peer review system: (1) an increase in submissions overtaxes the pool of suitable peer reviewers; (2) the accuracy of review drops because journals must either solicit assistance from less qualified reviewers or ask current reviewers to do more; (3) as review accuracy drops, submissions further increase as more authors try their luck at venues that might otherwise be a stretch. We illustrate how this cycle is further propelled by forces including the increasing emphasis on high-impact publications, the proliferation of journals, and competition among these journals for peer reviews. Finally, we suggest interventions that could slow or even reverse this cycle of peer-review meltdown.