
AI and Publication Commons

Working Paper
Large language model (LLM)-based artificial intelligence (AI) assistance can reduce the cost of producing academic content and increase submissions to academic journals. We formalize a mechanism through which this can disrupt peer-review-based evaluation at top-tier outlets: journals distinguished by high acceptance value and stringent peer review in a pre-AI equilibrium. This analysis allows us to discuss how such outlets can respond to the AI disruption in academic publishing. In our model, the introduction of AI assistance draws more submissions into a journal’s submission pool, congesting reviewer capacity and, in turn, degrading review accuracy. Because publication value is decreasing in review error rates, this has the potential to erode the quality of academic contributions and the impact of the academic community a journal represents. We first characterize the journal’s impact loss in such circumstances and then provide two possible mechanisms to overcome it, conditional on the observability of AI use. We find that when AI use is observable, a combination of submission and AI-use-contingent fees, together with reviewer compensation, can implement the first best. When AI use cannot be detected, we identify a two-track mechanism in which AI-assisted submissions self-select into an AI-assisted review track. A perceived value difference between tracks, together with a possible preference for noisier reviews, induces truthful revelation in a separating equilibrium and recovers much of the impact lost to AI assistance. These insights can be useful for academic organizations and editorial boards at top-tier academic outlets.
Faculty

Professor of Technology and Operations Management