There is growing evidence of positive impacts of generative artificial intelligence on productivity. That includes productivity in research (see this post, for example), my own included. However, some have questioned whether increased research productivity comes at the cost of narrowing the scope of research.
So, I was interested to read this article by Qianyue Hao (Tsinghua University) and co-authors, published in the prestigious journal Nature (ungated earlier version here) late last year. They look at the impact of AI tools (not limited to generative AI) on the productivity of researchers and the quality of research. Specifically, they examine authors publishing in six representative fields: biology, medicine, chemistry, physics, materials science, and geology, across three 'eras': (1) the 'machine learning era' (from 1980 to 2014); (2) the 'deep learning era' (from 2015 to 2022); and (3) the 'generative AI era' (from 2023 onwards). Hao et al. compare authors who publish 'AI-augmented papers' with those who do not. An 'AI-augmented paper' is one that uses methods such as:
...support vector machines and principal component analysis from the machine learning era, and convolutional neural networks and generative adversarial networks from the deep learning era. Large language models, which have emerged in recent years, also rank among the most frequently used methods...
Using a dataset of more than 27 million papers with complete records, published between 1980 and 2025, of which about 310,000 were 'AI-augmented', Hao et al. find that:
...annual citations to AI papers are 98.70% higher than those to non-AI papers on average...
So, AI-augmented research gathers more citations, which suggests that authors using AI in their research achieve greater impact. This is reinforced by evidence that AI-augmented papers are published in higher-quality journals (journals are ranked by quartile, with Q1 journals being the highest ranked). Hao et al. report that:
...the proportion of AI papers in Q1 journals is 18.60% higher than that of non-AI papers in all journals; in Q2 journals, the AI proportion is 1.59% higher; whereas Q3 and Q4 journals hold a relatively lower proportion of papers with AI... These results indicate a heterogeneous distribution of AI-augmented papers across journals, with a higher prevalence in high-impact journals.
And AI appears to make authors more productive, as:
On average, researchers adopting AI annually publish 3.02 times more papers... and garner 4.84 times more citations... than those not adopting AI, with consistency.
These results seem to hold across all six of the disciplines that Hao et al. consider. However, it is not all good news. Hao et al. use machine learning to create a measure of the 'breadth of scholarly attention'. Using that measure, they find that:
Compared with conventional research, AI research is associated with a 4.63% contracted median collective knowledge extent across science, which is consistent across all six disciplines... Moreover, when dividing these disciplines into more than two hundred sub-fields, the contraction of knowledge extent can be observed in more than 70% of them...
Of course, some of the differences here may be due to selection, as the types of researchers, and the types of research, that involve AI may differ meaningfully from those that do not. However, putting the selection issues aside, Hao et al. note that there is a tension between the individual researcher's incentive to produce a greater quantity of research that has higher impact, which would suggest greater use of AI, and the social incentive to produce a greater breadth of research.
So, the takeaway from this paper is that we need to consider researcher incentives, not just productivity. Specifically, this research suggests that the use of AI in research is leading to a 'prisoners' dilemma' outcome: each individual researcher acting in their own best interests (and using AI in their research) leads to an outcome that is worse for society overall (less breadth of research, and more incremental gains in knowledge).
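To see the structure of that dilemma, here is a minimal sketch in Python, with purely hypothetical payoffs (the numbers are illustrative, and are not drawn from Hao et al.): each researcher is better off using AI whatever their rival does, yet both would prefer the outcome where neither uses it.

```python
# A minimal sketch of the researchers' dilemma, with hypothetical payoffs.
# Each of two researchers chooses to 'use AI' or 'not use AI'. The numbers
# are illustrative career payoffs, not estimates from Hao et al.

payoffs = {
    # (my choice, rival's choice): my payoff
    ("AI", "AI"): 2,        # both gain productivity, but research narrows
    ("AI", "no AI"): 4,     # I out-publish a rival who abstains
    ("no AI", "AI"): 1,     # my rival out-publishes me
    ("no AI", "no AI"): 3,  # broader research, better for science overall
}

for rival in ["AI", "no AI"]:
    best = max(["AI", "no AI"], key=lambda me: payoffs[(me, rival)])
    print(f"If my rival chooses {rival!r}, my best response is {best!r}")

# 'AI' is a dominant strategy, yet (AI, AI) gives each researcher 2,
# worse than the 3 each would get at (no AI, no AI) -- a prisoners' dilemma.
```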
Hao et al. conclude that:
The substantial academic benefits of AI use may be a driving force behind its accelerated rate of adoption; however, we also find unintended consequences from the increased prevalence of AI-augmented research. In all fields, AI-augmented research focuses on a narrower scope of scientific topics and reduces the scientific engagement of follow-on research, leading to more overlapping research work that slows the expansion of knowledge. Further, with a greater concentration of collective attention to the same AI papers, the adoption of AI seems to induce authors to converge on the same solutions to known problems rather than create new ones.
So, what is the solution here? Society probably wants research to be higher quality and to have a broad scope. But individual researchers' incentives to use AI in their research appear inconsistent with that outcome. When the prisoners' dilemma is played as a repeated game (see here or here, for example), the players can avoid the worst outcome by cooperating. In this case, the researchers could cooperate by agreeing not to use AI in their research. The problem is that every researcher has an incentive to cheat on that agreement, since using AI would be good for their career. Ensuring cooperation is more difficult in this dilemma than in the traditional two-player game, because there are not just two players who need to cooperate, but thousands (or millions). Sustaining cooperation in a prisoners' dilemma with many players, each of whom is far better off cheating than cooperating, is almost impossible (which is why solving the problem of climate change is so difficult).
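To make the many-player point concrete, here is another minimal sketch, again with purely hypothetical numbers (N, b, and c below are illustrative assumptions, not estimates from the paper): each researcher gets a private career benefit b from adopting AI, while every adopter imposes a small breadth cost c on all researchers. So long as b exceeds c, adopting is always the individual best response, no matter what everyone else does.

```python
# A minimal N-player sketch of the same dilemma, with hypothetical numbers.
# Assumption: each researcher gets a private career benefit b from adopting
# AI, while every adopter imposes a small cost c (narrower collective
# research) that is borne by all N researchers.

N = 10_000  # number of researchers (hypothetical)
b = 1.0     # private career benefit of adopting AI (hypothetical)
c = 0.0002  # per-adopter breadth cost, borne by everyone (hypothetical)

def payoff(adopts: bool, n_other_adopters: int) -> float:
    """Payoff to one researcher, given how many of the others adopt."""
    total_adopters = n_other_adopters + (1 if adopts else 0)
    return (b if adopts else 0.0) - c * total_adopters

for n_others in (0, N // 2, N - 1):
    gain = payoff(True, n_others) - payoff(False, n_others)
    print(f"{n_others:>5} others adopt: individual gain from adopting = {gain:.4f}")

# The gain from adopting is always b - c > 0, so everyone adopts. But with
# all N adopting, each gets b - c*N = -1.0, worse than the 0.0 each would
# get if nobody adopted -- the many-player prisoners' dilemma.
```

The point of the sketch is that the individual gain from adopting (b minus c) does not depend on how many others adopt, so there is no critical mass of cooperators that makes abstaining worthwhile. That is what makes the many-player version so much harder to solve than the two-player one.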
My own view is that the answer is not to keep AI out of research. That is not realistic, in the same way that it's not realistic to expect students not to use generative AI. The incentives need to be redesigned, but this will be no easy task. As long as universities, research funders, and publishers reward researchers for quantity, citations, and publication in top-ranked outlets, we should expect more AI-augmented work, with a narrower scope than society might prefer. If we want AI to expand knowledge rather than simply accelerate competition within narrow foci, then we need institutions that also reward novelty, breadth, and the discovery of new questions. That is the economic challenge we must face up to.
[HT: Marginal Revolution]