One common piece of folk wisdom among academics is to always cite articles from the journal to which you are submitting. Taken a step further, if that strategy works, then citing journal editors or potential reviewers would be the obvious next step. At the margin, this sort of 'strategic citation' must lead to some papers (and journals, and authors) receiving more citations than the quality of their work warrants, creating inefficiency in the research system.
How much of this 'strategic citation' goes on? That seems difficult to assess, but this recent article by Amir Rubin (Simon Fraser University) and Eran Rubin (California State University), published in the Journal of Political Economy (ungated earlier version here), uses a natural experiment to find out. Specifically, Rubin and Rubin look at the case of the Journal of Business (JB), which was discontinued in 2006, despite being one of the top five international journals in finance. If you're as surprised as I was that such a highly ranked journal would simply cease publication, Rubin and Rubin explain in one of the footnotes that:
We thank Douglas Diamond, the editor of the Journal of Business for 13 years, who told us that the main reason for the discontinuation was the difficulty in finding an editor from within Booth’s faculty.
Wow. Anyway, this natural experiment allows Rubin and Rubin to explore aspects of strategic citation, because:
In broad terms, rather than just serving their intended objective of referencing relevant work, citations of articles published in top-tier journals may be driven by agency considerations because authors focus on achieving professional goals with respect to these journals. These personal goals include, most obviously, the desire to obtain publications in top-tier journals but can also include the desire to get invited to a conference sponsored by such journals, become a referee in those journals, or receive reference letters from scholars associated with the journals. To facilitate this, authors may cite top-tier journals as a way to enhance their relationships with those journals...
And once the Journal of Business ceased publication, the incentives for authors changed, and:
...authors may reduce their tendency to reference low-relevance JB articles, and there may be increased negligence in citing relevant JB articles.
Rubin and Rubin collected data from the top five finance journals (Journal of Business, plus the Journal of Finance, the Journal of Financial Economics, the Review of Financial Studies, and the Journal of Financial and Quantitative Analysis) covering the period 1996 to 2016 (i.e. ten years either side of the discontinuation). Matching articles from the Journal of Business with similar articles (in terms of publication year and early citation trajectory) from the other top journals, and then comparing subsequent citation measures, they find:
...strong evidence that compared to the matched articles of the other four journals, the citations of JB articles that were published prior to 2006 were significantly negatively affected after 2006. Furthermore, we do not find any change in the citation count difference in the years prior to the discontinuation decision, which implies that the change in citing practices occurred during the time when publishing in JB was no longer possible...
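The matched comparison that Rubin and Rubin describe is essentially a difference-in-differences design: pair each JB article with a similar control article from the other four journals, then ask whether the citation gap between the pair widens after 2006. A minimal sketch of that logic, using entirely made-up numbers (the function name and data are my own illustration, not from the paper):

```python
# Hypothetical sketch of a matched difference-in-differences comparison.
# Each pair holds average yearly citations for a matched JB/control
# article pair, before and after the 2006 discontinuation.
# All numbers below are invented for illustration only.

def did_estimate(pairs):
    """pairs: list of (jb_pre, jb_post, ctrl_pre, ctrl_post) tuples.
    Returns the difference-in-differences estimate: how much the
    control-minus-JB citation gap changed after 2006."""
    n = len(pairs)
    pre_gap = sum(c_pre - j_pre for j_pre, _, c_pre, _ in pairs) / n
    post_gap = sum(c_post - j_post for _, j_post, _, c_post in pairs) / n
    return post_gap - pre_gap

# Made-up example: pairs matched on pre-2006 citations (pre-gap is zero),
# but control articles pull ahead of JB articles after 2006.
pairs = [
    (10.0, 9.0, 10.0, 11.0),
    (5.0, 4.5, 5.0, 5.5),
    (8.0, 8.0, 8.0, 9.0),
]
print(did_estimate(pairs))  # positive: JB articles fall behind post-2006
```

A positive estimate with no pre-2006 gap is the pattern the paper reports: citation trajectories only diverge once publishing in JB is no longer possible.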
This is well illustrated in their Figure 2, which compares the citation trajectory of articles published in the Journal of Business with the same trajectory for the other four top finance journals:
That suggests that articles in the other four journals received about 20 percent more citations after the discontinuation of the Journal of Business than did similar articles published in the Journal of Business. Taking their analysis a step further, Rubin and Rubin also find, importantly, that:
...articles that do not cite JB articles (conditional on them being expected to cite JB articles) in the post-2006 period are more likely not to be cited after their publication, which is consistent with the idea that strategic citations are associated with reduced research quality.
In other words, authors who engage in strategic citation are often doing so as a way of attempting to signal that their own paper is of higher quality than it actually is. So, what should the research system do about this? Rubin and Rubin suggest that:
...if authors of academic studies were to include more information on references cited (as done in patent applications), it could potentially benefit academic research and help reduce adverse citing practices.
Closer monitoring is one of the solutions to agency problems, as I note in my ECONS102 class. However, closer monitoring is costly, both to the principal (the journal editor and reviewers would need to review the inclusion of all, or perhaps just a sample, of the references in every reviewed article) and to the agent (authors, knowing their citations will be closely scrutinised, would have to spend more time ensuring that they summarise cited work accurately). Moreover, it isn't clear that such a practice would eliminate strategic citing. Articles are often broad enough in their coverage that they can plausibly be cited, even when they are not the optimal citation for a particular point.
Strategic citing is clearly a problem, but I don't think that this article provides us with a workable solution.
[HT: Marginal Revolution]