However, I struggled to get past the 'so what?' question in this article. I guess maybe I was expecting the unexpected. Yuret's measure of the difficulty of publishing in a journal is the proportion of its authors who are affiliated with the top 125 departments. He argues:
"A journal is less likely to be accepting papers from the researchers from lower ranked departments if most of the authors are from the top departments. Therefore the measure developed by Moore (1972) also reflects the difficulty in publishing in a journal. Therefore we label his measure as the difficulty measure."

I would argue that if you wanted a measure of the difficulty of publishing in a journal, you would probably want to start with the acceptance rate (the proportion of submitted papers that are eventually accepted). But then you would want to control for selection bias - authors don't send all of their papers to the top journals, because they know that not all papers will be accepted there and prefer not to waste their time (or that of the editors and reviewers). So, the journals that are more difficult to publish in may have low acceptance rates, but those acceptance rates are likely to be biased upwards (they would be even lower if every researcher submitted every relevant paper to them).
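To make that selection-bias point concrete, here is a rough simulation of my own (nothing from Yuret's paper); the paper-quality distribution, the journal's acceptance bar, and the self-selection cutoff are all invented numbers:

```python
# Toy illustration: author self-selection inflates the observed acceptance rate.
# All quantities here are assumptions for illustration, not data from the paper.
import random

random.seed(1)

N_PAPERS = 100_000
JOURNAL_BAR = 1.5          # assumed quality needed for acceptance at the top journal
SELF_SELECT_CUTOFF = 1.0   # authors only submit papers at least this good

papers = [random.gauss(0, 1) for _ in range(N_PAPERS)]

# Hypothetical world: every relevant paper is submitted to the top journal.
rate_all = sum(q > JOURNAL_BAR for q in papers) / N_PAPERS

# Real world: authors self-select, so only promising papers are submitted.
submitted = [q for q in papers if q > SELF_SELECT_CUTOFF]
rate_observed = sum(q > JOURNAL_BAR for q in submitted) / len(submitted)

print(f"acceptance rate if every relevant paper were submitted: {rate_all:.1%}")
print(f"observed acceptance rate with author self-selection:    {rate_observed:.1%}")
```

With these made-up numbers the journal is exactly as hard to get into in both worlds, but the observed acceptance rate comes out several times higher than the 'everyone submits' rate, which is the direction of the bias described above.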
When Yuret proceeds to show that there is a high correlation (0.62) between impact factor and his difficulty measure for economics journals, he is simply showing that faculty in top economics departments make up a higher proportion of the authors in the highest-impact-factor economics journals. Given that faculty in top economics departments are probably higher-quality researchers, producing higher-quality research, this should not be a surprise. This paper could clearly be filed under 'so what'.
A more interesting question to ask (and probably the question this article was trying to answer but really didn't) is: for a paper of a given quality, is it more difficult to get it accepted in a journal with a high impact factor than in a journal with a lower impact factor? I think most researchers' experiences (and certainly mine) would suggest that it is - papers rejected at top journals usually eventually find a home at a lower-ranked journal.
What is perhaps more interesting is that the correlations between impact factor and the proportion of authors from top departments are much smaller for the other disciplines that Yuret looked at: chemistry (0.49), physics (0.23), and mathematics (0.22). What's going on in those disciplines (especially physics and mathematics)? Do faculty outside the top departments in those disciplines have a better shot at publishing in the top journals? Given his data, I suspect that the lower correlations (for physics and chemistry at least) may be an effect of those disciplines simply having more journals with top impact factors - it's much harder for faculty at top departments to monopolise the pages of many top journals than it is to do so when there are only a few top journals. Still, the correlations are all positive - researchers in top departments publish in top journals. Surprise!
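As a rough back-of-the-envelope check on that 'more top journals dilutes the monopoly' conjecture (my own toy numbers, not anything from the paper), hold the supply of top-department papers fixed and vary the number of top journals:

```python
# Toy arithmetic: with more top journals, top departments can fill a smaller
# share of their pages. Both constants below are invented for illustration.
TOP_DEPT_PAPERS = 300      # assumed top-journal-worthy papers per year from top departments
JOURNAL_CAPACITY = 200     # assumed papers published per year by each top journal

for n_top_journals in (1, 2, 5, 10):
    total_slots = n_top_journals * JOURNAL_CAPACITY
    # Top-department papers spread across the available slots; everyone else fills the rest.
    top_dept_share = min(TOP_DEPT_PAPERS, total_slots) / total_slots
    print(f"{n_top_journals:>2} top journals -> top-department share of their pages: {top_dept_share:.0%}")
```

Holding everything else constant, the top-department share of the top journals' pages falls from 100% with a single top journal to 15% with ten of them, which would also pull down the correlation between impact factor and the share of authors from top departments.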