Articles with shorter titles tend to be published in better journals. They tend to be more cited and to get higher novelty scores. Moreover, these tendencies are more pronounced in better journals... Moreover, including novelty in the regressions on journal quality and citations has essentially no impact on the estimates. This means that the observed relation between title length and citations is not explained by the fact that novel articles tend to have shorter titles and to be more cited. Together, these results show that title length correlates well with the overall scientific quality of a paper.

So, there is a strong negative relationship between title length and the quality of the article, even after controlling for the quality of the authors and the quality of the publication. Bramoullé and Ductor have obviously taken this on board, as their article's title ("Title length") is as short as possible. But why would title length be related to research quality? Bramoullé and Ductor suggest a couple of possible explanations:
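The "no impact on the estimates" point can be sketched with a toy regression. The data below are entirely synthetic (my own illustration, not the authors' dataset): when the added control is roughly uncorrelated with title length, including it barely moves the title-length coefficient.

```python
import numpy as np

# Synthetic illustration only: does adding a "novelty" control change
# the title-length coefficient? Here novelty is drawn independently of
# title length, so it should not.
rng = np.random.default_rng(0)
n = 500
title_len = rng.integers(20, 120, size=n).astype(float)
novelty = rng.normal(size=n)  # independent of title length by construction
quality = -0.5 * title_len + 2.0 * novelty + rng.normal(scale=5.0, size=n)

def ols(y, *cols):
    """OLS via least squares; returns coefficients (intercept first)."""
    X = np.column_stack([np.ones_like(y), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_short = ols(quality, title_len)          # quality ~ title_len
b_full = ols(quality, title_len, novelty)  # quality ~ title_len + novelty

print(f"title-length coefficient, no novelty control:   {b_short[1]:.3f}")
print(f"title-length coefficient, with novelty control: {b_full[1]:.3f}")
```

In this toy setup the two estimates are nearly identical, which mirrors the logic of the authors' robustness check: if adding novelty left their estimates unchanged, novelty cannot be what links title length to citations.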
On one hand, title length could have a causal impact on journal quality or citations. A short title could make an article easier to memorize, affecting citations and, possibly, editorial decisions... On the other hand, title length could proxy for the true, unobserved qualities of the article. Articles with a strong potential to influence subsequent research could thus both generate more citations and have shorter titles.

The second explanation seems more intuitive. Many (maybe all?) highly cited papers are those that steer research in new directions. For example, the Kahneman and Tversky paper that I mentioned yesterday, entitled "Prospect Theory: An Analysis of Decision under Risk" (note: 51 characters including spaces and the colon), is the most cited paper in economics (with over 8000 citations in the authors' dataset, and over 50,000 in Google Scholar). Subsequent papers in the new research area opened up by such highly cited papers are likely to have slightly longer titles, as they build on the original theory, refute it, apply it to new areas or new sub-fields, and so on. However, that is of course a causal interpretation, and the authors' results don't establish causality here. Nevertheless, it's an interesting quirk - I wonder if it extends to blog posts?
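As a quick sanity check, the character counts quoted above are easy to reproduce (a trivial sketch; string lengths here count spaces and punctuation, as in the note above):

```python
# Character counts for the two titles mentioned above,
# counting spaces and punctuation.
kahneman_tversky = "Prospect Theory: An Analysis of Decision under Risk"
bramoulle_ductor = "Title length"

print(len(kahneman_tversky))  # 51, as noted in the text
print(len(bramoulle_ductor))  # 12
```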
[Update]: I forgot to mention that the authors do attempt to control for novelty in their analysis, which would seem to argue against the intuitive explanation in my last paragraph. However, I don't find their measure of novelty (the number of 'atypical' keywords the article uses) particularly convincing, since a novel article may well use keywords that are themselves common, even though its content is not.
[HT: Marginal Revolution]