Tuesday, 24 January 2023

Does economics have a bigger publication bias problem than other fields?

Publication bias is the tendency for studies that show statistically significant effects, often in a particular direction predicted by theory, to be much more likely to be published than studies that show statistically insignificant effects (or statistically significant effects in the direction opposite to that predicted by theory). Meta-analyses, which collate the results of many published studies, often show evidence for publication bias.

Publication bias could arise because of the 'file drawer' problem: since studies with statistically significant effects are more likely to be published, researchers put studies that find statistically insignificant effects into a file drawer, and never try to get them published. Or, publication bias could result from p-hacking: researchers make a number of choices about how the analysis is conducted in order to increase the chances of finding a statistically significant effect, which can then be published.
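To see how the file drawer inflates published effects, consider a quick simulation (my own sketch, not from any of the papers discussed below; the true effect size and sample sizes are arbitrary). We simulate many small two-group studies of a modest true effect, 'publish' only those that find a statistically significant positive result, and compare the average published estimate with the truth:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_effect = 0.1   # small true effect (standardised mean difference)
n_per_group = 30    # sample size per arm in each simulated study
n_studies = 2000    # number of studies attempted

published = []
for _ in range(n_studies):
    # Two-arm study: treatment group shifted up by the true effect
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    t, p = stats.ttest_ind(treated, control)
    estimate = treated.mean() - control.mean()
    # File drawer: only significant, positive results get published
    if p < 0.05 and estimate > 0:
        published.append(estimate)

print(f"True effect:                 {true_effect:.2f}")
print(f"Mean of published estimates: {np.mean(published):.2f}")
print(f"Share of studies published:  {len(published) / n_studies:.1%}")
```

With these numbers, only a small fraction of studies clear the significance bar, and the ones that do overstate the true effect several times over. That inflated, truncated distribution of published estimates is exactly the pattern that meta-analytic tests for publication bias look for.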

Is publication bias worse in some fields than others? That is the question addressed by this new working paper by František Bartoš (University of Amsterdam) and co-authors. Specifically, they compare publication bias in the fields of medicine, economics, and psychology by undertaking a 'meta-analysis of meta-analyses', combining about 800,000 effect sizes from 26,000 meta-analyses. However, the sample is far from balanced across the three fields: 25,447 of the meta-analyses are in medicine, compared with just 327 in economics and 605 in psychology. Using this sample, though, Bartoš et al. find that:

...meta-analyses in economics and psychology predominantly show evidence for an effect before adjusting for PSB [publication selection bias] (unadjusted); whereas meta-analyses in medicine often display evidence against an effect. This disparity between the fields remains even when comparing meta-analyses with equal numbers of effect size estimates. When correcting for PSB, the posterior probability of an effect drops much more in economics and psychology (medians drop from 99.9% to 29.7% and from 98.9% to 55.7%, respectively) compared to medicine (38.0% to 27.5%).

In other words, we should be much more cautious about claims of statistically significant effects arising from meta-analyses in economics than about equivalent claims from meta-analyses in psychology or medicine (over and above any general caution we should have about meta-analysis - see here). That is, these results suggest that publication bias is a much greater problem in economics than in psychology or medicine.

However, there are a few points to note here. The number of meta-analyses included in this study is much lower for economics than for psychology or medicine. Although Bartoš et al. appear to account for this, I think it suggests the potential for another issue.

Perhaps there is publication bias in meta-analyses themselves (a meta-publication bias?), which Bartoš et al. don't test for. If meta-analyses that show statistically significant effects were more likely to be published than meta-analyses that show statistically insignificant effects, and this meta-publication bias was larger in economics than in psychology or medicine, then that would explain the results of Bartoš et al. However, it would not necessarily demonstrate that there was publication bias in the underlying economic studies. Bartoš et al. would need to test for publication bias in their sample of meta-analyses.
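A standard way to probe for that sort of selection is Egger's regression test for funnel plot asymmetry, applied here to the meta-analyses' summary effects and standard errors rather than to individual studies. Here is a rough sketch of what that could look like (my own illustration, with made-up numbers - a real test would use the summary estimates from each published meta-analysis in a field):

```python
import numpy as np
from scipy import stats

def eggers_test(effects, std_errors):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardised effect (effect / SE) on precision (1 / SE).
    An intercept significantly different from zero suggests small-study
    effects consistent with publication bias.
    """
    effects = np.asarray(effects, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    z = effects / std_errors        # standardised effects
    precision = 1.0 / std_errors    # inverse standard errors
    # In this parameterisation, the INTERCEPT is the bias term
    result = stats.linregress(precision, z)
    return result.intercept, result.intercept_stderr

# Hypothetical inputs: one summary effect and standard error per
# published meta-analysis (values are purely illustrative)
effects = [0.31, 0.45, 0.22, 0.50, 0.38, 0.41, 0.27, 0.55]
ses     = [0.05, 0.15, 0.04, 0.20, 0.10, 0.12, 0.06, 0.25]

intercept, se = eggers_test(effects, ses)
print(f"Egger intercept: {intercept:.2f} (t = {intercept / se:.2f})")
```

If less precise meta-analyses systematically report larger effects (a positive, significant intercept), that would be the funnel asymmetry you would expect from meta-publication bias.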

That is a reasonably technical point. However, it does seem likely that there is publication bias, and it would not surprise me if it is larger in economics than in medicine, but I wouldn't necessarily expect it to be any worse than in psychology. As noted in the book The Cult of Statistical Significance by Stephen Ziliak and Deirdre McCloskey (which I reviewed here), there remains a dedication among social scientists in general, and economists in particular, to finding statistically significant results, and that is a key driver of publication bias (see here).

Maybe economists are dodgy researchers. Or maybe we just need to be better at reporting and publishing statistically insignificant results, and at adjusting meta-analyses to account for this bias.

[HT: Marginal Revolution]
