Tuesday 14 March 2017

Take your pick: Economists are dodgy researchers, or not

I've written a couple of times here about economics education and moral corruption (see here and here). If economists were really corrupting influences though, you would probably expect to see them engage in dodgy research practices like falsifying data. I was recently pointed to this 2014 paper by Sarah Necker (University of Freiburg), published in the journal Research Policy (sorry, I don't see an ungated version anywhere online).

In the paper, Necker reports on data collected from 426 members of the European Economic Association. She asked them about the justifiability of different questionable research practices, about whether they had engaged in the practice themselves, and whether others in their department had done so. Here's what she found in terms of views on justifiability:
Economists clearly condemn behavior that misleads the scientific community or causes harm to careers. The least justifiable action is “copying work from others without citing.” Respondents unanimously (CI: 99–100%) agree that this behavior is unjustifiable. Fabricating or correcting data as well as excluding part of the data are rejected by at least 97% (CI: 96–99%). “Using tricks to increase t-values, R2, or other statistics” is rejected by 96% (CI: 94–98%), 93% (CI: 90–95%) consider “incorrectly giving a colleague co-authorship who has not worked on the paper” unjustifiable...
Strategic behavior in the publication process is also rejected but more accepted than practices applicable when analyzing data or writing papers. Citing strategically or maximizing the number of publications by slicing into the smallest publishable unit is rejected by 64% (CI: 60–69%). Complying with suggestions by referees even though one thinks they are wrong is considered unjustifiable by 61% (CI: 56–66%)...
So, these practices are all viewed as unjustifiable by the majority of respondents. Does that translate into behaviour? Necker reports:
The correction, fabrication, or partial exclusion of data, incorrect co-authorship, or copying of others’ work is admitted by 1–3.5%. The use of “tricks to increase t-values, R2, or other statistics” is reported by 7%. Having accepted or offered gifts in exchange for (co-)authorship, access to data, or promotion is admitted by 3%. Acceptance or offering of sex or money is reported by 1–2%. One percent admits to the simultaneous submission of manuscripts to journals. About one fifth admits to having refrained from citing others’ work that contradicted the own analysis or to having maximized the number of publications by slicing their work into the smallest publishable unit. Having at least once copied from their own previous work without citing is reported by 24% (CI: 20–28%). Even more admit to questionable practices of data analysis (32–38%), e.g., the “selective presentation of findings so that they confirm one’s argument.” Having complied with suggestions from referees despite having thought that they were wrong is reported by 39% (CI: 34–44%). Even 59% (CI: 55–64%) report that they have at least once cited strategically to increase the prospect of publishing their work.
According to their responses, 6.3% of the participants have never engaged in a practice rejected by at least a majority of peers.
You might think those rates are high, or low, depending on your priors. However, Necker notes that they are similar to those reported in a comparable study of psychologists (see here for an ungated version of that work). Other results are noted as being similar to those for management or business scholars.
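As an aside, the confidence intervals quoted above are just interval estimates for survey proportions. Here is a minimal sketch of how figures like "96% (CI: 94–98%)" could be reproduced, assuming a simple normal-approximation (Wald) interval on the full sample of 426; Necker's exact method and the item-level sample sizes may differ, so the numbers will only roughly match:

from math import sqrt

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a sample proportion."""
    half_width = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# Example: 96% of 426 respondents reject "using tricks to increase
# t-values, R2, or other statistics"
lo, hi = proportion_ci(0.96, 426)
print(f"95% CI: {lo:.1%} to {hi:.1%}")  # about 94% to 98%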

Necker then goes on to test whether these questionable behaviours are related to respondents' perceptions of the pressure to publish, that is, whether those facing greater publication pressure are more likely to engage in questionable research behaviours. Those results are not nearly as clear: they are for the most part statistically insignificant, most likely due to the relatively small sample size. So, although they provide a plausible narrative, I don't find them convincing.

However, the takeaway message here clearly depends on your own biases. Either economists are dodgy researchers, frequently engaging in questionable research practices ("only 6.3% have never engaged in a practice rejected by at least a majority of peers"), or they are no better or worse than other disciplines in this regard. Take your pick.

[HT: Bill Cochrane]
