There is a large (and growing) literature on gender bias and the gender gap. Regular readers of this blog will no doubt have noted that it is a recurring theme. In fact, there is so much research on gender bias that it is hard to believe that there could be a bias against such research. Nevertheless, that is the conclusion of this 2018 article by Aleksandra Cislak (Nicolaus Copernicus University), Magdalena Formanowicz (University of Bern), and Tamar Saguy (Interdisciplinary Center Herzliya), published in the journal Scientometrics (open access).
Cislak et al. collated data on publications about gender bias or racial bias listed in PsycINFO and PsycARTICLES over the period from 2008 to 2015. After removing irrelevant articles and duplicates, their analysis was based on slightly more than 1000 articles over that time. They then compared articles on gender bias with articles on racial bias in terms of prestige. Prestige was measured in two ways: (1) by the impact factor of the journal each article was published in, for the year of publication; and (2) by whether the research had been funded. They found that:
...research on gender bias was funded less often (B = - .20; SE = .09; p = .02) and published in lower Impact Factor journals (B = - .67; SE = .20; p = .001).
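For concreteness, here is a minimal sketch (in Python, on simulated data) of the kind of regression that could produce coefficients like those quoted above, with the bias topic as a dummy predictor of each prestige measure. The variable names, the made-up numbers, and the choice of OLS and logistic models are my assumptions, not the authors' actual dataset or code.

```python
# Illustrative sketch only: regressing prestige measures on a topic indicator.
# The data are simulated; nothing here reproduces Cislak et al.'s analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000  # roughly the size of the analysed sample

# topic = 1 for gender bias articles, 0 for racial bias articles
topic = rng.integers(0, 2, size=n)

# Simulated outcomes with a built-in gap, purely for illustration
impact_factor = 2.5 - 0.67 * topic + rng.normal(0, 2, size=n)
funded = rng.random(n) < (0.5 - 0.10 * topic)

X = sm.add_constant(topic.astype(float))

# Linear model for journal impact factor: the coefficient on the topic
# dummy is the estimated prestige gap (analogous to B above)
print(sm.OLS(impact_factor, X).fit().params)

# Logistic model for whether the research was funded
print(sm.Logit(funded.astype(float), X).fit().params)
```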
So, research on gender bias appears from this study to have attracted less prestige than research on racial bias. Cislak et al. take that as evidence that there is bias against research on gender bias. However, there is good reason to doubt their conclusion. It relies on an assumption that, in the absence of any bias, there would be no difference in the impact factor or funding between research on gender bias and research on racial bias. That assumption strikes me as difficult to support.
To establish bias, the ideal experiment would involve two otherwise identical groups of articles, one group on gender bias and one group on racial bias, submitted to the same journals. That would hold constant the quality of the articles, the quality of the journals, general editorial policies and practices, authorship, authors' incentives (for writing long comprehensive articles, or shorter articles on sub-topics), and article content (other than the focus on gender or racial bias). Differences in acceptance rates between these two groups of articles might then be taken as evidence of bias.
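If such an audit experiment were run, the analysis would be straightforward: compare acceptance rates between the two matched groups, for example with a two-proportion z-test. The counts below are made up purely for illustration.

```python
# Sketch of how the ideal experiment's outcome could be analysed. All counts
# are hypothetical, not from any actual study.
from statsmodels.stats.proportion import proportions_ztest

accepted = [62, 75]     # accepted articles: [gender bias, racial bias]
submitted = [300, 300]  # matched submissions to the same journals

stat, p_value = proportions_ztest(accepted, submitted)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
# Under this design, a significant gap in acceptance rates would be far more
# direct evidence of editorial bias than differences among published articles.
```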
Instead, we have an observational study based on different articles published by different journals. It tells us almost nothing about the editorial process from submission to publication. We really have no idea whether the observed difference arises because of bias, or because of differences in article quality, or something else. In fact, the data could even be consistent with bias in favour of articles on gender bias, if the acceptance rate of submitted gender bias articles was higher than the acceptance rate of submitted racial bias articles of otherwise similar quality and other attributes. Maybe the bar for acceptance of an article on racial bias is set higher than the bar for acceptance of an article on gender bias? Cislak et al. engage in some hand-waving about the quality of the research being the same because of the use of "similar methods and paradigms". However, that's not a very convincing argument, and they don't actually control for research quality (noting whether articles use quantitative or qualitative methods is not a control for research quality).
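To see how that selection story could work, here is a toy simulation (all numbers invented): submissions on both topics draw quality from the same distribution, but journals set a higher bar for racial-bias articles. The published racial-bias articles then look stronger on average, even though the process is, if anything, biased in favour of gender-bias submissions.

```python
# Toy simulation of the selection argument above. Thresholds and
# distributions are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_submissions = 100_000

quality_gender = rng.normal(0.0, 1.0, n_submissions)
quality_racial = rng.normal(0.0, 1.0, n_submissions)

bar_gender, bar_racial = 0.5, 1.0  # higher bar for racial-bias submissions

published_gender = quality_gender[quality_gender > bar_gender]
published_racial = quality_racial[quality_racial > bar_racial]

print(f"Acceptance rate (gender bias): {len(published_gender) / n_submissions:.1%}")
print(f"Acceptance rate (racial bias): {len(published_racial) / n_submissions:.1%}")
print(f"Mean published quality (gender bias): {published_gender.mean():.2f}")
print(f"Mean published quality (racial bias): {published_racial.mean():.2f}")
# Published racial-bias work scores higher on average despite the *lower*
# acceptance rate: the prestige gap is a selection effect, and coexists
# with an editorial process that favours gender-bias submissions.
```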
Similarly, if the source of funding is not held constant between gender bias and racial bias research, the funding comparison tells us little about bias (unless we are considering bias in the availability of funding, which is arguably what Cislak et al. are getting at). Even then, without knowing the acceptance rates of funding applications for gender bias and racial bias research, there is no reason to believe that a difference in the proportion of funded articles is evidence of bias in either direction.
In short, this research is unconvincing. Show me an audit study on this topic, and I'd likely give it more weight. But this observational research simply doesn't cut it.