Following on from yesterday's post, which discussed research demonstrating bias against female teachers in student evaluations of teaching [SETs] (see also this post, although the meta-analysis in yesterday's post suggested that economics is the only social science to show such a bias), it is reasonable to wonder how that bias can be addressed. Would simply drawing students' attention to the bias be enough, or do we need to moderate student evaluations in some way?
The question of whether a simple intervention would work is addressed in this recent article by Anne Boring (Erasmus School of Economics) and Arnaud Philippe (University of Bristol), published in the Journal of Public Economics (ungated earlier version here). Boring and Philippe conducted an experiment in the 2015-16 academic year across seven campuses of Sciences Po in France, where each campus was assigned to one of three groups: (1) a control group; (2) a "purely normative" treatment; or (3) an "informational" treatment. As Boring and Philippe explain:
The administration sent two different emails to students during the evaluation period. One email—the "purely normative" treatment—encouraged students to be careful not to discriminate in SETs. The other email—the "informational" treatment—added information to trigger bias consciousness. It included the same statement as the purely normative treatment, plus information from the study on gender biases in SETs. The message contained precise information on the presence of gender biases in SET scores in previous years at that university, including the fact that male students were particularly biased in favor of male teachers.
At the treated campuses, half of the students received the email and half did not. No students at the control campuses received an email. In addition, the emails were sent after the evaluation period had already opened, so some students completed their evaluations before the treatment and some after. This design allows Boring and Philippe to use a difference-in-differences analysis: they compare the before-versus-after change in evaluations from students at treatment campuses who were assigned to receive the email with the corresponding change from students at control campuses. The difference between those two differences is the estimated effect of the intervention (a minimal sketch of this comparison follows the quoted results below). Conducting this analysis, they find that:
...the purely normative treatment had no significant impact on reducing biases in SET scores. However, the informational treatment significantly reduced the gender gap in SET scores, by increasing the scores of female teachers. Overall satisfaction scores for female teachers increased by about 0.30 points (between 0.08 and 0.52 for the confidence interval at 5%), which represents around 30% of a standard deviation. The informational treatment did not have a significant impact on the scores of male teachers...
The reduction in the gender gap following the informational email seems to be driven by male students increasing their scores for female teachers. On the informational treatment campuses, male students’ mean ratings of female teachers increased from 2.89 to 3.20 after the emails were sent... Furthermore, the scores of the higher quality female teachers (those who generated more learning) seem to have been more positively impacted by the informational email.
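To make the identification strategy concrete, here is a minimal sketch of the difference-in-differences comparison described above, written in Python. It is not Boring and Philippe's actual specification (their analysis uses the university's administrative data and richer controls); the file name, column names, and the simple two-way interaction regression are all illustrative assumptions.

```python
# Minimal difference-in-differences sketch (illustrative only, not the authors'
# actual specification). Assumed dataset: one row per student evaluation, with
# hypothetical columns:
#   score          - the SET score the student gave
#   treated_campus - 1 if the student's campus received one of the email treatments
#   emailed        - 1 if the student was individually sent a treatment email
#   post           - 1 if the evaluation was submitted after the emails went out
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("set_scores.csv")  # hypothetical file

# Compare students assigned to receive the email at treated campuses with
# students at control campuses (pooling the two treatment arms for simplicity).
sample = df[(df["emailed"] == 1) | (df["treated_campus"] == 0)]

# Raw difference-in-differences from the four group means.
means = sample.groupby(["treated_campus", "post"])["score"].mean()
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"Difference-in-differences estimate: {did:.3f}")

# Equivalent regression: the coefficient on treated_campus:post is the DiD estimate.
model = smf.ols("score ~ treated_campus * post", data=sample).fit()
print(model.params["treated_campus:post"])
```

In the paper itself the two treatments are estimated separately, so the "purely normative" and "informational" emails each get their own effect; the pooled version above is just the simplest way to show the before-after, treated-control structure.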
That all seems very positive. Also, comparing evaluations from students at control campuses with evaluations from students in the control group at treated campuses, before and after the email was sent, allows Boring and Philippe to investigate whether the interventions had a spillover effect on students who did not receive the email (a second sketch, after the quoted passage below, illustrates this comparison). They find that:
...the informational treatment had important spillover effects. On informational treatment campuses, we find an impact on students who received the email and on students who did not receive the email. Anecdotal evidence suggests that this email sparked conversations between students within campuses, de facto treating other students.
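The spillover comparison itself can be sketched in the same way as before; the file and column names are again hypothetical stand-ins, and the "treated" group is now students at treatment campuses who were not sent the email, compared against students at control campuses.

```python
# Spillover sketch (illustrative): students at treated campuses who were NOT
# sent the email, versus students at control campuses, before and after the
# emails went out. Same hypothetical columns as in the previous sketch.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("set_scores.csv")          # hypothetical file
never_emailed = df[df["emailed"] == 0]      # keep only students who were not sent the email
spill = smf.ols("score ~ treated_campus * post", data=never_emailed).fit()
print(spill.params["treated_campus:post"])  # spillover DiD estimate
```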
The anecdotal evidence (based on responses to an email asking students whether they had discussed the informational email with others) both provides a plausible mechanism for the spillover effects and suggests that the emails may have been effective in spurring important conversations about gender bias. Importantly, the informational emails also had an enduring effect. Looking at evaluations one semester later, Boring and Philippe find that:
The effect of the informational treatment remains significant during the spring semester: female teachers improved their scores. The normative treatment remained ineffective.
So, it appears that it is possible to reduce gender bias in student evaluations of teaching with a simple intervention: an email that not only asks students not to discriminate, but also informs them of the existence and direction of the bias.