I've previously written a couple of posts on gender differences in multiple choice answering (see here and here). The key point of the research I highlighted there is that male students perform relatively better on multiple choice questions than female students. However, a good answer as to why male students perform better still eludes us. Here's what I said in my 2019 post:
Maybe it is that female students don't respond well to high pressure or competitive situations (and multiple choice questions are reasonably high pressure). Women are more risk averse than men, so maybe it is related to that? Female students might be more likely to skip questions in order to avoid the risk of losing marks... Also, men are more overconfident than women, so maybe it is related to that? Again, male students might be less likely to skip questions, because they are less likely to be unsure they have the right answer.
So, with that question still open, I was interested to read this job market paper by Silvia Griselda (Bocconi University) on the gender gap in mathematics. Griselda used the PISA tests from 2012 and 2015, which included around 500,000 students in total, from over 60 countries. Interestingly, I hadn't realised that the PISA test formats varied, with different students facing different numbers of multiple choice, closed response questions (which require a very simple, short answer, e.g. a single number), and open response questions (which require a more detailed answer, e.g. an explanation of how the answer was derived). So, Griselda compares the performance on the test between male and female students, based on the proportion of mathematics questions that were multiple choice. She first notes that:
Both in the PISA 2012 and 2015 tests, boys perform better than girls in all formats of mathematics questions. Yet, the gender difference in performance is significantly bigger in multiple-choice questions.
Then in her main analysis, Griselda finds that:
...an increase in the proportion of multiple-choice questions by 10 percentage points differentially reduces girls' scores by 0.031 standard deviations compared to boys in 2012, and by 0.021 in 2015. The effect of multiple-choice questions on the gender gap in performance is not small... This effect is comparable to a decrease in teacher quality of one-quarter of a standard deviation... or an increase in a class size of one student...
There also appear to be spillovers, whereby female students also perform worse on both closed response and open response questions when more of the mathematics questions are multiple choice. Griselda then further exploits the data from the 2015 PISA, which was computerised and recorded the time that each student spent on each question. She categorises students as 'inattentive' if they spent too little time on three or more questions (defined as being shorter than 10 percent of the time taken on average for that question in the student's country), and/or if they skipped three or more questions (despite having five or more minutes left when they finished the test). She finds that:
...boys are significantly more likely than girls to be identified as inattentive students (the proportion of inattentive boys is 9.38%, while the proportion of inattentive girls is 8.51%, and the difference is statistically different from zero)...
...when the proportion of multiple-choice question features [sic] in the exam increases, girls become differentially more disengaged than boys... This means that a 10 percentage point increase in the proportion of multiple-choice questions can reverse the gender gap in student engagement level.
So, when there are more multiple choice questions, female students are more disengaged (inattentive), and that explains their worse performance on multiple choice questions. Griselda takes things further though, showing that:
The proportion of multiple-choice questions received has a negative and significant effect on the gender difference in performance among students with a low level of confidence and self-efficacy. On the contrary, there is not a significantly different effect of the proportion of multiple-choice questions receive among high confident [sic] and high self-efficacy students...
The proportion of multiple-choice questions has a negative and significant effect on female performance only among students whose mother is not employed in STEM-related occupations. The marginal effect of the proportion of multiple-choice is not statistically significant among students whose mothers work in STEM-related occupations.
That would seem to support the idea of stereotype threat, as noted in my 2019 post. However, there is one problem I see with this paper: the results only seem to hold for mathematics, and not for science (or for reading). If multiple choice is a problem for female students more than male students, then that difference should be apparent across all domains, and yet Griselda finds:
The proportion of multiple-choice questions in reading does not affect males and females performance, while the proportion of science multiple-choice question has an unclear effect on students' performance in science.
I would have been interested to see the same sort of analysis based on mothers in STEM-related occupations for the science results (a comparable analysis for confidence wouldn't be possible, as students were only asked about confidence with mathematics in 2015).
So, these results again demonstrate that male students perform better than female students on multiple choice questions. They provide further suggestive evidence that this might relate to confidence and stereotype threat. However, given that they appear to hold for mathematics and not for science, they are not conclusive to me. We need more research like this, especially research trying to identify the mechanisms that are driving the headline differences. Without understanding the mechanisms, we won't be able to adequately address the problem.
[HT: This article by Griselda in The Conversation, back in May]