Monday, 5 December 2022

The prevalence of cheating in online multiple-choice exams

In my ECONS101 class, we have weekly online tests, comprising multiple-choice questions with a couple of additional calculation questions thrown in. While the online tests contribute to students' grades, the contribution is small (each test is worth about 1 percent of a student's overall grade). The purpose of the online tests in that paper is to get students to engage with each topic as we go, and to give them quick feedback on their learning, rather than to test their knowledge of the content and how to apply it. Even though the student code of conduct precludes it, no doubt some students work together on the online tests, and that is part of the reason why the tests are worth so little towards students' overall grades.

The pandemic forced an immediate change to assessment practices. Many lecturers who previously would have run in-class tests or exams had to shift those assessments online. When an online test contributes much more to a student's grade, students have a much stronger incentive to work together, and we should expect a much higher incidence of academic integrity problems. But how much higher?

That is the question addressed in this new article by Flip Klijn (Barcelona School of Economics), Mehdi Mdaghri Alaoui (Universitat Pompeu Fabra), and Marc Vorsatz (Universidad Nacional de Educación a Distancia), published in the Journal of Economic Psychology (open access). Klijn et al. report on a randomised experiment they conducted when classes were rapidly moved online at Universitat Pompeu Fabra, only two weeks before the final exam. The exam was 100 percent multiple choice, and when it moved online it was set up so that students could view only one question at a time and, once they had answered a question, could not backtrack. By randomising the order in which students were shown particular questions, Klijn et al. test whether students who saw a given question later were more likely to answer it correctly, and spent less time on it (both of which would indicate that some students who saw questions later were copying answers from those who saw them earlier).
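To make the design concrete, here is a minimal sketch (in Python) of how an order effect could show up in data like theirs. This is not the authors' code, and all of the parameter values and variable names are hypothetical:

```python
import random
from statistics import mean

# Minimal sketch (not the authors' code): students are randomly assigned to
# one of two exam versions. Version A shows question Q in the early round;
# version B shows the same question in the late round. If answers leak from
# early to late, late viewers should be correct more often (and be faster).

random.seed(42)

N_STUDENTS = 494        # class size reported in the article
BASE_P_CORRECT = 0.60   # hypothetical probability of answering correctly unaided
LEAK_BONUS = 0.05       # hypothetical boost to correctness from seeing Q late

versions = [random.choice("AB") for _ in range(N_STUDENTS)]

def is_correct(version):
    p = BASE_P_CORRECT + (LEAK_BONUS if version == "B" else 0.0)
    return random.random() < p

results = [(v, is_correct(v)) for v in versions]
early = [ok for v, ok in results if v == "A"]  # saw Q in the early round
late = [ok for v, ok in results if v == "B"]   # saw Q in the late round

# The late-minus-early gap in correctness is the signature the randomised
# design is looking for; a similar comparison applies to completion times.
print(f"early correctness: {mean(early):.3f}")
print(f"late correctness:  {mean(late):.3f}")
print(f"order effect:      {mean(late) - mean(early):+.3f}")
```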

Their data come from an introductory game theory class of 494 students. They find that:

First, the students that received a given problem in the later round performed better in terms of higher correctness and shorter completion time. Second, with respect to the questions of the problem that was not subject to order randomization, no significant differences regarding correctness and completion time are found for the different exam versions.

Finally, they gave half of the students a reminder notice about academic integrity halfway through the online exam. However:

...the reminder of the university’s code of ethics... did not affect the correctness of the answers to nor the completion time of subsequent questions.

Of course, Klijn et al. don't know which students, if any, were actually cheating. However, they undertake a simulation exercise that establishes an upper bound of 8.7 percent of students copying from each other. I'm unsure whether that is higher or lower than I would have expected.
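The paper's own simulation procedure is its own and more elaborate, but purely to illustrate the general logic of bounding the copying rate from above, a toy version might ask: what is the largest copying rate among late viewers that is still consistent with the observed correctness gap? Every number below is made up for illustration:

```python
import random
from statistics import mean

# Toy illustration only (not the authors' simulation): find the largest
# copying rate among late viewers that is consistent with an observed
# late-minus-early correctness gap, and read it as an upper bound.

random.seed(1)

N_EARLY = N_LATE = 247   # roughly half of the 494 students in each group
BASE_P = 0.60            # hypothetical correctness when answering honestly
COPY_P = 0.90            # hypothetical correctness when copying a peer's answer
OBSERVED_GAP = 0.03      # hypothetical observed correctness gap

def simulated_gap(copy_rate, reps=500):
    """Average late-minus-early correctness gap when a share `copy_rate`
    of late viewers copy instead of answering on their own."""
    gaps = []
    for _ in range(reps):
        early = mean(random.random() < BASE_P for _ in range(N_EARLY))
        late = mean(
            random.random() < (COPY_P if random.random() < copy_rate else BASE_P)
            for _ in range(N_LATE)
        )
        gaps.append(late - early)
    return mean(gaps)

# Grid search over copying rates: the largest rate whose simulated gap stays
# at or below the observed gap is a rough upper bound on copying.
candidate_rates = [r / 100 for r in range(0, 26)]
upper_bound = max(r for r in candidate_rates if simulated_gap(r) <= OBSERVED_GAP)
print(f"rough upper bound on the copying rate: {upper_bound:.2f}")
```

Klijn et al. conclude with the suggestion that: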

...giving all students the same questions seems a risky procedure for on-line exams, especially if there are no further measures to inhibit cheating. In fact, a fair and possibly more cheating-proof procedure in this case would be precisely the opposite of a unique list of questions: for each question, a sufficiently large number of different versions should be generated and then randomly assigned to students. Here, "different versions" refers to scaling, switching, etc. of numerical values, and depending on the permitted procedures by the university's authorities, a potentially wider range of variations.

That seems feasible in the case of multiple-choice (and calculation-style) questions, and in fact it is something I already apply in my ECONS101 weekly online tests (though not for all questions).
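As a concrete example of what "many versions per question" might look like, here is a minimal sketch for a calculation-style question. The question template, parameter ranges, and the student_id value are all invented for illustration:

```python
import random

# Minimal sketch of the "many versions per question" idea: randomise the
# numerical values in a question template and recompute the answer for each
# variant. The template and parameter ranges here are invented examples.

def make_variant(rng):
    price = rng.choice([8, 10, 12, 15])   # randomised price parameter
    quantity = rng.randint(50, 200)       # randomised quantity parameter
    answer = price * quantity             # answer recomputed per variant
    question = (
        f"A firm sells {quantity} units at a price of ${price} each. "
        f"What is the firm's total revenue?"
    )
    return question, answer

# Seeding with a student identifier keeps each student's variant stable
# across page reloads, while still varying across students.
student_id = 12345
q, a = make_variant(random.Random(student_id))
print(q)
print("Answer:", a)
```

Scaling or switching values in this way keeps the skill being tested constant while making answer-sharing much less useful. However, not all online exams are 100 percent multiple choice, nor should they be, as that would limit the ability to test students' skill in applying what they have learned. It seems to me, though, that open-ended questions are even more susceptible to academic integrity issues in online tests (and that was our experience in 2020, when we had online tests and assignments, and I sent a large number of students to the student disciplinary committee).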

Online assessment is rife with academic integrity issues, and I don't think we have found a good way to address them. I'll post sometime in the new year about our trial in the most recent trimester, where we gave students the option of completing weekly video reflections in place of tests and exams.
