From my perspective, the most challenging aspect of teaching during the pandemic lockdowns last year wasn't the teaching itself; it was dealing with students cheating in online assessments. To give you some idea, I sent more students to the Student Discipline Committee in B Trimester 2020 than I had in the previous 10 years of teaching combined. All but one of those students ended up failing their paper. And I was not alone. The Student Discipline Committee faced a huge increase in workload, especially related to students using contract cheating websites to answer assessment questions for them.
Anyway, as you may expect, my experiences (and those of my colleagues) are not isolated examples. In a new paper in the Journal of Economic Behavior and Organization (ungated earlier version here), Eren Bilen (University of South Carolina) and Alexander Matros (Lancaster University) look at cheating in online assessments. They use two examples to illustrate the pervasiveness of cheating: (1) students in an intermediate-level class in Spring Semester 2020 (when lockdowns were introduced partway through the semester); and (2) online chess tournaments. They motivate their analysis with a simple game-theoretic model, as shown below (the first payoff is to the student, and the second payoff is to the professor).
They note that in the sequential game:
It is easy to find a unique subgame perfect equilibrium outcome, where the student is honest and the professor does not report the student. Note that this is the best outcome for the professor and the second best outcome for the student.
To see why that is the subgame perfect Nash equilibrium, we can use backward induction. Essentially, we work out what the second player (the professor) will do first, and then use that to work out what the first player (the student) will do. If the student cheats, we are moving down the left branch of the tree. The professor's best option in that case is to report the student (since a payoff of 3 is better than a payoff of 2), so the student knows that if they cheat, the professor will report them. If the student doesn't cheat, we are moving down the right branch of the tree. The professor's best option in that case is not to report the student (since a payoff of 4 is better than a payoff of 1), so the student knows that if they don't cheat, the professor will not report them. The student's choice is therefore between cheating and getting reported (a payoff of 1) or not cheating and not getting reported (a payoff of 3). Of course, the student will choose not to cheat. The subgame perfect Nash equilibrium here is that the student doesn't cheat, and the professor doesn't report them.
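To make the backward induction concrete, here is a small Python sketch. The payoff numbers are the ones implied by the discussion above (my reading of the game tree, not the authors' own code), and the names are purely illustrative:

```python
# Backward induction on the sequential cheating game.
# Payoffs are (student, professor); the numbers are those implied by the
# discussion above, and the labels are purely illustrative.
payoffs = {
    ("cheat", "report"): (1, 3),
    ("cheat", "don't report"): (4, 2),
    ("don't cheat", "report"): (2, 1),
    ("don't cheat", "don't report"): (3, 4),
}

def professor_best_reply(student_choice):
    # The professor moves second, so they simply pick whichever reply gives
    # them the higher payoff, given what the student has done.
    return max(("report", "don't report"),
               key=lambda reply: payoffs[(student_choice, reply)][1])

def student_best_choice():
    # The student anticipates the professor's reply on each branch and picks
    # the branch with the higher student payoff.
    return max(("cheat", "don't cheat"),
               key=lambda choice: payoffs[(choice, professor_best_reply(choice))][0])

choice = student_best_choice()
print(choice, professor_best_reply(choice))  # -> don't cheat don't report
```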
The problem with that analysis is that the professor doesn't know with certainty whether the student has cheated or not. So, Bilen and Matros move on to a simultaneous game, as shown below. Even though the players make their choices sequentially, because the student's choice about whether to cheat is not revealed to the professor, it is as if the professor is making their choice about whether to report at the same time as the student. That is what makes this a simultaneous game.
Bilen and Matros note that, in this game:
This game has a unique mixed-strategy equilibrium, which means that the student and the professor should randomize between their two actions in equilibrium. Thus cheating as well as reporting is a part of the equilibrium.
To see why, we need to find the Nash equilibria in this game, and to do that we can use the 'best response method'. That means identifying, for each strategy that each player could choose, the best response of the other player. Where both players are selecting a best response, they are each doing the best they can, given the choice of the other player (this is the textbook definition of Nash equilibrium). In this game, the best responses are:
- If the student chooses to cheat, the professor's best response is to report the student (since 3 is a better payoff than 2);
- If the student chooses not to cheat, the professor's best response is not to report the student (since 4 is a better payoff than 1);
- If the professor chooses to report the student, the student's best response is to not cheat (since 2 is a better payoff than 1); and
- If the professor chooses not to report the student, the student's best response is to cheat (since 4 is a better payoff than 3).
A Nash equilibrium occurs where both players' best responses coincide (normally I would track this with ticks and crosses, but since I didn't create the payoff table I haven't done so in this case). Notice that there isn't actually any case where both players are playing a best response. If the student cheats, the professor's best response is to report them. But if the professor is going to report the student, the student's best response is to not cheat. But if the student doesn't cheat, the professor's best response is not to report them. But if the professor doesn't report the student, the student's best response is to cheat. We are simply going around in a circle.
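If you want to see that cycle spelled out mechanically, here is a short Python check (same assumed payoff numbers as in the sketch above) that looks for a profile where both players are best responding, and finds none:

```python
# Pure-strategy Nash equilibrium check for the simultaneous cheating game.
# Payoffs are (student, professor), using the same assumed numbers as above.
payoffs = {
    ("cheat", "report"): (1, 3),
    ("cheat", "don't report"): (4, 2),
    ("don't cheat", "report"): (2, 1),
    ("don't cheat", "don't report"): (3, 4),
}
student_options = ("cheat", "don't cheat")
professor_options = ("report", "don't report")

def is_nash(student_choice, professor_choice):
    # A profile is a Nash equilibrium if neither player can do better by
    # unilaterally switching to their other strategy.
    student_ok = all(payoffs[(student_choice, professor_choice)][0]
                     >= payoffs[(alt, professor_choice)][0]
                     for alt in student_options)
    professor_ok = all(payoffs[(student_choice, professor_choice)][1]
                       >= payoffs[(student_choice, alt)][1]
                       for alt in professor_options)
    return student_ok and professor_ok

print([profile for profile in payoffs if is_nash(*profile)])  # -> []
```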
In cases such as this, we say that there is no Nash equilibrium in pure strategies. However, there will be a mixed-strategy equilibrium, where the players randomise their choices of strategy. The student should cheat with some probability, and the professor should report the student with some probability. The optimal probabilities depend on the actual payoffs to the players (the sketch below works them out for the payoffs above, for anyone curious).
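The idea is to find the probabilities that leave each player indifferent between their two actions. A minimal sketch, again using the assumed payoff numbers implied above (not the authors' own figures):

```python
from fractions import Fraction

# Mixed-strategy equilibrium for the simultaneous cheating game, using the
# same assumed payoffs as above. If q is the probability the professor
# reports, the student is indifferent when
#     q*1 + (1-q)*4 == q*2 + (1-q)*3   (cheating vs. not cheating).
# If p is the probability the student cheats, the professor is indifferent when
#     p*3 + (1-p)*1 == p*2 + (1-p)*4   (reporting vs. not reporting).

def indifference(a, b, c, d):
    # Solve x*a + (1-x)*b == x*c + (1-x)*d for x.
    return Fraction(d - b, (a - b) - (c - d))

q = indifference(1, 4, 2, 3)  # professor's probability of reporting
p = indifference(3, 1, 2, 4)  # student's probability of cheating
print(p, q)  # -> 3/4 1/2
```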
Anyway, Bilen and Matros conclude that in-person tests and examinations most closely resemble the sequential game, where the equilibrium is for students not to cheat, but that online tests most closely resemble the simultaneous game, where cheating will be much more common. And, their analysis supports that theory:
Using a simple way to detect cheating - timestamps from the students’ Access Logs - we identify cases where students were able to type in their answers under thirty seconds per question. We found that the solution keys for the exam were distributed online, and these students typed in the correct as well as incorrect answers using the solution keys they had at hand.
Then they present their proposed solution to the problem of online cheating:
In order to address this issue based on our theoretical models, we suggest that instructors present their students with two options: (1) If a student voluntarily agrees to use a camera to record themselves while taking an exam, this record can be used as evidence of innocence if the student is accused of cheating; (2) If the student refuses to use a camera due to privacy concerns, the instructor should be allowed to make the final decision on whether or not the student is guilty of cheating, with evidence of cheating remaining private to the instructor.
I'm not sure that I agree. The optimal solution would be one that returns to conditions where cheating is easy to detect, as in the sequential game above. A voluntary webcam doesn't do this, since students who want to cheat, as well as students who have privacy concerns, would opt out. The game would revert to the simultaneous game for those students, and some of those students would cheat.
My solution, which will be implemented if we go into lockdown and can't have a final examination in the trimester due to start in a couple of weeks, is to move to individual oral examinations. It is a lot harder to hide when you are put on the spot in a Zoom call and asked to answer a question chosen at random by the lecturer. Certainly, you can't use an online cheating website to provide the answer for you in such a situation! I haven't fully worked out the mechanics of how it would work (and hopefully I never have to implement it!), but it would seem to me to return the assessment to a sequential game.
On the other hand, the theory of asymmetric information and signalling does suggest that the webcam solution may work. The student knows whether they intend to cheat or not. The professor doesn't know. Whether the student is planning to cheat is private information. The student can reveal this information by the choices they make. Say that the professor offers a voluntary webcam policy, where students who agree set up a webcam so that they can be observed while they complete the test. Students who agree to the webcam are clearly those who weren't intending to cheat, because if they cheated on camera they would surely be caught. In contrast, a student who was intending to cheat wouldn't agree to the policy. And so, the professor can infer who the likely cheaters are, simply by offering the policy. The students signal whether they are cheaters by agreeing to have the webcam, or not.
That seems way too simple, and you could argue that it is somewhat coercive, forcing honest students to compromise their privacy in order to signal their honesty. It's going to be imperfect too, because it would be difficult to separate students who opt out because they intend to cheat from those who can't afford a webcam, whose internet connection is too slow to support webcam use, and so on.
And invigilating the webcam solution is going to be extraordinarily expensive. You can't get AI to do the supervision (at least, not yet). And since Zoom only displays around 25 faces on screen at a time, you would need at least one invigilator per 25 students. Probably you need more than that, because the invigilator needs to be able to see clearly what each student is doing (so unless they are using a 50" screen, you probably want a lot fewer than 25 images on-screen at a time). I think I'll stick with the oral examination solution.
Cheating in online assessments is clearly a serious problem. This is the main reason that I am highly sceptical about the move to online education. Until we can solve the serious academic integrity issues in a cost-effective way (and we currently don't have one), the quality of assessment (in terms of accurately ranking students or assessing their knowledge and skills) is so low that it makes a mockery of the whole idea of teaching online.