Friday, 8 April 2022

Deterring cheating in online assessment requires more than cheap talk

Cheating is a serious problem in online assessments. The move to more online teaching, learning, and assessment has made it all the more apparent that teachers need appropriate strategies and tools to deal with cheating. However, once those tools are in place, they will only deter cheating if students know the detection tools exist and believe that they will be caught. How do teachers get students to believe they will be caught?

That is essentially the question addressed in this new article by Daniel Dench (Georgia Institute of Technology) and Theodore Joyce (City University of New York), published in the Journal of Economic Behavior and Organization (ungated earlier version here). Dench and Joyce run an experiment at a large public university in the US to see whether cheating can be deterred. As they explain:

The setting is a large public university in which undergraduates have to complete a learning module to develop their facility with Microsoft Excel. The software requires that students download a file, build a specific spreadsheet, and upload the file back into the software. The software grades and annotates their errors. Students can correct their mistakes and resubmit the assignment two more times. Students have to complete between 3 to 4 projects over the course of the semester depending on the course. Unbeknownst to the students, the software embeds an identifying code into the spreadsheet. If students use another student’s spreadsheet, but upload it under their name, the software will indicate to the instructor that the spreadsheet has been copied and identify both the lender and user of the plagiarized spreadsheet. Even if a student copies just part of another student’s spreadsheet, the software will flag the spreadsheet as not the student’s own work.
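The article does not describe how the identifying code actually works, beyond saying that the software embeds it in the spreadsheet. As a purely illustrative sketch of the general idea (not a description of the real software), a grading system could attach a keyed, per-student token to each assignment file when it is issued, and check that token against the class roster when a file is submitted. The secret key, function names, and token placement below are all assumptions for illustration.

```python
import hmac
import hashlib

# Illustration only: a keyed per-student token, of the kind a grading system
# might hide in file metadata. This is NOT the actual mechanism used by the
# software described in the paper.
SECRET_KEY = b"instructor-side secret"  # assumed to be known only to the grading system


def issue_token(student_id: str) -> str:
    """Token embedded in the assignment file issued to this student."""
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()


def check_submission(submitter: str, embedded_token: str, roster: list[str]) -> str:
    """Report whether the embedded token matches the submitter or another student."""
    for student in roster:
        if hmac.compare_digest(issue_token(student), embedded_token):
            if student == submitter:
                return "ok: file originated with the submitter"
            return f"flag: file issued to {student} (lender) but submitted by {submitter}"
    return "flag: token missing or altered"


# Example: alice submits a file that still carries bob's token.
roster = ["alice", "bob"]
print(check_submission("alice", issue_token("bob"), roster))
# -> flag: file issued to bob (lender) but submitted by alice
```

A keyed token like this identifies both the lender and the copier, which matches the paper's description of what the instructor sees when a copied spreadsheet is uploaded.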

Focusing on four courses (one in finance, one in management, and two in accounting) that required multiple projects, Dench and Joyce randomised students into two groups (A and B):

One week before the first assignment was due, we sent an email to Group A reminding students to submit their own work and that the software could detect any work they copied from another spreadsheet. The email further stated that those caught cheating on the first assignment would be put on a watch list for subsequent assignments. Further violations of academic integrity would involve their course instructor for further disciplinary action. Group B received the same email one week before the second assignment. All students flagged for cheating in either of the two assignments were sent an email informing them that they were currently on a watch list for the rest of the semester’s assignments.

Dench and Joyce then test two effects: first, the effect of informing students that the software can detect cheating; and second, the effect of being flagged for sanctions after cheating on one (or both) of the first two assignments. They find that:

...warning students about the software’s ability to detect cheating has a practically small and statistically insignificant effect on cheating rates. Flagging cheaters, however, and putting them at risk for sanctions lowers cheating by approximately 75 percent.

The results were similar for the finance and management courses, but the effects were much smaller in the accounting courses, where (interestingly) the extent of cheating was much lower, and where the first projects were due after the sanctions had already been applied in the finance and management courses (so there may have been spillover effects).

Overall, these results suggest that simply telling students that you can detect cheating and that they will be caught is not effective. Students see it as 'cheap talk' that lacks credibility. Students need to genuinely believe that they will be caught, and that there will be consequences. One way to make the threat credible is to actually catch students cheating and call them out on it. And if this happens in one class, it appears that the deterrent effect can spill over to other classes, and to other semesters, as Dench and Joyce note in an epilogue to the article:

In Finance and Management the rate of cheating after the spring of 2019 was 80 to 90 percent lower than levels reported in the experiment. Given the complete lack of an effect of warnings in the experiment, we suspect that subsequent warnings were viewed as more credible based on the experience of students in the spring of 2019.

Student cheating in assessments is a challenging problem to deal with. If we want to deter students from cheating, we have to catch them, tell them they have been caught, and impose some consequences. Only then will our warnings about cheating detection be credible deterrents.

[HT: Steve Tucker]
