Wednesday, 14 January 2026

David Deming on generative AI and commitment to learning, and the impact of generative AI on signalling in education

When I was writing yesterday's post on generative AI and the economics major, I really wished I had read this post by David Deming on generative AI and learning, so that I could have linked the two together. Instead, I'll use this post to draw on Deming's ideas and flesh out why I think that generative AI makes signalling in education harder, and why that is a problem (in contrast with Matthew Kahn, who, as noted in yesterday's post, thinks that generative AI reduces problems of information asymmetry).

First, Deming writes about the tension in education between students' desire to learn and their desire to make life easier (the 'divided self'), drawing on the example of Odysseus:

A vivid illustration of our divided self comes from a famous behavioral economics paper called “Tying Odysseus to the Mast: Evidence from a Commitment Savings Product in the Philippines”. They found that customers flocked to and greatly benefited from a bank product that prevented them from accessing their own savings in the future. Just like when Odysseus tied himself to the mast of his ship so that he would not be tempted by the alluring song of the Sirens...

The Sirens offer Odysseus the promise of unlimited knowledge and wisdom without effort. He survives not by resisting his curiosity, but by restricting its scope and constraining his own ability to operate. The Sirens possess all the knowledge that Odysseus seeks, but he realizes he must earn it. There are no shortcuts. This is the perfect metaphor for learning in the age of superintelligence.

The analogy to generative AI is obvious: it is a tool that offers unlimited knowledge without effort, but using it means that the effort necessary for genuine learning is never expended. As Deming concludes:

Learning is hard work. And there is now lots of evidence that people will offload it if given the chance, even if it isn’t in their long-run interest. After nearly two decades of teaching, I’ve realized that my classroom is more than just a place where knowledge is transmitted. It’s also a community where we tie ourselves to the mast together to overcome the suffering of learning hard things.

How does this relate to the quality of signalling? It is worth reviewing the role of signalling in education, as I discussed in this post:

On the other hand, education provides a signal to employers about the quality of the job applicant. Signalling is necessary because there is an adverse selection problem in the labour market. Job applicants know whether they are high quality or not, but employers do not know. The 'quality' of a job applicant is private information. High-quality (intelligent, hard-working, etc.) job applicants want to reveal to employers that they are hard-working. To do this, they need a signal - a way of credibly revealing their quality to prospective employers.

In order for a signal to be effective, it must be costly (otherwise everyone, even those who are lower quality job applicants, would provide the signal), and it must be costly in a way that makes it unattractive for the lower quality job applicants to attempt (such as being more costly for them to engage in).

Qualifications (degrees, diplomas, etc.) provide an effective signal (they are costly, and more costly for lower quality applicants who may have to attempt papers multiple times in order to pass, or work much harder in order to pass). So by engaging in university-level study, students are providing a signal of their quality to future employers. The qualification signals to the employer that the student is high quality, since a low-quality applicant wouldn't have put in the hard work required to get the qualification.
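To make that last condition precise, it helps to write it in the standard Spence signalling notation (the symbols here are my own shorthand, not from the original post): let $W$ be the wage premium that employers pay to applicants who hold the qualification, and let $c_H$ and $c_L$ be the costs of completing it for high-quality and lower-quality applicants respectively, with $c_L > c_H$. The qualification separates the two types, and so works as a signal, exactly when:

```latex
% Spence-style separating condition:
% high types gain from signalling (W - c_H >= 0),
% low types do not (W - c_L < 0).
c_H \le W < c_L
```

If both inequalities hold, only high-quality applicants complete the qualification, and employers can safely read it as a mark of quality.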

What does generative AI like ChatGPT do to this signalling? When students can outsource much of the effort required to complete assessments, not-so-good students no longer need to spend more time or effort to complete their qualification than good students do. Take-home assignments, essays, or written reports might be completed to a passing standard with very little effort from the student. Completing a qualification is no longer costly in a way that makes it unattractive for lower-quality job applicants to attempt. That means that employers can no longer infer a job applicant's quality from whether or not they completed a qualification.
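To see how this plays out numerically, here is a minimal sketch in Python (the wage premium and cost figures below are invented for illustration, not estimates of anything):

```python
# Illustrative sketch: how generative AI can break the signalling
# condition. All numbers are assumptions chosen for illustration.

def separates(wage_premium: float, cost_high: float, cost_low: float) -> bool:
    """The qualification separates applicant types when high-quality
    applicants find it worth completing (wage_premium >= cost_high)
    but lower-quality applicants do not (wage_premium < cost_low)."""
    return cost_high <= wage_premium < cost_low

WAGE_PREMIUM = 100  # value of holding the qualification

# Before generative AI: completing the qualification is much
# costlier for lower-quality applicants (repeated papers, extra
# study hours), so the signal separates the types.
print(separates(WAGE_PREMIUM, cost_high=60, cost_low=150))  # True

# After generative AI: assessment effort can be outsourced, the
# cost gap collapses, and no wage premium can separate the types.
print(separates(WAGE_PREMIUM, cost_high=50, cost_low=55))   # False
```

Once the cost gap collapses, completing the qualification tells the employer nothing about the applicant's type, which is exactly the problem described above.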

A solution suggested by Deming's post is for students to find some way of committing themselves to not using generative AI in assessment. For this to solve the signalling problem, the commitment has to be credible (believable), such as being verifiable by potential employers later. While students could commit themselves to not using generative AI and maintaining effortful learning, it is difficult to see how they could credibly reveal that they have done so. They need some way of ensuring that potential employers can verify that they didn't use generative AI. This is where universities could step in. If universities can certify that particular qualifications were 'AI-resistant', such as where assessment includes substantial supervised, in-person components (for example, tests or examinations), then that would help maintain the quality of the education signal. There are other options, of course, including oral examinations, group or individual presentations, or supervised practice assessments that make learning harder to fake. However, anything that falls short of being AI-resistant in the eyes of employers is unlikely to work. And limiting assessment styles in order to certify effortful learning doesn't come without a trade-off: AI-resistant assessment is likely to be less accessible, less flexible, less authentic, and potentially more likely to promote anxiety in students.

Kahn suggested in his post that "AI-proctored assessments and virtual tutors suddenly make effort and mastery visible in real time". That could work. However, AI proctoring by itself is not a solution. In order to retain their status as signals of quality, assessments need to require more effort to complete well for not-so-good students than for good students. Having an assessment where an AI proctors while a student uses a generative AI avatar to make an AI-generated presentation is not going to work. I'm sure that's not what Kahn was envisaging. Proctoring of online assessment (whether by humans or by AI) is not as easy as it sounds. Last year, I was part of a group tasked with evaluating online proctoring tools, to be rolled out for our new graduate medical school, and I was left thoroughly underwhelmed. All of the tools that we evaluated seemed to have simple workarounds that moderately tech-savvy students could easily employ. The solution that was offered (when the demonstrators could even offer one) was to have students complete assessments on-site, which more or less defeats the purpose of online proctoring.

Anyway, the point is that generative AI reduces the signalling value of education. That signalling value can be retained, but only if students commit to effortful learning, and universities certify that effort in a way that students who don't expend it cannot mimic.

[HT: Marginal Revolution]
