Saturday, 23 November 2024

Jared Cooney Horvath on how generative AI could harm learning

In a post last month about generative AI, I expressed some scepticism towards those among my colleagues who are trying to integrate generative AI into assessment (an "if you can't beat them, join them" solution to the impact of generative AI on assessment). I also expressed some hope that generative AI can be used in sensible ways to assist in student learning. Both of those views are contested. They certainly are not universally held among teachers.

In a recent article on the Harvard Business Publishing website, Jared Cooney Horvath outlines three critical problems generative AI poses for learning: (1) AI tools lack empathy, and an empathetic learner-teacher relationship is a strong contributor to learning; (2) while AI tools are good at retrieving information, in so doing they make having internal knowledge less important for students, and yet it is a broad internal knowledge that helps us to understand and solve problems; and (3) generative AI encourages multitasking, which is bad for learning.

On the latter point, Horvath concludes that:

It’s not that computers can’t be used for learning; it’s that they so often aren’t used for learning that whenever we attempt to shoehorn this function in, we place a very large (and unnecessary) obstacle between the learner and the desired outcome—one many struggle to overcome.

Finally, Horvath notes one positive for generative AI and learning:

There is one area of learning where generative AI may prove beneficial: cognitive offloading. This is a process whereby people employ an external tool to manage “grunt work” that would otherwise sap cognitive energy.

However, as noted above, when novices try to offload memorization and organization, learning is impaired, the emergence of higher-order thinking skills is stifled, and without deep-knowledge and skill, they’re unable to adequately vet outputs.

Experienced learners or experts can benefit from cognitive offloading. Imagine a mathematician using a calculator to avoid arithmetic, an event planner using a digital calendar to organize a busy conference schedule, or a lawyer using a digital index to alphabetize case files. In each of these scenarios, the individual has the requisite knowledge and skill to ensure the output meaningfully matches the desired outcome.

Horvath hasn't really changed my views on generative AI and learning. He does give some food for thought, though, especially in relation to the value of creating a fine-tuned AI designed to help with a particular course. If students use it as an interactive tutor, to help them develop their internal knowledge, then it is likely positive. However, if they use it purely to ask contingent questions, it may impair their ability to develop that internal knowledge and make them worse off. I wonder whether there are particular learning tasks that could encourage the former behaviour without too many students resorting to the latter. Clearly I have more thinking to do on this before I roll something like that out for my students.

[HT: Mary Low]
