In May last year, my university's Centre for Tertiary Teaching and Learning organised a seminar by Barbara Oakley of Oakland University, with the grand title 'The Science of Learning'. It was a fascinating seminar about the neuroscience of learning, and in my mind, it justified several of my teaching and learning practices, such as continuing to lecture, emphasising students' learning of basic knowledge in economics, and using retrieval practice and spaced repetition as learning tools.
Now, I've finally read the associated working paper by Oakley and co-authors (apparently forthcoming as a book chapter), and I've been able to pull out further insights that I want to share here. The core of their argument is in the Introduction to the paper. First:
Emerging research on learning and memory reveals that relying heavily on external aids can hinder deep understanding. Equally problematic, however, are the pedagogical approaches used in tandem with reliance on external aids—that is, constructivist, often coupled with student-centered approaches where the student is expected to discover the insights to be learned... The familiar platitude advises teachers to be a guide on the side rather than a sage on the stage, but this oversimplifies reality: explicit teaching—clear, structured explanations and thoughtfully guided practice—is often essential to make progress in difficult subjects. Sometimes the sage on the stage is invaluable.
I have resisted the urge to move away from lectures as a pedagogical tool, although I'd like to think that my lectures are more than simply information dissemination. I actively incorporate opportunities for students to make their first attempts at integrating and applying the economic concepts and models they are learning - the first step in an explicit retrieval practice approach. Oakley et al. note the importance of both components (explicit instruction and guided practice), because:
...mastering culturally important academic subjects—such as reading, mathematics, or science (biologically secondary knowledge)—generally requires deliberate instruction... Our brains simply aren’t wired to effortlessly internalize this kind of secondary knowledge—in other words, formally taught academic skills and content—without deliberate practice and repeated retrieval.
The paper goes into some detail about the neuroscience underlying this approach, but again it is summarised in the Introduction:
At the heart of effective learning are our brain's dual memory systems: one for explicit facts and concepts we consciously recall (declarative memory), and another for skills and routines that become second nature (procedural memory). Building genuine expertise often involves moving knowledge from the declarative system to the procedural system—practicing a fact or skill until it embeds deeply in the subconscious circuits that support intuition and fluent thinking...
Internalized networks form mental structures called schemata (the plural of “schema”), which organize knowledge and facilitate complex thinking... Schemata gradually develop through active engagement and practice, with each recall strengthening these mental frameworks. Metaphors can enrich schemata by linking unfamiliar concepts to familiar experiences... However, excessive reliance on external memory aids can prevent this process. Constantly looking things up instead of internalizing them results in shallow schemata, limiting deep understanding and cross-domain thinking.
This last point, about the shallowness of learning when students rely on 'looking things up' instead of relying on their own memory of key facts (and concepts and models, in the case of economics), leads directly to worries about learning in the context of generative AI. When students rely on external aids (a practice known as 'cognitive offloading'), learning becomes shallow, because:
...deep learning is a matter of training the brain as much as informing the brain. If we neglect that training by continually outsourcing, we risk shallow competence.
Even worse, there is a feedback loop embedded in learning, which exacerbates the negative effects of cognitive offloading:
Without internally stored knowledge, our brain's natural learning mechanisms remain largely unused. Every effective learning technique—whether retrieval practice, spaced repetition, or deliberate practice—works precisely because it engages this prediction-error system. When we outsource memory to devices rather than building internal knowledge, we're not just changing where information is stored; we're bypassing the very neural mechanisms that evolved to help us learn.
In short, internalized knowledge creates the mental frameworks our brains need to spot mistakes quickly and learn from them effectively. These error signals do double-duty: they not only help us correct mistakes but also train our attention toward what's important in different contexts, helping build the schemata we need for quick thinking. Each prediction error, each moment of surprise, thus becomes an opportunity for cognitive growth—but only if our minds are equipped with clear expectations formed through practice and memorization...
Learning works through making mistakes, recognising those mistakes, and adapting to reduce those mistakes in future. Ironically, this is analogous to how generative AI models are themselves trained: their training works by minimising prediction error (with reinforcement learning then used to fine-tune them). When students offload learning tasks to generative AI, they don't get an opportunity to develop the underlying internalised knowledge that allows them to recognise mistakes and learn from them. Thus, it is important for significant components of student learning to happen without resorting to generative AI (or other tools that allow students to cognitively offload tasks).
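To make that analogy concrete, here is a minimal sketch of prediction-error learning, in the spirit of the Rescorla-Wagner model from the conditioning literature (the learning rate, trial values, and variable names are my own illustrative choices, not taken from Oakley et al.). The gap between what was expected and what actually happened is exactly what drives the update: no error, no learning.

```python
# A minimal sketch of prediction-error learning (Rescorla-Wagner style).
# The learning rate and values here are illustrative, not from the paper.

def update_expectation(expectation, outcome, learning_rate=0.3):
    """Nudge an internal expectation toward an observed outcome.

    The prediction error (outcome - expectation) drives the update:
    if the learner predicted perfectly, nothing changes.
    """
    prediction_error = outcome - expectation
    return expectation + learning_rate * prediction_error

expectation = 0.0          # the learner starts with no expectation
for trial in range(10):    # repeated practice with feedback
    expectation = update_expectation(expectation, outcome=1.0)
    print(f"Trial {trial + 1}: expectation = {expectation:.3f}")
```

The point of the analogy is that a student who offloads the attempt to generative AI never generates a prediction of their own, so there is no error signal for their brain to learn from.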
Now, in order to encourage learning, teachers must provide students with the opportunity to make, and learn from, mistakes. Oakley et al. note that:
...cognitive scientists refer to challenges that feel difficult in the moment but facilitate deeper, lasting understanding as “desirable difficulties”... Unlike deliberate practice, which systematically targets specific skills through structured feedback, desirable difficulties leverage cognitive struggle to deepen comprehension and enhance retention...
Learning is not supposed to be easy. It is supposed to require effort. This is a point that I have made in many discussions with students. When they find a paper relatively easy, it is likely that they aren't learning much. And tools that make learning easier can hinder, rather than help, the learning process. In this context, generative AI becomes potentially problematic for some (but not all) students' learning. Oakley et al. note that:
Individuals with well-developed internal schemas—often those educated before AI became ubiquitous—can use these tools effectively. Their solid knowledge base allows them to evaluate AI output critically, refine prompts, integrate suggestions meaningfully, and detect inaccuracies. For these users, AI acts as a cognitive amplifier, extending their capabilities.
In contrast, learners still building foundational knowledge face a significant risk: mistaking AI fluency for their own. Without a robust internal framework for comparison, they may readily accept plausible-sounding output without realizing what’s missing or incorrect. This bypasses the mental effort—retrieval, error detection, integration—that neuroscience shows is essential for forming lasting memory engrams and flexible schemas. The result is a false sense of understanding: the learner feels accomplished, but the underlying cognitive work hasn’t been done.
The group that benefits from AI as a complement for studying is not just those who were educated before AI became ubiquitous, but also those who learn in an environment where generative AI is explicitly positioned as a complement to learning, rather than a substitute for it. To a large extent, the effect depends on how generative AI is used as a learning tool. Oakley et al. do provide some good examples (and I have linked to some in past blog posts). I'd also like to think the AI tutors I have created for my ECONS101 and ECONS102 students assist with, rather than hamper, learning (and I have some empirical evidence that seems to support this, which I have already promised to blog about in the future).
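I won't reproduce my actual tutor configurations here, but the complement-not-substitute principle is easy to illustrate. A hypothetical set of instructions along the following lines keeps the student doing the cognitive work (this is an illustrative sketch, not my actual ECONS101 or ECONS102 tutor setup):

```python
# A hypothetical system prompt for a tutor that complements, rather than
# substitutes for, student effort. Illustrative only; not my actual
# ECONS101/ECONS102 tutor configuration.
TUTOR_INSTRUCTIONS = """You are an economics tutor.
1. Never give the final answer directly; ask the student to attempt first.
2. Respond to attempts by naming the specific error, if there is one.
3. Ask one guiding question that points to the relevant concept or model.
4. Confirm a full solution only after the student has produced one."""
```

The common thread is that the tutor responds to the student's attempt, rather than pre-empting it.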
Oakley et al. conclude that:
Effective education should balance the use of external tools with opportunities for students to internalize key knowledge and develop rich, interconnected schemata. This balance ensures that technology enhances learning rather than creating dependence and cognitive weakness.
Finally, they provide some evidence-based strategies for enhancing learning (bolding is mine):
- **Embrace desirable difficulty—within limits:** Encourage learners to generate answers and grapple with problems before turning to help... In classroom practice, this means carefully calibrating when to provide guidance—not immediately offering solutions, but also not leaving students floundering with tasks far beyond their current capabilities...
- **Assign foundational knowledge for memorization and practice:** Rather than viewing factual knowledge as rote trivia, recognize it as the glue for higher-level thinking...
- **Use procedural training to build intuition:** Allocate class time for practicing skills without external aids. For instance, mental math exercises, handwriting notes, reciting important passages or proofs from memory, and so on. Such practices, once considered old-fashioned, actually cultivate the procedural fluency that frees the mind for deeper insight...
- **Intentionally integrate technology as a supplement, not a substitute:** When using AI tutors or search tools, structure their use so that the student remains cognitively active...
- **Promote internal knowledge structures:** Help students build robust mental frameworks by ensuring connections happen inside their brains, not just on paper... guide students to identify relationships between concepts through active questioning ("How does this principle relate to what we learned last week?") and guided reflection...
- **Educate about metacognition and the illusion of knowledge:** Help students recognize that knowing where to find information is fundamentally different from truly knowing it. Information that exists "out there" doesn't automatically translate to knowledge we can access and apply when needed.
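Several of these strategies rest on retrieval practice and spacing, and the mechanics are simple enough to sketch. Here is a minimal Leitner-style spaced repetition scheduler (the boxes and review intervals are illustrative choices of mine, not from the paper): correct recall promotes an item to a less frequently reviewed box, while failed recall demotes it back to daily practice.

```python
# A minimal sketch of a Leitner-style spaced repetition scheduler.
# The three boxes and their review intervals (in days) are illustrative
# choices of mine, not taken from Oakley et al.

REVIEW_INTERVALS = {1: 1, 2: 3, 3: 7}  # box number -> days until next review

def schedule(box, answered_correctly):
    """Return the item's next box after a retrieval attempt."""
    if answered_correctly:
        return min(box + 1, 3)  # correct recall: review less often
    return 1                    # failed recall: back to daily practice

# Example: a student misses a question on elasticity, then recalls it twice.
box = 2
for correct in [False, True, True]:
    box = schedule(box, correct)
    print(f"Review again in {REVIEW_INTERVALS[box]} day(s)")
```

The important design choice is that the review schedule is driven by the student's own retrieval attempts - the effortful recall, not the re-reading, is what does the work.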
I really like those strategies as a prescription for learning. However, I am understandably biased, because many of the things I currently do in my day-to-day teaching practice are encompassed within (or similar to) those suggested strategies. I'll work on making 'guided reflection' a little more interactive in my classes this year, as I have traditionally made the links explicit for the students, rather than inviting them to make those links for themselves. We have been getting our ECONS101 students to reflect more on their learning, and we'll be revising that activity (which happens in the first tutorial) this year to place more focus on metacognition.
Learning is something that happens (often) in the brain. It should be no surprise that neuroscience has some insights to share on learning, and what that means for pedagogical practice. Oakley et al. take aim at some of the big names in educational theory (including Bloom, Dewey, Piaget, and Vygotsky), so I expect that their work is not going to be accepted by everyone. However, I personally found a lot in it to vindicate my own pedagogical approach, which has developed over two decades of observational and experimental practice. I also learned that there are neuroscientific foundations for many aspects of my approach. And I learned that there are things I can do to potentially further improve student learning in my classes.