There doesn't appear to be much of a consensus on how to adapt higher education to generative AI. I have my own thoughts, which I have shared here several times already (see the links at the end of this post). However, I am open to the ideas of others. So, I was interested to read this new paper by Matthew Kahn (University of Southern California), where he discusses his views on the future of the economics major. Specifically:
I present an optimistic outlook on the evolution of our economics major over the coming decade, centered on the possibility of highly tailored, student-specific training that fully acknowledges the rich diversity of our students’ abilities, interests, and educational goals.
Kahn is correct in laying out the challenge that we face:
Faculty now face a steeper challenge in helping students see the value of investing sustained effort in a demanding subject like economics, especially when AI tools can produce quick answers and when attention is pulled in countless directions by social media, short-form video, gaming, and other digital platforms...
If students are not prepared for rigorous material, then the easy path for them to follow is to rely on the AI as a crutch. AI creates a moral hazard effect. In recent years, I have stopped assigning class papers because it was obvious to me that the well written papers were being written by the AI. Each economics professor faces the challenge of how to use the incentives we control to nudge students to make AI a complement (not a substitute) for their own time investment in their studies.
The challenge of making AI a complement rather than a substitute for learning has been a common theme in my writing on generative AI in education. Kahn's proposed solutions are not dissimilar to my own. For instance, in introductory economics:
Large language models can now go much further, acting as tireless, patient coaches that deliver truly adaptive “batting practice.” The AI begins with simple exercises and progressively escalates in difficulty, adjusting in real time to the student’s performance. This is exactly the repetitive, low-stakes practice every introductory economics student needs to build intuition. Going forward, I expect that we will see a growing number of economics educators introducing specialized AI economics tools...
And that is exactly what I have done in my ECONS101 and ECONS102 classes this year. Both classes had AI tutors loaded with a knowledge base of the lecture material, and students could chat with the tutors, ask them questions, develop study guides, practise multiple choice questions, and use them in probably a dozen other ways I haven't considered. The flexibility of these AI tutors, both for myself and for the students, made them a huge contributor to students' learning this year (at least, that's what students said in their course evaluations at the end of each paper).
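For anyone curious what sits behind a tutor like that, here is a minimal sketch of the general approach using the OpenAI Python SDK. My own tutors are built on top of ChatGPT rather than written in code, so the model name, file name, and prompt wording below are purely illustrative:

```python
# A minimal sketch of a lecture-note tutor (illustrative only, not my actual tutor)
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# The 'knowledge base': plain-text lecture notes for the paper (hypothetical file name)
with open("econs101_lecture_notes.txt") as f:
    lecture_notes = f.read()

SYSTEM_PROMPT = (
    "You are a patient tutor for an introductory economics paper. "
    "Answer only from the lecture notes below. If the student asks for practice, "
    "write multiple choice questions one at a time, wait for their answer, and "
    "then explain why each option is right or wrong.\n\n"
    "LECTURE NOTES:\n" + lecture_notes
)

def ask_tutor(history, student_message):
    """Send the conversation so far, plus the student's new message, to the model."""
    messages = ([{"role": "system", "content": SYSTEM_PROMPT}]
                + history
                + [{"role": "user", "content": student_message}])
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
    )
    reply = response.choices[0].message.content
    # Keep the exchange so the tutor remembers the conversation next time
    history += [{"role": "user", "content": student_message},
                {"role": "assistant", "content": reply}]
    return reply

# Example: a student asks for a study guide on price elasticity
history = []
print(ask_tutor(history, "Can you give me a short study guide on price elasticity?"))
```

The heavy lifting is all in the system prompt; everything else is just passing the conversation back and forth.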
Unfortunately, Kahn's prescriptions for changes at higher levels of the economics major are much weaker. For instance, for intermediate microeconomics he advocates for making use of short skills videos, then:
AI will help here. Students can take the written transcripts from these video presentations and feed these to AI and ask for more examples to make it more intuitive for them. Students can explain their logic to AI and allow the AI to patiently tutor them. Students can ask the robot to generate likely exam questions for them to practice on.
That isn't much of an advance on what he advocates at the introductory level, because it is still simply content plus discussion with an AI tutor. I think there is much more potential value at the intermediate level in getting students to engage in back-and-forth exploratory discussions with generative AI, and making those discussions a small part of the assessment. That works in theory-based courses such as intermediate microeconomics, as well as in econometrics. Kahn could have thought more deeply here about the possibilities. However, for intermediate macroeconomics, I really like this suggestion:
AI tools make it possible to immerse students in the real-time decisions faced by figures such as Ben Bernanke in 2008. What information was available at each moment? What nightmare scenarios kept policymakers awake? Interactive simulations can let students experience economic policymaking “on the fly,” combining partial scientific knowledge with radical uncertainty. Such exercises tend to be far more memorable and engaging than static diagrams.
Some 'scripted' AI tools, built on top of ChatGPT (like my AI tutors are), would be wonderful tools for simulation. The AI could be instructed to maintain certain relationships throughout the simulation, introduce particular shocks, and help the students to evaluate different monetary and fiscal policy responses (or the impact of particular fiscal policy changes). This would be a much more tailored approach than the simulation modelling that Brian Silverstone used when I studied intermediate macroeconomics some twenty years ago.
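To give a sense of what I mean by 'scripted', here is a sketch of the kind of instruction that could sit behind such a simulation. The relationships, coefficients, and shock schedule are invented purely for illustration, and the same chat-completion call as in the tutor sketch above could carry the round-by-round conversation:

```python
# An illustrative 'script' for a monetary policy simulation (all numbers invented)
SIMULATION_PROMPT = """
You are running a monetary policy simulation for an intermediate macroeconomics class.
Maintain these relationships throughout, and do not reveal them to the student:
- Inflation follows an expectations-augmented Phillips curve:
  inflation = expected inflation + 0.5 * output gap + supply shock
- The output gap falls by about 0.8 percentage points for each 1 point rise in the
  real interest rate, with a one-round lag.
Scripted shocks (introduce each one at the stated round, not before):
- Round 3: an oil price spike adds 2 percentage points to the supply shock.
- Round 6: a financial crisis reduces the output gap by 4 percentage points.
Each round: report inflation, the output gap, and unemployment; ask the student to set
the policy interest rate and justify their choice; then use the relationships above to
work out next round's outcomes and explain what happened.
"""
```

Swapping a prompt like this into the tutor sketch above would be enough to turn the same chat loop into a round-by-round policy exercise.

Kahn also has great suggestions for field classes: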
Professors teaching field classes often assign a textbook. Such a textbook offers both the professor and the students a linear progression structure but this teaching approach can feel dated as the professor delegates the course structure to a stranger who does not have experience teaching at that specific university. Textbooks are not often updated and the material (such as specific box examples) can quickly feel dated. AI addresses this staleness challenge...
In recent months, I have experimented with loading many interesting readings to a shared Google LM Notebook website and encouraging my students to ask the AI for summaries about these writings and to ask their own questions...
This year, I'll be teaching graduate development economics for the first time in about a decade, and Kahn has pre-empted almost exactly the approach I was intending to adopt, with students engaging in conversation with a generative AI model (I wasn't sure if I would use NotebookLM or ChatGPT for this purpose), then expanding on that conversation within class. I'm also considering the feasibility of getting students in that class to work with generative AI on a short research project: collating and analysing data to answer some particular research questions, or replicating some specific study. The paper is in the B Trimester, so I still have time to flesh out the details.
Kahn then goes on to discuss the impacts of generative AI on research assistant and teaching assistant opportunities. I think he is a bit too pessimistic though, since he concludes that human research assistants will only be useful for developing new (spatial) datasets. I think there are many more use cases for human research assistants still, and not just for data collection or data cleaning. Finally, Kahn addresses information asymmetry, noting that:
For far too long, students have been choosing majors in the dark—picking “prestigious” fields without really knowing what the degree will do for them, while universities have been able to hide behind vague reputations and opaque classrooms. Parents write enormous checks with almost no idea what they’re buying, employers wonder if the diploma still means anything, and everyone quietly suspects a lot of the game is just expensive signaling.
AI changes that. Cheap, frequent, AI-proctored assessments and virtual tutors suddenly make effort and mastery visible in real time. Professors discover whether students are actually learning the material. Parents can peek at meaningful progress dashboards instead of just getting billing statements. Employers can ask for verifiable records of real skills instead of trusting a transcript that could have been gamed.
I'm not sold on AI proctoring as a solution. In fact, I worry that it will simply lead to an 'arms race' of student AI tools vs. faculty AI tools. The advent of AI avatars and agentic AI simply makes this even more likely across a wider range of assessment types. However, I do agree with Kahn that a lot of education is signalling to employers, and that generative AI is going to change the dynamics of education away from signalling. Kahn seems to think that is a good thing. I worry the opposite! Without signalling, it is difficult for good students to distinguish themselves, and that limits the value proposition of higher education. Kahn wants "verifiable records of real skills instead of... a transcript that could have been gamed". However, generative AI makes it much easier for students to game the record of real skills, rendering those records less reliable.
There isn't a consensus on the best path forward. Kahn's paper is a work in progress, and he is inviting others to share their thoughts. I have offered a few of mine in this post, and I look forward to sharing more of my explorations of generative AI in teaching as we go through this year.
[HT: Marginal Revolution]
Read more:
- Some notes on generative AI in higher education
- Some notes on generative AI and assessment (in higher education)
- More notes on generative AI and teaching (in higher education)
- Universities' (and teachers') cheap talk on generative AI and assessment
- Simas Kucinskas on AI, university education, and the 'mushy middle'
- The wicked problem of generative AI and assessment