Over the last six months, I have become increasingly disappointed by the response of higher education to generative AI. Don't get me wrong. There are lots of individual academics doing great things with generative AI, in teaching and research. However, the sector as a whole is, in my view, not fully grappling with it. At least, not in a way that is helpful to most academics who are on the frontline and worried about what they should be doing.
I've written a couple of posts on this topic (see here and here), plus a number of other posts related to various aspects of generative AI. The paid versions of ChatGPT and Elicit have increasingly become part of my regular workflows (although I'm almost certainly not making optimal use of either tool, and I'm resistant to having them take over too many aspects of my work, because I enjoy it).
I'm not an expert on AI and higher education, but what I have seen coming out of those who are being held up as experts is pretty underwhelming. What I'm seeing is a lot of frameworks and white papers that are incredibly light on solutions to the key question: what should we, as teachers, be doing to prepare our students for a future labour market where working with generative AI is the norm? Early work like this TEQSA paper from late 2023 can probably be forgiven for a narrow focus on generative AI's impact on assessment. That was where my own thinking was initially drawn (see here). However, we should be doing better by now.
I was very disappointed when reading APRU's recent white paper, which was big on framework and light on detail. It is stacked full of banalities like:
...an existential threat is felt by higher education researchers and educators who may see their functions or parts of their roles being diminished or replaced by AI, may not know how to adapt from more traditional approaches, and are already under significant workload pressures... Early student perspectives suggest, however, that despite students being open to receiving assistance from AI, they still value the human elements of teacher-student relationships...
And this bit demonstrates that thinking hasn't really moved on from TEQSA's paper of more than a year ago, despite the rapid advances in generative AI that have occurred since then:
With generative AI increasingly able to perform well in assessments... unsupervised assessments are no longer able to assure attainment of learning outcomes. This does not mean that every assessment must now be supervised; rather, it means that assessment redesign is needed so that there is a pedagogically beneficial mixture of ‘secured’ assessment of learning and ‘open’ assessment for learning.
I want to pick up on two bits from the APRU white paper, though. First:
...recent reports suggest that universities are not providing the necessary familiarity-building activities that students need...
And second:
...universities need to prepare learners for an AI-driven world and shift from a focus on knowledge to values and skills...
The problem is that the white paper is light on solutions to those problems. So, I'm going to offer some here, at the risk of providing yet another framework. The approach I outline below is grounded in a belief that higher education should be preparing students to work with AI, and should do so intentionally, scaffolding students through their studies to ensure that they are well prepared. My comments are focused on education across three years within a single discipline (a major within a degree), but the approach can easily be extended to a whole degree programme.
In the first year, developing core knowledge and basic principles remains important. Even though generative AI can do a better job than humans of answering questions about basic knowledge (and more complex knowledge) in any discipline, learning basic principles is about more than knowledge. It socialises students into ways of thinking that have developed within the discipline. In economics, for example, we talk about the 'economic way of thinking'. If, in educating future students, we try to leapfrog this key step, in my view we produce graduates who lack a clear framework for understanding and interpreting the world. Even worse, they lack the foundation for interpreting generative AI model outputs, and that will hamper their ability to work effectively with generative AI.
Previously, I have pointed to the problems of models hallucinating (see here), and how knowledge of basic principles could help students to recognise hallucinations. The latest models, especially those that can employ complex reasoning, are much less prone to hallucination. Even so, I don't think this means that students can forgo learning the basics.
This approach to the first year of a programme has consequences for how it can be taught and assessed. While generative AI can be used in a tutoring role (as I am doing in my ECONS101 class), that comes with risk in terms of assessment. Students should not be able to outsource their assessment activities to a generative AI that can easily answer questions about basic knowledge and principles to a high standard. That means that, in the first year, more invigilated assessment may be required. Indeed, in my ECONS101 class we have increased the weighting of invigilated in-person tests from 84 percent to 88 percent. Much of the remaining assessment is based on completion of tutorial tasks, which is linked to learning and supported in class by human tutors. We have built a lot of learning opportunities into that class, in order to ensure that the basic principles are learned well.
In the second year, students can build on their basic knowledge and begin working with AI. Hopefully, students will have taken opportunities to be exposed to generative AI tools in their first year, although they will not have used them in high-stakes assessment. That means that, in the second year, students need to develop more specific skills in task-oriented prompting of generative AI. One of my colleagues, Pedram Nourani, demonstrated some excellent examples of students working with AI, step by step, through a very tightly structured assessment task. The assessment of the task in that example was based on the final output (an Excel spreadsheet that analysed a bespoke dataset for each student), but it could easily be based on a combination of the final output and transcripts of the conversations with the AI. Invigilated assessment, therefore, potentially has less of a role to play. Having students work directly with AI, in a tightly structured format, allows them to build their prompting skills on a task where the teacher can be fairly sure about the outcome.
In the third year, then, students can progress to more unstructured tasks, working with one or more generative AI tools on fairly open-ended assessments that more closely mimic the types of tasks that students will encounter after graduating. In these tasks, students might be given a research question to answer or an objective to meet, and decide for themselves, using the generative AI tools that are available, how best to answer the question or meet the objective. Students might even be encouraged to consult multiple generative AI tools for suggestions on how to complete the assessment, then execute it in collaboration with the generative AI. Students' performance can be judged on the final output, as well as transcripts of their various interactions with generative AI tools.
As with my proposal for the second year, invigilated assessment may not be required. However, it is worth noting that, at both second and third year, ensuring that it is the students themselves who are interacting with the generative AI tools, and not some proxy (such as another human, or another generative AI!), remains a challenge. An intermediate approach, then, may be to have students complete the assessment under supervision in a computer lab on campus.
As you can hopefully see, despite some remaining challenges, this overall approach to embedding generative AI into teaching across a programme (a major, or perhaps a degree) scaffolds the student through their learning journey, giving them the core disciplinary knowledge and skills necessary to work in their chosen field, as well as the transferable skills of working effectively with generative AI. All that's left is for me to convince my colleagues that this is a sensible way forward, and that it will ultimately lead to more employable and more successful graduates.
[HT: Karyn Rastrick for the APRU white paper]