Somehow, this report languished in my to-be-read pile for over a year. By Natasha Ziebell and Jemma Skeat (both University of Melbourne), it explores the relatively early use of generative AI by university students and staff, based on a small survey (110 research participants: 78 students and 32 academic staff) conducted in April-May 2023.
While the results are somewhat dated now, given the pace of change in generative AI and the ways that university students and staff are engaging with it, some aspects are still of interest. For example, Ziebell and Skeat found that while over 78 percent of academic staff had used generative AI to that point, only 52 percent of students had done so. I think many of us would be surprised that students were not more experimental in their early use of generative AI. On the other hand, perhaps they were simply reluctant to admit to having used it, given that the study was undertaken by a university that could sanction students for using generative AI in assessment.
The other aspect of the paper that still warrants attention is the set of opportunities and challenges identified by the research participants, which still seem very current. In terms of opportunities:
Respondents identified a range of opportunities for using generative AI as a tool for teaching and learning:
• to generate study materials (e.g. revision materials, quiz/practice questions)
• to generate resources (e.g. as a teacher)
• to summarise material (e.g. coursework material, research papers)
• to generate information (e.g. similar to Wikipedia)
• to provide writing assistance (e.g. develop plans and outlines, rewording and refining text, editing)
• for learning support (e.g. explaining questions and difficult content, as an additional resource, ‘using it like a tutor’)
• as a research tool (e.g. potential for integrating generative AI with library search tools)
• as a high-efficiency, time-saving tool (e.g. to sort data, gather information, create materials)
• to encourage creative thinking, critical thinking and critical analysis (e.g. students generate work in an AI program and critique it)
I don't think we've moved on substantially from that list of opportunities, and if a similar survey were conducted now, many of the same opportunities would still be apparent. In terms of challenges:
The key challenges identified by respondents can be summarised according to the following categories:
• Reliability of generative AI (inaccurate information and references, difficulty fact checking, misinformation)
• Impact on learning (e.g. misusing generative AI, not understanding limitations of the technology)
• Impact on assessment (e.g. cheating, difficulty detecting plagiarism, assessment design)
• Academic integrity and authenticity (e.g. risk of plagiarism, collusion, academic misconduct)
• Trust and control (reliance on technology rather than human thinking, concerns about future advancements)
• Ethical concerns (e.g. copyright breaches, equitable access, impact on humanity)
Unfortunately, just as the opportunities remain very similar, we are still faced with many of the same challenges. In particular, universities have been fairly poor at addressing the impact on learning and assessment, and in my view there is a distinct 'head-in-the-sand' approach to issues of academic integrity and authenticity. Many universities seem unwilling to step back and reconsider whether the wholesale move to online and hybrid learning and assessment remains appropriate in an age of generative AI. The support available to academic staff who are on the front line dealing with these issues is superficial.
However, academic integrity and the authenticity of assessment are only an issue if students are using generative AI tools in assessment. This report suggests that, in early 2023, only a minority of students were doing so. I don't think we can rely on that being the case anymore. One example from my ECONS101 class in B Trimester illustrates the point.
This year (and for many prior years, going back to at least 2005), we've had weekly quizzes in ECONS101 (and before that, ECON100). This year, each quiz had 12 questions, generally consisting of ten multiple choice questions and two (often challenging) calculation questions. Each quiz is worth about one percent of students' grades in the paper, so they are fairly low stakes. Students have generally taken them seriously, and the median time to complete a quiz has been fairly stable at 15-20 minutes over the last few years. That changed in B Trimester this year: the median completion time started the trimester at over 17 minutes, but by the end of the trimester it was down to 7 minutes. It isn't clear to me that it is possible to genuinely complete the 12 questions in 7 minutes. Around 16 percent of students completed the last Moodle quiz in four minutes or less. And it wasn't that those students were rushing the quiz and performing badly. The average score for students completing the quiz in four minutes or less was 86 percent (only slightly below the 92 percent average for students who took longer). I'm almost certain that the culprit was one of the (now several) browser extensions that will automatically answer Moodle quizzes using generative AI. Needless to say, this year sees the end of Moodle quizzes that contribute to grades in ECONS101.
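For anyone wanting to check their own course data for the same pattern, here is a minimal sketch of the kind of analysis involved. It assumes a CSV export of quiz attempts with hypothetical column names quiz_id, time_taken_secs, and score_pct; these names are illustrative, not Moodle's actual export format, so adjust them to match whatever your learning management system produces.

```python
# A minimal sketch for flagging suspiciously fast quiz attempts.
# Assumes a CSV with (hypothetical) columns: quiz_id, time_taken_secs, score_pct.
import pandas as pd

FAST_THRESHOLD_SECS = 4 * 60  # four minutes, as in the example above

attempts = pd.read_csv("quiz_attempts.csv")

# Median completion time per quiz, to track the trend across the trimester
median_minutes = attempts.groupby("quiz_id")["time_taken_secs"].median() / 60
print("Median completion time (minutes) by quiz:")
print(median_minutes.round(1))

# Compare the scores of very fast completers against everyone else
fast = attempts["time_taken_secs"] <= FAST_THRESHOLD_SECS
print(f"\nShare of attempts at {FAST_THRESHOLD_SECS // 60} minutes or less: "
      f"{fast.mean():.1%}")
print(f"Mean score (fast attempts):   {attempts.loc[fast, 'score_pct'].mean():.1f}%")
print(f"Mean score (slower attempts): {attempts.loc[~fast, 'score_pct'].mean():.1f}%")
```

Of course, a fast completion time on its own is only a red flag, not proof of misuse. The telling pattern is the combination described above: completion times collapsing across the trimester while scores stay high.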
Anyway, I digress. It would be really interesting to see this sort of study replicated in 2025, and with a decent sample size - it is hard to say much with a sample of 110, split across students and staff. I imagine that many of the same opportunities and challenges would still be salient, but that the uses of generative AI will have changed in the meantime, and that students would now be at least as prolific users of generative AI as staff are.
[HT: The Conversation, last year]