Tuesday, 11 November 2025

Simas Kucinskas on AI, university education, and the 'mushy middle'

Simas Kucinskas has an interesting Substack post on university education in the age of AI. His TL;DR summary of the post is:

AI now solves university assignments perfectly in minutes. Students often use LLMs as a crutch rather than as a tutor, getting answers without understanding. To address these problems, I propose a barbell strategy: pure fundamentals (no AI) on one end, full-on AI projects on the other, with no mushy middle. Universities should focus on fundamentals.

Kucinskas starts by making the point that "take-home assignments are obsolete", and that students are outsourcing too much of their learning to generative AI. I have to agree. When ChatGPT can write an essay, solve problem sets, draft reports, and answer online test questions, the options for assessment that genuinely evaluates whether students have met particular learning outcomes narrow significantly. That's why, in my classes, we've moved back to predominantly in-person assessment (or oral examinations online). These are not bulletproof assessments, but they are better than the alternatives, which are far more vulnerable to generative AI.

Kucinskas's solution is what he terms the "barbell strategy":

One end of the barbell: courses that are deliberately non-AI. Work through proofs by hand. Read academic papers. Write essays without AI. It’s hard, but you build mental strength.

The other end of the barbell: embrace AI fully for applied projects. Attend vibecoding hackathons. Build apps with Cursor. Use Veo to create videos. Master these tools effectively.

Kucinskas dismisses the "mushy middle":

...where students “use AI responsibly” or instructors teach basic prompting as an afterthought. That’s the worst of both worlds. Students don’t build thinking skills, but they also don’t learn the full potential of AI.

Here, I differ with Kucinskas. I agree about the starting point. We need the basic courses that teach the fundamentals of a discipline to be designed to be AI-free, at least in terms of the assessment (AI can still be a useful learning tool, such as the AI tutors in my papers). And I agree with Kucinskas about the end point. We need students to be embracing AI fully for applied projects by the end of their degree. Where we differ is how we get students from the starting point to the end point, and I prefer a much more scaffolded approach (as I outlined briefly in this post).

The problem is that Kucinskas has a rosy view of how self-directed students will be in learning how best to use generative AI. Highly self-directed (and/or tech-savvy) students will be fine without any direction from universities or lecturing staff. They will spend the time and effort to figure it out themselves, and will excel because of the learning they engage in along the way. Those are the students that Kucinskas is thinking about. However, not all students are like that. Some (perhaps many) won't know what they are doing, may fail more often than they succeed, and will eventually try to outsource the applied projects wholesale to AI. That is exactly what Kucinskas is worried about for university education. For the students who are least self-directed, his approach doubles down on what is already happening.

Students who are less self-directed, by definition, require a more directive approach from lecturing staff. These students need to be scaffolded through the process of recognising the value of generative AI, learning to use it within a narrowly scoped set of activities, and gradually building their prompting skills while learning from each other and from the generative AI, before being let loose on the applied projects that are the end point of the learning journey.

So, there is definitely a role for the 'mushy middle' in university education. However, by making it more directive we can hopefully reduce the degree of mushiness.

[HT: Marginal Revolution]

