Wednesday, 11 February 2026

Did employers value an AI-related qualification in 2021?

Many universities are rapidly adapting to education in the age of generative AI by trying to develop AI skills in their students. There is an assumption that employers want graduates with AI skills across all disciplines, but is there evidence to support that? This recent discussion paper by Teo Firpo (Humboldt-Universität zu Berlin), Lukas Niemann (Tanso Technologies), and Anastasia Danilov (Humboldt-Universität zu Berlin) provides an early answer. I say it's an early answer because their data come from 2021, before the wave of generative AI innovation that became ubiquitous following the release of ChatGPT at the end of 2022. The research also focuses on AI-related qualifications, rather than AI skills more generally, but it's a start.

Firpo et al. conduct a correspondence experiment, where they:

...sent 1,185 applications to open vacancies identified on major UK online job platforms... including Indeed.co.uk, Monster.co.uk, and Reed.co.uk. We restrict applications to entry-level positions requiring at most one year of professional experience, and exclude postings that demand rare or highly specialized skills...

Each identified job posting is randomly assigned to one of two experimental conditions: a "treatment group", which receives a résumé that includes additional AI-related qualifications and a "control group", which receives an otherwise identical résumé without mentioning such qualifications.

Correspondence experiments are relatively common in the labour economics literature (see here, for example), and involve the researcher making job applications with CVs (and sometimes cover letters) that differ in known characteristics. In this case, the applications differed by whether the CV included an AI-related qualification or not. Firpo et al. then focus on differences in callback rates, and they differentiate between 'strict callbacks' (invitations to interview), and 'broad callbacks' (any positive employer response, including requests for further information). Comparing callback rates between CVs with and without AI-related qualifications, they find:

...no statistically significant difference between treatment and control groups for either outcome measure...

However, when they disaggregate their results by job function, they find that:

In both Marketing and Engineering, résumés listing AI-related qualifications receive higher callback rates compared to those in the control group. In Marketing, strict callback rates are 16.00% for AI résumés compared to 7.00% for the control group (p-value = 0.075...), while broad callback rates are 24.00% versus 12.00% (p-value = 0.043...). In Engineering, strict callback rates are 10.00% for AI résumés compared to 4.00% for the control group (p-value = 0.163...), while broad callback rates are 20.00% versus 8.00% (p-value = 0.024...).
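
As an aside, the excerpt reports callback percentages but not the underlying cell counts. Assuming roughly 100 applications per arm in each job function (an assumption on my part, roughly consistent with 1,185 applications spread across six job functions, and the authors' exact test may differ, so this won't reproduce their p-values exactly), the kind of two-proportion comparison behind those numbers looks something like this sketch:

# Sketch of the kind of two-proportion comparison behind the reported callback rates.
# The cell counts (about 100 applications per arm per job function) are my assumption,
# not taken from the paper, so the printed p-value is illustrative only.
from scipy.stats import fisher_exact

def compare_callbacks(treated_callbacks, treated_n, control_callbacks, control_n):
    """Compare callback counts between AI-qualification CVs and control CVs."""
    table = [
        [treated_callbacks, treated_n - treated_callbacks],   # AI CVs: callback vs. no callback
        [control_callbacks, control_n - control_callbacks],   # control CVs
    ]
    _, p_value = fisher_exact(table, alternative="two-sided")
    return treated_callbacks / treated_n, control_callbacks / control_n, p_value

# Marketing, broad callbacks: 24% vs. 12% under the assumed sample sizes
treated_rate, control_rate, p = compare_callbacks(24, 100, 12, 100)
print(f"treatment {treated_rate:.0%} vs. control {control_rate:.0%}, p = {p:.3f}")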

For the other job functions (Finance, HR, IT, and Logistics) there was no statistically significant effect of AI qualifications on either measure of callback rates. Firpo et al. then estimate a regression model and show that:

...including AI-related qualifications increases the probability of receiving an interview invitation for marketing roles by approximately 9 percentage points and a broader callback by 12 percentage points. Similarly, the interaction between the treatment dummy and the Engineering job function dummy in the LPM models is positive and statistically significant, but only for broad callbacks. AI-related qualifications increase the probability of a broad callback by at least 11 percentage points...

The results from the econometric model are only weakly statistically significant, but they are fairly large in size. However, I wouldn't over-interpret them because of the multiple-comparison problem (when many subgroup comparisons are tested at the five percent level, around one in twenty will appear statistically significant just by chance, even if there is no true effect anywhere). At best, the evidence that employers valued AI-related qualifications in 2021 is pretty limited, based on this research.
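
To see why that matters here, consider a quick simulation (my own, with assumed sample sizes) in which there is no true treatment effect at all, and twelve subgroup comparisons (six job functions by two callback measures) are each tested at the five percent level:

# Simulation of the multiple-comparison problem: with no true treatment effect,
# how often does at least one of 12 subgroup tests (6 job functions x 2 callback
# measures) come up 'significant' at the 5% level? Sample sizes are assumptions.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(42)
n_per_arm, base_callback_rate, n_tests, n_sims = 100, 0.10, 12, 1000

studies_with_false_positive = 0
for _ in range(n_sims):
    p_values = []
    for _ in range(n_tests):
        treated = rng.binomial(n_per_arm, base_callback_rate)   # callbacks by chance alone
        control = rng.binomial(n_per_arm, base_callback_rate)
        _, p = fisher_exact([[treated, n_per_arm - treated],
                             [control, n_per_arm - control]])
        p_values.append(p)
    if min(p_values) < 0.05:
        studies_with_false_positive += 1

print(f"Simulated studies with at least one 'significant' subgroup: "
      f"{studies_with_false_positive / n_sims:.0%}")

Under assumptions like these, a substantial share of simulated studies (far more than five percent) throw up at least one spuriously 'significant' subgroup result, which is why isolated subgroup findings deserve caution.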

Firpo et al. were worried that employers might not have noticed the AI qualifications in the CVs, so they conducted an online survey of over 700 professionals with hiring experience and domain knowledge. That survey instead shows that the AI-related qualification was salient, and that it signalled greater technical skills but lower social skills. These conflicting signals are interesting, and suggest that employers are looking for both technical skills and social skills in entry-level applicants. Does this, alongside the earlier results for different job functions, imply that technical skills are weighted more heavily than social skills for Engineering and Marketing jobs? I could believe that for Engineering, but for Marketing I have my doubts, because interpersonal skills are likely to be important in Marketing. Again though, it's probably best not to over-interpret the results.

Firpo et al. conclude that:

...our findings challenge the assumption that AI-related qualifications unambiguously enhance employability in early-career recruitment. While such skills might be valued in abstract or strategic terms, they do not automatically translate into interview opportunities, at least not in the entry-level labor market in job functions such as HR, Finance, Marketing, Engineering, IT and Logistics.

Of course, these results need to be considered in the context of their time. In 2021, AI-related skills might not have been much in demand by employers. That is unlikely to hold true now, given that generative AI use has become so widespread. It would be interesting to see what a more up-to-date correspondence experiment would find.

[HT: Marginal Revolution]

Read more:

  • ChatGPT and the labour market
  • More on ChatGPT and the labour market
  • The impact of generative AI on contact centre work
  • Some good news for human accountants in the face of generative AI
  • Good news, bad news, and students' views about the impact of ChatGPT on their labour market outcomes
  • Swiss workers are worried about the risk of automation
  • How people use ChatGPT, for work and not
  • Generative AI and entry-level employment
  • Survey evidence on the labour market impacts of generative AI

    Tuesday, 10 February 2026

    Who on earth has been using generative AI?

    Who are the world's generative AI users? That is the question addressed in this recent article by Yan Liu and He Wang (both World Bank), published in the journal World Development (ungated earlier version here). They use website traffic data from Semrush, alongside Google Trends data, to document worldwide generative AI use up to March 2024 (so, it's a bit dated now, as this is a fast-moving area, but it does provide an interesting snapshot up to that point). In particular, Liu and Wang focus on geographical heterogeneity in generative AI use (measured as visits to generative AI websites, which in practice means predominantly, and in some of their analyses entirely, ChatGPT), and they explore how that relates to country-level differences in institutions, infrastructure, and other variables.

    Some of the results are fairly banal, such as the rapid increase in website traffic to AI chatbot websites, the corresponding decline in traffic to sites such as Google and Stack Overflow, and the fact that users skew younger, more educated, and male. Those demographic differences will likely become less dramatic over time as user numbers increase. However, the geographic differences are important and could be more persistent. Liu and Wang show that:

    As of March 2024, the top five economies for ChatGPT traffic are the US, India, Brazil, the Philippines, and Indonesia. The US share of ChatGPT traffic dropped from 70% to 25% within one month of ChatGPT’s debut. Middle-income economies now contribute over 50% of traffic, showing disproportionately high adoption of generative AI relative to their GDP, electricity consumption, and search engine traffic. Low-income economies, however, represent less than 1% of global ChatGPT traffic.

    So, as of March 2024, most generative AI use was in middle-income countries, but remember that those are also high-population countries (like India). However, generative AI users are disproportionately from high-income countries once income and internet use (proxied by search engine traffic) are accounted for. Figure 12 in the paper illustrates this nicely, mapping generative AI use measured as visits per internet user.

    Notice that the darker-coloured countries, where a higher proportion of internet users used ChatGPT, are predominantly in North America, western Europe, and Australia and New Zealand. On that measure, Liu and Wang rank New Zealand 20th (compared with Singapore first, and Australia eighth). There are a few interesting outliers like Suriname (sixth) and Panama (17th), but the vast majority of the top twenty countries are high-income countries.

    What accounts for generative AI use at the country level? Using a cross-country panel regression model, Liu and Wang find that:

    Higher income levels, a higher share of youth population, better digital infrastructure, and stronger human capital are key predictors of higher generative AI uptake. Services’ share of GDP and English fluency are strongly associated with higher chatbot usage.
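
    To make the shape of that exercise concrete, here is a stripped-down sketch of a cross-country regression of generative AI uptake on country characteristics. It uses synthetic data and hypothetical variable names (this is not Liu and Wang's panel specification, which also exploits variation over time):

    # Stripped-down sketch (synthetic data, hypothetical variable names) of regressing
    # generative AI uptake on country characteristics; not Liu and Wang's specification.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 150  # hypothetical number of countries

    df = pd.DataFrame({
        "log_gdp_pc": rng.normal(9, 1, n),           # log GDP per capita
        "youth_share": rng.uniform(0.15, 0.45, n),   # share of population aged 15-29
        "broadband_speed": rng.normal(50, 20, n),    # proxy for digital infrastructure
        "schooling_years": rng.normal(9, 3, n),      # proxy for human capital
        "english_fluency": rng.uniform(0, 1, n),     # share with working English
    })
    # Hypothetical outcome: log of generative AI visits per internet user
    df["log_ai_visits"] = (0.3 * df["log_gdp_pc"] + 1.5 * df["youth_share"]
                           + 0.01 * df["broadband_speed"] + 0.05 * df["schooling_years"]
                           + 0.8 * df["english_fluency"] + rng.normal(0, 0.5, n))

    model = smf.ols("log_ai_visits ~ log_gdp_pc + youth_share + broadband_speed"
                    " + schooling_years + english_fluency", data=df).fit(cov_type="HC1")
    print(model.summary().tables[1])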

    Now, those regression results simply demonstrate correlation, and are not causal. And website traffic could be biased due to the use of VPNs, etc., not to mention that it doesn't capture traffic from China or Russia very well (and Liu and Wang are very upfront about that limitation). Nevertheless, it does provide a bit more information about how countries with high generative AI use differ from those with low generative AI use. Generative AI has the potential to level the playing field somewhat for lower-productivity workers and lower-income countries. However, that can only happen if people in lower-income countries can access generative AI. And it appears that, up to March 2024 at least, they were instead falling behind. As Liu and Wang conclude, any catch-up potential from generative AI:

    ...depends on further development as well as targeted policy interventions to improve digital infrastructure, language accessibility, and foundational skills.

    To be fair, that sounds like a general prescription for development policy in any case.

    Monday, 9 February 2026

    The promise of a personalised, AI-augmented textbook, and beyond

    In the 1980s, the educational psychologist Benjamin Bloom introduced the 'two-sigma problem' - that students who were tutored one-on-one using a mastery approach performed on average two standard deviations (two-sigma) better than students educated in a more 'traditional' classroom setting. That research is often taken as a benchmark for how good an educational intervention might be (relative to a traditional classroom baseline). The problem, of course, is that one-on-one tutoring is not scalable. It simply isn't feasible for every student to have their own personal tutor. Until now.

    Generative AI makes it possible for every student to have a personalised tutor, available 24/7 to assist with their learning. As I noted in yesterday's post though, it becomes crucial how that AI tutor is set up, as it needs to ensure that students engage meaningfully in a way that promotes their own learning, rather than simply being a tool to 'cognitively offload' difficult learning tasks.

    One promising approach is to create customised generative AI tools that are specifically designed to act as tutors or coaches, rather than simple 'answer-bots'. This new working paper by the LearnLM team at Google (and a long list of co-authors) provides one example. They describe an 'AI-augmented textbook', which they call the 'Learn Your Way' experience, and which:

    ...provides the learner with a personalized and engaging learning experience, while also allowing them to choose from different modalities in order to enhance understanding.

    Basically, this initially involves taking some source material, which in their case is a textbook, but could just as easily be lecture slides, transcripts, and related materials from a class. It then personalises those materials to each student's interests, adapting the examples and exercises to fit a context that the student finds more engaging. For example, if the student is an avid football fan, they might see examples drawn from football. And if the student is into Labubu toys, they might see examples based on that.
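
    The working paper doesn't publish its pipeline, but the basic idea can be sketched very simply: take a passage of source material and a student's stated interest, and ask a model to rewrite the examples around that interest without changing the underlying content. In the sketch below, call_llm is a purely hypothetical stand-in for whatever model API is actually used:

    # Minimal sketch of interest-based personalisation of course material.
    # call_llm is a hypothetical stand-in for an actual model API; this is not
    # the LearnLM team's pipeline, just an illustration of the general idea.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("Replace with a call to your preferred LLM API.")

    def personalise_passage(source_text: str, student_interest: str) -> str:
        """Rewrite the examples in a passage around a student's interest, while
        keeping the concepts, terminology, and difficulty unchanged."""
        prompt = (
            "You are adapting course material for one student.\n"
            f"The student's main interest is: {student_interest}.\n"
            "Rewrite the examples and exercises in the passage below so that they "
            "draw on that interest, but do NOT change the concepts being taught, "
            "the terminology, or the difficulty of the exercises.\n\n"
            f"PASSAGE:\n{source_text}"
        )
        return call_llm(prompt)

    # Usage: personalise_passage(chapter_section, "football")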

    The working paper describes the approach, reports a pedagogical evaluation performed by experts, and finally reports on a randomised controlled trial (RCT) evaluating the impact of the approach on student learning. The experts rated the Learn Your Way experience across a range of criteria, and the results were highly positive. The only criterion where scores were notably low was for visual illustrations. That accords with my experience so far with AI tutors, which are not good at drawing economics graphs in particular (an ongoing source of some frustration!).

    The RCT involved sixty high-school students in Chicago-area schools, who studied this chapter on brain development of adolescents. Half of the students were assigned to Learn Your Way, and half to a standard digital PDF reader. As the LearnLM Team et al. explain:

    Participants then used the assigned tool to study the material. Learning time was set to a minimum of 20 minutes and a maximum of 40 minutes. After this time, each participant had 15 minutes to complete the Immediate Assessment via a Qualtrics link.

    They then did a further assessment three days later (a 'Retention Assessment'). In terms of the impact of Learn Your Way:

    The students who used Learn Your Way received higher scores than those who used the Digital Reader, in both the immediate (p = 0.03) and retention (p = 0.03) assessments.

    The difference in test outcomes was 77 percent vs. 68 percent in the Immediate Assessment, and 78 percent vs. 67 percent in the Retention Assessment. So, the AI-augmented textbook increased scores by about nine percentage points in immediate learning and about eleven percentage points in short-term (three-day) retention. Of course, this was just a single study with a relatively small sample size of 60 students in a single setting, but it does offer some promise for the approach.
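
    To put the 'small sample' caveat in perspective, here is a rough back-of-the-envelope power calculation (my own, not from the paper) of the smallest effect size a two-arm study with 30 students per arm can reliably detect:

    # Back-of-the-envelope check (not from the paper): what standardised effect size
    # can a two-arm study with 30 students per arm detect with 80% power at the 5% level?
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    minimum_detectable_d = analysis.solve_power(nobs1=30, alpha=0.05, power=0.8,
                                                ratio=1.0, alternative="two-sided")
    print(f"Minimum detectable effect size (Cohen's d): {minimum_detectable_d:.2f}")
    # This comes out at roughly d = 0.74, so only fairly large effects are reliably
    # detectable with sixty students; smaller but still meaningful effects could be missed.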

    I really like this idea of dynamically adjusting content to suit students' interests, which is a topic I have published on before. However, using generative AI in this way allows material to be customised for every student, creating a far more personalised approach to learning than any teacher could offer. I doubt that even one-on-one tutoring could match the level of customisation that generative AI could offer.

    This paper has gotten me thinking about the possibilities for personalised learning. Over the years, I have seen graduate students with specific interests left disappointed by what we are able to offer in terms of empirical papers. For example, I can recall students highly interested in economic history, the economics of education, and health economics in recent years. Generative AI offers the opportunity to provide a much more tailored education to students who have specific interests.

    This year, I'll be teaching a graduate paper for the first time in about a decade. My aim is to allow students to tailor that paper to their interests, by embarking on a series of conversations about research papers in areas that they choose. Where that leads will be almost entirely up to the student (although with some guidance from me, where needed). Students might adopt a narrow focus on a particular research method, a particular research question, or a particular field or sub-field of economics. Assisted by a custom generative AI tool, they can read and discuss papers, try out replication packages, and/or develop their own ideas. Their only limit will be how much time they want to put into it. Of course, some students will require more direction than others, but that is what our in-class discussion time will be for.

    I am excited by the prospects of this approach, and while it will be a radical change to how our graduate papers have been taught in the past, it might offer a window to the future. And best of all, I have received the blessing of my Head of School to go ahead with this as a pilot project that might be an exemplar for wider rollout across other papers. Anyway, I look forward to sharing more on that later (as I will turn it into a research project, of course!).

    The ultimate question is whether we can use generative AI in a way that moves us closer to Bloom’s two-sigma benefit of one-on-one tutoring. The trick will be designing it so that students still do the cognitive work. My hope (and, it seems, the LearnLM team’s) is that personalisation increases students' engagement with learning rather than replacing it. If it works, this approach could be both effective and scalable in a way that human one-on-one tutoring simply can’t match.

    [HT: Marginal Revolution, for the AI-augmented textbook paper]

    Sunday, 8 February 2026

    Neuroscientific insights into learning and pedagogy, especially in the age of generative AI

    In May last year, my university's Centre for Tertiary Teaching and Learning organised a seminar by Barbara Oakley of Oakland University, with the grand title 'The Science of Learning'. It was a fascinating seminar about the neuroscience of learning, and in my mind, it justified several of my teaching and learning practices, such as continuing to lecture, emphasising students' learning of basic knowledge in economics, and using retrieval practice and spaced repetition as learning tools.

    Now, I've finally read the associated working paper by Oakley and co-authors (apparently forthcoming as a book chapter), and I've been able to pull out further insights that I want to share here. The core of their argument is in the Introduction to the paper. First:

    Emerging research on learning and memory reveals that relying heavily on external aids can hinder deep understanding. Equally problematic, however, are the pedagogical approaches used in tandem with reliance on external aids—that is, constructivist, often coupled with student-centered approaches where the student is expected to discover the insights to be learned... The familiar platitude advises teachers to be a guide on the side rather than a sage on the stage, but this oversimplifies reality: explicit teaching—clear, structured explanations and thoughtfully guided practice—is often essential to make progress in difficult subjects. Sometimes the sage on the stage is invaluable.

    I have resisted the urge to move away from lectures as a pedagogical tool, although I'd like to think that my lectures are more than simply information dissemination. I actively incorporate opportunities for students to have their first attempts at integrating and applying the economic concepts and models they are learning - the first step in an explicit retrieval practice approach. Oakley et al. note the importance of both components, because:

    ...mastering culturally important academic subjects—such as reading, mathematics, or science (biologically secondary knowledge)—generally requires deliberate instruction... Our brains simply aren’t wired to effortlessly internalize this kind of secondary knowledge—in other words, formally taught academic skills and content—without deliberate practice and repeated retrieval.

    The paper goes into some detail about the neuroscience underlying this approach, but again it is summarised in the Introduction:

    At the heart of effective learning are our brain's dual memory systems: one for explicit facts and concepts we consciously recall (declarative memory), and another for skills and routines that become second nature (procedural memory). Building genuine expertise often involves moving knowledge from the declarative system to the procedural system—practicing a fact or skill until it embeds deeply in the subconscious circuits that support intuition and fluent thinking...

    Internalized networks form mental structures called schemata (the plural of “schema”), which organize knowledge and facilitate complex thinking... Schemata gradually develop through active engagement and practice, with each recall strengthening these mental frameworks. Metaphors can enrich schemata by linking unfamiliar concepts to familiar experiences... However, excessive reliance on external memory aids can prevent this process. Constantly looking things up instead of internalizing them results in shallow schemata, limiting deep understanding and cross-domain thinking.

    This last point, about the shallowness of learning when students rely on 'looking things up' instead of drawing on their own memory of key facts (and concepts and models, in the case of economics), leads explicitly to worries about learning in the context of generative AI. When students rely on external aids (known as 'cognitive offloading'), learning becomes shallow, because:

    ...deep learning is a matter of training the brain as much as informing the brain. If we neglect that training by continually outsourcing, we risk shallow competence.

    Even worse, there is a feedback loop embedded in learning, which exacerbates the negative effects of cognitive offloading:

    Without internally stored knowledge, our brain's natural learning mechanisms remain largely unused. Every effective learning technique—whether retrieval practice, spaced repetition, or deliberate practice—works precisely because it engages this prediction-error system. When we outsource memory to devices rather than building internal knowledge, we're not just changing where information is stored; we're bypassing the very neural mechanisms that evolved to help us learn.

    In short, internalized knowledge creates the mental frameworks our brains need to spot mistakes quickly and learn from them effectively. These error signals do double-duty: they not only help us correct mistakes but also train our attention toward what's important in different contexts, helping build the schemata we need for quick thinking. Each prediction error, each moment of surprise, thus becomes an opportunity for cognitive growth—but only if our minds are equipped with clear expectations formed through practice and memorization...

    Learning works through making mistakes, recognising those mistakes, and adapting to reduce those mistakes in future. Ironically, this is analogous to how generative AI models are trained (through 'reinforcement learning'). When students offload learning tasks to generative AI, they don't get an opportunity to develop the underlying internalised knowledge that allows them to recognise mistakes and learn from them. Thus, it is important for significant components of student learning to happen without resorting to generative AI (or other tools that allow students to cognitively offload tasks).
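
    As a toy illustration of that prediction-error mechanism (my own, not from Oakley et al.), here is the classic error-driven update rule, in which each surprise nudges an internal estimate towards the truth. The key point is that a learner who never forms a prediction of their own generates no error signal, and so nothing gets updated:

    # Toy illustration (not from Oakley et al.) of error-driven learning:
    # an internal estimate is updated in proportion to each prediction error.
    def learn(observations, learning_rate=0.2):
        estimate = 0.0  # the learner's internal prediction
        for observed in observations:
            prediction_error = observed - estimate  # the moment of "surprise"
            estimate += learning_rate * prediction_error
        return estimate

    # Repeated retrieval and feedback gradually internalises a value near 10...
    print(learn([9, 11, 10, 12, 10, 9, 10, 11]))
    # ...whereas offloading the task entirely means no prediction, no error signal,
    # and therefore no update to the learner's internal knowledge.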

    Now, in order to encourage learning, teachers must provide students with the opportunity to make, and learn from, mistakes. Oakley et al. note that:

    ...cognitive scientists refer to challenges that feel difficult in the moment but facilitate deeper, lasting understanding as “desirable difficulties”... Unlike deliberate practice, which systematically targets specific skills through structured feedback, desirable difficulties leverage cognitive struggle to deepen comprehension and enhance retention...

    Learning is not supposed to be easy. It is supposed to require effort. This is a point that I have made in many discussions with students. When they find a paper relatively easy, it is likely that they aren't learning much. And tools that make learning easier can hinder, rather than help, the learning process. In this context, generative AI becomes potentially problematic for learning for some (but not all) students. Oakley et al. note that:

    Individuals with well-developed internal schemas—often those educated before AI became ubiquitous—can use these tools effectively. Their solid knowledge base allows them to evaluate AI output critically, refine prompts, integrate suggestions meaningfully, and detect inaccuracies. For these users, AI acts as a cognitive amplifier, extending their capabilities.

    In contrast, learners still building foundational knowledge face a significant risk: mistaking AI fluency for their own. Without a robust internal framework for comparison, they may readily accept plausible-sounding output without realizing what’s missing or incorrect. This bypasses the mental effort—retrieval, error detection, integration—that neuroscience shows is essential for forming lasting memory engrams and flexible schemas. The result is a false sense of understanding: the learner feels accomplished, but the underlying cognitive work hasn’t been done.

    The group that benefits from AI as a complement for studying is not just those who were educated before AI became ubiquitous, but also those who learn in an environment where generative AI is explicitly positioned as a complement to learning (rather than a substitute for it). To a large extent, it depends on how generative AI is used as a learning tool. Oakley et al. do provide some good examples (and I have linked to some in past blog posts). I'd also like to think the AI tutors I have created for my ECONS101 and ECONS102 students assist with, rather than hamper, learning (and I have some empirical evidence that seems to support this, which I have already promised to blog about in the future).

    Oakley et al. conclude that:

    Effective education should balance the use of external tools with opportunities for students to internalize key knowledge and develop rich, interconnected schemata. This balance ensures that technology enhances learning rather than creating dependence and cognitive weakness.

    Finally, they provide some evidence-based strategies for enhancing learning (bolding is mine):

    • Embrace desirable difficulty—within limits: Encourage learners to generate answers and grapple with problems before turning to help... In classroom practice, this means carefully calibrating when to provide guidance—not immediately offering solutions, but also not leaving students floundering with tasks far beyond their current capabilities...
    • Assign foundational knowledge for memorization and practice: Rather than viewing factual knowledge as rote trivia, recognize it as the glue for higher-level thinking...
    • Use procedural training to build intuition: Allocate class time for practicing skills without external aids. For instance, mental math exercises, handwriting notes, reciting important passages or proofs from memory, and so on. Such practices, once considered old-fashioned, actually cultivate the procedural fluency that frees the mind for deeper insight...
    • Intentionally integrate technology as a supplement, not a substitute: When using AI tutors or search tools, structure their use so that the student remains cognitively active...
    • Promote internal knowledge structures: Help students build robust mental frameworks by ensuring connections happen inside their brains, not just on paper... guide students to identify relationships between concepts through active questioning ("How does this principle relate to what we learned last week?") and guided reflection...
    • Educate about metacognition and the illusion of knowledge: Help students recognize that knowing where to find information is fundamentally different from truly knowing it. Information that exists "out there" doesn't automatically translate to knowledge we can access and apply when needed.

    I really like those strategies as a prescription for learning. However, I am understandably biased, because many of the things I currently do in my day-to-day teaching practice are encompassed within (or similar to) those suggested strategies. I'll work on making 'guided reflection' a little more interactive in my classes this year, as I have traditionally made the links explicit for the students, rather than inviting them to make those links for themselves. We have been getting our ECONS101 students to reflect more on learning, and we'll be revising that activity (which happens in the first tutorial) this year to embrace more of a focus on metacognition.

    Learning is something that happens (often) in the brain. It should be no surprise that neuroscience has some insights to share on learning, and what that means for pedagogical practice. Oakley et al. take aim at some of the big names in educational theory (including Bloom, Dewey, Piaget, and Vygotsky), so I expect that their work is not going to be accepted by everyone. However, I personally found a lot to vindicate my pedagogical approach, which has developed over two decades of observational and experimental practice. I also learned that there are neuroscientific foundations for many aspects of my approach. And, I learned that there are things I can do to potentially further improve student learning in my classes.