Sunday, 9 November 2025

Book review: In This Economy

Kyla Scanlon rose to some prominence during and after the pandemic, through her short explanatory videos about the economy, money, and finance. She may not have been the first, but she is certainly one of the most prominent members of the #EconTok community on TikTok (and she is active on other social media too). She has developed a large following, particularly among younger people. So, I was really interested to read her 2024 book, In This Economy.

I have to say that I was quite disappointed though. On the plus side, Scanlon plays to her strengths, and the early parts of the book are strong on exploring the role of vibes in the economy (Scanlon coined the term 'vibecession', to mean "a period of temporary vibe decline during which economic data such as trade and industrial activity are okay-ish"). Those chapters are generally good (although see my later comments). However, significant parts of the book are less explainers about "how money and markets really work", which is the subtitle of the book, and more a commentary on current US policy on housing, immigration, clean energy, and the like. This is not confined to the final chapter, which is supposed to be the more policy-focused one. The parts of the book where Scanlon held forth on her views were far less compelling to me, because the role of vibes was largely forgotten. It would have been more interesting to know how vibes might play a role in housing policy or immigration policy, and whether a change in vibes might change policy. The book could have been tightened up significantly, and could have made an interesting contribution that other authors are less well equipped to make.

What put me off most, though, were the inaccuracies in the book. The worst offence (to a New Zealand economist) was this passage about inflation targeting:

That's because the 2 percent figure is sort of random. The idea originally came from Arthur Grimes, the Labour Party finance minster [sic] of New Zealand in the 1980s. He went on TV and said, "Two percent should be our inflation target," and now everybody goes after that magic number.

Arthur Grimes was never an MP, let alone finance minister (I checked this with him!). Scanlon might owe Arthur an apology for confusing him with Roger Douglas. One of my colleagues ventured that perhaps ChatGPT wrote those sentences. It is the sort of hallucination we might expect from an LLM, but who knows if that was the source. Sadly, it is indicative of the inaccuracies in the book. Consider this one:

In one example of the extremity of market moves, the yield on thirty-year U.K. inflation-linked bonds jumped by more than 250% (meaning that they fell 250% in price) after the Bank made the announcement that it was not going to intervene.

If something falls in price by more than 100 percent, that means the seller would have to pay the buyer to take it off their hands. The correct figure here should be 60 percent, I think, not 250 percent.
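To see the problem, here is a minimal sketch of the arithmetic, treating the bond as a simple zero-coupon instrument (a simplification) and using made-up yields rather than the actual 2022 gilt figures, which I don't have to hand:

```python
# Illustrative only: treat the 30-year bond as a zero-coupon instrument and
# use made-up yields (not the actual 2022 UK gilt figures) to show how a
# large jump in yields maps into a fall in price of well under 100 percent.

def zero_coupon_price(yield_rate, years):
    """Price per unit of face value for a zero-coupon bond."""
    return 1 / (1 + yield_rate) ** years

years = 30
yield_before = 0.010   # hypothetical yield before the announcement (1.0%)
yield_after = 0.035    # hypothetical yield after a jump of 250% (to 3.5%)

price_before = zero_coupon_price(yield_before, years)
price_after = zero_coupon_price(yield_after, years)

fall_in_price = (price_before - price_after) / price_before
print(f"Price falls by {fall_in_price:.0%}")  # about 52% with these numbers
```

Whatever the actual before-and-after yields were, the point stands: a 250 percent jump in the yield implies a large fall in the price, but nothing like a 250 percent fall, because no price can fall by more than 100 percent. Similarly: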

So when news headlines say, "Inflation Rate Falls to 3 Percent," that doesn't mean that prices fell three percent; it just means that the rate of change of price increases fell three percent.

No, it means that the rate of change of prices fell to three percent (from whatever it was before). Unfortunately, there is a lot of this sort of inattention to detail. At one point, Scanlon provides an estimate of GDP for the 'Gingerbread Yeti economy', then converts it to 'real nominal GDP' by dividing by one plus the current year's inflation rate. First, there is no such thing as 'real nominal GDP'. There is 'nominal GDP' and there is 'real GDP'. Second, the calculation does provide a measure of real GDP, but measured in terms of dollars from the year before. However, the calculation as presented gives the impression that dividing by one plus the current year's inflation rate is the standard way of calculating real GDP. It isn't. It is not just the current year's inflation that matters in calculating real GDP, but the inflation in every year between the base year and the current year. The base year matters, and the base year is not always the year before the current year.
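To make the distinction concrete, here is a minimal sketch of the standard calculation, using made-up nominal GDP and inflation figures rather than Scanlon's Gingerbread Yeti numbers:

```python
# Illustrative only: made-up nominal GDP and inflation figures, to show why
# real GDP depends on cumulative inflation since the base year, not just the
# current year's inflation rate.

nominal_gdp = {2022: 1000, 2023: 1100, 2024: 1250}  # current-dollar GDP
inflation = {2023: 0.05, 2024: 0.04}                # annual inflation rates
base_year = 2022

# Build a price index (deflator) equal to 1 in the base year, compounding
# inflation forward year by year.
deflator = {base_year: 1.0}
for year in sorted(inflation):
    deflator[year] = deflator[year - 1] * (1 + inflation[year])

# Real GDP in base-year (2022) dollars is nominal GDP divided by the deflator.
real_gdp = {year: nominal_gdp[year] / deflator[year] for year in nominal_gdp}
print(real_gdp)

# Dividing 2024 nominal GDP by one plus 2024 inflation alone gives GDP in
# 2023 dollars, which is a measure of real GDP only if 2023 is the base year.
print(nominal_gdp[2024] / (1 + inflation[2024]))
```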

Despite my grumpiness, there are some good aspects to the book. Scanlon does have a good way with words that I think connects with younger people (and that much is clear from her success on social media). She also provides some interesting examples to illustrate her explanations, such as the 'economics kingdom' (which illustrates how parts of the economy are related), the 'cake of uncertainty' (which relates expectations, theory, and reality), and the aforementioned 'Gingerbread Yeti economy'. Scanlon also refers to a lot of memes, probably many more than I would recognise. And yet I found the explanation of how 'meme stocks' worked to be a bit underdone.

Sadly, I don't think I can recommend this book, even to my younger students who might connect with the contemporary material more than they would with earlier pop economics books. There are simply too many bits where I worry that the book would steer them wrong. Normally, I find that Tyler Cowen makes excellent book recommendations. In this case, I'm really not seeing whatever he saw in this one.

Saturday, 8 November 2025

Survey evidence on the labour market impacts of generative AI

A picture of the labour market impacts of generative AI is slowly emerging. At this stage, there is little consensus on what the impacts will be. I just stumbled across this working paper, by Jonathan Hartley (Stanford University) and co-authors, which I had put aside to read earlier this year. Unlike some of the research I have discussed in recent posts (linked at the end of this post), Hartley et al. make use of a nationally representative survey of US workers.

The survey has had three waves in the US (plus one Canadian wave), and the first US wave had over 4200 respondents (Hartley et al. don't report how many respondents there were for the other waves). The results make for interesting reading. First, in terms of who is using generative AI, they report that:

...LLM adoption at work among U.S. survey respondents above 18 has increased rapidly from 30.1% as of December 2024, to 43.2% as of March/April 2025, and to 45.9% as of June/July 2025...

Conditional on using Generative AI at work, about 33% of workers use Generative AI five days per week at work (every weekday). Roughly 12% of Generative AI users use such tools at work only 1 day at work. About 17% and 18% of Generative AI users use Generative AI tools at work two and three days per week respectively...

That is a lot of people using generative AI for work, and using it often when they do. It is interesting to sit these results alongside those of Chatterji et al. (whose paper I discussed in this post). They found growth in both work-related and non-work-related ChatGPT messages over time.

Who is using generative AI at work, though? Hartley et al. find that:

...Generative AI tools like large language models (LLMs) are most commonly used in the labor force by younger individuals, more highly educated individuals, higher income individuals, and those in particular industries such as customer service, marketing and information technology.

These results are similar to those of Chatterji et al., except that Hartley et al. also report gender differences (with greater use of generative AI by men), whereas Chatterji et al. report that the gender gap that was apparent among early adopters of ChatGPT has closed completely.

Hartley et al. then move on to estimating the productivity gains from generative AI. Given that these are self-reported survey responses, rather than observed or experimental data, we should take the results with a very large grain of salt. Hartley et al. ask their respondents how long it takes them to complete various tasks with and without generative AI. The results are summarised in Figure 12 in the paper:

Notice that every task is reported to take less time with generative AI (the green dots) than without (the blue dots). The productivity gains differ across tasks. However, I find this figure and the underlying data very fishy. How could generative AI create a huge decrease in the time spent on 'Persuasion' tasks? Or on 'Repairing' (which shows one of the biggest productivity gains)? Also, notice how almost every task takes between 25 and 39 minutes with generative AI. I strongly suspect that the research participants are anchoring their responses on 30 minutes with GenAI for some reason. Without seeing the particular questions that were asked, though, it is hard to tell why. [*]

Hartley et al. then try to estimate the impact of generative AI on job postings, employment, and wages, using a difference-in-differences research design. They find no impact on job postings or employment, but significant impacts on wages. However, here things get strange. The coefficients that they report in Tables 6 and 7 of the paper are clearly negative, and yet Hartley et al. write that:

Our estimated coefficients... imply economically meaningful wage effects: a one-standard deviation increase in occupational Generative AI exposure corresponds to a significant increase in median annual wages...

Going back to their regression equations, their 'exposure to generative AI' variable takes higher values when exposure is greater, so a negative coefficient should imply that more exposure to generative AI is associated with lower wages, not higher. Perhaps I am missing something?
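For what it is worth, here is a stylised version of the kind of specification I have in mind (my own reconstruction, not the exact equation from Hartley et al.), which is how I am reading the sign:

```latex
% A stylised difference-in-differences wage equation (my reconstruction, not
% the exact specification in Hartley et al.):
\[
\ln(w_{ot}) = \alpha_o + \gamma_t
  + \beta \, (\text{Exposure}_o \times \text{Post}_t) + \varepsilon_{ot}
\]
% Exposure_o is larger for occupations more exposed to generative AI, and
% Post_t switches on after generative AI becomes available. With that coding,
% a negative estimate of beta implies that higher exposure is associated with
% lower wages in the post period, not higher.
```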

Given the deficiencies in the data and the regression modelling, I don't think that this paper really adds much to our understanding of the labour market effects of generative AI. Which is disappointing, because good survey evidence would provide a complementary data source, helping us to triangulate against the results from other data sources and methods.

[HT: Marginal Revolution]

*****

[*] On a slightly more technical note, we might expect there to be as much variation (in relative terms) in the 'with GenAI' data as in the 'without GenAI' data. However, the coefficient of variation (the standard deviation expressed as a proportion of the mean) is 0.109 for the 'with GenAI' data, but 0.226 for the 'without GenAI' data. So, there is less than half as much variation in the reported task times with GenAI as without. Again, that suggests that this data is fishy.
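For anyone who wants to replicate that check, it is just the standard deviation of the reported mean task times divided by their mean, computed separately for the two series. A minimal sketch, using placeholder task times rather than the actual values read off Figure 12:

```python
# Illustrative only: placeholder task times (minutes), not the actual values
# from Figure 12 of Hartley et al.
import statistics

with_genai = [28, 30, 31, 27, 33, 29, 35, 26]      # hypothetical 'with GenAI' means
without_genai = [55, 90, 62, 48, 110, 70, 95, 58]  # hypothetical 'without GenAI' means

def coefficient_of_variation(values):
    """Standard deviation expressed as a proportion of the mean."""
    return statistics.pstdev(values) / statistics.mean(values)

print(round(coefficient_of_variation(with_genai), 3))
print(round(coefficient_of_variation(without_genai), 3))
# A much smaller coefficient of variation for the 'with GenAI' times is what
# we would expect if respondents were anchoring on a round number like 30.
```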

Read more:

  • ChatGPT and the labour market
  • More on ChatGPT and the labour market
  • The impact of generative AI on contact centre work
  • Some good news for human accountants in the face of generative AI
  • Good news, bad news, and students' views about the impact of ChatGPT on their labour market outcomes
  • Swiss workers are worried about the risk of automation
  • How people use ChatGPT, for work and not
  • Generative AI and entry-level employment

Friday, 7 November 2025

This week in research #100

Here's what caught my eye in research over the past week:

  • Barr and Castleman (with ungated earlier version here) demonstrate that intensive advising during high school and college significantly increases bachelor’s degree attainment among lower-income students, primarily driven by improvements in initial enrolment quality (enrolling in a four-year programme rather than a two-year programme)
  • Nyarko and Pozen (open access) find that joining Twitter increases citation counts by an average of 22% per year and improves article placements by up to 10 ranks for law professors, relative to a synthetic control group
  • Matusiewicz finds, using data on European countries, that while GDP per capita correlates positively with the human development index and income equality, it does not guarantee higher life satisfaction
  • Branilović and Rutar (open access) find, using data from 2011 to 2022, that increases in both neoliberalism and globalisation are associated with increases in democracy, and that it is freedom of international trade, modesty of regulation, legal system and property rights, and social globalisation that drive the relationship at the aggregate level

Thursday, 6 November 2025

The economics of maps

I have always liked maps. When I was growing up, one of my favourite books was my Rand McNally atlas. I may even still have it, tucked away with its spine held together by masking tape (after years of overuse by my primary-school-aged self). When I'm reading some fantasy novel that has a map on the inside cover, I can find myself lost in the map before even getting to read the book, and then flicking back to the map any time some new location is mentioned. Right next to my laptop while I'm writing this is a sepia-toned desk globe that, in truth, takes up too much space on the desk but will not be foregone.

Given my interest in maps, I've been planning for some time to read this 2020 article by Abhishek Nagaraj (University of California at Berkeley) and Scott Stern (MIT), published in the Journal of Economic Perspectives (open access). Like many articles, it has sat in my digital to-be-read pile for far too long. Nagaraj and Stern explain the economics of maps. This isn't the economics that uses maps, as in the field of economic geography, but two other aspects. First, they review the economic and social consequences of maps. Second, they review the economics of mapmaking. Most of the article is devoted to the latter, and that's what I want to focus on as well.

First though, what is a map? In my classes, I use maps as an example of a model - an abstraction or simplification of reality. Nagaraj and Stern note that maps are composed of two elements: (1) spatial data; and (2) a design. As they explain:

At its core, a map takes selected attributes attached to a specific positional indicator (spatial data) and pairs it with a graphical illustration or visualization (design)...

Having separated a map into its constituent elements, Nagaraj and Stern then look at the economics of spatial data, and the economics of design. On data, they note that:

...mapping data is in many respects a classical public good. Almost by definition, mapping data is non-rival insofar as the use of data for a map by any one person does not preclude its use by others; moreover, the information underlying a given database is non-excludable because copyright law does not protect the copying of factual information. While the precise expression included within a database can be protected through copyright, the underlying geographical facts reflected in the database cannot be protected.

And just like most other public goods:

The combination of non-rivalry and non-excludability of mapping data makes its production prone to private underinvestment, providing a rationale for government support. Indeed, many of the most widely used maps rely on publicly funded geospatial data, including US Geological Survey topographical maps, Census demographic information, and local land-use and zoning maps.

On the other hand:

...there are important cases where mapping data is in fact excludable, either through secrecy or contract... Mapping data that allows for excludability exhibits properties more akin to a club good than a traditional public good. Specifically, the significant fixed costs of data collection combined with relatively cheap reproducibility creates entry barriers that supports natural monopolies or oligopolistic competition. It may be efficient for only a single firm to engage in data collection and for the industry to simply license these data (under agreed-upon contractual terms) from this monopoly provider.

Now, even when spatial data is protected and excludable:

...in the absence of perfect price discrimination, private entities may only provide mapping data at a high price (relative to near-zero marginal cost), reducing efficient access. Beyond pricing, the private provision of mapping data may additionally be concentrated in locations with high demand (such as urban areas) to the exclusion of less concentrated regions.

And that all accords with what we see. There are free sources of spatial data, which are public goods supported by governments or universities, alongside proprietary spatial databases that are club goods and only available at relatively high cost (to the dismay of researchers such as me!).

Turning to map designs, Nagaraj and Stern note that:

Like data, designs are also a knowledge good in that multiple individuals can use a particular map design (and so a design is non-rival) and the degree of excludability for a given design may vary with the institutional and intellectual property environment. With that said, a striking feature of a map design is that, almost by construction, a map is created for the purpose of visual inspection, and it is much easier to copy than a database (which might be protected by secrecy or contract). One consequence of this is that there may be underinvestment in high-quality and distinct designs for a given body of geospatial data.

They use this to explain why there is a lot of competition in the provision of map designs, which is why so many maps for particular purposes look the same. As Nagaraj and Stern explain:

A potential consequence of the non-excludability of mapping data and designs is inefficient overproduction of mapping products that compete with each other. Once a given map is produced for a particular location and application (say, a city-level tourist map), copycat maps can be produced at a lower sunk cost; because demand for maps of a given quality and granularity is largely fixed, free entry based on a given map involves significant business-stealing...

Taking both spatial data and map designs together, the role of intellectual property protection is important:

On the one hand, an absence of formal intellectual property protection leads to underinvestment in mapping data and high-quality map design, but inefficient entry by copycat mapmakers. On the other hand, a high level of formal intellectual property protection can shift the basis of competition away from imitation and towards duplicative investment. For example, over the past two decades, no less than four different organizations—including Google Street View, Microsoft StreetSide, OpenStreetCam project, and TomTom—have undertaken comprehensive and qualitatively similar initiatives to gather street-level imagery and mapping coordinates for the entire US surface road system.

So that explains why there are multiple Street View clones available. The firms are over-investing in goods that are protected by intellectual property. Do we really need multiple copycats of Google Street View? Also, in terms of intellectual property protection, I found this interesting:

In addition to employing copyright, firms often invest in additional strategies to protect their intellectual property. In particular, mapmakers have devised the idea of inserting fictional “paper towns” or “trap streets” in maps... This strategy allows them to detect rivals who might copy their data (rather than collecting similar data through an original survey) and thereby protect costly investment in original data collection. Such strategies are commonly deployed by mapmakers to this day for factual data...

Does that help to explain why people have been caught out following roads that don't exist, or trying to find towns that are misplaced? I guess that 'trap streets' and 'paper towns' are a good idea on a paper map, which requires a certain amount of attention to follow, but are less suitable for digital maps that people follow blindly.

Nagaraj and Stern's article opens our eyes to the economics of maps, as well as their consequences. And now, I'm going to search my garage for my beloved Rand McNally atlas. If only I had a map to guide me as to where it is hiding!