Thursday, 31 October 2024

Book review: Wonderland (Steven Johnson)

When I think about the dramatic changes in society that have occurred since the end of the Industrial Revolution, one of the trends that stands out (to me) is the massive increase in leisure time. In the 19th Century, most people worked far more hours than they do today. The recent decades of that trend were well-described in Daniel Hamermesh's book Spending Time (which I reviewed here). What was left unexplored in that book was the way that leisure pursuits have affected the economy and society.

That is the purpose of Steven Johnson's book Wonderland, which is subtitled "How play made the modern world". Johnson describes the book as:

...a history of play, a history of the pastimes that human beings have concocted to amuse themselves as an escape from the daily grind of subsistence. This is a history of what we do for fun.

The book comprises chapters devoted to fashion and shopping, music, food, entertainment, games, and our use of public space. Each chapter is well written and well researched, and a pleasure to read. Johnson is a great storyteller, and the stories he presents are interesting and engaging.

However, I struggled from the outset with the overall thesis of the book, which is that changes in leisure pursuits drove broader societal and economic changes. This is most glaringly demonstrated in the first chapter, where Johnson contends that it was the desire for fashion that drove the Industrial Revolution:

When historians have gone back to wrestle with the question of why the industrial revolution happened, when they have tried to define the forces that made it possible, their eyes have been drawn to more familiar culprits on the supply side: technological innovations that increased industrial productivity, the expansion of credit networks and financing structures; insurance markets that took significant risk out of global shipping channels. But the frivolities of shopping have long been considered a secondary effect of the industrial revolution itself, an effect, not a cause... But the Calico Madams suggest that the standard theory is, at the very least, more complicated than that: the "agreeable amusements" of shopping most likely came first, and set the thunderous chain of industrialization into motion with their seemingly trivial pursuits.

In spite of the excellent prose, I'm not persuaded by the demand-side argument for the Industrial Revolution, which flies in the face of a great deal of scholarship in economic history (as well as in history). Now, it may be that the first chapter just made me grumpy. But Johnson draws several conclusions which are, at best, a selective interpretation of the evidence. And at times, he makes comparisons that are somewhat odd, such as comparing the tools and technologies available to artists and scientists in the 17th Century with those available to musicians, and concluding that artists and scientists had fewer and less advanced tools than musicians did. There doesn't seem to be any firm basis for such a comparison (how does one measure how advanced the technologies in different disciplines are, in order to compare them?).

The final chapter, though, was a highlight for me. There was a really good discussion of the role of taverns in the American Revolution. And in that discussion, Johnson acknowledges that it is difficult to establish a causal relationship (which made me wonder again why he was unconcerned about the challenges of causality between shopping and the Industrial Revolution earlier in the book). I really appreciated the discussion of the work of Jürgen Habermas and Ray Oldenburg, and of "third places" (places of gathering that are neither work nor home). It reminded me of my wife's excellent PhD thesis on cafés.

Overall, I did enjoy the book in spite of my griping about the overall thesis and the way that Johnson sometimes draws conclusions from slim evidence. If you are interested in the history of leisure pursuits, I recommend it to you.

Wednesday, 30 October 2024

Some notes on generative AI and assessment (in higher education)

Last week, I posted some notes on generative AI in higher education, focusing on positive uses of AI for academic staff. Today, I want to follow up with a few notes on generative AI and assessment, based on some notes I made for a discussion at the School of Psychological and Social Sciences this afternoon. That discussion quickly evolved into one about intentional assessment design more generally, rather than the specific risks that generative AI poses to assessment. That's probably a good thing. Any time academic staff are thinking more intentionally about assessment design, the outcomes are likely to be better for students (and for the staff as well).

Anyway, here are a few notes that I made. Most importantly, the impact of generative AI on assessment, and the robustness of any particular item of assessment to generative AI, depends on context. As I see it, there are three main elements of the context of assessment that matter most.

First, assessment can be formative, or summative (see here, for example). The purpose of formative assessment is to promote student learning, and provide actionable feedback that students can use to improve. Formative assessment is typically low stakes, and the size and scope of any assessment item is usually quite small. Generative AI diminishes the potential for learning from formative assessment. If students are outsourcing (part of) their assessment to generative AI, then they aren't benefiting from the feedback or the opportunity for learning that this type of assessment provides.

Summative assessment, in contrast, is designed to evaluate learning, distinguish good students from not-so-good students from failing students, and award grades. Summative assessment is typically high stakes, with a larger size and scope of assessment than formative assessment. Generative AI is a problem in summative assessment because it may diminish the validity of the assessment, in terms of its ability to distinguish between good students and not-so-good students, or between not-so-good students and failing students, or (worst of all) between good students and failing students.

Second, the level of skills that are assessed is important. In this context, I am a fan of Bloom's taxonomy (which has many critics, but in my view still captures the key idea that there is a hierarchy of skills that students develop over the course of their studies). In Bloom's taxonomy, the 'cognitive domain' of learning objectives is separated into six levels (from lowest to highest): (1) Knowledge; (2) Comprehension; (3) Application; (4) Analysis; (5) Synthesis; and (6) Evaluation.

Typically, first-year papers (like ECONS101 or ECONS102 that I teach) predominantly assess skills and learning objectives in the first four levels. Senior undergraduate papers mostly assess skills and learning objectives in the last three levels. Teachers might expect generative AI to be better at the lower levels - things like definitions, classification, and the understanding and application of simple theories, models, and techniques. And indeed, it is. Teachers might also hope that generative AI is less good at the higher levels - things like synthesising papers, evaluating arguments, and presenting its own arguments. Unfortunately, it appears that generative AI is good at those skills as well. However, context does matter. In my experience (and this is subject to change, because generative AI models are improving rapidly), generative AI can mimic the ability of even good students at tasks at the low levels of Bloom's taxonomy, which means that tasks at that end lack any robustness to generative AI. At tasks higher on Bloom's taxonomy, generative AI can mimic the ability of failing and not-so-good students, but is still outperformed by good students. So, many assessments like essays or assignments that require higher-level skills may still be a robust way of identifying the top students, but will be much less useful for distinguishing between students who are failing and students who are not-so-good.

Third, authenticity of assessment matters. Authentic assessment (see here, for example) is assessment that requires students to apply their knowledge in a real-world contextualised task. Writing a report or a policy brief is a more authentic assessment than answering a series of workbook problems, for example. Teachers might hope that authentic assessment would engage students more, and reduce the use of generative AI. I am quite sure that many students are more engaged when assessment is authentic. I am less sure that generative AI is used less when assessment is authentic. And, despite any hopes that teachers have, generative AI is just as good at an authentic assessment as it is at other assessments. In fact, it might even be better. Consider the example of a report or a policy brief. The training datasets of generative AI no doubt contain lots of reports and policy briefs, so it has lots of experience with exactly the types of tasks we might ask students to complete in an authentic assessment.

So, given these contextual factors, what types of assessment are robust to generative AI? I hate to say it, and I'm sure many people will disagree, but in-person assessment cannot be beaten in terms of robustness to generative AI. In-person tests and examinations, in-person presentations, in-class exercises, class participation or contributions, and so on, are assessment types where it is not impossible for generative AI to have an influence, but where it is certainly very difficult for it to do so. Oral examinations are probably the most robust of all. It is impossible to hide your lack of knowledge in a conversation with your teacher. This is why universities often use oral examinations at the end of a PhD.

In-person assessment is valid for formative and summative assessment (although the specific assessments used will vary). It is valid at all levels of learning objectives that students are expected to meet. It is valid regardless of whether assessment is authentic or not. Yes, in case it's not clear, I am advocating for more in-person assessment.

After in-person assessment, I think the next best option is video assessment. But not for long. Using generative AI to create a video avatar to attend Zoom tutorials, or to make a presentation, is already possible (HeyGen is one example of this). In the meantime though, video reflections (as I use in ECONS101), interactive online tutorials or workshops, online presentations, or question-and-answer sessions, are all valid assessments that are somewhat robust to AI.

Next are group assessments, like group projects, group assignments, or group video presentations. The reason that I believe group assessments are somewhat robust is that they require a certain amount of group cohesion to make a sustained effort at 'cheating'. I don't believe that most groups that are formed within a single class are cohesive enough to maintain this (although I am probably too hopeful here!). Of course, there will be cases where just one group member's contribution to a larger project was created with generative AI, but outsourcing the whole assessment would generally take the entire group. When generative AI for video becomes more widespread, group assessments will become a more valid assessment alternative than video assessment.

Next are long-form written assessments, like essays. I'm not a fan of essays, as I don't think they are authentic as assessments, and I don't think they assess skills that most students are likely to use in the real world (unless they are going on to graduate study). However, they might still be a valid way of distinguishing between good students and not-so-good students. To see why, read this New Yorker article by Cal Newport. Among other issues, the short context window of most generative AI models means that they are not great at long-form writing, at least compared with shorter pieces. However, generative AI's shortcomings here will not last, and that's why I've ranked long-form writing so low.

Finally, online tests, quizzes, and the like should no longer be used for assessment. The development of browser plug-ins that can answer multiple-choice, true/false, fill-in-the-blanks, and short-answer-style questions automatically, with minimal student input (other than perhaps hitting the 'submit' button), makes these types of assessment invalid. Any attempt to thwart generative AI in this space (and I've seen things like using hidden text, using pictures rather than text, and other similar workarounds) is at best an arms race. Best to get out of that now, rather than wasting lots of time trying (but generally failing) to stay one step ahead of the generative AI tools.

Separately, I know that many of my colleagues have become attracted to getting students to use generative AI in assessment. This is the "if you can't beat them, join them" solution to generative AI's impact on assessment. I am not convinced that this is a solution, for two reasons.

First, as is well recognised, generative AI has a tendency to hallucinate. Users know this, and can recognise when a generative AI has hallucinated in a domain in which they (the user) have specific knowledge. If students, who are supposed to be developing their own knowledge, are being asked to use or work with generative AI in their assessment, at what point will those students develop their own knowledge that they can use to recognise when the generative AI tool that they are working with is hallucinating? Critical thinking is an important skill for students to develop, but criticality in relation to generative AI use often requires the application of domain-specific knowledge. So, at the least, I wouldn't like to see students encouraged to work with generative AI until they have a lot of the basics (skills that are low on Bloom's taxonomy) nailed first. Let generative AI help them with analysis, synthesis, or evaluation, while the student's own skills in knowledge, comprehension, and application allow them to identify generative AI hallucinations.

Second, the specific implementations of assessments that involve students working with generative AI are often not well thought through. One common example I have seen is to give students a passage of text that was written by AI in response to some prompt, and ask the students to critique the AI's response. I wonder, in that case, what stops the students from simply asking a different generative AI model to critique the first model's passage of text?

There are good examples of getting students to work with generative AI, though. One involves asking students to write a prompt, retrieve the generative AI output, and then engage in a conversation with the generative AI model to improve the output, finally constructing an answer that combines both the generative AI output and the student's own ideas. The student then submits this final answer, along with the entire transcript of their conversation with the generative AI model. This type of assessment has the advantage of being very authentic, because it is likely that this is how most working people engage with generative AI for completing work tasks (I know that it's one of the ways that I engage with generative AI). Of course, it is then more work for the marker to look at both the answer and the transcript that led to that answer. But then again, as I noted in last week's post, generative AI may be able to help with the marking!

You can see that I'm trying to finish this post on a positive note. Generative AI is not all bad for assessment. It does create challenges. Those challenges are not insurmountable (unless you are offering purely online education, in which case good luck to you!). And it may be that generative AI can be used in sensible ways to assist in students' learning (as I noted last week), as well as in students completing assessment. However, we first need to ensure that students are given adequate opportunity to develop a grounding on which they can apply critical thinking skills to the output of generative AI models.

[HT: Devon Polaschek for the New Yorker article]


Monday, 28 October 2024

Generative AI may increase global inequality

As I noted in a post earlier this month, the general public appears to be worried about the impact of generative artificial intelligence on jobs and inequality. Some economists are clearly worried as well. Consider this post on the Center for Global Development blog, by Philip Schellekens and David Skilling. They note three reasons why generative AI might increase global inequality: (1) richer countries are better equipped to harness AI’s benefits; (2) poorer countries may be less prepared to handle AI’s disruptions; and (3) AI is intensifying pressure on traditional development models.

I have a lot of sympathy for these arguments, but it is worth exploring them in a bit more detail. Here's part of what Schellekens and Skilling said on the first reason:

High-income countries, along with wealthier developing nations, hold a distinct advantage in capturing economic value from AI thanks to superior digital infrastructure, abundant AI development resources, and advanced data systems...

When many people think about economic growth in developing countries, they think about catch-up growth. Developing countries often have growth rates that exceed those in developed countries. There are vivid examples of catch-up growth, like the way many developing countries were able to bypass copper telephone lines and move straight to mobile telecommunications. Could AI be like that? It's a hopeful vision. However, the problem with that argument is that AI isn't quite the same as the telecommunications example. There is no outdated technology that is being replaced by AI (unless humans count?). So, developing countries can't leapfrog an old technology and catch up. If a country doesn't have the technology infrastructure and capital necessary to develop its own AI models, it will be forced to use models developed in other countries. That creates problems for developing countries, and Schellekens and Skilling note two particular concerns:

First, AI could reinforce the dominance of wealthier nations in high-value sectors like finance, pharmaceuticals, advanced manufacturing, and defense. As richer countries use AI to enhance productivity and innovation, it becomes harder for poorer countries to penetrate these markets.

Second, while AI is poised to primarily disrupt skill-intensive jobs more prevalent in advanced economies, it can also undermine lower-cost labor in developing countries. Automation in manufacturing, logistics, and quality control would enable wealthier nations to produce goods more efficiently, reducing the need for low-wage foreign workers. This shift, supported by AI-driven predictive analytics and customization capabilities, may allow richer countries to outcompete on cost, speed, and product desirability.

Note that the second argument says that, in addition to any increase in inequality within developed countries (which is what the general public was most concerned about in my previous post), there would be increases in global inequality because of the differential impact on different labour markets. This is a consequence of past labour market polarisation, where different countries have become reliant on employment in different sectors.

On their second point, Schellekens and Skilling note that, while the social safety net in developed countries may insulate their populations from the negative impacts of AI (a point that I'm not sure that many would agree with), the situation in developing countries is quite different:

Limited resources and underdeveloped social protection systems mean they are less equipped to absorb the economic and social shocks caused by AI-driven disruptions. Many lower-income countries already struggle with high rates of informal employment and fragile labor markets, leaving workers highly vulnerable to sudden economic shifts.

The lack of fiscal space also restricts these countries from investing in crucial areas like reskilling programs, infrastructure upgrades, or targeted welfare schemes to support affected communities. Without such mechanisms, the impact of AI-related job losses could exacerbate unemployment and deepen poverty.

It would be interesting to see some research on the expected impact of generative AI on informal sector employment, but I expect that Schellekens and Skilling are largely correct about the impacts on formal sector employment in developing countries.

Finally, on their third point, Schellekens and Skilling note that the model of development that many countries have followed in recent decades, moving first from an agrarian economy into low-technology manufacturing (like garments), and then into higher-technology manufacturing, has become less viable for developing countries over time, and that generative AI may impact the obvious alternative, which is export-oriented service industries:

Countries like the Philippines and India have seen success in business process outsourcing, thanks to booming call center industries and IT services. But AI poses a threat to this model as well. AI has the potential to reduce the labor intensity of these activities, eroding the competitive edge in the international marketplace of lower-cost service providers.

If AI were to undermine labor-intensive service industries, developing countries may find it harder to identify viable pathways for growth, posing a significant challenge to long-term development and dampening the prospects of convergence.

The conclusion here is that generative AI may not only increase within-country inequality, but because of the differential impact on developed and developing countries, it may increase between-country inequality as well. This would potentially reverse decades of declining global inequality (see here and here).

Sunday, 27 October 2024

Airlines have to pay more compensation for death or injury, but it probably still isn't enough

The value of a preventable fatality (a more palatable term than the value of a statistical life) for New Zealand was increased last year to $12.5m (see here). That is the value that Waka Kotahi New Zealand Transport Agency uses in evaluating the benefits of road safety improvements, for example. The new value was a substantial increase from the previous value of $4.88 million.

So, I was interested to read this week that the International Civil Aviation Organisation (ICAO) has revised the amount that airlines must pay in compensation in the event of a death or injury, to just $335,000. As the New Zealand Herald reported:

Travellers will be eligible for higher compensation for international flights, with the International Civil Aviation Organisation (ICAO) setting new liability limits for death, injury, delays, baggage and cargo issues.

This means airlines must pay out at least $335,000 for death or “bodily injury” on flights as a result of the review of payment levels that come into force late this year.

While liability limits are set by the international Montreal Convention agreement, there are no financial limits to the liability for passenger injury or death if a court rules against an airline.

Why is the ICAO value so much lower? After some fruitless searching, I haven't been able to find anything that explains how the ICAO sets its value. It dates back to 1999, when the value was set at 100,000 SDRs (Special Drawing Rights - an international reserve asset created by the International Monetary Fund, based on a basket of five currencies).

One reason that might account for this difference is the way that the two estimates are measured. The value of a statistical life for New Zealand noted above is measured using the willingness-to-pay approach. Essentially, that method involves working out how much people are willing to pay for a small reduction in the risk of death, then scaling that value up to work out how much they would be willing to pay for a 100 percent reduction in the risk of death, which becomes the estimated value of a statistical life. For example, if the average person would be willing to pay $125 for a one-in-100,000 reduction in their risk of death, the implied value of a statistical life is $125 multiplied by 100,000, or $12.5 million.

An alternative is to use the human capital approach, which involves estimating the value of life as the total amount of economic production remaining in the average person's life. The value of that production is estimated as their wages. Essentially then, this approach involves working out the total amount of wages that the average person will earn in their remaining lifetime. Typically, the human capital approach will lead to a much smaller estimate than the willingness-to-pay (WTP) approach (and for an unsurprising reason - people are worth more than just the value they generate in the labour market!).

So, this difference in approach might account for the different estimates. Why might the ICAO use the human capital approach? One reason may be that the human capital approach leads to lower liability for compensation (in cases where the airline is not found to be at fault - if the airline is found by the courts to be at fault, then the compensation is uncapped). Given that many of the airlines covered by the ICAO's rules are national carriers, each country has an incentive to try to limit its own airline's liability for paying compensation. A second reason is explained in Kip Viscusi's book Pricing Lives (which I reviewed here). In the book, Viscusi argues that the WTP approach is more appropriate when considering what society is willing to pay to prevent deaths (e.g. in road safety improvements), and that the human capital approach is more appropriate when considering a particular life (e.g. in calculating a legal penalty for wrongful death). If we believe Viscusi's argument, then the human capital approach should be used by the ICAO.

However, even if we believe that the human capital approach is the right approach (and I'm not convinced that it is), it probably still underestimates the compensation that should be paid, at least for New Zealanders. Consider the following details. The median age in New Zealand is 38.1 years (at the 2023 Census). Life expectancy (at birth) is 80 years for males, and 83.5 years for females. Median weekly earnings (from wages and salaries) were $1343 in June 2024, or $69,836 per year. Using those numbers, assuming that the median-aged person works only until age 65, and using a social discount rate of 3 percent per year, the discounted value of future wages for the average New Zealander is about $1.35 million. That is more than four times higher than ICAO's figure, and is estimated using the human capital approach. Even if we used a discount rate of 10 percent, rather than 3 percent, the value would still be about $715,000, more than double the ICAO value.
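For anyone who wants to check the arithmetic, here is a minimal sketch of that present value calculation (my own reconstruction using the figures quoted above; the exact totals will move around a little depending on assumptions about the timing of wage payments and rounding):

```python
# Rough reconstruction of the back-of-the-envelope human capital calculation above.
# Assumes wages are earned at the start of each remaining working year.

annual_wage = 1343 * 52            # median weekly earnings of $1,343
working_years = round(65 - 38.1)   # median-aged person working until age 65 (~27 years)

def pv_future_wages(wage, years, discount_rate):
    """Present value of a constant annual wage stream (first payment not discounted)."""
    return sum(wage / (1 + discount_rate) ** t for t in range(years))

for r in (0.03, 0.10):
    pv = pv_future_wages(annual_wage, working_years, r)
    print(f"Discount rate {r:.0%}: discounted future wages = ${pv:,.0f}")
```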

The ICAO is seriously understating the value of compensation that should be paid in the case of a death on a flight (and where the airline is not at fault). It's just as well that these are rare events!

Saturday, 26 October 2024

If airlines priced all tickets the same, then that would create other problems

Dynamic pricing has been in the news again this week, with Consumer NZ labelling Air New Zealand ticket prices a "rip off". As the New Zealand Herald reported:

Consumer NZ has found that Air New Zealand flights across the Tasman around school holidays increased 43% - almost twice the rate of rival Qantas.

It says it might not be worth flying Air New Zealand to Australia, with evidence that our national carrier is exploiting its market share and demand during the school holidays, giving travellers cause to question if what they’re paying is fair...

A recent Consumer investigation into domestic flights found dynamic pricing could increase the price of the same ticket from Auckland to Dunedin by up to four times as much...

Consumer says while supply and demand do impact dynamic pricing algorithms, “we’re not convinced it’s that simple. We think it’s likely that dynamic pricing allows Air New Zealand to make up profit margins, and it certainly looks like its practices are capitalising on New Zealanders wanting to travel during the school holidays.

“Compared to Qantas, which was consistently cheaper and didn’t have comparable price hikes during either New Zealand or Queensland school holidays, flying with our national carrier to Brisbane looks like a rip off.”

The issue here is the difference in price between a ticket purchased well in advance, and one purchased closer to the date of travel, with the latter being much more expensive. This is an example of price discrimination - selling the same good or service to different consumers for different prices. And price discrimination by airlines is a topic I have posted on before. Here's the explanation I gave then:

Some consumers will buy a ticket close to the date of the flight, while others buy far in advance. That is information the airline can use. If you are buying close to the date of the flight, the airline can assume that you really want to go to that destination on that date, and that few alternatives will satisfy you (maybe you really need to go to Canberra for a meeting that day, or to Christchurch for your aunt's funeral). Your demand will be relatively inelastic, so the airline can increase the mark-up on the ticket price. In contrast, if you buy a long time in advance, you probably have more choice over where you are going, and when. Your demand will be relatively elastic, so the airline will lower the mark-up on the ticket price. This intertemporal price discrimination is why airline ticket prices are low if you buy far in advance.

Similarly, if you buy a return ticket that stretches over a weekend, or a flight that leaves at 10am rather than 6:30am, you are more likely to be a leisure traveller (relatively more elastic demand) than a business traveller (relatively more inelastic demand), and will probably pay a lower price.
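To make the markup logic in that explanation concrete, here is a minimal sketch using the standard inverse-elasticity pricing rule for a firm with market power, (P - MC)/P = 1/|elasticity|. The marginal cost and elasticities below are made up for illustration, not Air New Zealand figures:

```python
# Illustrative markup calculation: more inelastic demand implies a higher
# profit-maximising price. All numbers are hypothetical.

def profit_max_price(marginal_cost, elasticity):
    """Price implied by the inverse-elasticity markup rule (requires elasticity < -1)."""
    return marginal_cost / (1 + 1 / elasticity)

marginal_cost = 150.0  # hypothetical cost per seat

# Booking far in advance: many alternatives, so demand is relatively elastic
print(f"Elastic demand (e = -4.0):   ${profit_max_price(marginal_cost, -4.0):.0f}")
# Booking at the last minute: few alternatives, so demand is relatively inelastic
print(f"Inelastic demand (e = -1.5): ${profit_max_price(marginal_cost, -1.5):.0f}")
```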

The solution is simple. If you want to pay a lower price for an airline ticket, book in advance. That's the advice that Air New Zealand gives in the article:

Customers should book early to secure the best deals, said the [Air New Zealand] spokesperson.

Consumer NZ is, of course, trying to do its best for consumers. They want lower prices for airline tickets, even when purchased close to the date of travel. However, taking aim at dynamic pricing might be counterproductive. Even putting aside the infeasibility of regulating dynamic pricing, eliminating it would not be without cost to travellers.

One thing that an escalating ticket price over time does is manage demand for airline tickets. As price increases, fewer consumers are willing and able to buy tickets. That means that there will generally be more airline tickets available close to the date of travel than there would have been if airline ticket prices remained low all along. Would it be worse to have to pay a high price for an airline ticket purchased at the last minute, or to have no tickets available at all, because the low price encouraged more people to buy, selling out planes sooner? It's not clear to me that the latter is a better outcome.

Even in the case where tickets remain available, a second issue is that it isn't clear that ticket prices would remain low. A profit-maximising airline that no longer price discriminates would set a lower price for tickets purchased close to the date of travel, but a higher price for tickets purchased well in advance. Essentially, it would average the price out over time, meaning that some travellers would end up paying a higher price. That would likely be the leisure travellers, who purchase their tickets well in advance. Business travellers, who are more likely to purchase tickets at the last minute, would benefit greatly from airlines no longer using dynamic pricing.

Consumer NZ is trying to look after the interests of airline travellers (it's not the first time either). However, it isn't clear that they have thought through all of the implications of their attack on dynamic pricing.


Friday, 25 October 2024

This week in research #46

With my marking out of the way and provisional grades released to students, I can turn my attention to research once again. Sadly, it was a very quiet week, but here's what caught my eye in research:

  • Charmetant, Casara, and Arvaniti (open access) document the extent of treatment of climate change in introductory economics textbooks (and the CORE text The Economy, which I use in ECONS101, looks pretty good overall, ranking top among US textbooks, and second overall)
  • Miller, Shane, and Snipp (open access working paper) look at the impact of the 1887 Dawes Act in the US (which made Native Americans citizens of the United States with individually-titled plots of land, rather than members of collective tribes with communal land), and find that it increased various measures of Native American child and adult mortality by between nearly 20% and as much as one third (implying a decline in life expectancy at birth of about 20%)

Wednesday, 23 October 2024

Some notes on generative AI in higher education

I've been a little quiet on the blog this week, as I've been concentrating on reducing my end-of-trimester marking load. However, I came out of exile today to contribute to a discussion on generative artificial intelligence in higher education, for staff of Waikato Management School. The risk with any discussion of AI is that it degenerates into a series of gripes about the minority of students who are making extensive use of AI in completing their assessment. I was typecast as the person to talk about that aspect (in part because I will be doing that next week in a discussion at the School of Psychological and Social Sciences). Instead, I wanted to be a bit more upbeat, and focus on the positive aspects of AI for academics.

I don't consider myself an expert on AI. However, I have read a lot, and I pay attention to how others have been using AI. I've used it a little bit myself (and I'm sure there is much more use that I could make of it). I made some notes to use in the discussion, and thought I would share them. I link to a few different AI tools below, but those are by no means the only tools that can be used for those purposes. Where I haven't linked to a specific tool, then a general-purpose generative AI like ChatGPT, Claude, or Gemini will do the job.

I see opportunities for generative AI in four areas of academic work. First, and perhaps most obviously, generative AI can be used for improving productivity. There are many tasks that academics do that are essentially boring time-sinks. If we adopt the language from David Graeber's book Bullshit Jobs (which I reviewed here), these are tasks that are essentially 'box-ticking'. Where I am faced with a task that I really don't want to do, but I know that I can't really say no to, my first option is to outsource as much of it as possible to ChatGPT. "You want me to write a short marketing blurb for X? Sure, I can do that." [Opens ChatGPT].

Aside from avoiding bullshit tasks, there is lots of scope for using generative AI for improving productivity. I'm sure that a quick Google search (or a ChatGPT query) will find lots of ideas. A few that I have used, or advocated for others to use (because I'm useless at following advice that I freely give to others), are:

  • Brainstorming - coming up with some ideas to get you started on a project. If you have the idea, but are looking for some inspiration, generative AI will give you some starting points to get you underway.
  • Writing drafts - sometimes generative AI can be used to create the first draft of a common task, or to create templates for future use. For example, I got ChatGPT to re-write the templates that I use for reference letters for students, and for supervision reports for my postgraduate students. I can then adapt those templates as needed in the future.
  • Editing - sometimes you have an email that you need to send, and you want to use a particular tone. With a suitable prompt, generative AI can easily change the tone of your email from 'total dick' to 'critical but helpful' (I may need to use this much more!).
  • Condensing or expanding - Academics will often use ten words when four words would be enough. Generative AI can do a great job of condensing a long email or piece of text. On the other hand, if you need to expand on something, generative AI can help with that too.
  • Summarising or paraphrasing - On a similar note, generative AI can help with paraphrasing long pieces of text, or summarising one or more sources. Some good tools here are Quillbot for paraphrasing, Genei for summarising text, or summarize.ing for summarising YouTube videos.
  • Translation - Going from one language to another is a breeze. It may not always be 100% accurate, but it is close enough.

Second, generative AI can be used for teaching. Here's a few use cases in the teaching space:

  • Writing questions or problem sets - generative AI can write new questions, but they aren't always good questions. However, it can be used to generate new context or flavour text on which to base a question, which is pretty important if you want something new but are feeling uninspired. Also, creating a problem set or quiz questions (multiple choice, fill-in-the-blanks, true/false) is fairly straightforward, by uploading your notes or lecture slides. However, I wouldn't use those questions in an online testing format (more on that when I post about assessment next week).
  • Writing marking rubrics - With a short prompt outlining the task, the number of cut-points, and the marks available, ChatGPT created the first draft of all of the marking rubrics for the BUSAN205 paper in A Trimester this year. I had to cut back on the number of criteria that ChatGPT was using, and modify the language a little bit, but otherwise they were pretty good.
  • Marking to a rubric - Once you have the rubric and the student's submitted assessment, generative AI can easily mark the work against the rubric. You would want to check a good sample of the work to ensure you were getting what you expected, but this could be a huge time-saver for marking long written work (provided you can believe that the work is the student's own, and not written by generative AI!). In case you are wondering, I didn't do this in BUSAN205 (it didn't occur to me until this week!).
  • Lesson plans - Creating lesson plans (which is more common in primary and secondary teaching than in higher education) is a breeze with generative AI. Just tell it what you want, and how much time you have, and it can create the plan for you. One useful tool is lessonplans.ai.
  • Lecture slides - Most of us probably write our slides first, and write notes second. However, if you have the notes and want slides, then generative AI can save you the hassle. And the end product will likely be better than anything you or I could create (as well as conforming to recommendations like limits on the number of bullet points on a single page, etc.).

Third, generative AI can be used for assisting student learning (this is separate from students using it for completing assessment tasks). I can see two good use cases here:

  • As a personal tutor - Using a tool like coursable.io or yippity.ai, students can create personalised flash cards or quizzes for any content that they upload. A link from your learning management system could point students to these useful tools.
  • Creating your own finetuned AI - This is one use case where I am very excited. By uploading my lecture slides, tutorials, transcripts of my lecture recordings, and posts from my blog, I think I can probably finetune my own AI. What better way for students to learn than from a chatbot based on their lecturer? I will likely be playing with this option over the summer.

Fourth, generative AI can be used as a credible research assistant. However, like any human research assistant, you would be wise to avoid uncritically accepting anything that generative AI provides you with. Applying high standards of due diligence will help to minimise problems of hallucination, for example (for comparison, I'm not sure what the rate of hallucination is among human research assistants, but I'm pretty sure it is not zero). Aside from some of the use cases above, which could apply to research as well, I can see these options:

  • Literature review - It's far from perfect, but tools like Elicit or Consensus do a credible job of drafting literature reviews. The output would provide a good base to build on, or a good way to identify literature that you might otherwise miss.
  • Qualitative data analysis - Some of the most time-consuming research is qualitative. However, using a tool like atlas.ti, you can automate (or semi-automate) thematic analysis, narrative analysis, or discourse analysis (and probably other qualitative methods that I don't know the names of).
  • Sentiment analysis - Sentiment analysis is increasingly being used in quantitative and qualitative research, and generative AI can be used to easily derive measures of sentiment from textual data. I'm sure there are lots of other use cases for textual data analysis as well.
  • Basic statistics - I've seen examples of generative AI being used to generate basic statistical analyses. This is particularly useful if you are not quantitatively inclined, and yet want to present some statistics to provide some additional context or additional support for your research.
  • Coding - Writing computer code has never been easier. Particularly useful for users of one statistics package (like R or Stata or Python) wanting to write code to run in a different package.

Anyway, I'm sure that there are many other use cases as well, but those are the ones that I briefly touched on in the session today. I'll be talking about the negative case (the risks of generative AI for assessment, and how to make assessment more AI-robust) next week, and I'll post on that topic then. In the meantime, try out some of these tools, and enjoy the productivity and work quality benefits they provide.

Friday, 18 October 2024

This week in research #45

Here's what caught my eye in research over the past (fairly quiet) week:

  • Angrist et al. (open access) analyse the effectiveness and cost-effectiveness of education interventions from over 200 impact evaluations across 52 countries (using learning-adjusted years of schooling (LAYS) as a unified measure across all studies)
  • Grant and Üngör develop a theoretical model that shows that rising use of automation in production will cause a rise in the skill premium (increasing wages of high-skilled traditional workers and high-skilled workers with an AI background) and the AI skill premium (wages of high-skilled labour with an AI-based education relative to those with a traditional education background), which will likely increase income inequality

Wednesday, 16 October 2024

The economic welfare gains from the introduction of generic weight-loss drugs

The Financial Times reported this week (paywalled):

India’s powerful copycat pharmaceutical industry is set to roll out generic weight-loss drugs in the UK within weeks, with one leading producer forecasting a “huge price war” that could widen access to the popular medicines.

Bengaluru-based Biocon is the first company to win UK authorisation to offer a generic version of Novo Nordisk’s Saxenda weight treatment and is ready to launch sales by November.

Saxenda is an older drug of the same GLP-1 drug class as the Danish company’s popular Ozempic diabetes treatment and Wegovy weight-loss medication.

In an interview with the Financial Times, Biocon chief executive Siddharth Mittal declined to comment on his pricing strategy for generic Saxenda, but predicted his company’s sales of the drug would reach £18mn annually in the UK after the expiry of its patent protection there next month. Mittal said he expected Biocon’s generic version of Saxenda to be approved by the EU this year and in the US by 2025.

“When the generics come in there will be a huge price war,” he said. “There is a huge demand for these drugs at the right price.”

To see how the introduction of generic medicines affects the market, consider the diagram of the market for Saxenda below. When the active ingredient in Saxenda is protected by a patent, the market is effectively a natural monopoly. That means that the average cost curve (AC in the diagram) is downward sloping for all levels of output. This is because, as the quantity sold increases, the large up-front cost of developing Saxenda (see here for example) will be spread over more and more sales, lowering the cost on average. If Novo Nordisk (the producer of Saxenda) is maximising its profits, it will operate at the quantity where marginal revenue meets marginal cost, i.e. at Q_M, which it can obtain by setting a price of P_M (this is because at the price P_M, consumers will demand the profit-maximising quantity Q_M). Novo Nordisk makes a profit from Saxenda that is equal to the area P_MBKL. [*]

[Figure: the market for Saxenda, showing the demand curve, marginal revenue (MR), marginal cost (MC), and average cost (AC)]

Now consider what happens in this market when the patent expires and generic versions of Saxenda enter the market. We end up with a market that is more competitive, which would operate at the point where supply (MC) meets demand. This is at a price of P_C, and the quantity Q_C. Notice that the price of Saxenda falls dramatically - this is how the price war that Mittal mentions will play out.

Now consider what happens to the other areas of economic welfare. Before the patent expires, the consumer surplus is equal to the area GBP_M. After the patent expires, the consumer surplus increases to the area GEP_C. Consumers are made much better off by the patent expiry, because they can buy Saxenda at a much lower price, and they respond by buying much more of it. The producer surplus, which was P_MBHP_C, becomes zero. [**] The competition between the producers drives this producer surplus down. Total welfare (the sum of consumer and producer surplus) increases from GBHP_C to GEP_C. So, society is better off after the patent expiry.

Now, you could argue on this basis that having the patent expire earlier would be even better, given the economic welfare gain that would result. And while I have some sympathy for that view, governments should be a little cautious here. The large producer surplus from having the patent in place creates an incentive for the big pharmaceutical firms to develop these pharmaceuticals in the first place. So, an appropriate balance between patent protection and incentives for pharmaceutical development needs to be found. Nevertheless, it is clear that once patents expire, there is a large welfare gain to society at that point.
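For readers who prefer numbers to areas on a diagram, here is a minimal numerical sketch of the welfare comparison, assuming a linear demand curve, constant marginal cost, and a fixed up-front development cost. All of the figures are made up for illustration; they are not Saxenda data:

```python
# Illustrative welfare comparison for a patented vs. generic drug market.
# Hypothetical linear demand P = a - b*Q, constant marginal cost c, and a
# fixed development cost F (which is what makes average cost fall with output).

a, b = 100.0, 1.0   # demand intercept and slope
c = 20.0            # marginal cost of production
F = 1000.0          # up-front development cost

# Monopoly (patent in force): choose Q where MR = MC, i.e. a - 2bQ = c
Q_M = (a - c) / (2 * b)
P_M = a - b * Q_M
cs_monopoly = 0.5 * (a - P_M) * Q_M      # area GBP_M
ps_monopoly = (P_M - c) * Q_M            # area P_MBHP_C
profit_monopoly = ps_monopoly - F        # area P_MBKL (producer surplus minus fixed cost)

# Competition (patent expired): price driven down to marginal cost
Q_C = (a - c) / b
P_C = c
cs_competition = 0.5 * (a - P_C) * Q_C   # area GEP_C
ps_competition = 0.0

welfare_gain = (cs_competition + ps_competition) - (cs_monopoly + ps_monopoly)
print(f"Monopoly:    P={P_M:.0f}, Q={Q_M:.0f}, CS={cs_monopoly:.0f}, PS={ps_monopoly:.0f}, profit={profit_monopoly:.0f}")
print(f"Competition: P={P_C:.0f}, Q={Q_C:.0f}, CS={cs_competition:.0f}")
print(f"Welfare gain from patent expiry: {welfare_gain:.0f}")
```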

*****

[*] This is different from the producer surplus, which is the area P_MBHP_C. The difference between producer surplus and profits arises because of the fixed cost - in this case, the cost of development of Saxenda.

[**] If we treat this as continuing to be a natural monopoly after the patent expiry, the market makes a negative profit of -JFEP_C (because the price P_C is less than the average cost of production AC_C). However, you could argue that because the firms producing the generic version didn't face the up-front cost of development, this is no longer a natural monopoly once the patent has expired.

Tuesday, 15 October 2024

Nobel Prize for Daron Acemoglu, Simon Johnson, and James Robinson

Many economists had been picking this prize for a few years. Daron Acemoglu (MIT), Simon Johnson (MIT), and James Robinson (University of Chicago) were awarded the 2024 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (aka the Nobel Prize in Economics) yesterday, "for studies of how institutions are formed and affect prosperity".

While many, if not most, Nobel Prize winners in economics are largely unknown outside the discipline, having toiled away publishing papers only read by other economists, this award recognises three academics whose key contributions, on which the award is based, are contained in several best-selling books, including Why Nations Fail (by Acemoglu and Robinson, which I reviewed here), The Narrow Corridor (by Acemoglu and Robinson, which I reviewed here), and Power and Progress (by Acemoglu and Johnson, which I haven't read yet, but it is close to the top of my pile of books to-be-read). The Nobel Prize Committee's citation noted:

The laureates have shown that one explanation for differences in countries’ prosperity is the societal institutions that were introduced during colonisation. Inclusive institutions were often introduced in countries that were poor when they were colonised, over time resulting in a generally prosperous population. This is an important reason for why former colonies that were once rich are now poor, and vice versa.

Some countries become trapped in a situation with extractive institutions and low economic growth. The introduction of inclusive institutions would create long-term benefits for everyone, but extractive institutions provide short-term gains for the people in power. As long as the political system guarantees they will remain in control, no one will trust their promises of future economic reforms. According to the laureates, this is why no improvement occurs.

However, this inability to make credible promises of positive change can also explain why democratisation sometimes occurs. When there is a threat of revolution, the people in power face a dilemma. They would prefer to remain in power and try to placate the masses by promising economic reforms, but the population are unlikely to believe that they will not return to the old system as soon as the situation settles down. In the end, the only option may be to transfer power and establish democracy.

Notice that the citation really captures the theme running across their three books. Of course, there is an academic base that those books are founded on as well, which no doubt contributed to their prize. Alex Tabarrok at Marginal Revolution gives a good summary of their work, as does John Hawkins at The Conversation. As those two posts make clear, all three prize winners have made contributions beyond those in the citation.

However, Acemoglu is clearly a standout performer, and has been for a long time. He is one of the most cited economists in the world, with contributions across a number of areas. Tabarrok points to joint work between Acemoglu and Pascual Restrepo on technological change. I have on my list of interesting ideas to go back and look at a different paper by Acemoglu and Restrepo, on the impacts of population ageing on economic growth, but using different measures of population ageing (as in my article here). I also pointed to Acemoglu's views on the impact of generative AI on inequality yesterday, which he has also researched recently.

In my ECONS102 class, I've been including more of a focus on economic and political institutions over time, and this prize may prompt me to even include a bit more (or at least, to point more explicitly to the work of Acemoglu, Johnson, and Robinson). And hopefully it will encourage even more people to read their books.

Monday, 14 October 2024

Generative AI and expectations about inequality

In the last week of my ECONS102 class, we covered inequality. In discussing the structural causes of inequality, I go through a whole bunch of causes grouped together under a heading of 'structural changes in the labour market', one of which is skills-biased technological change. The basic idea is that over time, some technology (like computers) has made people in professional, managerial, technical, and creative occupations more productive or allowed them to reach larger audiences at low cost. However, other technology (like robots) has tended to replace routine jobs in sectors like manufacturing. This has increased the premium for skilled labour, increasing the ‘gap’ between skilled and unskilled wages.

In discussing this idea of skills-biased technological change this year, I mused about the potential impact of generative artificial intelligence, and whether skills-biased technological change was about to reverse, leading to job losses in professional, managerial, technical, and creative occupations, while jobs in activities that might broadly be grouped into manual and dexterous labour (like plumbers, electricians, or baristas) would remain. A change like that would likely reduce inequality (but not necessarily in a good way!).

The truth is, I don't think that economists have a good handle on what the impacts of generative AI will be on the labour market. On the one hand, you have some economists like Stanford's Nick Bloom, claiming that a lot of jobs (in particular tasks or occupations or sectors) are at risk. The loss of low-productivity, low-wage jobs that Bloom considers at risk, like call centre workers, will likely increase inequality further. On the other hand, you have other economists like MIT's Daron Acemoglu, claiming that the impact of generative AI on inequality will be small.

Given that economists can't agree on this, it is interesting to know what the general public thinks. That's the question that this post on Liberty Street Economics by Natalia Emanuel and Emma Harrington addresses. Using data from the February 2024 Survey of Consumer Expectations, they report that:

In general, a substantial share of respondents did not anticipate that genAI tools would affect wages: 47 percent expected no wage changes. These beliefs did not differ significantly based on prior exposure to genAI tools.

However, respondents believed that genAI tools would reduce the number of jobs available. Forty-three percent of survey respondents overall thought that the tools would diminish jobs. This expectation was slightly more pronounced among those who had used genAI tools, a statistically significant difference.

And specifically in terms of inequality:

We find that those who have used genAI tools tend to be more pessimistic about future inequality. Specifically, we asked people whether they thought there would be more, less, or about the same amount of inequality as there is today for the next generation... while 33 percent of those who have not used genAI tools think there will be more inequality in the next generation, 53 percent of those who have used genAI tools think there will be more inequality. This gap persists and is statistically significant, even after controlling for other observable traits. 

So, a large minority of the general public seems to be concerned about generative AI's impact on inequality, and that concern is greater among those with experience of generative AI (where a small majority believe inequality will increase). Now, it could be that those with greater experience are better able to accurately assess the risks to their own (and others') jobs from generative AI. Or maybe people who use generative AI are simply more likely to have read the AI doomers' predictions of an AI apocalypse (or equally, they could be more likely to read the bullish views of AI proponents). The general public may not know the term 'skills-biased technological change', but they may intuitively understand the potential risks. The real question, which we still cannot answer, is whether those risks are real or not.

Friday, 11 October 2024

This week in research #44

Here's what caught my eye in research over the past week:

  • Hirschberg and Lye (open access) provide suggested guidelines for economists for writing computer code
  • Allen (open access) investigates the causes and the consequences of the emergence of agriculture in the Middle East (and if you're into big questions, those are really big questions)
  • Ferguson et al. (with ungated version here) provide a new meta-analysis of 46 studies on the correlation between time spent on social media and adolescent mental health, finding that there is little support for claims of harmful effects

Thursday, 10 October 2024

How income inequality in New Zealand compares with other OECD countries

Colin Campbell-Hunt (University of Otago) wrote an interesting article on The Conversation last week on inequality in New Zealand. It was well-timed, given that I've been covering poverty and inequality with my ECONS102 class this week. Campbell-Hunt compares income inequality in New Zealand with inequality in other OECD countries.

However, what Campbell-Hunt does here is interesting. First, he looks at income inequality (measured using the Gini coefficient) before accounting for taxes and transfers. Then, he looks at income inequality after accounting for taxes and transfers. The difference in ranking gives us a sense of how redistributive the tax and transfer system is, relative to other countries. Campbell-Hunt is most interested in the difference between those two measures:

The Gini before taxes and transfers is a measure of the inequality produced by the structures of a country’s economy: the way value chains operate, the markets for products and services, the scarcity of certain skills, rates of unionisation, and so on.

This gives us a measure of structural inequalities in a country. Governments, however, use taxes and transfers to shift income between households. They take taxes from some and boost incomes of the more disadvantaged.

Ginis of incomes after taxes and transfers give us a measure of how well members of a society can support similar standards of living... These give us a measure of social inequalities.

So, how does New Zealand compare? Before taxes and transfers, New Zealand is quite middling, ranked 16th-lowest for inequality (Iceland is first, Japan is 37th and last). After taxes and transfers, New Zealand's ranking looks far worse, being ranked 24th (Slovakia is first, Costa Rica is 37th). Campbell-Hunt interprets this as:

As we can see, New Zealand’s structural inequality, shaped by the economic reforms of the mid-1980s, is middling by comparison to other OECD countries.

But New Zealand’s social inequality lies near the bottom third of OECD measures. A halving of top income tax rates in the mid-1980s and the rollback of the welfare state in the 1990s (after then finance minister Ruth Richardson’s 1991 “mother of all budgets”) significantly contributed to this.

Now, I'm sure we can argue about why New Zealand's tax and transfer system does a poor job (compared with other OECD countries) of reducing income inequality. In my view, laying the blame on governments in the 1980s and 1990s (as Campbell-Hunt does) absolves thirty years of subsequent governments of their role in perpetuating the inequality. Regardless of which government/s may be to blame here, we find ourselves with a tax and transfer system that is nowhere near as redistributive as those of the countries that we might compare ourselves to. Zeroing in just on the effect of taxes and transfers on inequality, Campbell-Hunt's data shows that New Zealand's system is the 11th-least redistributive (Finland's is most redistributive, Mexico's is least), behind the US, the UK, and Canada, but slightly ahead of Australia.
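
To make the before-and-after comparison concrete, here is a minimal sketch of computing the Gini coefficient before and after taxes and transfers, and the redistribution effect as the difference between the two. The incomes are made up, and the flat-tax-plus-equal-transfer system is a deliberately crude stand-in, not Campbell-Hunt's or the OECD's actual method.

# Sketch only: made-up incomes and a stylised tax-and-transfer system
import numpy as np

def gini(incomes):
    """Gini coefficient, computed from sorted incomes.
    Equivalent to (sum of all pairwise |x_i - x_j|) / (2 * n^2 * mean)."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Hypothetical household incomes before taxes and transfers (market income)
market = np.array([5_000, 20_000, 40_000, 70_000, 150_000], dtype=float)

# A stylised system: 30% flat tax, with the revenue returned as equal transfers
taxes = 0.30 * market
transfers = np.full_like(market, taxes.sum() / len(market))
disposable = market - taxes + transfers

print("Gini before taxes and transfers:", round(gini(market), 3))
print("Gini after taxes and transfers: ", round(gini(disposable), 3))
print("Redistribution effect (difference):", round(gini(market) - gini(disposable), 3))

Ranking countries on the first number gives the 'structural inequality' comparison, ranking them on the second gives the 'social inequality' comparison, and the third number is the kind of redistribution measure being compared across countries here.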

Given that our closest peer countries have more redistributive tax and transfer systems than New Zealand does, that suggests that we can do more to reduce inequality. As Campbell-Hunt notes:

New Zealand can aspire to goals for social equality matching those in the upper half of OECD countries. Beyond revisions to taxation and transfers, inequalities in health and education would also need to come down to reduce the social and economic costs of poverty and disadvantage that should bring shame to us all.

Campbell-Hunt's data doesn't have anything to say about inequality in health and education, but certainly a more generous and less restrictive benefit system, and more progressivity in taxation, would go some way towards ensuring that New Zealand's inequality looked more like that of the countries that we compare ourselves to.

Sunday, 6 October 2024

Book review: How to Be a Successful Economist

There seems to have been a succession of books on how to be an economist in recent years. First, there was Michael Weisbach's 2021 book The Economist's Craft (which I reviewed here). And then Marc Bellemare's 2022 book Doing Economics (which I reviewed here). Also in 2022, the book How to Be a Successful Economist was published, written by Vicky Pryce, Andy Ross, Alvin Birdi, and Ian Harwood.

Unlike those two earlier books, How to Be a Successful Economist isn't pitched at explaining how to succeed as an academic economist. Instead, it aims to explain how to craft a career in economics outside of academia. That aim is much more difficult, given the variety of career paths that economists may choose. However, the book takes a unique approach. Rather than relying on the authors' take alone, the book is built on a foundation of interviews undertaken with nearly 30 economics graduates, ranging from new graduates who have only been in the workforce for a few years, to top businesspeople and government economists, to luminaries such as Professor Lord Nicholas Stern. Quotes from those interviews suffuse the text, giving it both depth and credibility. Even better, the interviews themselves are available online at https://learninglink.oup.com/access/pryce-ross1e (click on the 'student resources' link from there - unfortunately, the videos and other resources are only available to those who have purchased the book).

The book's focus on becoming a private sector or public sector economist, rather than an academic economist, is refreshing and welcome. Obviously, the skillset required of economists outside academia is quite different from the skillset required within it. In that respect, the book summarised the results of a survey of employers undertaken by the Economics Network in the UK:

The most highly valued of the skills reported by employers who took part in the 2019 Economics Network survey were, in order of importance: the ability to apply theories to the real world; communication skills; and data analysis. Also highly regarded were collaborative skills and more general transferable skills such as time keeping and critical thinking.

The book provides a lot of excellent advice, as well as pointing to some excellent online resources. Unfortunately, as might be the case for many books that point to resources online, some of those resources are no longer accessible. For example, a website www.communicatingeconomics.com sounded excellent to me, but the website appears to no longer exist. Given that the book is only two years old, this was extra disappointing. Nevertheless, other resources are available (albeit at different web addresses than those in the book), such as the ONS Guide to data visualisation (now available here) and the Government Statistical Service's guide to data visualisation (now available here). I also especially liked this advice to economics students, about mathematics and statistics:

Although mathematical skills always complement economics, the level of mathematics you will need depends on your economist career path, but you will need good data skills wherever you go as an economist.

There is also good advice on what an economist does, in terms of providing advice to others, especially that:

...to be useful to decision makers you must not only be interesting but also provide agency... "Agency" means helping others to prepare for and/or to decide what to do next.

Good practical advice for graduating students is very welcome. Where the book falls a bit short is in a section advocating for eclecticism in economics (as part of a broader and more diverse approach to the discipline), and a section explaining the basics, and the importance, of cost-benefit analysis. These two sections felt a bit 'forced' in this book, and didn't really gel with the rest of the narrative. While both things are no doubt important, I'm not convinced that they belong in a book giving advice to new economists, since more detailed treatments are available elsewhere. Similarly, while the book also provides some good advice on how to approach the job application and interviewing process, I feel that there are better, more general guides available. There simply isn't enough that is 'unique' about applying for jobs within economics to justify a section devoted to it. Nevertheless, some students might find it helpful.

For an international reader, the book also suffers from a strong focus on the UK. Many of the specific examples will not be helpful for students outside of the UK context. The authors are attuned to this though, and note in a footnote that future editions of the book may broaden the scope.

I enjoyed reading the book for the range of resources that were provided. I liked the initial sections attempting to sell economics to a casual student, and I will definitely use some of their approach in my Open Day presentation to students next year. I also liked Figure 10.2 in the concluding section, which summarised "mainstream economic reasons for intervention by the state", which read very much like the syllabus of my ECONS102 class! However, while the resources that the book highlights are useful, and there is lots of good advice, I'm not sure that I would recommend the whole book to students to read. Instead, reading some sections, and accessing the other resources directly, would probably serve them better.

Saturday, 5 October 2024

This couldn't backfire, could it?... Dead possums edition

The New Zealand Herald reported earlier this week:

A conservationist keen to do his bit for the country’s Predator Free 2050 goal is urging gardeners to purchase dead possums from him instead of buying blood and bone for fertiliser this spring.

Wayne Parsonson lives beside the Maungataniwha Forest and is a member of its guardianship project group Honeymoon Valley Landcare. However, he says the possum initiative is his independent venture, which he hopes will inspire others nationwide to follow suit...

Parsonson had caught 1250 possums in the past year.

The fur was plucked and sold to be blended with merino wool for warm, natural clothing.

The ungutted carcasses were rich with nutrients to enliven soil ecology, he said.

For fertilising purposes, he was offering 11 frozen possum carcasses for $35.

Sales were “ticking over” nicely, Parsonson said.

First, good on Wayne Parsonson for doing something about pest possums. And I hope he continues in his efforts. The only good possum is a dead one in my view, except in Australia where, for some reason, they are beloved by many locals. However, I would not like to see a thriving market in possum carcasses.

To see why, we first need to talk a little bit about cobras. As I wrote back in 2015:

One of the most famous (possibly apocryphal) stories of unintended consequences took place in British colonial India. The government was concerned about the number of snakes running wild (er... slithering wild) in the streets of Delhi. So, they struck on a plan to rid the city of snakes. By paying a bounty for every cobra killed, the ordinary people would kill the cobras and the rampant snakes would be less of a problem. And so it proved. Except, some enterprising locals realised that it was pretty dangerous to catch and kill wild cobras, and a lot safer and more profitable to simply breed their own cobras and kill their more docile ones to claim the bounty. Naturally, the government eventually became aware of this practice, and stopped paying the bounty. The local cobra breeders, now without a reason to keep their cobras, released them. Which made the problem of wild cobras even worse.

Now, think about the case of possums. The government isn't providing a bounty for killing possums (which I've already written about). However, if a thriving market in possum carcasses develops, then possum hunters will have a strong incentive to kill possums for profit. That sounds like a great thing. The problem is that killing wild possums takes a lot of effort. It would be much less effort for 'hunters' to raise their own possums, and then kill them in cages. So, some entrepreneurial folks will effectively start 'farming' possums. It is entirely possible that there would be more possums overall as a result.
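
To see how the incentives could tilt, here is a back-of-the-envelope sketch. The only number taken from the article is the $35 for 11 carcasses; the effort and time costs are entirely hypothetical, and are there just to show the mechanism.

# Hypothetical numbers (except the $35-for-11 price) illustrating the incentive problem
price_per_carcass = 35 / 11          # roughly $3.18 per carcass, from the article

hours_to_trap_one_wild = 1.5         # hypothetical effort to trap a wild possum
hours_to_raise_one_farmed = 0.5      # hypothetical effort to raise a caged possum
value_of_time_per_hour = 2.0         # hypothetical opportunity cost of time

profit_wild = price_per_carcass - hours_to_trap_one_wild * value_of_time_per_hour
profit_farmed = price_per_carcass - hours_to_raise_one_farmed * value_of_time_per_hour

print(f"Profit per wild-caught possum: ${profit_wild:.2f}")
print(f"Profit per farmed possum:      ${profit_farmed:.2f}")
# If farming is more profitable than trapping, a bigger carcass market could
# mean more possums being bred, not fewer possums in the bush.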

So, while I admire Parsonson's backyard efforts in killing possums, and I'm happy for him to profit a little from the activity, I wouldn't like to see this market grow too much.

Friday, 4 October 2024

This week in research #43

Here's what caught my eye in research over the past week:

  • Müller and Watson (with ungated earlier version here) investigate the consequences of strong spatial dependence in economic variables, applying what are effectively time series methods to accounting for spatial autocorrelation (quite a technical paper, but of interest to those doing spatial econometrics)
  • Joshanloo (with ungated version here) shows that zodiac birth signs are unrelated to subjective wellbeing (a result that shouldn't need to be investigated, surely?)
  • Kampanelis and Elizalde (open access) find that the number of lynchings between 1882 and 1929 (at the county level) is associated with intergenerational upward economic mobility among African American men (measured in the late 20th and early 21st Centuries)
  • Lee, Liu, and Yu (with ungated earlier version here) find that Facebook usage significantly increased the frequency with which users experienced negative emotions, including envy, feelings of inferiority, and depression, using data from Taiwan in 2017

Wednesday, 2 October 2024

The Foodstuffs merger is rejected, so the wholesale market remains an oligopsony

Yesterday we learned the Commerce Commission's decision on the merger application by Foodstuffs North Island and Foodstuffs South Island (which I posted about last month). As NBR reported yesterday (paywalled, but you can read this briefer New Zealand Herald story instead, or the Commerce Commission's decision here):

Foodstuffs wanted to see the co-ops merged within and under the management of a single national grocery entity, which it claimed would be better able to compete with the national Woolworths NZ chain.

It argued the proposed merger would lead to cost reductions (including overhead and product costs), efficiency gains, increased agility and innovation, and a more cohesive national offering, which would ultimately deliver better value for retail consumers at the checkout.

But ComCom chair John Small said today the proposed merger would reduce the number of major buyers of grocery products in New Zealand from three to two, reducing the number of buyers to which many suppliers can supply their products, and creating the largest acquirer of grocery products in the country.

“This would result in the merged entity having greater buyer power than Foodstuffs North Island and Foodstuffs South Island each do individually, which would harm the competitive process, and we consider is likely to substantially lessen competition in many acquisition markets.

“As a consequence of the substantial lessening of competition and the associated increase in buyer power, the merged entity would likely be able to extract lower prices from suppliers and/or otherwise adversely impact suppliers in the relevant markets.

“We are also concerned that the consolidation with the proposed merger would lead to reduced investment and innovation by suppliers, meaning reduced consumer choice and/or quality of grocery products in New Zealand for consumers.”

As I noted in my previous post, it is interesting that the Commerce Commission does appear to be considering competition as it relates to suppliers, and not just consumers. The problem is that, when there are fewer supermarkets, there is less competition among the buyers of suppliers' products. And the supermarkets could use their market power as a buyer to drive down the prices that they pay to suppliers. The Commerce Commission seems to consider there to be a real risk that the merger would lead to a substantial lessening of competition among the supermarkets in buying from their suppliers.

One thing that has disappointed me about the coverage is the absence of a rarely used word in economics: oligopsony. What's an oligopsony?

When a market has a single seller, it is a monopoly. When a market has a single buyer, it is a monopsony. When a market has just two sellers, it is a duopoly. When a market has just two buyers, it is a duopsony (which is essentially what has been avoided by the merger being declined). When a market has a few sellers, it is an oligopoly. The retail market for supermarket products is an oligopoly, since consumers can only buy from one of a few sellers. When a market has just a few buyers, it is an oligopsony. With only three large supermarket chains (Foodstuffs North Island, Foodstuffs South Island, and Woolworths), New Zealand has an oligopsony in the wholesale market for supermarket products, since suppliers can only sell to one of a few buyers. The Commerce Commission's decision ensures that the wholesale market remains an oligopsony, rather than becoming a duopsony.
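
For what it's worth, the taxonomy can be summarised in a few lines of code. The thresholds for "a few" (here, 3 to 9) are my own rough choice; the terms themselves map directly to the definitions above.

# A tiny sketch of the taxonomy: classify a market by its number of sellers and buyers
def market_structure(n_sellers, n_buyers):
    def label(n, one, two, few):
        if n == 1:
            return one
        if n == 2:
            return two
        if n <= 9:          # rough, arbitrary cut-off for "a few"
            return few
        return "competitive (many)"
    seller_side = label(n_sellers, "monopoly", "duopoly", "oligopoly")
    buyer_side = label(n_buyers, "monopsony", "duopsony", "oligopsony")
    return seller_side, buyer_side

# NZ wholesale grocery market: many suppliers selling to three big chains
print(market_structure(n_sellers=1000, n_buyers=3))   # ('competitive (many)', 'oligopsony')
# If the merger had gone ahead, three buyers would have become two
print(market_structure(n_sellers=1000, n_buyers=2))   # ('competitive (many)', 'duopsony')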

Tuesday, 1 October 2024

Noah Smith on why imports do not subtract from GDP

I am not a macroeconomist. Against my protestations, I have taught macroeconomics in the past, but now I exclusively teach microeconomics. There are certain aspects of macroeconomics that I thought I knew well, like how GDP is calculated. There is a simple method of calculating GDP that we teach in first-year economics, which we call the expenditure method: Y = C + I + G + (X - M). Y is GDP, C is consumption spending, I is investment spending, G is government spending, X is exports, and M is imports. It all seems rather straightforward. However, I genuinely learned something new and important this week about that formula.

Noah Smith has a great post explaining why imports do not subtract from GDP. Check the formula above, and then read that sentence again. Imports do not subtract from GDP. But it's right there in the formula! The thing I learned from Noah's post is that imports are included in C, I, and G. And so, the subtraction of M in the expenditure method formula prevents us from counting imports in measured GDP, by zeroing them out. Imports are not subtracted from GDP. Because they are both added to GDP (through C, I, and G), then subtracted (through -M), the net effect of imports on GDP is zero.
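
A toy example makes the netting-out clear. The numbers here are made up, and the split of C, I, and G into domestic and imported components is just for illustration.

# Toy numbers showing that imports enter C, I, and G, and -M nets them back out
domestic_consumption = 70    # consumer goods produced at home
imported_consumption = 20    # consumer goods imported
domestic_investment = 25     # capital goods produced at home
imported_investment = 10     # capital goods imported
domestic_government = 15     # government purchases produced at home
imported_government = 5      # government purchases imported
exports = 12

C = domestic_consumption + imported_consumption   # 90
I = domestic_investment + imported_investment     # 35
G = domestic_government + imported_government     # 20
M = imported_consumption + imported_investment + imported_government  # 35
X = exports

gdp_expenditure = C + I + G + (X - M)
gdp_production = (domestic_consumption + domestic_investment
                  + domestic_government + exports)

print(gdp_expenditure)  # 122
print(gdp_production)   # 122 -- identical: imports are added, then subtracted, netting to zero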

As further explanation, it is worth quoting Smith's post at length (I have corrected a small typo in one of the formulas, where 'Capital' should have read 'Consumer'):

Let’s talk about what GDP is. GDP is the total value of everything produced in a country:

GDP = all the stuff we produce

Imports aren’t produced in the country, so they just don’t count in the formula above. And they aren’t alone. There are plenty of other important things in the Universe that don’t get counted in GDP, simply because they have nothing to do with domestic economic production. The number of asteroid impacts in the Andromeda galaxy is probably important to someone, but it doesn’t count in U.S. GDP. The population of the beluga sturgeon in Kazakhstan is probably important to someone, but it doesn’t count in U.S. GDP. Imports don’t count in U.S. GDP because, like asteroid impacts in the Andromeda Galaxy and the population of beluga sturgeon in Kazakhstan, they don’t involve domestic economic production in the United States.

In fact we can divide GDP up a different way from the Econ 101 breakdown. Let’s divide it up according to all the categories of people who might ultimately use the stuff... we produce in the U.S.:

GDP = Capital goods we produce for companies + Consumer goods we produce for consumers + Stuff we produce for the government + Stuff we produce for foreigners

Again, imports are nowhere to be seen. But exports are in here! Exports are just all the stuff we produce for foreigners. So the formula is:

GDP = Capital goods we produce for companies + Consumer goods we produce for consumers + Stuff we produce for the government + Exports

This is a perfectly good formula for GDP. But instead, here’s what economists do. They add imports to the first three categories, and then subtract them again at the end:

GDP =

(Capital goods we produce for companies + Capital goods we import for companies)

+ (Consumer goods we produce for consumers + Consumer goods we import for consumers)

+ (Stuff we produce for the government + Stuff we import for the government)

+ Exports - Capital goods we import for companies - Consumer goods we import for consumers - Stuff we import for the government

This type of equation adds in three different types of imports, then subtracts them all again at the end. It’s mathematically equivalent to the formula above it, because if you add imports and then subtract them out again, you’ve just added 0. And adding 0 does nothing. Imports still don’t count in GDP in this equation.

OK, now let’s realize what the terms in the equation mean:

(Capital goods we produce for companies + Capital goods we import for companies) is just Investment.

(Consumer goods we produce for consumers + Consumer goods we import for consumers) is just Consumption.

(Stuff we produce for the government + Stuff we import for the government) is just Government purchases.

Exports - Stuff we import for companies - Stuff we import for consumers - Stuff we import for the government is just Exports - Imports.

So the equation is now:

GDP = Consumption + Investment + Government purchases + Exports - Imports

This is just our good old Econ 101 equation. It looks like imports are being subtracted from GDP, but now you (hopefully) realize that this is because imports are also being added to consumption, investment, and government purchases! Consumption, Investment, and Government purchases include imports, so we subtract out imports at the end so that the total effect of imports on GDP is zero.

Smith uses that explanation to show why a focus on reducing imports will not increase measured GDP (at least, not directly). That is an important argument in an era of increasing trade protection, aimed at reducing imports for the benefit of the domestic economy. That trade protection is not likely to work - at least, not through the simple mechanism of reducing something that is subtracted from GDP. Because, as Smith tells us, imports are not subtracted from GDP.
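
And to check that policy point numerically, here is the same toy arithmetic wrapped in a small hypothetical helper function: cutting imports, while holding domestic production and exports fixed, leaves measured GDP exactly where it was.

# Toy numbers again: reducing imports alone does not raise measured GDP
def gdp(domestic_c, domestic_i, domestic_g, exports, imports_c, imports_i, imports_g):
    C = domestic_c + imports_c
    I = domestic_i + imports_i
    G = domestic_g + imports_g
    M = imports_c + imports_i + imports_g
    return C + I + G + (exports - M)

before = gdp(70, 25, 15, 12, imports_c=20, imports_i=10, imports_g=5)
# Halve imports of consumer goods, hold domestic production and exports fixed
after = gdp(70, 25, 15, 12, imports_c=10, imports_i=10, imports_g=5)

print(before, after)  # 122 122 -- no direct boost to GDP from cutting imports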

[HT: Marginal Revolution]