Friday, 31 March 2023

The employment effects of the minimum wage, from American Community Survey data

The minimum wage in New Zealand goes up tomorrow, from $21.20 to $22.70 per hour. At times like this, it is worth considering what impacts increasing the minimum wage will have. In terms of the effects of increasing the minimum wage on employment, the literature is not settled. It does appear that the effect depends a lot on context (see the links at the end of this post for more, as well as how the minimum wage affects a lot of things other than employment). However, my reading of the literature does suggest that increasing the minimum wage reduces employment, and that reduction in employment is concentrated among young and less educated workers (see my most recent post on that point here).

That most recent post drew on three articles, one of which was by Jeffrey Clemens (University of California, San Diego) and Michael Strain (American Enterprise Institute). I've now gone back and read some of their earlier research (which had been sitting in my to-be-read pile), published in 2018 in the journal Contemporary Economic Policy (ungated version here). Like their more recent research, this paper uses data from the American Community Survey, and compares four groups of states:

  1. States that increased their minimum wage by more than $1 between January 2013 and January 2015;
  2. States that increased their minimum wage, but by less than $1, between January 2013 and January 2015;
  3. States that indexed their minimum wage to inflation between January 2013 and January 2015; and
  4. States that did none of those things (as a control group).
Those are quite similar to the 'policy groups' of states that Clemens and Strain looked at in their more recent research. They then apply a difference-in-differences strategy, comparing how employment rates (separately for people aged 16-25 years with less than a high school education, people aged 16-21 years, and teenagers) changed from before to after 2013, across states in each group. Their dataset covers the period from 2011 to 2015. Importantly, they control for a number of economic variables that may affect employment, including median house prices, state income per capita, and employment among higher-skilled population groups.
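To make the empirical strategy concrete, a stylised difference-in-differences specification of this kind (my own sketch, not necessarily the exact equation that Clemens and Strain estimate) would look something like:

$$y_{st} = \alpha + \sum_{g} \beta_g \,(\text{Group}_{g,s} \times \text{Post}_t) + \gamma' X_{st} + \delta_s + \tau_t + \varepsilon_{st}$$

where $y_{st}$ is the employment rate of the group of interest (teenagers, say) in state $s$ and year $t$, $\text{Group}_{g,s}$ indicates which policy group state $s$ belongs to, $\text{Post}_t$ equals one from 2013 onwards, $X_{st}$ collects the economic controls (median house prices, income per capita, and higher-skilled employment), and $\delta_s$ and $\tau_t$ are state and year fixed effects. The coefficients $\beta_g$ are the difference-in-differences estimates for each policy group, relative to the control group of states. Clemens and Strain find that: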

...minimum wage increases exceeding $1 reduced employment by just over 1 percentage point among groups including teenagers, individuals ages 16-21, and individuals ages 16-25 with less than a completed high school education. By contrast, smaller minimum wage increases (including those linked to inflation indexation provisions) appear to have had much smaller (and possibly positive) effects on employment.

In other words, the results are consistent with large minimum wage changes leading to disemployment. However, there is reason for caution in interpreting these results, because they didn't test for differences in pre-trends between the policy groups of states. Clemens and Strain do note that:

...economic conditions were moderately stronger in states that enacted minimum wage increases relative to other states. Prime age employment, for example, grew by an average of 2.3 percentage points in states that either enacted minimum-wage changes exceeding $1 or that index their minimum wage rates for inflation. Across states that enacted no minimum wage increases, prime age employment increased by a more modest average of 1.6 percentage points.

However, that difference is evaluated across the entire period of the data. Looking at the time from 2011 to 2013 would be more helpful. Eyeballing Figure 3 from the paper does make it seem like the pre-trend for large statutory increases in the minimum wage (more than $1) is flatter than for the other groups (compare the bottom line with the top three lines, for the period from 2011 to 2013):


So maybe we should expect the group of states with large minimum wage increases not to experience as big an increase in employment as other states after 2013, because they weren't experiencing as big an increase in employment as other states before 2013. Clemens and Strain do report some additional analyses in the supplementary materials to the paper that they claim account for pre-trends, but I don't think that the approach that they report there really does account for pre-trends. That should temper any enthusiasm we have for these particular results, and given that they have been supplanted by more recent results from the same authors using similar data, we should discount them somewhat. Nevertheless, they do make a small contribution to the side of the evidence that supports the disemployment effects of the minimum wage.

Read more:

Wednesday, 29 March 2023

Why Teslas are like paperback books

Tesla has lowered the price of its cars. This New Zealand Herald article from earlier this month asks whether it is because of flagging demand or a tactic to boost sales:

In explaining why Tesla Inc keeps cutting prices on its electric vehicles, the auto industry is pretty much divided into two camps.

On one side are analysts who see an aggressive move by the leading manufacturer of EVs to gobble up sales and market share from its competitors just as they’re beginning to bring more vehicles to market.

On the other side are critics who argue that with demand for Tesla’s older vehicles beginning to wane, the company feels forced to slash prices to attract buyers.

Over the weekend, Tesla cut the prices of its two costliest vehicles by between US$5000 and $10,000, or 4.3 per cent to just over 9 per cent. A Model S two-motor sedan now starts at US$89,990, with the Plaid “performance” version beginning at US$109,990. A Model X SUV dual motor starts at $99,990, the performance version at $109,990. 

Chances are, it is neither of those explanations (but, if anything, it is closer to the second). To see why, we need to recognise that this is a form of price discrimination. Price discrimination occurs when a firm charges different prices to different groups of consumers for the same product or service, and where the difference in price does not arise from a difference in costs to the firm.

In order for a firm to practice price discrimination, it needs to meet three conditions:

  1. Groups of consumers that have different price elasticities of demand (heterogeneous demand);
  2. Different groups of customers can be identified; and
  3. No transfers across submarkets.

Let's think about Tesla's situation. Do they have different groups of consumers with different price elasticities of demand? I would say yes. When Teslas were first released, many consumers were excited and anxious to buy a shiny new-release Tesla. The waitlists were long, but those consumers didn't care. For those consumers who wanted a newly released Tesla, there were few substitutes. They didn't want just any old car. They wanted a shiny new-release Tesla. The consumers on the waitlist for a new-release Tesla had demand that was relatively inelastic, meaning that they were not very responsive to a change in price (when a good has few substitutes, demand for it is relatively more inelastic). With relatively inelastic demand, Tesla could charge a high price, and it wouldn't cause those consumers to walk away. Prices of new-release Teslas were high.

Fast-forward to now, and those first consumers' demand for a Tesla has been satisfied. The remaining customers aren't nearly as keen on a Tesla. For current consumers, the choice isn't a Tesla or nothing at all. There are many other EVs that would be as good as, or nearly as good as, having a Tesla. The current consumers' demand is more elastic, meaning that they are more responsive to a change in price (when a good has more substitutes, demand for it is relatively more elastic). Tesla cannot charge as high a price without those consumers going somewhere else for a car. Prices of Teslas should fall, which is exactly what we are seeing.
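To make the link between elasticity and price concrete, here is a minimal sketch in Python of the standard constant-elasticity markup rule (price equals marginal cost multiplied by ε/(ε - 1), where ε is the absolute value of the price elasticity of demand). The marginal cost and elasticity values are made-up numbers, purely for illustration:

```python
# A minimal sketch of the constant-elasticity markup rule:
# P = MC * e / (e - 1), where e is the absolute value of the price
# elasticity of demand (and must exceed 1 for the rule to apply).
# The marginal cost and elasticities below are made-up numbers.

def markup_price(marginal_cost: float, elasticity: float) -> float:
    """Profit-maximising price for a seller facing constant-elasticity demand."""
    if elasticity <= 1:
        raise ValueError("The markup rule requires elasticity greater than 1")
    return marginal_cost * elasticity / (elasticity - 1)

marginal_cost = 60_000  # hypothetical cost of producing one car

# Early adopters on the waitlist: few substitutes, relatively inelastic demand
early_price = markup_price(marginal_cost, elasticity=1.5)

# Later buyers: many competing EVs, relatively elastic demand
later_price = markup_price(marginal_cost, elasticity=3.0)

print(f"Price for relatively inelastic (waitlist) buyers: ${early_price:,.0f}")  # $180,000
print(f"Price for relatively elastic (later) buyers:      ${later_price:,.0f}")  # $90,000
```

The exact numbers don't matter; what matters is that the profit-maximising markup shrinks as demand becomes more elastic, which is exactly the direction of Tesla's price changes.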

What about the other two conditions? Tesla can tell which consumers are in which group. The consumers who sign up to the waitlist before a new car is released are clearly signalling to Tesla that they have relatively inelastic demand, and are willing to pay a higher price. Consumers who are willing to wait until later have more elastic demand, and are willing to pay a lower price.

What about no transfers across submarkets? Clearly, a Tesla owner can sell their Tesla second-hand to another buyer. However, the purpose of this condition is so that consumers who buy at a low price don't turn around and sell to consumers who would otherwise buy at a high price. That isn't possible in this case, since Tesla sells first at the high price, to the consumers with more inelastic demand. Those consumers can resell their Tesla, but the remaining consumers are only willing to pay a lower price.

Tesla isn't alone in adopting this strategy for price discrimination. There are lots of similar examples. New season fashion clothing is sold at a high price, to consumers with relatively inelastic demand, and then is sold at a lower price at the end of the season to consumers with relatively elastic demand. Books are initially released in hardcover, and sold at a high price to consumers with relatively inelastic demand, before being released as paperbacks and sold at a low price to consumers with relatively elastic demand. When a new musical is released, tickets for the musical are initially sold at a high price, to consumers with relatively inelastic demand, and then when the musical is older, tickets are sold at a lower price to consumers with relatively elastic demand. And so on. All of these are examples where the first consumers believe that there are few substitutes for the good or service, so their demand is more inelastic, while later consumers believe that there are more substitutes, so their demand is more elastic.

Price discrimination is everywhere, once you know what to look for. Tesla is selling its cars in much the same way that publishers sell books. And in the same way as for new release books, when Tesla releases a new model car, it can restart the process for that new model from a high initial price.

Tuesday, 28 March 2023

A simple production model of my breakfast

This morning, I had a vegetarian quiche for breakfast. It was delicious, but also surprising. The filling of the quiche was mostly mushroom and spinach, with very little egg. I love mushrooms. However, the defining feature of a quiche is egg, and normally the egg to mushroom ratio in a quiche would be much more in favour of egg, rather than mushroom. What led to this delicious and unexpected treat? It turns out that a very simple model of production, as covered in the first week of my ECONS101 class, can help explain.

That model is shown in the diagram below. It shows different combinations of egg and mushroom that can be used to make a vegetarian quiche, with mushroom measured on the y-axis and egg measured on the x-axis. Let's say that there are only two recipe options for the bakery, A and B [*]. Recipe A uses a lot of egg (EA) and not much mushroom (MA), while Recipe B uses not much egg (EB) and a lot of mushroom (MB).

Which recipe should the bakery use? If the bakery is trying to maximise profits, then given that both recipes produce the same quantity (one quiche), the bakery should choose the recipe that has the lowest cost. [**] We can represent the bakery's costs with iso-cost lines, which are lines that represent all the combinations of mushroom and egg that have the same total cost. The iso-cost line that is closest to the origin is the iso-cost line that has the lowest total cost. The slope of the iso-cost line is the relative price between mushroom and egg - it is equal to -Pe/Pm (where Pe is the price of eggs, and Pm is the price of mushrooms).

Now think about that relative price. Eggs are currently priced very high (see here and here), so the relative price (-Pe/Pm) is a large number (in absolute terms). That makes the iso-cost lines relatively steep, as shown in the diagram below by ICA and ICB. In this case, the iso-cost line that passes through B (ICB) is the lowest iso-cost line. It is closer to the origin than the iso-cost line that passes through A (ICA). So, Recipe B is the lowest cost recipe, and the bakery should choose Recipe B.

However, eggs are not usually as expensive as they are right now. In 'normal' times, the relative price (-Pe/Pm) would be a smaller number (in absolute terms), because the price of eggs would be lower. That would mean that the iso-cost lines would be relatively flatter, as shown in the diagram above by ICA' and ICB'. In that case, the iso-cost line that passes through A (ICA') is the lowest iso-cost line. It is closer to the origin than the iso-cost line that passes through B (ICB'). So, Recipe A is normally the lowest cost recipe, and the bakery should normally choose Recipe A.
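A minimal sketch of the bakery's choice in Python (with made-up ingredient quantities and prices, purely for illustration) shows how the least-cost recipe flips when the relative price of eggs changes:

```python
# Ingredient quantities for each recipe (made-up numbers for illustration).
# Recipe A is egg-heavy; Recipe B is mushroom-heavy.
recipes = {
    "A (egg-heavy)":      {"eggs": 6, "mushrooms_g": 50},
    "B (mushroom-heavy)": {"eggs": 2, "mushrooms_g": 250},
}

def cheapest_recipe(price_per_egg: float, price_per_g_mushroom: float):
    """Return the recipe with the lowest total ingredient cost at the given prices."""
    costs = {
        name: qty["eggs"] * price_per_egg + qty["mushrooms_g"] * price_per_g_mushroom
        for name, qty in recipes.items()
    }
    name = min(costs, key=costs.get)
    return name, costs[name]

# 'Normal' times: eggs are relatively cheap
print(cheapest_recipe(price_per_egg=0.50, price_per_g_mushroom=0.02))
# -> Recipe A is cheaper (6*0.50 + 50*0.02 = 4.00 vs 2*0.50 + 250*0.02 = 6.00)

# Now: eggs are relatively expensive
print(cheapest_recipe(price_per_egg=1.50, price_per_g_mushroom=0.02))
# -> Recipe B is cheaper (2*1.50 + 250*0.02 = 8.00 vs 6*1.50 + 50*0.02 = 10.00)
```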

So, that explains my surprising and delicious breakfast. Normally, if I got a vegetarian quiche, it would be mostly egg with a bit of mushroom (Recipe A). But, now that eggs are relatively more expensive, I got a quiche that was mostly mushroom with a bit of egg (Recipe B). Delicious!

*****

[*] In the standard version of this model, A and B would be referred to as production technologies. In this case, it makes more sense to refer to them as recipes.

[**] We're assuming here that the bakery wouldn't change its price, depending on which recipe they used. It is costly to change prices (economists refer to these as menu costs), so the bakery would prefer to keep prices the same, even if the prices of eggs and mushrooms are fluctuating from week to week.

Monday, 27 March 2023

This couldn't backfire, could it?... Regulating teens' access to social media edition

The New Zealand Herald reported over the weekend:

Utah became the first state to enact laws limiting how children can use social media after Republican Governor Spencer Cox signed a pair of measures today that require parental consent before kids can sign up for sites like TikTok and Instagram.

The two bills Cox signed into law also prohibit kids under 18 from using social media between the hours of 10.30pm and 6.30am, require age verification for anyone who wants to use social media in the state and seek to prevent tech companies from luring kids to their apps using addictive features...

Tech giants like Facebook and Google have enjoyed unbridled growth for over a decade, but amid concerns over user privacy, hate speech, misinformation and harmful effects on teens’ mental health, lawmakers have begun trying to rein them in.

This law, and others like it proposed in other US states and elsewhere, has the noble purpose of reducing young people's mental health problems, which are associated with social media use. The thinking is that, by restricting teens' access to social media, teens will have better mental health as a result.

However, I can already see a potential problem here (aside from the real possibility that this policy solution doesn't actually address how social media affects subjective wellbeing - see for example, this post). Some teens' parents are quite permissive, and those teens will have access to social media under these regulations, as their parents will consent to their teens using social media (and perhaps some of these teens would have access regardless of the regulations). Other teens' parents will try to prevent their teens' access to social media (to the extent that they can do so). This second group of teens could actually be at risk of worse mental health, not better mental health. Consider this: those teens are being excluded from access to social media, when many of their peers are not. For a teen, is there anything worse than feeling socially excluded? This isn't just fear of missing out (FoMO), but genuine anxiety about what their friends and peers are talking about on a platform that they don't have access to. Moreover, giving parents access to teens' social media accounts violates the teens' privacy in a way that may also lead to increased anxiety for some teens, who feel hypervigilant in terms of what their parents might see (many teens already have multiple social media accounts, only some of which are known to their parents). It seems likely then that some teens (not all teens) may actually be made worse off by this policy.

Maybe I'm scaremongering unnecessarily. However, I can easily see how this regulation leads to worse mental health for some teens (even if it may improve mental health for teens on average). I guess we will see how this plays out over time.

Sunday, 26 March 2023

ChatGPT (and other large language models) for teaching and learning, especially in economics

ChatGPT has certainly been the topic of conversation in university hallways since it was released. It would be fair to say that a lot of the conversations have had negative undertones - what does ChatGPT mean for assessment and academic integrity, for example. However, ChatGPT and other large language models (like Bing Chat) also offer opportunities for teachers to improve their practice (for example, see this earlier post on assessment). Many teachers have been experimenting with ChatGPT, and developing new approaches, or adapting their existing approaches to take advantage of the opportunities presented by the large language models (LLMs). Personally, I've done a lot of playing with ChatGPT, but haven't really made full use of it. Yet.

Fortunately, the learning curve is being shortened by teachers and academics writing about their experiences. One example is this new working paper by Ethan Mollick and Lilach Mollick (both University of Pennsylvania). They discuss five teaching strategies that can be advanced by using ChatGPT, including:

...helping students understand difficult and abstract concepts through numerous examples; varied explanations and analogies that help students overcome common misconceptions; low-stakes tests that help students retrieve information and assess their knowledge; an assessment of knowledge gaps that gives instructors insight into student learning; and distributed practice that reinforces learning.

Mollick and Mollick helpfully provide prompts for both ChatGPT and Bing Chat that can be used for these purposes. It is clear from their choice of applications that these tools can help when creativity is lacking (new examples, explanations, and analogies), saving teachers from having to come up with these themselves (and the same goes for writing test questions). Distributed practice requires a lot of questions, and having a tool that can simply create new questions is helpful in this respect.

Mollick and Mollick's example prompts will get a teacher a lot of the way towards making the best use of ChatGPT. However, crafting prompts has become an art in itself. Fortunately, other teachers are offering help on this as well. In another new working paper, Tyler Cowen and Alex Tabarrok (both George Mason University, and perhaps best known as the authors of the Marginal Revolution blog) provide a lot of guidance on how to craft prompts that will assist with teaching and learning economics. Their advice is fairly high level, but will probably be more useful as LLMs develop over time. Their advice is summarised as:

1. Surround your question with lots of detail and specific keywords

2. Make your question sound smart, and ask for intelligence and expertise in the answer

3. Ask for answers in the voice of various experts.

4. Ask for compare and contrast.

5. Have it make lists

6. Keep on asking lots of sequential questions on a particular topic.

7. Ask it to summarize doctrines

8. Ask it to vary the mode of presentation

9. Use it to generate new ideas and hypotheses

In my own use of ChatGPT, I have found making lists to be incredibly useful (and ChatGPT often seems to make lists without specifically being prompted to do so), as well as asking for answers to questions in the voice of various experts.

However, one part of Cowen and Tabarrok's paper caused me some concern (and some colleagues as well, after I shared it). They show ChatGPT solving some calculus problems step-by-step (as we would expect a student to do), including solving a system of demand equations for the perfectly competitive outcome, Cournot outcome, and monopoly outcome. I had thought that we were still some time away from having LLMs solve maths problems, but these things are moving so fast. LLMs can't draw graphs yet, but this recent post by Bryan Caplan shows that ChatGPT can explain a graphical solution to a problem in words, and do so very well. Yikes! I've been quite bullish about my assessment strategies being robust to LLMs for now, but I may have to reconsider weekly assignments (with many questions involving graphs) in my ECONS102 class.

LLMs offer both challenges and opportunities for teachers. They may prompt us to adopt more authentic assessments, and at the same time provide us with the opportunity to add significant value to students' learning experiences. At least, until the AIs take our jobs completely.

[HT: Marginal Revolution, here and here]

Read more:

Saturday, 25 March 2023

Revisiting fire protection as a club good

Club goods are non-rival (one person's use of the good doesn't diminish the amount of the good that is available for other people's use) and excludable (a person can be prevented from using or benefiting from the service). I've made the case that theoretically, fire protection could be a club good (see here). There is even some evidence (see that same post) that fire protection could have been a club good historically in New Zealand. There were also stories from other countries on the same point. For example, see this from the Zurich Insurance magazine:

Despite some reciprocal arrangements, firefighting units would ignore burning buildings once [they] discovered it was not covered by their insurance company.

However, it turns out that the stories about London firefighters ignoring burning buildings that were not covered by their insurance company may have been false. See this recent YouTube video by Tom Scott:

That doesn't necessarily mean that the stories about New Zealand firefighters only fighting fires in buildings covered by insurance companies are false. Only that it seems likely that the similar stories about London firefighters are untrue.

So, perhaps fire protection has never been a club good, even though theoretically it could be.

Read more:

Thursday, 23 March 2023

MasterChef, fear of failure, and survivorship bias

In our most recent Waikato Economics Discussion Group session, we discussed this recent article by Alberto Chong (Georgia State University) and Marco Chong (Bethesda-Chevy Chase High School), published in the journal Kyklos (ungated earlier version here). They used hand-collected data from ten seasons of the US version of MasterChef (from 2010 to 2020) to investigate whether fear of failure leads to better, or worse, performance. This is an important question, as they note that:

...fear of failure is rather common and widespread in societies. In the United States, for instance, it has been estimated that around 30 % of the population is terrified of failure, and it ranks among the worst fears that the population endure in this country...

Fear of failure has previously been studied in a number of contexts, including sports, business, and education. Most of the literature finds that fear of failure reduces performance. That may be because fear of failure leads to emotional paralysis, preventing people from performing at their 'usual' level. On the other hand, fear of failure could lead to increased performance, by providing additional focus that leads to greater creativity and the ability to take calculated risks. MasterChef is a particularly interesting context in which to study this, because:

As the home cooks are judged by world class chefs and restaurateurs and watched by television audiences that range in the millions the potential shame and embarrassment of failing under these circumstances is very significant... The fact that the judges tend to be rather harsh with the contestants further compounds to this, more so given that these home cooks come with high self-esteem and egos, as they are typically considered as cooking luminaries in their immediate circles of friends, families, coworkers, neighbors or clubs and associations.

In other words, failure in MasterChef is very public, and directed at someone who is probably not used to failure. Chong and Chong collected data on the final ranking of 197 contestants across the ten seasons of MasterChef in their sample. They measured fear of failure as the sum of two variables:

The first is the number of times that a contestant ends up among the bottom three entries in any particular cooking challenge. The second is the number of times that a contestant ends up surviving a Pressure Test...

Chong and Chong also created a measure of 'extreme fear of failure', which was simply the number of times that a contestant survived a Pressure Test. They then look at the relationship between their measures of fear of failure and the contestants' final ranking in the season, controlling for individual characteristics, the number of rounds the contestant participated in, and a measure of positive reinforcement (made up of the number of times the contestant won a Mystery Box or elimination or team challenge, plus the number of times that they placed in the top three in a challenge). Chong and Chong find that:

...on average, individuals that are on the verge of being eliminated, but are able to survive and stay in the competition, end up doing better in the final rankings, all else being equal. In particular, we find that the higher the number of times a contestant is put in this situation, the higher his or her final placement will be among all the contestants. Overall, we find that depending on the measure used, an increase in one unit in our fear of failure index is linked to an increase of between almost one position to four positions in the final competition placement, the latter in situations of extreme fear of failure.

In other words, contestants who experience a greater fear of failure over the course of the season, perform better. Or do they?

Think carefully about how Chong and Chong measured fear of failure - the number of times that a contestant ended up among the bottom three in a challenge, plus the number of times that they survived a Pressure Test. Contestants that last longer in the season will obviously find themselves in those situations more often than contestants who are eliminated early. So, contestants who last longer in the season will rank higher overall, and have a higher measure of fear of failure, simply because they lasted longer in the season. In other words, there is a clear survivorship bias in their analysis.

But wait! Didn't Chong and Chong control for the number of rounds that the contestant participated in? They did, and in theory that should mean that their results compare participants in terms of fear of failure, holding the number of rounds they participated in constant. However, that's not quite the case, because of the way that the number of rounds variable is included. The number of rounds variable is a measure of each contestant's exposure to the risk of ending up in the bottom three. But controlling for it linearly assumes that this exposure increases linearly with the number of rounds, when it really doesn't. If being at risk of elimination is randomly allocated to contestants, then there is a 15 percent (3/20) chance of being in the bottom three when there are 20 contestants, but a 50 percent (3/6) chance of being in the bottom three when there are just six contestants left. The effect of the number of rounds is clearly non-linear, but they control for a linear relationship.

To illustrate the problem here, I constructed some simulated data and ran some analyses (in Excel). I assumed that there were initially 20 contestants, and that each round, one of them was eliminated. I randomly determined which contestant was eliminated, and which two of the other contestants were in the bottom three. I ran this through until a 'final' 17th round, where there were four contestants remaining. I then calculated a measure of fear of failure (the number of times the contestant was in the bottom three), and the ranking of each contestant. Then, I ran a multiple regression model, with ranking as the dependent variable, and fear of failure as the explanatory variable, controlling for the number of rounds that the participant survived for. The outcome was that fear of failure had a coefficient of -0.350 with a p-value of 0.009 (highly statistically significant). In other words, in the data that was simulated totally at random, the result was a statistically significant relationship. It's picking up survivorship bias.

Then, instead of controlling for the number of rounds, I calculated a measure of expected exposure to fear of failure. This was the probability that a contestant would find themselves in the bottom three in a round, then summed up for each round that they participated in. When I run the same multiple regression, but controlling for expected exposure instead of the number of rounds, the coefficient on fear of failure was -0.21 and statistically insignificant (p-value of 0.38).
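For anyone who wants to replicate this exercise, here is a minimal sketch in Python of the simulation described above. The tie-breaking rule for the final four and the random seed are my own choices, so any single run will produce somewhat different coefficients and p-values from those I report above:

```python
# A minimal sketch of the survivorship-bias simulation described above.
# 20 contestants; each round one is eliminated at random and two other
# random contestants join them in the bottom three, until four remain.
# 'Fear of failure' is the number of times a contestant lands in the
# bottom three; 'expected exposure' sums the probability 3/n of being
# in the bottom three over the rounds a contestant was present for.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=1)

def simulate_season(n_contestants=20, n_finalists=4):
    remaining = list(range(n_contestants))
    bottom_three_count = np.zeros(n_contestants)
    expected_exposure = np.zeros(n_contestants)
    rounds_survived = np.zeros(n_contestants)
    ranking = np.zeros(n_contestants)  # 1 = winner, 20 = first eliminated
    while len(remaining) > n_finalists:
        n = len(remaining)
        rounds_survived[remaining] += 1
        expected_exposure[remaining] += 3 / n
        bottom = rng.choice(remaining, size=3, replace=False)
        bottom_three_count[bottom] += 1
        eliminated = rng.choice(bottom)
        ranking[eliminated] = n      # eliminated when n remain -> finishes in n-th place
        remaining.remove(eliminated)
    rounds_survived[remaining] += 1  # credit the finalists for the final round
    for place, finalist in enumerate(rng.permutation(remaining), start=1):
        ranking[finalist] = place    # rank the final four at random (an assumption)
    return ranking, bottom_three_count, rounds_survived, expected_exposure

ranking, fear, rounds, exposure = simulate_season()

# Regression 1: ranking on fear of failure, controlling for a linear rounds term
X1 = sm.add_constant(np.column_stack([fear, rounds]))
print(sm.OLS(ranking, X1).fit().summary())

# Regression 2: ranking on fear of failure, controlling for expected exposure instead
X2 = sm.add_constant(np.column_stack([fear, exposure]))
print(sm.OLS(ranking, X2).fit().summary())
```

Changing the seed generates a fresh random dataset, which is what I did for the second run described below.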

And, just in case anyone thinks all of this was purely coincidental, I created a whole new random dataset, and repeated the exercise. The second time, controlling for rounds, the coefficient on fear of failure was -0.22, and statistically significant with a p-value of 0.035. When controlling for expected exposure, the coefficient on fear of failure was -0.27 and statistically insignificant (p-value of 0.13).

I'm sure I could run the analysis many times more and get similar results. While my data setup is not identical to theirs, it does enough to illustrate that their measure of fear of failure is susceptible to survivorship bias, because a randomly simulated dataset leads to similar results (albeit with a smaller coefficient).

Chong and Chong say that their data are available on reasonable request. This study is crying out for a replication with a better measure of fear of failure. That could either be a measure of expected exposure to fear of failure (as in my simulated dataset), or dummy variables for each number of rounds. Approaching the analysis either way (or both) would make a great Honours project for a suitably motivated student.

Tuesday, 21 March 2023

Spotify has artists playing chicken, but they can fight back if they can cooperate

The New Zealand Herald reported today:

Spotify has recently faced backlash over its newly-implemented Discovery Mode program.

The initiative, which gives artists greater exposure on the platform in exchange for a lower royalty rate, was announced during the company’s Stream On event in March 2021 and has continued to be criticised all the way up to its 2023 launch.

Under Discovery Mode, artists or their teams can submit tracks for consideration to be included on Spotify’s radio and autoplay features. In exchange for this greater algorithmic exposure, they agree to receive a lower royalty rate for streams of their music.

For some, it’s an inventive new way to link potential fans to new music, but others in the industry believe it to be yet another way to shave the pay cheque of hardworking musicians whose work is the lifeblood of an app that rakes in billions of dollars each year.

This is a smart ploy by Spotify. As noted by DJ Luca Lush on Twitter:

Ideally for spotify, EVERYONE opts in, they take 30% more revenue & no one gets more plays

It's not clear that every artist would opt into Discovery Mode though. To see why, consider the decision as part of a simultaneous game, played by some artist (Artist A) and all other artists. The game is laid out in the payoff table below, with the payoffs measured as a percentage of the 'normal' level of royalties. If all artists (including Artist A) choose no Discovery Mode, then they all continue to receive the normal level of royalties. If all artists (including Artist A) choose Discovery Mode, then they all lose 30 percent of their income (as Luca Lush noted). However, if Artist A chooses Discovery Mode and all others do not, Artist A benefits greatly (let's say that their royalties go up by 50 percent - in the New Zealand Herald article, Spotify says that "artists have seen an average 50 per cent increase in saves" when participating in Discovery Mode), and other artists are negatively affected, but only slightly (because even when Artist A's Spotify streams increase a lot, that doesn't much reduce every other artist's streams). On the other hand, if Artist A chooses not to participate in Discovery Mode and all other artists do, it is the other artists that benefit greatly, and Artist A is made worse off. [*]

To find the Nash equilibrium in this game, we use the 'best response method'. To do this, we track, for each player, their best response to each strategy that the other player could choose. Where both players are selecting a best response, they are doing the best they can, given the choice of the other player (this is the definition of Nash equilibrium). In this game, the best responses are:

  1. If Artist A chooses not to participate in Discovery Mode, the other artists' best response is to participate in Discovery Mode (since a payoff of 150 is better than a payoff of 100) [we track the best responses with ticks, and not-best-responses with crosses; Note: I'm also tracking which payoffs I am comparing with numbers corresponding to the numbers in this list];
  2. If Artist A chooses to participate in Discovery Mode, the other artists' best response is not to participate in Discovery Mode (since a payoff of 99 is better than a payoff of 70);
  3. If the other artists choose not to participate in Discovery Mode, Artist A's best response is to participate in Discovery Mode (since a payoff of 150 is better than a payoff of 100); and
  4. If the other artists choose to participate in Discovery Mode, Artist A's best response is not to participate in Discovery Mode (since a payoff of 95 is better than a payoff of 70).

Notice that there are two Nash equilibriums in this game - where Artist A chooses to participate in Discovery Mode, and every other artist does not, and where Artist A chooses not to participate in Discovery Mode, and every other artist does.
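A minimal sketch in Python of the best response method, using the payoffs referred to in the best responses above, confirms those two equilibria:

```python
# Pure-strategy Nash equilibria for the Discovery Mode game, found by
# checking best responses. Payoffs are (Artist A, other artists), as
# percentages of 'normal' royalties, matching the numbers in the post.
payoffs = {
    ("No DM", "No DM"): (100, 100),
    ("No DM", "DM"):    (95, 150),
    ("DM",    "No DM"): (150, 99),
    ("DM",    "DM"):    (70, 70),
}
strategies = ["No DM", "DM"]

def is_nash(a_choice: str, others_choice: str) -> bool:
    """True if neither player can do better by unilaterally switching strategy."""
    a_payoff, others_payoff = payoffs[(a_choice, others_choice)]
    a_best = all(a_payoff >= payoffs[(alt, others_choice)][0] for alt in strategies)
    others_best = all(others_payoff >= payoffs[(a_choice, alt)][1] for alt in strategies)
    return a_best and others_best

equilibria = [cell for cell in payoffs if is_nash(*cell)]
print(equilibria)  # [('No DM', 'DM'), ('DM', 'No DM')]
```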

We could repeat this exercise for any number of additional artists, rather than Artist A. We would come out with the same outcome. We could even try this as a multi-player game. We would find something similar. The artists are all better off if they can participate in Discovery Mode, but not too many of the other artists do so. Every artist would want to be participating in Discovery Mode. However, if all (or a large proportion) of them choose to participate in Discovery Mode, then every participant is made worse off. This is an example of the game of chicken. Two drivers driving towards each other can choose to speed ahead, or swerve out of the way. Both prefer to speed ahead, because they are trying to win the game of chicken. However, if they both follow that strategy, it all ends in a fiery crash (for a more complete explanation, see this post).

Usually, in the game of chicken, a player can get the outcome that they prefer if they can make a credible commitment to their strategy. In the classic game of chicken, a driver could commit to speeding ahead by disabling their brakes and throwing the steering wheel out the window. That is a pretty showy way of demonstrating that the driver won't change their strategy of speeding ahead.

However, in the game that Spotify has set up, there are too many players for a credible commitment to scare others off. Most artists will instead be thinking, 'I'm sure that one more artist choosing Discovery Mode isn't going to be the one that destroys the payoffs for everyone, so why shouldn't I?'. That's the sort of thinking that ends in a fiery crash, with all artists earning 30 percent lower royalties.

A cynic would interpret this as Spotify's strategy all along. They are profit maximising by steering the artists into a game of chicken that leads to all participating artists receiving lower royalties. However, the artists can fight back. This is a repeated game. In a repeated game, the players can cooperate in order to obtain a better outcome for them all. By cooperating, and choosing not to participate in Discovery Mode, the artists would be made better off collectively. This is essentially what the artists who have spoken out against Discovery Mode are trying to do. They are trying to coordinate a cooperative response that sees all artists boycotting Discovery Mode, which would be for the betterment of them all.

This sort of cooperative outcome is only possible if the artists can trust each other. There is an incentive for any artist to cheat on the agreement. That's because, if Artist A (or any other artist) knows for sure that the other artists will not participate in Discovery Mode, then Artist A can participate and make themselves better off. Once that starts to happen, the cooperative agreement can quickly break down.

The questions now are, will the artists be able to agree not to participate, and if they do, will they be able to maintain trust and cooperation?

*****

[*] Another way of thinking about the payoffs in this game is to recognise that Artist A is probably made wildly worse off by every other artist opting into Discovery Mode. The game with these new payoffs is shown below.

The best responses for the other artists are unchanged. However, for Artist A, the best responses are now:

  1. If the other artists choose not to participate in Discovery Mode, Artist A's best response is to participate in Discovery Mode (since a payoff of 150 is better than a payoff of 100);
  2. If the other artists choose to participate in Discovery Mode, Artist A's best response is to participate in Discovery Mode (since a payoff of 70 is better than a payoff of 25).

Notice that now, Artist A has a dominant strategy to participate in Discovery Mode. Participating in Discovery Mode is better for Artist A, no matter what the other artists do. They should always choose to participate in Discovery Mode. And this would apply to any other artist, if we replaced Artist A with them instead. This provides an even stronger incentive for artists to participate in Discovery Mode than in the chicken game shown earlier (it wouldn't be a chicken game any more, but much more like a prisoners' dilemma game with multiple players). However, the other points I make about the repeated game, cooperation and trust, all still apply to this version of the game as well.
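Re-running the same kind of check with Artist A's payoff of 95 replaced by 25 (the only change in this version of the game) confirms the dominant strategy. A minimal sketch:

```python
# With Artist A's payoff of 95 (not participating while others do) replaced
# by 25, Discovery Mode is a dominant strategy for Artist A: it gives a
# higher payoff regardless of what the other artists choose.
a_payoffs = {
    ("No DM", "No DM"): 100, ("No DM", "DM"): 25,
    ("DM",    "No DM"): 150, ("DM",    "DM"): 70,
}
for others in ["No DM", "DM"]:
    better = a_payoffs[("DM", others)] > a_payoffs[("No DM", others)]
    print(f"Others play {others}: Discovery Mode better for Artist A? {better}")
# Others play No DM: Discovery Mode better for Artist A? True
# Others play DM: Discovery Mode better for Artist A? True
```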

Monday, 20 March 2023

It's good to be different, or why firms differentiate their products

This week in my ECONS101 class, we are covering the decision-making of firms with market power. When firms have market power, they have some control over the price of their product. Some firms gain market power through barriers to entry into their market - there is something that keeps competitors out of the market. However, most firms don't benefit from barriers to entry. Instead, most firms derive market power from differentiating their product from the products sold by their competitors. In this post, I'll demonstrate why it is that firms differentiate their product, using the consumer choice model, and assuming that there are only two firms (one firm selling Good X, and one firm selling Good Y).

Before we get that far though, let's consider what would happen if we had two firms selling identical products (what we refer to as perfect substitutes). In that case, the consumer is indifferent between the two firms' products. Since they are identical, the consumer doesn't care which firm they buy from. In that case, the consumer will obviously buy from whichever firm is selling their product for a lower price, and will buy nothing from the firm that has the higher price.

This situation is shown in the diagram below. The consumer's indifference curves are shown by the two red lines, I0 and I1 (with I1 representing a higher level of utility, or satisfaction, for the consumer). The indifference curves are straight lines when the two goods are perfect substitutes (so the consumer doesn't care about how many of each good they have, only how much they have of both goods in total). The consumer's budget constraint is shown by the black line. The budget constraint is relatively steep, which means that the price of Good Y is relatively lower than the price of Good X. The consumer's best affordable choice (the consumer's optimum) is the bundle of goods E0 (it's on the highest indifference curve that they can reach, I1), where the consumer spends all of their income on Good Y, and spends nothing on Good X. This makes sense, given that Good Y and Good X are exactly the same good, and Good Y is relatively less expensive than Good X.

Obviously, the situation in the diagram above is not good for the seller of Good X. How could they respond? One thing that they could do is lower their price to match the price of Good Y. That would cause the consumer's budget constraint to pivot outwards and become flatter (just like in this example). They could even make their price lower than the price of Good Y, capturing all of the market. Of course, there is little to stop the seller of Good Y from then lowering their price as well. This sort of a price war is not good for either seller, and could continue until neither seller is able to make any profit at all (which is the case in a perfectly competitive market).

A better option for the seller of Good X arises when they recognise that the real problem here is that the two goods are identical in the mind of the consumer. The two goods are perfect substitutes. If the seller of Good X can somehow convince the consumer that the two goods are different rather than identical, then they may be able to keep some sales, even if the price of Good X is higher than the price of Good Y.

This situation is shown in the diagram below. When the goods are differentiated, the consumer's indifference curves are curves (not straight lines - straight line indifference curves only happen when the goods are perfect substitutes). The highest indifference curve that the consumer can get to is I1'. They will buy the bundle of goods E1, which contains Y1 of Good Y, and X1 of Good X. Even though Good X is relatively more expensive, the consumer chooses to buy some of it.
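A minimal sketch in Python makes the contrast concrete. The functional forms (linear utility for perfect substitutes, Cobb-Douglas utility for differentiated goods) are my own assumptions for illustration, not something the diagrams pin down:

```python
# Consumer's best affordable bundle under two assumed utility functions.
# Perfect substitutes: U = x + y  -> corner solution, spend everything on
# whichever good is cheaper.
# Differentiated goods (Cobb-Douglas): U = x^a * y^(1-a) -> interior
# solution, buy some of both even when one good is dearer.
income = 100.0
price_x, price_y = 10.0, 8.0   # Good X is relatively more expensive

def optimum_perfect_substitutes(m, px, py):
    """Corner solution: all income goes to the cheaper good."""
    return (m / px, 0.0) if px < py else (0.0, m / py)

def optimum_cobb_douglas(m, px, py, a=0.5):
    """Interior solution: spend share a on Good X and (1 - a) on Good Y."""
    return (a * m / px, (1 - a) * m / py)

print(optimum_perfect_substitutes(income, price_x, price_y))  # (0.0, 12.5): only Good Y
print(optimum_cobb_douglas(income, price_x, price_y))         # (5.0, 6.25): some of both
```

Differentiation moves the consumer from the corner solution (buying only the cheaper Good Y) to an interior solution, where the seller of Good X retains some sales despite its higher price.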

So, how do firms differentiate their products? There are many ways, but one of the most common is through branding. By branding their product, firms demonstrate that their product is different in at least a superficial way to the products of their competitors, setting it apart in the minds of consumers. For example, petrol sold by petrol stations from different chains is the same good, regardless of which chain it is purchased from. The petrol stations are differentiated from each other by their branding (as well as by their locations, and by the additional services that they offer in addition to petrol). Supermarket brand cornflakes are often the same cornflakes that are in the boxes of leading brands, just in a different coloured box. And so on.

There are lots of examples. Some firms are good at differentiating themselves, while others are not so good. Ceteris paribus (holding all else constant), the more differentiated a firm's product is from its competitors, the more market power it will have. And that makes the example in the photo below difficult to understand. This photo was taken at the Chapel Downs shopping centre in Flat Bush in January (although the situation there has been the same for many years [*]). You may need to zoom in to see the detail in the photo. However, let me explain what is going on. I've circled the names of three stores. The one on the left is called No 1 Supavalue Supermarket. The one in the middle is called Supavalue Supermarket. The one on the right is called Super Value Supermarket. To be clear, all three stores are in the same shopping centre. This is NOT how you differentiate yourself from your competitors.

*****

[*] I took this photo during fieldwork that I have been repeating in January each year for the last fifteen-plus years. The situation I describe here has been present for most of that time. I only captured it in a photo this year.

Sunday, 19 March 2023

How incentives change when a university switches to pass/fail grading

When we first went into lockdowns in 2020, and teaching and assessment all shifted online for an extended period, I advocated for a shift to pass/fail grading. In my view, we couldn't have the same level of confidence that letter grades would adequately represent the relative strengths or weaknesses of students, either within or between papers. Lecturers were under pressure and assessment was sub-optimal in the new environment. Students were under pressure and adjusting to learning from home in conditions that were often not conducive for good learning. Under such circumstances, an A+ grade becomes almost meaningless.

The lockdowns clearly presented an extraordinary situation. In general, I am not a fan of pass/fail grading of papers (except for Masters and PhD theses, where letter grades make little sense). The reason is the incentive effects that pass/fail grading creates, relative to letter grading. When students receive a letter grade, students across the whole distribution have an incentive to work a little bit harder, because additional effort can raise their letter grade. However, when grading is pass/fail, only students at the margin of passing or failing the paper have an incentive to work a bit harder. Students who are clearly passing the paper have little incentive to work harder, since there is little reward for doing so. Or, at least, there is little extrinsic reward for doing so. Some students will remain intrinsically motivated to work hard, or will see working harder and learning at a higher level as rewarding in the sense of better job opportunities at graduation.

Is there evidence to support these incentive effects? This recent NBER Working Paper (ungated version here), by Kristin Butcher, Patrick McEwan, and Akila Weerapana (all Wellesley College) provides some evidence in support of the theory. They look at what happened when Wellesley College moved all first-year courses to mandatory pass/fail grading in Fall 2014. As they explain:

Beginning in Fall 2014, the College further implemented a shadow grading policy for first-year, first-semester students. Under this policy, transcripts record a pass if students receive a letter grade of D or above, and a fail if they receive an F. However, students are privately notified of the letter grade. The policy has two objectives. The first is to encourage students to take courses outside their usual preferences. Curricular exploration might foster—in the short- and longer-run—increased student engagement with fields in which they are under-represented (such as women in mathematically-intensive STEM majors). The second is to promote successful transitions from high school to college, thereby preventing leaves of absence and drop out.

Importantly, lecturers still recorded the underlying letter grade. So, Butcher et al. are able to compare the grades that students earned under pass/fail grading, with the grades that students in earlier semesters earned under letter grading. Based on a sample of 38,214 student-by-course observations covering the period from 2004 to 2019, they find that:

There are more consequential results for student grades. The policy lowered the average grade points of first-semester students by 0.13 or about 23% of a standard deviation in pre-policy grades, although effects on the cumulative grade point average were small and not statistically distinguishable from zero.

No surprises that letter grades were lower as a result of the change in policy. However, the lack of an effect on cumulative GPA is surprising. Butcher et al. offer that:

There are several possible explanations for the grade effects on first-year students. The first is compositional: students sorted into lower-grading STEM courses which mechanically lowered average grades of first-semester students. However, we show that compositional changes explain a reduction of less than 0.01, implying a substantial role for within-course effects.

We next consider the possibility that grade reductions are due to lower quality instruction... However, the estimates are not sensitive to the inclusion of controls for quality proxies. We further show that the policy did not affect the grade performance of all students in a course, as might be expected if quality uniformly declined. Rather, it lowered the grades of first-semester students relative to later-semester students enrolled in the same courses.

Finally, we rule out the possibility that the policy led instructors to arbitrarily modify their grading standards for first-semester students relative to later-semester students...

The remaining and most plausible explanation for the policy-induced reduction in grades is that students covered by mandatory pass/fail grading exerted less effort relative to letter-graded students.

Butcher et al. support their final assertion that lower grades were a result of reduced effort by showing that course evaluations did not change. They argue that if grades were lower, but students were exerting the same effort, students would respond by evaluating their courses more negatively. That argument is not entirely convincing for me. However, Butcher et al. provide some additional:

...descriptive evidence from a faculty survey in 2017, which provided detailed examples of how students reduced effort, including attendance and course preparation.

Of course, the perception of student effort by lecturing staff may be affected by students' grades, rather than the other way around. There is little that Butcher et al. can do without a more objective measure of student effort. Perhaps if class attendance had been recorded, they could have used that?

At first glance, the lack of a statistically significant effect on cumulative GPA is a little problematic. However, pass/fail grades don't contribute to cumulative GPA, so what that really shows is that there is no continuing effect of the change to pass/fail after students move on to courses that are letter graded. The incentive effects are purely concentrated in the courses that are graded pass/fail, and if those courses are prerequisites for courses that students complete later in their degree, the lower effort in their pass/fail courses doesn't appear to hold them back.

Overall, the takeaway message from this paper is that universities should be very cautious about employing a pass/fail grading system, as it appears to disincentivise student learning within the courses where it is applied.

[HT: Marginal Revolution]

Friday, 17 March 2023

Moral hazard, and bailing out failing banks

At the time of the Global Financial Crisis, in 2008-09, I was fresh out of completing my PhD and still fairly idealistic in terms of what it was possible to do with policy. At that time, it struck me as a bad move to be bailing out the banks that had created such a fragile system. My views have softened significantly since then (although, as noted in my previous post on windfall taxes, I think there is an asymmetry to the relationship between business and government), especially as we've seen the global financial system recover (albeit with a significant increase in central bank balance sheets that is proving difficult to unwind). And now we seem to be at it again, with the US stepping in to save uninsured depositors at Silicon Valley Bank and Signature Bank earlier this week.

Why would saving bank depositors be a bad thing? Moral hazard arises when one of the parties to an agreement has an incentive, after the agreement is made, to act differently than they would have acted without the agreement. Importantly, the agreement doesn't have to be a formal contract. It can be an implicit understanding or expectation.

In this case, if large depositors [*] at failed banks lose all of their deposits, then there is a strong incentive for the depositors to undertake due diligence on their bank. They will want to be sure that their money is safe, and if not, they will bank elsewhere. Or, at least, large depositors will spread their risk by having deposits at multiple banks, rather than banking at a single bank. Because banks know that large depositors are being careful, they will do everything they can to convince depositors that they are safe institutions (and at least some of those actions will actually make the banks less risky). However, when the government develops a reputation for bailing out large depositors, this reduces the incentives for large depositors to undertake due diligence, which in turn reduces the incentives for banks to be safe institutions. This is not to say that banks will be flagrantly risky, only that, at the margin, banks will take a little more risk. The moral hazard problem here is that the risky actions of the banks end up costing the large depositors, if the bank fails and the government doesn't reimburse them. This would be less likely to happen if the government hadn't created an expectation that they would bail out the large depositors in the first place.

In the current situation in the US, bailing out the large depositors at Silicon Valley Bank and Signature Bank reinforces an expectation at all other banks that the US will bail out large depositors if the bank fails. We can expect riskier behaviour from the banks in the future.

So, should we be against these bailouts? On the Marginal Revolution blog, Tyler Cowen has the most clear-headed explanation of why, in spite of any moral hazard problems, bailing out large depositors was probably the right move. One argument that Cowen makes is that:

An unwillingness to guarantee all the deposits would satisfy the desire to penalize businesses and banks for their mistakes, limit moral hazard, and limit the fiscal liabilities of the public sector. Those are common goals in these debates. Nonetheless unintended secondary consequences kick in, and the final results of that policy may not be as intended.

Once depositors are allowed to take losses, both individuals and institutions will adjust their deposit behavior, and they probably would do so relatively quickly. Smaller banks would receive many fewer deposits, and the giant “too big to fail” banks, such as JP Morgan, would receive many more deposits. Many people know that if depositors at an institution such as JP Morgan were allowed to take losses above 250k, the economy would come crashing down. The federal government would in some manner intervene – whether we like it or not – and depositors at the biggest banks would be protected.

In essence, we would end up centralizing much of our American and foreign capital in our “too big to fail” banks. That would make them all the more too big to fail. It also might boost financial sector concentration in undesirable ways.

To see the perversity of the actual result, we started off wanting to punish banks and depositors for their mistakes. We end up in a world where it is much harder to punish banks and depositors for their mistakes.

Cowen makes additional points (and I encourage you to read his entire post), but the one quoted above is the kicker. If large depositors do not expect to be bailed out, they will only bank at large and safe banks. That will make smaller banks, and probably financial start-ups, less viable. This increases the risks to the financial system if one of the (fewer, larger) remaining banks was to fail. So, policymakers are left with a difficult trade-off: bail out the large depositors and create a moral hazard problem, or not bail out large depositors, and be left with a financial system that is more concentrated, and more vulnerable to future failures. An idealist, like my 2008-09 self, might prefer the second option. However, I'm much more comfortable with where this has gone this time.

One last important point to make is that it appears that the US actions are not bailing out the bankers themselves, only the large depositors. That marks a significant difference between now and the Global Financial Crisis, where it appeared that the bankers mostly got off scot-free. I'm sure that some will argue that the banks found themselves in an unfortunate situation that was unforeseeable and therefore the bankers themselves are not responsible for this outcome. We should reject those arguments, as higher interest rates are not unforeseeable, although they may be unanticipated. Bankers should be better at stress testing against a wide range of future interest rates. After all, if being safer results in higher costs, those costs simply get passed on to their customers. Although that's probably the problem here - passing on higher costs because your bank is stress testing at a more rigorous level than other banks simply makes you less competitive. I guess we will find out in the fullness of time whether there were any consequences for the bankers, and whether there are regulatory changes in relation to banks' testing their vulnerability to future interest rate changes.

*****

[*] Notice that I restrict the argument here to large depositors. It is unreasonable to expect small depositors to have the time or resources to undertake due diligence on their bank. Also, small depositors can't as easily spread their risk by having deposits at multiple banks. Small depositors are therefore at risk, and have limited means to reduce that risk. So, it is reasonable that small depositors are protected by the government (through deposit insurance, for example).

Wednesday, 15 March 2023

Prices, profits, and windfall taxes

In yesterday's post, I noted that when there is a decrease in supply (in that case, in the market for broccoli), prices increase but sellers' profits (producer surplus) actually decrease. That should make you wonder about some other markets that have been in the news recently, where supply has decreased, such as the market for oil. However, in that case, as 1News reported last month:

London-based BP said underlying replacement cost profit, which excludes one-time items and fluctuations in the value of inventories, jumped to $27.7 billion ($43.8 billion NZD) in 2022 from $12.8 billion ($20.3 billion NZD) a year earlier.

That beat the $26.8 billion ($42.4 billion NZD) BP earned in 2008, when tensions in Iran and Nigeria pushed world oil prices to a record of more than $147 ($233 NZD) a barrel.

The oil market has also faced a negative supply shock, which has increased the market price of oil. Shouldn't profits be decreasing for oil producers like BP? Not quite. In the case of oil, the decrease in supply is heavily concentrated in Russian oil. Russian oil producers are facing decreased supply, because it is more difficult (and more expensive) for them to sell oil. This is shown in the diagram below (which is the same diagram as yesterday). The supply of Russian oil decreased from S0 to S1. The equilibrium price increases from P0 to P1, and the quantity of Russian oil traded decreases from Q0 to Q1. The producer surplus for Russian producers of oil decreases from the area P0BC to the area P1DE.

What about other (non-Russian) oil producers, like BP? With Russian oil becoming less available and more expensive to obtain, the demand for oil from non-Russian producers increases. This is shown in the diagram below. The demand for non-Russian oil increases from DA to DB. The equilibrium price increases from P0 to P1, and the quantity of non-Russian oil traded increases from QA to QB. The producer surplus for non-Russian producers of oil increases from the area P0FG to the area P1HG.
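
To put some rough numbers on this, here is a quick sketch in Python with linear demand and supply curves for non-Russian oil. The curves and the numbers are entirely made up for illustration (they are not calibrated to the actual oil market), but they show how an increase in demand raises both the price and producer surplus:

```python
# Hypothetical linear curves for non-Russian oil (all numbers invented for illustration).
# Supply (marginal cost):          P = 20 + 0.5 * Q
# Demand before the shift (D_A):   P = 100 - 0.5 * Q
# Demand after the shift (D_B):    P = 120 - 0.5 * Q

SUPPLY_INTERCEPT = 20.0
SLOPE = 0.5  # same slope (in absolute value) for demand and supply, to keep things simple

def equilibrium(demand_intercept):
    """Solve demand_intercept - SLOPE*Q = SUPPLY_INTERCEPT + SLOPE*Q for price and quantity."""
    quantity = (demand_intercept - SUPPLY_INTERCEPT) / (2 * SLOPE)
    price = SUPPLY_INTERCEPT + SLOPE * quantity
    return price, quantity

def producer_surplus(price, quantity):
    """Triangle between the price line and the supply (marginal cost) curve."""
    return 0.5 * quantity * (price - SUPPLY_INTERCEPT)

for label, d_intercept in [("Before (D_A)", 100.0), ("After (D_B)", 120.0)]:
    p, q = equilibrium(d_intercept)
    print(f"{label}: price {p:.0f}, quantity {q:.0f}, producer surplus {producer_surplus(p, q):.0f}")
```

With these made-up numbers, the price rises from 60 to 70 and producer surplus rises from 1,600 to 2,500 - the same qualitative shift as the move from the area P0FG to the larger area P1HG in the diagram.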

And so we end up in a situation of high profits for non-Russian oil producers like BP, leading to this (from the same 1News article):

But the good news for BP shareholders is likely to be tempered by the public fallout, particularly in its home country. High oil and gas prices have hit Britain hard, with double-digit inflation fuelling a wave of public sector strikes, soaring food bank use and demands that politicians expand a windfall tax on energy companies to help pay for public services.

Ed Miliband, the opposition Labour Party’s spokesman on climate issues, called on the UK government to bring forward a “proper” windfall profits tax on energy companies.

“It’s yet another day of enormous profits at an energy giant, the windfalls of war, coming out of the pockets of the British people,″ Miliband said.

My first thought about a windfall profits tax is that it's a bit of overkill. Business profits are taxed already, so higher business profits already mean that the business will pay more tax. Most arguments I have read in favour of a windfall profits tax appeal to fairness. BP's high profits haven't arisen as a result of BP's outstanding management, its forward planning and investment strategies, improved quality of its product, or improved processing technology. In any of those cases, it might be seen as fair for a producer to reap higher profits. Instead, BP's higher profits are a pure windfall that it had no hand in generating. I don't find this fairness argument convincing, but I can see why it is attractive to many people.

However, I do think that there is a symmetry argument that could be used to justify a windfall profits tax. When times are tough, businesses often call on the government for help. We saw this during the pandemic, when the New Zealand government paid wage subsidies to many employers. We've seen it more recently with tourism and hospitality operators asking for further assistance from the government. So, when businesses are not doing so well, they sometimes (maybe often, depending on how strongly they are able to lobby government) receive additional assistance. For the sake of symmetry, then, when businesses are doing well, they should voluntarily pay the government more in tax. If they aren't willing to do so voluntarily, then that may justify a windfall tax. Businesses shouldn't simply expect an asymmetric relationship with the taxpayer, where they are bailed out when they are doing badly, but keep their excess profits when they are doing well.

On the other hand, there may be good reason to bail out businesses, even if there is no such symmetry in the taxpayer-business relationship (more on that in my next post).

Tuesday, 14 March 2023

The paradox of price and profit, or why high prices don't always translate into high profits

As I noted in a couple of posts about kūmara last week (see here and here), recent weather events have reduced the supply of many vegetables, raising their prices. In the minds of most people, high prices equate to higher profits for sellers (more on that point in my next post as well). However, that is not necessarily the case. As the New Zealand Herald reported yesterday:

This will be one of the worst years for growers in Aotearoa despite rising prices at the supermarket, says an industry chief executive...

Leaderbrands chief executive Richard Burke said fresh produce prices are unlikely to go down at the supermarket as severe weather across the North Island means growers’ supply has been hard hit...

Burke said conditions are unprecedented for growers across the country: “People think when there are high prices, growers are making a lot of money. But frankly, this will be one of the worst years we’re facing, no question.”

“We’re certainly not making more money under these high prices,” Burke said.

“I think a lot of growers will be in the same boat. I would think growers are feeling rather tired and have worked pretty hard, but it’s not falling to their bottom line.”

It seems a little paradoxical, so how can it be that high prices do not lead to high profits for farmers? Consider the market for broccoli, as shown in the diagram below. Before the storm, the market was operating at equilibrium (where the demand curve D0 meets the supply curve S0), with a price of P0, and Q0 tonnes of broccoli were traded. The diagram also shows the producer surplus, which can be thought of as the collective profits of the sellers (if we ignore fixed costs). Producer surplus is the difference between the price that sellers receive for the good, and the sellers' costs. Since the supply curve shows the sellers' marginal costs of production (MC), the producer surplus is the area below the price, above the supply curve (or marginal cost curve), and to the left of the quantity traded. That is the area of the triangle P0BC.

Now consider what happened as a result of the bad weather. The supply of broccoli decreased from S0 to S1. The equilibrium price increases from P0 to P1, and the quantity of broccoli traded decreases from Q0 to Q1. What happens to farmer profits? The new producer surplus is the area P1DE, which is clearly smaller than the original producer surplus of P0BC. [*] Producer surplus decreases. Farmers are less profitable, even though they are able to sell broccoli at a higher price.
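
For readers who like to check the geometry with numbers, here is a quick sketch in Python. The linear demand and supply curves are entirely hypothetical (not calibrated to the broccoli market), and the storm is modelled as a simple upward shift in growers' marginal costs:

```python
# Hypothetical linear curves for broccoli (all numbers invented for illustration).
# Demand:                        P = 10 - 0.5 * Q
# Supply before the storm (S0):  P = 1 + 0.5 * Q
# Supply after the storm (S1):   P = 5 + 0.5 * Q   (the storm raises growers' marginal costs)

DEMAND_INTERCEPT = 10.0
SLOPE = 0.5  # same slope (in absolute value) for demand and supply, to keep things simple

def equilibrium(supply_intercept):
    """Solve DEMAND_INTERCEPT - SLOPE*Q = supply_intercept + SLOPE*Q for price and quantity."""
    quantity = (DEMAND_INTERCEPT - supply_intercept) / (2 * SLOPE)
    price = supply_intercept + SLOPE * quantity
    return price, quantity

def producer_surplus(price, quantity, supply_intercept):
    """Triangle between the price line and the relevant supply (marginal cost) curve."""
    return 0.5 * quantity * (price - supply_intercept)

for label, s_intercept in [("Before the storm (S0)", 1.0), ("After the storm (S1)", 5.0)]:
    p, q = equilibrium(s_intercept)
    ps = producer_surplus(p, q, s_intercept)
    print(f"{label}: price {p:.2f}, quantity {q:.1f}, producer surplus {ps:.2f}")
```

With these made-up numbers, the price rises from 5.50 to 7.50, but producer surplus falls from about 20 to about 6. Higher prices, lower profits.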

This makes sense if you think about it without even considering the market model above. Farmers are receiving a higher price for broccoli, but they have less broccoli to sell. On top of that, their costs of producing broccoli have increased. What we should take away from that is that high prices don't always translate into higher profits for sellers. How those high prices came about is an important consideration.

*****

[*] You may doubt that the triangle P1DE is smaller than P0BC. The difference might not be obvious by eyeballing the diagram. However, P1DE is most assuredly smaller. The area of a triangle is half its base multiplied by its height. Both triangles have the same height (the distance from C to P0 is the same as the distance from E to P1). The triangle P1DE has a base that is equal to Q1, while P0BC has a base that is equal to Q0. So, the triangle P1DE must be smaller than P0BC.

Sunday, 12 March 2023

Peanuts no longer cost peanuts after China subsidises soybean production

The Financial Times reported earlier this week (paywalled):

Peanuts have become China's best-performing agricultural commodity as dry weather and Beijing’s policies have eaten into supplies, raising traders’ fears that demand from the world’s largest importer of the legume will push up international prices.

China suffered a severe drought in key growing areas last year, while the government’s agricultural subsidy programme, which favours soyabeans, has led to a sharp drop in the country’s peanut acreage...

Beijing has yet to announce official production figures for 2022, but Chinese media have begun sounding the alarm in recent months, warning that government subsidies encouraging farmers to raise corn and soyabeans, a rival oilseed, had pushed farmers to abandon peanut planting in pursuit of greater returns from other crops.

Consider the market for soybeans, shown in the diagram below. Without the subsidy, the market is in equilibrium, with a price of PA, and QA tonnes of soybeans are traded. The subsidy, paid to the farmers (the sellers in this market), is represented by a new curve S-subsidy, which sits below the supply curve. It acts like an increase in supply, and as a result the price that soybean buyers pay for soybeans falls to PC. The farmers receive that price, and then also receive the subsidy from the government, so in effect they receive the higher effective price PP. The difference in price between PP and PC is the amount of the subsidy. The lower price for consumers, and the higher effective price for farmers, leads the quantity of soybeans grown and traded to increase from QA to QB. As noted in the article, farmers grow more soybeans.
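
Here is a quick numerical version of that diagram, sketched in Python. The linear demand and supply curves and the size of the subsidy are entirely made up for illustration (they are not calibrated to China's soybean market):

```python
# Hypothetical linear curves for soybeans (all numbers invented for illustration).
# Demand:  P = 100 - Q
# Supply:  P = 20 + Q   (farmers' marginal cost)
# A per-tonne subsidy s, paid to farmers, acts like shifting the supply curve down by s.

DEMAND_INTERCEPT = 100.0
SUPPLY_INTERCEPT = 20.0

def soybean_market(subsidy=0.0):
    """Equilibrium quantity, the price buyers pay (P_C), and the price farmers effectively receive (P_P)."""
    quantity = (DEMAND_INTERCEPT - (SUPPLY_INTERCEPT - subsidy)) / 2.0
    buyer_price = DEMAND_INTERCEPT - quantity       # P_C: what buyers pay at the new equilibrium
    farmer_price = buyer_price + subsidy            # P_P: buyer price plus the subsidy from the government
    return quantity, buyer_price, farmer_price

q0, p0, _ = soybean_market(subsidy=0.0)
q1, pc, pp = soybean_market(subsidy=10.0)

print(f"No subsidy:   quantity {q0:.0f}, price {p0:.0f}")
print(f"With subsidy: quantity {q1:.0f}, buyers pay {pc:.0f}, farmers effectively receive {pp:.0f}")
```

With these invented numbers, the subsidy lowers the price that buyers pay from 60 to 55, raises the effective price that farmers receive to 65, and increases the quantity grown from 40 to 45 - the same movements as from PA to PC and PP, and from QA to QB, in the diagram.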

Now consider what happens in the market for peanuts, as shown in the diagram below. Farmers have shifted agricultural production to soybeans, because of the subsidies (as shown above). This, along with the dry weather, reduces the supply of peanuts from S0 to S1. This increases the equilibrium price of peanuts from P0 to P1, and decreases the quantity of peanuts traded from Q0 to Q1.

So, you can see that Beijing's agricultural subsidies flow through to the prices of non-subsidised products as well. Peanuts (as well as products derived from peanuts) are going to cost more as a result, even though the subsidy itself lowers the price that buyers pay for soybeans.

Saturday, 11 March 2023

Mobile phone bans and traffic fatalities

Many countries have introduced bans on mobile phone use while driving, in order to reduce distracted driving, traffic accidents, injuries and fatalities. The idea has a fairly simple economic foundation based on rational behaviour - when the cost of something increases, we tend to do less of it. Mobile phone bans introduce penalties (usually in the form of fines) for mobile phone use while driving. This increases the costs of using a mobile phone while driving and reduces the number of people doing so.

But do mobile phone bans work to reduce traffic fatalities? That is the research question addressed in this recent article by Nicholas Wright (Florida Gulf Coast University) and Ernest Dorilas (Cone Health), published in the Journal of Health Economics (sorry, I don't see an ungated version online). Wright and Dorilas look at state-level data on traffic fatalities for 14 states over the period from 2000 to 2015. They implement both: (1) a regression discontinuity design, which looks at daily traffic fatalities for the 90 days before, and after, implementation of the (handheld) mobile phone ban in each state; and (2) a difference-in-differences strategy, which looks at the difference in monthly fatalities between states with and without a mobile phone ban, before and after the ban was introduced.
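
For readers curious about what the difference-in-differences part of that looks like in practice, here is a minimal sketch in Python. This is not the authors' code or data - the file name, variable names, and the state-by-month panel are all hypothetical - but it shows the basic two-way fixed effects regression that this kind of design typically boils down to:

```python
# A minimal difference-in-differences sketch (this is not the authors' code or data).
# Assume a hypothetical state-by-month panel in 'state_month_fatalities.csv' with columns:
#   fatalities (monthly count), state, month, and ban (1 if a handheld ban is in force that month).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_month_fatalities.csv")  # hypothetical file name

# Two-way fixed effects regression: state fixed effects absorb time-invariant differences
# between states, month fixed effects absorb shocks common to all states in a given month,
# and the coefficient on 'ban' is the difference-in-differences estimate of the policy effect.
model = smf.ols("fatalities ~ ban + C(state) + C(month)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["state"]})  # cluster standard errors by state

print(result.params["ban"], result.bse["ban"])
```

The coefficient on the ban indicator is the difference-in-differences estimate: the change in fatalities in ban states, relative to the change in non-ban states over the same months.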

Using the regression discontinuity design, Wright and Dorilas find that:

...a handheld ban reduced daily traffic fatalities by 0.63 individuals or 0.012 fatalities per hundred thousand population. These reductions account for at least 36% of the mean fatalities over the period.

In the difference-in-differences analysis, they find that:

...the policy reduced motor vehicle casualties by 5.72 individuals on average each month.

The size of the effect is much larger in the regression discontinuity design than in the difference-in-differences analysis (converting the daily estimate to a monthly figure, about 18 vs. 5.7 fewer fatalities per month). Both estimates are statistically significant, providing strong evidence that the mobile phone ban reduces traffic fatalities. However, Wright and Dorilas note that:

...the long-term reduction in total monthly fatalities that we observed in the DID model (5.72) represents approximately one-third of the short-term policy impact... As such, the result indicates that handheld bans are still effective at curbing traffic fatalities over a longer time horizon, although the effect is significantly smaller. One potential explanation for this observed fade-out is that drivers are less inclined to comply with the policy over time...

It would be interesting to explore why compliance might fall over time, if the penalties (and hence the costs) of violating the ban remain the same. Perhaps a policy of gradually increasing penalties over time would maintain compliance with the ban? Increasing the costs of distracted driving would keep the penalty salient in the minds of drivers, and hopefully sustain a lower level of distracted driving (and resulting accidents, injuries, and fatalities) over a longer period of time.

Thursday, 9 March 2023

Open banking as a way to break out of bank customer lock-in

Bank profits have been in the spotlight this week (for example, see this story from Stuff from earlier this week). Banks are wildly more profitable in New Zealand than in comparable countries, and have been for many years (see here and here). If banks are more profitable here than they are elsewhere, that must be making bank customers worse off than those same customers would be in other countries.

The whole discussion about bank profitability has me a little confused. What exactly is Kiwibank doing? If the foreign-owned banks are milking their customers for excess profits (as many people claim), then why isn't Kiwibank simply undercutting their mortgage rates and fees, and paying higher deposit rates, and taking their customers away? Looking at mortgage rates as of today, Kiwibank and ANZ have essentially the same rates. I'm sure that ANZ isn't cross-subsidising its New Zealand mortgage rates from elsewhere, because if they were, their profits wouldn't be so high. If we want an inquiry into banking, we should be looking at how Kiwibank is failing to create more competition in retail banking.

Anyway, coming back to the discussions on bank profitability, one of the things that has arisen in these discussions is the idea of 'open banking'. I've seen various definitions of open banking in recent years (and there is a good explainer on The Conversation). However, one aspect of open banking that some commentators have focused on is bank account portability - the ability for bank customers to shift from one bank to another, without having to change their bank account number (in the same way that customers can change mobile phone providers, without changing their phone number).

Bank account portability is probably not a silver bullet for high bank profits, but it might help to explain at least some of the high bank profitability in New Zealand. To see why, let's consider what happens when there is no bank account portability (as is the case right now).

When there is no bank account portability, it becomes costly for bank customers to switch banks. The cost is not monetary, though. It is the time and inconvenience of switching all of the automatic payments, direct debits, and so on over to a new bank account number. Economists refer to those costs as switching costs. When switching costs are high, customers become locked into a longer-term relationship with the seller (in this case, their bank). Customer lock-in is a very profitable situation for a seller, because it means that they can increase prices without their customers leaving. In the case of banks, a lack of bank account portability locks customers into their current bank, and means that banks can raise mortgage rates (and reduce deposit rates) without losing their customers.

If bank account portability were introduced, then bank customers would no longer be locked into the relationship with their current bank (or, at least, not to the same extent - there is still some time cost associated with changing banks, even if you can take your existing account numbers with you). Banks would then have to compete for new customers, rather than relying on milking their current locked-in customer base for profits.

If we want to improve the situation for bank customers in New Zealand (and reduce bank profits as a consequence), then open banking (and bank account portability specifically) is likely to be part of the solution. It works for mobile phones. It should work for bank accounts as well.

Tuesday, 7 March 2023

Increased childcare subsidies will almost certainly raise the price of childcare

The National Party has announced a new childcare policy, called Family Boost, which would be implemented if they are elected later this year. As Stuff reported earlier this week:

If the public service stops hiring so many consultants, National Party leader Christopher Luxon says it could afford greater childcare subsidies.

During his State of the Nation speech on Sunday, Luxon promised he would order the public service to cut $400 million from its consultancy bill. That money, he said, would fund a new childcare policy, giving a 25% rebate to most families’ childcare bills.

The childcare rebates were expected to cost about $250m per year and would be available per household – not per child...

National’s new childcare policy, which it calls Family Boost, would give a rebate of 25% on childcare costs up to an annual limit of $3900 for families earning under $140,000. This would mean up to $75 per week to offset childcare costs.

So far, so good. But then:

 Luxon dismissed concern that giving tax rebates for early childhood fees would lead to an increase in prices, given the majority of children attend for-profit childcare centres. He said the “really competitive” ECE market would mean prices wouldn’t go up.

”I think early childhood education providers will know that they try and pump up fees, they will lose families,” deputy leader Nicola Willis said.

Both Luxon's statement that a competitive market won't raise prices, and Willis' statement that increasing childcare fees following the subsidy will lead childcare centres to lose families, are very likely wrong. To see why, let's consider a model of the market for childcare services, as shown in the diagram below. For simplicity, let's start with no subsidy in the market. [*] Without a subsidy, the market is in equilibrium, with a price of P0, and Q0 hours of childcare are provided. The subsidy, paid to the families (the buyers in this market), is represented by a new curve D+subsidy, which sits above the demand curve. It acts like an increase in demand, and as a result the price that childcare providers receive for childcare services increases to PP. The families pay that price, then receive the rebate back from the government, so in effect they pay the lower price PC. The difference in price between PP and PC is the per-hour amount of the subsidy. [**] So, our model immediately suggests that Luxon's comment about the competitive market not raising prices is incorrect. It is true that the price doesn't go up by the whole amount of the subsidy - the difference between the original price P0 and the new higher price PP is less than the per-hour subsidy (PP - PC).

Next, the number of hours of childcare services provided increases from Q0 to Q1 (families want their children in childcare for more hours because of the lower effective price they pay, and childcare providers want to provide more hours of childcare because of the higher price they receive). So, even though childcare services have increased in price, the quantity of childcare services demanded has increased. Our model therefore also suggests that Willis' comment about childcare centres losing families is incorrect.

Of course, the model is not the real world. However, what would it take for a subsidy not to increase prices? If the supply curve were horizontal (meaning that supply is perfectly elastic), then prices would not increase. Perfectly elastic supply implies that there is a large reserve army of childcare providers at the current market price, just waiting for families to call on them for childcare services (in fact, it means that there is unlimited supply available at the market price). Any parent who has tried to find a childcare centre for their child at short notice will know that is not the case (see here, for example). And that is just the tip of the iceberg for problems in this sector, as this Stuff article by Michelle Duff documents.
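
To see how much the elasticity of supply matters, here is a quick sketch in Python with hypothetical linear demand and supply curves for childcare (the numbers are made up for illustration, and are not National's actual policy parameters). It compares a per-hour rebate paid to families when supply slopes upwards with the same rebate when supply is perfectly elastic:

```python
# Hypothetical linear model of the childcare market (all numbers invented for illustration).
# Demand:  P = 60 - Q              (dollars per hour)
# Supply:  P = 10 + b * Q          (b = 0 means supply is perfectly elastic / horizontal)
# A per-hour rebate s, paid to families, acts like shifting the demand curve up by s.

DEMAND_INTERCEPT = 60.0
SUPPLY_INTERCEPT = 10.0

def childcare_market(supply_slope, rebate):
    """Prices and hours before and after a rebate paid to families (the buyers)."""
    hours_before = (DEMAND_INTERCEPT - SUPPLY_INTERCEPT) / (1.0 + supply_slope)
    price_before = SUPPLY_INTERCEPT + supply_slope * hours_before
    hours_after = (DEMAND_INTERCEPT + rebate - SUPPLY_INTERCEPT) / (1.0 + supply_slope)
    provider_price = SUPPLY_INTERCEPT + supply_slope * hours_after   # P_P: what providers receive
    family_price = provider_price - rebate                           # P_C: what families effectively pay
    return price_before, provider_price, family_price, hours_before, hours_after

for label, slope in [("Upward-sloping supply", 1.0), ("Perfectly elastic supply", 0.0)]:
    p0, pp, pc, q0, q1 = childcare_market(slope, rebate=5.0)
    print(f"{label}: price {p0:.2f} -> providers receive {pp:.2f}, "
          f"families effectively pay {pc:.2f}, hours {q0:.1f} -> {q1:.1f}")
```

With these made-up numbers, half of a $5 per-hour rebate shows up as a higher price received by providers when supply slopes upwards, and none of it does when supply is perfectly elastic. Luxon's claim only holds in the second case, and as noted above, childcare supply in New Zealand is clearly not perfectly elastic.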

Subsidies increase prices. That happens when the market is competitive, and when the market is not competitive. I'm sorry Mr Luxon - you can't appeal to the competitiveness of a market to argue that subsidies won't increase prices.

*****

[*] This assumption essentially doesn't change any of the main conclusions. It just makes the market a bit easier to draw.

[**] The actual amount of the subsidy varies between families, but again that doesn't change any of the main conclusions.