Thursday 29 February 2024

The effect of Netflix on illegal streaming

Consider two goods (Good A and Good B) that are substitutes for each other. Consumers consume one of the goods, or the other. When one of those goods becomes more expensive, some (but not all) consumers will switch to the other good. When one of those goods becomes less available (or unavailable), many (but not all) consumers will switch to the other good.

So, what happens when a movie is no longer available on Netflix? Some consumers will simply watch other movies instead. Other consumers will try to find the movie that they really wanted to watch on some other service. Those other services include illegal streaming. How much will illegal streaming increase when a movie is removed from Netflix? That's essentially the question that this 2023 article by Sarah Frick (UC Berkeley), Deborah Fletcher (Miami University), and Austin Smith (Bates College), published in the Journal of Economic Behavior and Organization (sorry, I don't see an ungated version online), tries to answer.

Frick et al. focus on a particular natural experiment:

Epix is an entertainment cable network that features movies and TV shows distributed by Paramount, Lionsgate, and Metro-Goldwyn-Mayer, and its movie content varies from large blockbusters such as The Wolf of Wall Street to smaller independent films... Epix and Netflix upheld an exclusive licensing agreement from 2010 until September 2015 when Netflix announced its decision not to renew the licensing contract with Epix, citing the company’s plan to shift towards hosting its own original content... In response to this, Epix entered into a multi-year agreement with Hulu... Thus, all titles owned by Epix were removed from Netflix on October 1st, 2015 and appeared on Hulu for streaming that same day.

The shift of movies from Netflix to Hulu represents a reduction in availability because:

At the time of the switch, Netflix had roughly 4 times as many subscribers as Hulu, and between July 2015 and December 2016 (the time frame for this study) Google trend searches for “Netflix” were, on average, 5.6 times higher than searches for “Hulu”...

Frick et al. then look at the effect of this change on Google searches of free streaming of each movie that was removed from Netflix as a result of the change. Specifically:

We measure searches in the United States for “watch movie title free online” per month for each movie using Google Ads Keyword Search Planner. This tool provides the absolute number of searches rounded to the nearest tens for values less than 1000 and rounded to the nearest hundreds for values greater than 1000.

...we collect piracy search rates from July 2015 to December 2016 - three months prior to the switch and 15 months after the switch.

They use a difference-in-differences approach, which compares the difference in searches between movies that were, and were not, removed from Netflix, between the time before removal and the time after removal. The control group contains 501 movies that were never removed from Netflix over the period considered (and were also not available on Hulu), while the treatment group includes 141 movies that moved from Netflix only to Hulu only. In this analysis, Frick et al. find that:

...moving Epix movies from Netflix to Hulu results in a 20-22% increase in intent to pirate those movies compared to movies that remained on Netflix. There are distinct heterogeneous effects by movie release year; older movies experience almost three times the increase in piracy following their removal from Netflix compared to newer movies.
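To see how the difference-in-differences comparison works mechanically, here is a minimal sketch in Python. The numbers are made up for illustration (they are not the paper's data); the estimate is just the change in log searches for the removed movies, minus the change for the movies that stayed on Netflix.

```python
import numpy as np

# Hypothetical average monthly "watch free online" searches (not the paper's data)
searches = {
    ("removed", "pre"): 1000,   # Epix movies, before they left Netflix
    ("removed", "post"): 1300,  # Epix movies, after they moved to Hulu
    ("stayed", "pre"): 900,     # movies that stayed on Netflix, before
    ("stayed", "post"): 970,    # movies that stayed on Netflix, after
}

# Difference-in-differences in logs: the change for removed movies,
# minus the change for movies that stayed on Netflix
did = (np.log(searches[("removed", "post")]) - np.log(searches[("removed", "pre")])) \
    - (np.log(searches[("stayed", "post")]) - np.log(searches[("stayed", "pre")]))

print(f"DiD estimate: {did:.3f} log points (~{100 * (np.exp(did) - 1):.1f}% increase)")
```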

Frick et al. go a bit further than that, estimating the cost of illegal streaming:

We calculate that the annual piracy streaming in 2015 for a popular movie in our sample, Hunger Games: Catching Fire was approximately 100 million streams... Assuming each view is linked with at least one search, our 20% result yields an expected 20 million additional piracy searches after this movie was removed from Netflix. Applying estimates from Blackburn, Eisenach, and Harrison (2019) that each illegal viewing displaces 0.14-0.34 paid viewing, the implied impact of removing a movie as popular as Hunger Games: Catching Fire from Netflix is a reduction of 2.8-6.8 million paid viewings annually. To arrive at a dollar cost of these lost viewings for content producers, we multiply these lost views by the $0.41 average revenue per viewing on a streaming platform from Blackburn, Eisenach, and Harrison (ibid.), which yields an average annual lost revenue per movie of $1.15 -$2.79 million.
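That back-of-the-envelope calculation is easy to reproduce. Here is a quick sketch using the figures quoted above (the last line anticipates the 141-movie extrapolation in the next paragraph, and treats every removed movie as if it were as popular as Catching Fire, which it obviously wasn't):

```python
# Reproducing the per-movie calculation quoted above (figures from the paper's example)
annual_streams = 100_000_000        # piracy streams for Catching Fire in 2015
piracy_increase = 0.20              # 20% increase in piracy searches after removal
displacement = (0.14, 0.34)         # paid viewings displaced per illegal viewing
revenue_per_view = 0.41             # average revenue per paid streaming view (USD)

extra_searches = annual_streams * piracy_increase                  # 20 million
lost_views = [extra_searches * d for d in displacement]            # 2.8m to 6.8m
lost_revenue = [views * revenue_per_view for views in lost_views]  # $1.15m to $2.79m

print(f"Lost paid viewings: {lost_views[0] / 1e6:.1f}m to {lost_views[1] / 1e6:.1f}m")
print(f"Lost revenue per movie: ${lost_revenue[0] / 1e6:.2f}m to ${lost_revenue[1] / 1e6:.2f}m")

# Scaling up to all 141 removed movies clearly overstates things, since it treats
# every removed title as if it were as popular as Catching Fire
print(f"141-movie extrapolation: ${141 * lost_revenue[0] / 1e6:.0f}m to ${141 * lost_revenue[1] / 1e6:.0f}m")
```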

Given that some 141 movies were moved by Epix from Netflix to Hulu, that may have cost Netflix hundreds of millions of dollars in lost revenue. Of course, there are lots of assumptions embedded in that estimate, not least of which is that subscribers to Netflix don't pay per movie viewing, so the marginal revenue to Netflix from one additional viewing is actually zero. The real question is whether losing access to the Epix movies caused Netflix to lose subscribers, since that is what would really affect Netflix's revenue.

So, putting the lost revenue aside since the estimate isn't particularly robust, these results really tell us that when movies are no longer available on Netflix, there is more illegal streaming of those movies. That also implies that having a movie available on Netflix decreases illegal streaming. Which is pretty much exactly how we should expect things to work for substitute goods. 

Wednesday 28 February 2024

Challenges in establishing causality in the relationship between alcohol outlets and crime

In my ECONS101 class this week, among other things we discussed the 'faulty causation fallacy'. That occurs when you observe two variables (A and B) that appear to be moving together (either in the same direction or opposite directions), and you assume that a change in Variable A is causing a change in Variable B. You might even be able to tell a really good story about why it is that changes in Variable A cause changes in Variable B. But that doesn't mean that your observation and story about causality is true.

What we observe when we see two variables moving together is correlation. When the two variables move in the same direction, that is positive correlation. When the two variables move in opposite directions, that is negative correlation. [*] Sometimes, when we observe correlation, there really is a causal relationship between the variables. When I push down on the accelerator in my car, my car goes faster. Pushing the accelerator (Variable A) causes a change in the car's speed (Variable B).

However, not all correlations that we observe arise because a change in Variable A causes a change in Variable B. Sometimes, it is the other way around - a change in Variable B causes a change in Variable A. We call this reverse causation. Sometimes, there is some third variable (Variable C), and it is a change in that variable that causes both a change in Variable A and a change in Variable B. We refer to Variable C as a confounder (or a confounding variable). Alternatively, we can say that Variable C is a common cause for both Variable A and Variable B. Finally, the correlation that we observe might be entirely by random chance. In that case, we would say that we have observed a spurious correlation (as in the excellent Tyler Vigen website spurious correlations, which offers up a new classic in the form of correlation #2,204: The number of global pirate attacks is highly correlated with the number of downloads of the Firefox browser - perhaps pirates use Firefox?).
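A tiny simulation makes the confounding story concrete. In the sketch below (purely illustrative), Variable A and Variable B have no causal effect on each other at all, but because both are driven by the common cause C, they end up clearly correlated.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

c = rng.normal(size=n)              # Variable C, the confounder (common cause)
a = 0.8 * c + rng.normal(size=n)    # Variable A depends on C, not on B
b = 0.8 * c + rng.normal(size=n)    # Variable B depends on C, not on A

# A and B are clearly correlated (around 0.4 here), even though neither causes the other
print(f"Correlation between A and B: {np.corrcoef(a, b)[0, 1]:.2f}")
```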

Anyway, I want to illustrate these with an example related to my own research, on the relationship between alcohol outlets and crime. I've published articles on this here and here, with another report here. That research establishes a generally positive correlation between the number (or density) of alcohol outlets and crime. The strength of the correlation varies depending on context - it is different for different locations, and different for different types of alcohol outlets. However, the correlation suggests that where there are more alcohol outlets, there is more crime.

Is this relationship causal though? My earlier research doesn't establish this. However, we can tell a good story, using what is termed availability theory. Availability theory suggests that alcohol consumption depends on the 'full cost' of alcohol - which is made up of the price of alcohol, plus the travel cost of getting to and from the alcohol outlet (such as the cost of driving to the outlet and back home). When there are more alcohol outlets in an area, they may compete more vigorously on price, meaning that the first part of the full cost of alcohol is lower. And, when there are more alcohol outlets in an area, consumers don't have to travel as far to get the alcohol, lowering the second part of the full cost of alcohol as well. So, when there are more alcohol outlets in an area, the full cost of alcohol is lower. And when the full cost of alcohol is lower, people will drink more. And when people drink more, then the amount of crime increases (either because there are more alcohol-impaired victims, or more alcohol-affected offenders). So, this observed relationship could be causal.

On the other hand, there could be reverse causation here. In areas where there is more crime, commercial property rents are lower, and there may be more vacant storefronts. Retailers (including alcohol retailers) looking to set up a store are looking for a vacant storefront, and they will tend to be attracted to low rents. So, perhaps an increase in crime would cause an increase in alcohol outlets, as the crime forces other businesses out of an area?

On the third hand, there could be confounding here. As I noted here, social disorganisation theory is the idea that differences (or changes) in family structures and community stability are a key contributor to differences (or changes) in crime rates between different places (or times). Areas that are more socially disorganised have more crime. Areas that are more socially disorganised are also less able to act collectively to prevent alcohol outlets from opening (or remaining open) in their area. So, social disorganisation might be a confounding variable in the relationship between alcohol outlets and crime, because social disorganisation causes more outlets and more crime.

Finally, the observed relationship could just be a spurious correlation, but spurious correlations tend to arise when you have two variables that are trending over time. In this case, the number of outlets doesn't have an obvious time trend (in some areas it is increasing, and in others it is decreasing), and similarly for crime. So, it seems like there is something more than random chance that leads alcohol outlets and crime to be correlated.

So, we can tell a good story for a causal relationship. However, we can also tell a good story for reverse causation, and a good story for confounding. It requires some careful statistical analysis to disentangle these potential explanations, and that is something that researchers (including myself) will continue to work on. I had an article published in the journal Addiction last year (open access, and I blogged about it here) that shows that at least one potential confounding variable, retail density, doesn't explain the relationship. I also presented at the NZAE Conference a couple of years ago on some further analysis which tentatively suggests that the causal relationship is statistically insignificant (although that research is somewhat hampered by the low quality of alcohol outlets data in New Zealand). There will be more to come on this topic in the future.

*****

[*] This is just one way of conceptualising a correlation between Variable A and Variable B (and I think it is the easiest way). There are other ways we can conceptualise a correlation. For example, if we ignore changes over time, we can observe correlations by looking at variables across different individuals or different areas. In that case, if individuals (or areas) with higher values of Variable A also have higher values of Variable B, that is a positive correlation. And if individuals (or areas) with higher values of Variable A have lower values of Variable B, that is a negative correlation.

Monday 26 February 2024

Judges are more lenient on defendants' birthdays

Are you nice to people on their birthdays? Probably you are. Most people are. It's a social norm. It turns out that this social norm also extends to judges' decisions about sentencing defendants, as shown in this recent article by Daniel Chen and Arnaud Philippe (both University of Toulouse Capitole), published in the Journal of Economic Behavior and Organization (ungated version here).

Chen and Philippe first look at judicial decisions in France, using data on 4.2 million sentencing decisions over the period from 2003 to 2014. Importantly, in this context:

Judges in correctional courts (for misdemeanor) have no control over their schedule. For each case, when the investigations are finished, the prosecutor in charge chooses the type of procedure (accelerated/normal) and, based on this, picks the next session of the relevant type. The weekly schedule of the sessions is fixed and decided at the beginning of the year by the head of the court with little discretion to select trial dates on defendant birthdays.

So, whether a defendant is sentenced on their birthday or not is effectively random (and Chen and Philippe establish this with some statistical checks in the paper), and which judge the case is assigned to is unrelated to whether it is a defendant's birthday or not. Are judges more lenient on defendants' birthdays? The results are neatly summarised in Figure 2 from the paper:

Notice that the average sentence is substantially lower on a defendant's birthday (the red column) compared to days on either side of their birthday (the blue columns). Statistically:

Results are consistent and indicate that sentences are reduced by roughly four days... On average, sentences are up to 6.2% shorter on defendant birthdays.

So, judges in France are more lenient on defendants' birthdays. Chen and Philippe then turn their attention to the US, where judges have a bit less discretion. As they explain:

Cases are randomly assigned to a single judge. The United States Sentencing Commission (USSC) produces sentencing guidelines for federal judges. The judges are given a guideline range for the criminal sentence that is based on the severity of the crime and the defendant’s criminal history. Due to these guidelines, the largest factor determining sentence range is the criminal charges brought to the judge by the prosecutor. Therefore, we expect the effect of a birthday to be more limited than in France, where judges have more discretion.

Because of the sentencing guidelines, judges have little discretion over the length of the sentence (measured in months), but can vary the number of additional days in the sentence (so, for example, a sentence of 15 months and six days is more lenient than a sentence of 15 months and 20 days). Chen and Philippe therefore focus on differences in the day component of the sentence for US defendants. Their US data is based on over 600,000 sentencing decisions between 1991 and 2003. And their results look very similar to those for France, and are best summarised in Figure 4 from the paper:

Notice again that the red column is much smaller than the blue columns. Statistically:

...the number of days in a federal sentence declines on defendant birthdays, but not on the days before or after birthdays... We find that judges assign 0.13 fewer days if the decision occurs on the defendant’s birthday, all else equal. The effect is about one-third of the average number of days (0.36). We also see no impact on the days before or after the birthday.

So, judges in the US are more lenient on defendants' birthdays. Interestingly, with the US data Chen and Philippe dig a little bit deeper into judicial thinking, since within that data they know which sentences were given by which judges. They also have a dataset of the judges' written decisions. Using those data:

We measure judges’ use of deterrence language and consider it as a proxy for “economic reasoning”... We find that judges below-median in economic thinking are affected by birthdays, decreasing the day component by 0.17, while those above-median in economic thinking are essentially unaffected by birthdays.

Now, if we interpret Chen and Philippe's measure of 'economic reasoning' as a measure of whether judges make decisions in a rational way (in the economic sense), then it appears that judges who are more rational are less affected by the social norm of favouritism on birthdays. That is what we might expect from rational decision-making, which should be based on the costs and benefits of the alternatives (and this applies in sentencing, just as it does in other decision contexts).

Now, Chen and Philippe bury a lot of the detail on this analysis in Appendix C of the paper, but to some extent this is the most interesting of their results. In fact, it would be really interesting to explore this further. Judges' decisions have previously been shown to be affected by whether the decision is made before or after lunch, or by weather conditions. It would be interesting to see whether judges who are more rational are less affected by those irrelevant factors as well. There is definitely an opportunity for future research in this area.

Saturday 24 February 2024

The effect of inequality on crime

A rational choice (economic) model of crime would suggest that higher inequality leads to more property crime. This is because, as the disparity between the rich and poor increases, the poor have more incentive to commit property crime, because there is more to gain from such crime, and the opportunity cost of committing crime is lower for the poor than for the rich. Now, this model is easily criticised as unrealistic, as even the relatively wealthy may commit crimes that have an economic motive (Bernie Madoff being the obvious example). The model also doesn't do a good job of explaining violent and other crimes that do not have an obvious economic motive.

Criminologists have a different view of the relationship between inequality and crime. One criminological theory that may be used to explain the relationship is social disorganisation theory. This theory suggests that higher inequality reduces social cohesion, which in turn increases crime - not just property crime, but crime more generally.

Given how easy it is to criticise the economic model of crime, I was interested to read this new article by Matteo Pazzona (Brunel University London), published in the journal World Development (open access). Pazzona conducts a meta-analysis of studies of the relationship between inequality and crime, limiting the analysis to empirical studies in the economics literature (more on that point later). He identified 43 studies, with 1341 estimates of the relationship between inequality and crime (it is not unusual for a study to report multiple estimates, with different covariates and spread across main results and robustness checks). Meta-analysis provides a method of combining those results to estimate an overall effect. In this case, Pazzona finds that:

Firstly, the true values of the partial correlation coefficients – net of publication bias – are statistically but not economically significant. They are in the range 0.007–0.123, which represents non-existent or small effects, according to the guidelines provided by Doucouliagos (2011). Secondly, I also find some limited evidence of positive publication bias (preference for positive results), but its presence is limited.
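For readers who haven't seen one before, the core of a meta-analysis is a precision-weighted average of the individual study estimates. Here is a minimal inverse-variance (fixed-effect) sketch with made-up partial correlations; Pazzona's actual estimation is more sophisticated than this, including corrections for publication bias.

```python
import numpy as np

# Made-up partial correlation coefficients and standard errors from a handful of studies
r = np.array([0.05, 0.12, 0.02, 0.20, -0.01])
se = np.array([0.03, 0.05, 0.02, 0.08, 0.04])

w = 1 / se**2                          # inverse-variance weights: precise studies count for more
pooled = np.sum(w * r) / np.sum(w)     # fixed-effect pooled estimate
pooled_se = np.sqrt(1 / np.sum(w))

print(f"Pooled partial correlation: {pooled:.3f} (SE {pooled_se:.3f})")
```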

So, Pazzona concludes at that point in the paper that there is basically no effect of inequality on crime. However, the Doucouliagos paper that he cites says that effects between 0.070 and 0.173 represent a 'small effect', and three of the six point estimates in Pazzona's preferred model fit within this range. So, perhaps there is a small effect of inequality on crime. Which, to be fair to Pazzona, is what he concludes by the end of the paper:

It is safe to say that, if inequality affects crime, its effect is – at best – small.

However, this is clearly not the last word on this topic. Pazzona limits the analysis to include only studies published in the economics literature. That leaves out many studies within the criminological or sociological literature (and possibly other literatures as well). As he notes, three past meta-analyses conducted by criminologists:

...found correlation coefficients higher than the ones found in this research and no evidence of publication bias.

So, that suggests that leaving the criminological literature out of this meta-analysis probably biases the overall effect downwards. Pazzona gives only a very weak rationale for ignoring the studies outside of economics:

By focusing exclusively on economics, I can also limit the large differences in theoretical and methodological approaches with other sciences.

Yes, but at a cost of probably biasing the estimates. We could try to argue that economics has a larger publication bias problem than many other fields (see here), and so the small effect of inequality on crime from the economics literature overall might even over-estimate the true effect. However, Pazzona has very carefully controlled for publication bias in the meta-analysis, so that argument doesn't take us very far.

Coming back to the decision to limit the analysis to economics studies alone, this choice is especially inexplicable, given that in subsequent analysis in the paper, Pazzona controls for a variety of features of the studies. That analysis could have dealt with the range of methodological approaches that were applied, and actually been helpful in understanding the differences between the findings in the economics literature and those in criminology. In that heterogeneity analysis, Pazzona found that, when looking at the type of crime that was analysed across the 43 studies:

...the coefficient for Property crime is negative and relatively small... The lack of a positive and statistically significant impact on property crime categories implies that inequality does not primarily influence economically motivated criminal behaviour as predicted by the rational choice model.

Score another one against the economists, since the economic model of crime suggests that the effects of inequality on crime should be largest for property crime. How the variables are measured matters, with studies that use crime victimisation survey data reporting larger estimates than those using police data, and using a measure of inequality that is more sensitive to income differences at the bottom of the distribution also increases the estimated relationship with crime. On the latter point, Pazzona notes that:

This provides some evidence that crime incentives are the highest when criminal payoff increases, rather than when the opportunity cost decreases.

I guess, if you believe the economic model of crime, which the other results might give us reason not to. The other variables that are included in a model matter too. Including unemployment and a measure of police deterrence increases the observed effect, while including measures of income or poverty decreases the observed effect. Cross-sectional studies also seem to inflate the observed effect. These results are important, as they show the consequences of methodological choices in the analysis (and, as per my point above, could have helped us understand the differences with the criminology literature).

Overall, this paper is a good case study of how to conduct and report a meta-analysis (and for that reason I have shared it with one of my PhD students who is doing a meta-analysis in quite a different research area). However, the choice to exclude non-economics literature from the analysis leaves the key research question of the relationship between inequality and crime incompletely answered. Clearly, there is more work to do in this area.

Friday 23 February 2024

This week in research #11

Here's what caught my eye in research over the past week (which was relatively quiet, as teaching prep has consumed much of my week):

  • Abeliansky, Beulmann, and Prettner (open access) look at German Socioeconomic Panel data and find that higher robot intensity in a manufacturing sector is associated with deteriorating mental health among workers in that sector, and that the effect is mainly driven by worries about job security and a lower sense of achievement on the job
  • Blanchard, Bown, and Chor (with ungated earlier version here) find that the US-China trade war affected political outcomes in the US, with Republican House candidates losing support in counties more exposed to tariff retaliation, but receiving no appreciable gains in counties that received more direct US tariff protection
  • Neumark (open access) reviews the effects of minimum wages on health outcomes and health-related behaviours, and finds somewhat mixed effects, but in their view the evidence is strong enough to conclude that policy claims that minimum wages improve health are unwarranted

Thursday 22 February 2024

Book review: Whatever It Is, I'm Against It

Working in academia can be a really frustrating experience. Don't get me wrong. It's a great job. But there are some seriously frustrating aspects to it. One of those is the academic bureaucracy and bullshit work. Another is that getting meaningful change is a little like turning a supertanker. You need to have an excessive amount of patience, because nothing changes quickly.

That is essentially the theme underlying Brian Rosenberg's book Whatever It Is, I'm Against It, which I just finished reading. Rosenberg is a former president of Macalester College in the US, a position he held for 17 years, so he is well placed to talk about intransigence and inertia in academia. These are serious problems, and they run deep throughout the industry. As Rosenberg notes early in the book:

The resistance to anything like serious change is profound. By "change" I don't mean the addition of yet another program or the alteration of a graduation requirement, but something that is truly transformational and affects the way we do our work on a deep level.

The book starts by making the case for change. The industry faces both demographic and pedagogical challenges, and the pandemic and technological change have simply doubled down on those challenges. Then, Rosenberg turns to outlining a number of factors that contribute to the resistance to change. None of these will surprise any reader with a passing familiarity with higher education. The culprits are a mixture of institutional complacency, resistance to change from presidents, faculty with loyalties split between their institution and their discipline, shared governance (with faculty having a large amount of decision-making power), strategic planning processes, and tenure. Not all of these are features of the higher education landscape in New Zealand (shared governance is limited, and tenure is non-existent), and yet the problems described in the book apply as much in New Zealand as they do in the US (which is where the vast majority of the examples that Rosenberg uses are drawn from).

Rosenberg resists the urge to try and rank the various contributing factors in any order of importance, which is just as well. It seems likely that the causes are over-determined, even with that small set of factors. Nevertheless, a larger fraction of the book is devoted to discussing tenure than is probably warranted. That chapter came across as more of a general complaint about tenure (not surprising, coming from a former university president) than a tightly argued explanation for the contribution of tenure to resistance to change in higher education. The remainder of the book was much better in that respect.

I especially liked the section on strategic planning, which highlighted that every university or college wants to argue that they are distinctive, and yet by arguing their distinctiveness they really demonstrate that they are so much the same. And this bit made me cringe:

Every institution in search of enrollment and revenue, it seems, is looking to move somehow into the world of online education, but in doing so they are stepping into the world of bigger, better-known, more well-funded providers or contracting with for-profit online program managers, who take up to 50 percent of online revenue and have a less than admirable history...

So many universities seem to think that they can be distinctive by moving more education into online modes. And yet, commoditising education in that way will simply lead to greater competition, lowering prices until only the lowest cost operator is left. Most universities will not be the last institution standing, and none of them seem to understand it. And that is in spite of the increasing number of university council or governing board members with business experience. Anyway, I digress.

I enjoyed this book, but not for the reasons that I expected before I read it. I was anticipating more micro-level stories of resistance at the level of individual faculty and within departments (which I have seen first-hand). Instead, the resistance that Rosenberg was most concerned with was resistance to change at the institutional level. That reflects Rosenberg's background and extensive industry experience, and for that reason alone this is an informative book to read, if you are interested in the challenges and impediments to change in higher education generally. Rosenberg has clearly had a lot of frustrating experiences as an academic administrator. I hope that he found the writing of this book cathartic. It certainly seems like it was.

Wednesday 21 February 2024

Korean doctors are trying to protect their market power

In my ECONS101 and ECONS102 classes, we define market power as the ability of the seller (or sometimes the buyer) to have an influence over market prices. How do sellers (or buyers) get market power? Essentially, a seller (or buyer) has market power if they don’t face a lot of competition from other sellers (or buyers). The fewer competitors a seller (or buyer) has, the more market power they will have, and that generally means that they can set a higher price (if they are a seller, or a lower price if they are a buyer).

With that in mind, this report from Reuters should come as no surprise:

South Korea's prime minister pleaded on Sunday with doctors not to take people's lives hostage, a day before scores of trainee doctors are expected to quit to protest a plan to increase medical school admissions and the number of physicians.

Trainee doctors at the country's five biggest hospitals, all in Seoul, have said they would tender their resignation on Monday, raising concerns about the impact on medical service as the system relies heavily on them for emergency and acute care.

Let's be clear. Whatever 'concerns' the trainee doctors are raising are secondary to their desire to protect themselves from competition. If you've spent several years in medical school, possibly running up a large student loan debt, the last thing you want is lots of competitors arriving in the market in a few years' time, driving down the price of your services. This type of coordinated behaviour, trying to protect a position of market power, is a form of rent seeking. Economists call it rent seeking because another term for economic profit for sellers is the sellers' economic rent.

What's interesting to me is that this is essentially the same behaviour that we see in some specialised labour markets in New Zealand. When the Nursing Council imposes a "long and costly bridging course" on foreign-trained nurses, that's rent seeking. When foreign-trained teachers have to jump through multiple hoops, some of which are imposed by the Teaching Council, that's rent seeking.

Some of the most effective rent seekers are engaged in occupational licensing. The sad thing is that these are exactly the occupations (like doctors, nurses, and teachers) where New Zealand (along with many other countries) is facing labour shortages. Those labour shortages are being perpetuated by the occupational in-group, at the expense of everyone else.

I hope that the Korean government doesn't give in. It shouldn't. Governments should resist obvious rent seeking behaviour from firms. They should also resist obvious rent seeking behaviour from occupational groups.

[HT: FirstFT Asia morning newsletter]

Tuesday 20 February 2024

The global price of nickel collapses, and Australian miners are hurting

This article in The Conversation yesterday by Mohan Yellishetty (Monash University) discussed the state of the global market for nickel:

Nickel is a metal crucial for the production of stainless steel, alloys, electroplating and the batteries used in electric vehicles.

The global price has dived from a high of US$50,000 in 2022 to just US$16,400 per tonne on Monday in response to a huge increase in supply from Indonesia, much of it from Chinese-owned and operated mines.

To see how this works, consider the diagram below, which represents the market for nickel. Before the increase in supply from Indonesian nickel mines, the supply was S0, and demand was D0. The equilibrium price of nickel was P0 (US$50,000), and the equilibrium quantity of nickel traded was Q0. The increase in the global supply of nickel to S1 decreased the equilibrium price to P1 (US$16,400), and increased the equilibrium quantity of nickel traded to Q1.
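For those who like to see numbers behind the diagram, here is a minimal sketch with hypothetical linear demand and supply curves (illustrative only, not calibrated to the actual nickel market). Shifting the supply curve to the right lowers the equilibrium price and raises the equilibrium quantity.

```python
# Hypothetical linear curves (illustrative only, not calibrated to the nickel market):
# demand: P = 60,000 - 10Q; supply: P = intercept + 10Q (Q in thousands of tonnes)

def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    """Solve demand_intercept - demand_slope*Q = supply_intercept + supply_slope*Q."""
    q = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    p = demand_intercept - demand_slope * q
    return p, q

p0, q0 = equilibrium(60_000, 10, 20_000, 10)    # before the increase in supply (S0)
p1, q1 = equilibrium(60_000, 10, -10_000, 10)   # supply shifts right to S1 (intercept falls)

print(f"Before: P0 = ${p0:,.0f} per tonne, Q0 = {q0:,.0f}")  # $40,000
print(f"After:  P1 = ${p1:,.0f} per tonne, Q1 = {q1:,.0f}")  # $25,000, with a higher quantity
```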


Should we be worried about this situation? Yellishetty clearly is:

Australia is a leading producer of critical minerals, supplying all ten of the elements needed for lithium-ion batteries, and has the advantage of better environmental, social, and governance (ESG) standards that make it an attractive destination for investment.

But it lacks the capacity to refine all of its own production, meaning it has to dispose of many of the critical minerals it extracts as byproducts...

Until Australia can find a way to break free of the market stranglehold of our biggest customer, those investments will remain at risk.

Australian nickel producers are now receiving a much lower price for their nickel. Clearly, that makes them worse off. Nickel is less profitable, and: 

On Thursday BHP wrote down the value of its West Australian nickel division Nickel West to zero and said it was considering placing the entire division into a “period of care and maintenance”.

Some Australian producers (like BHP) may shut down operations, albeit temporarily (in mining terminology, "care and maintenance" refers to a temporary closure).

However, the negative impact on Australian miners isn't the end of the story. Nickel consumers are clearly better off, because they are now buying more nickel (the equilibrium quantity has increased), and they are paying a lower price per tonne. Since nickel is an input into the production of a number of products, such as electric vehicle batteries, lower nickel prices lower the production costs of those products. That flows through into lower prices of the final products that include nickel as an input, meaning lower prices for electric vehicles and replacement batteries. [*] So, it's not all bad.

Some people may be concerned that the nickel profits are going to Chinese-owned mining firms. However, that concern would need to be weighed up against the fact that consumers (including Western consumers) will benefit from lower nickel prices. The Australian mines aren't going away completely, unless for some reason they lose the capability necessary to re-start production. If that were to happen, then governments might decide to act, but not right now.

*****

[*] The relevant market diagram for the electric vehicle battery market is exactly the same as the one shown above. Lower costs of production lead to an increase in supply (because the supply curve shows the marginal costs of production, and lower costs shift that curve down and to the right), which decreases the equilibrium price and increases the equilibrium quantity of electric vehicle batteries.

Monday 19 February 2024

'Swiftonomics' and the optimal number of Taylor Swift examples

I was interested to read this recent article on Inside Higher Education, about 'Swiftonomics':

Paul Krugman, a New York Times columnist, Nobel Prize winner and Distinguished Professor of economics at the CUNY Graduate Center, began working on the curriculum for the course last summer. Swift’s massive Eras Tour had just kicked off, creating such a frenzy among fans that it caused Ticketmaster’s website to crash.

Most of the course’s 12 economic principles feature a Swift example, from her impact on supply and demand with ticket prices to the discussion of monopolies, since Ticketmaster was the sole seller of her concert tickets. Krugman said he designed his course to make it relatable to college-age students—even if they are not exactly fans of the pop star.

“There’s always been a problem with principles books, where you have middle-aged authors trying to relate to college students, and it comes across as condescending or fake,” Krugman said. “In this case, it’s a natural connection that matters with a lot of students. It wasn’t ‘This is trendy; let’s put it in [our curriculum].’”

My ECONS101 and ECONS102 classes are filled with real-world examples. Indeed, one of the purposes of this blog from the beginning has been to demonstrate to my students how economics applies to real world situations and problems. Am I missing a trick by not including more Taylor Swift examples in those classes? Now, I'm pretty sure that I could come up with a Taylor Swift example for each of the microeconomics topics in ECONS101, which I start teaching next week. [*] It might be a bit more challenging for some of the macroeconomics topics (although for inflation, perhaps a Beyoncé example would suffice?).

How many Taylor Swift examples should be included? We can actually apply some marginal analysis (from my ECONS102 class) to consider this question. This is illustrated in the diagram below. Marginal benefit (MB) is the additional benefit of one more Taylor Swift example. The marginal benefit of Taylor Swift examples is downward sloping. Not all Taylor Swift examples provide the same benefit for student learning, and students would get bored if I trundled out variations on the same tired examples over and over, even if the source material is interesting. So, each additional Taylor Swift example must provide less additional benefit (lower marginal benefit) than the previous one. Marginal cost (MC) is the additional cost of one more Taylor Swift example. The marginal cost of Taylor Swift examples is upward sloping - the more Taylor Swift examples that are used, the higher the opportunity cost of including one more Taylor Swift example. Some of the existing examples I use are better than others. We can replace the less-good examples with Taylor Swift examples at relatively low opportunity cost. However, as more and more Taylor Swift examples are squeezed in, the better previous examples start to be squeezed out. So, the marginal cost of Taylor Swift examples increases as we include more Taylor Swift examples. The 'optimal quantity' of Taylor Swift examples occurs at the quantity where MB meets MC, at Q* Taylor Swift examples in the diagram.
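As a minimal sketch (with made-up linear MB and MC curves, not any actual measurement of how much students enjoy Taylor Swift examples), the optimal quantity is simply where the two curves cross:

```python
# Illustrative linear marginal benefit and marginal cost of Taylor Swift examples
def mb(q):
    return 10 - 1.0 * q     # marginal benefit falls as more examples are added

def mc(q):
    return 1 + 0.5 * q      # marginal cost rises as more examples are added

# The optimal quantity Q* solves MB(Q) = MC(Q): 10 - Q = 1 + 0.5Q, so Q* = 6
q_star = (10 - 1) / (1.0 + 0.5)
print(f"Optimal number of Taylor Swift examples: Q* = {q_star:.0f}")
print(f"MB(Q*) = {mb(q_star):.1f}, MC(Q*) = {mc(q_star):.1f}")   # equal at the optimum
print(f"At Q = 8: MB = {mb(8):.1f} < MC = {mc(8):.1f}")          # beyond Q*, each example costs more than it adds
```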

Now, consider what happens if more than Q* Taylor Swift examples are included in the paper, such as Q2. For every Taylor Swift example beyond Q*, the extra benefit (MB) of each example is less than the extra cost (MC) of each example, making the students worse off. So, it is clear that it is possible to include too many Taylor Swift examples in a paper.

Now, this doesn't tell us exactly how many Taylor Swift examples is the right number for a paper. But it does tell us that we can go overboard in our enthusiasm for Taylor Swift. Coming back to the Inside Higher Education article, note that for Krugman's course, "most of the course’s 12 economic principles feature a Swift example". That suggests that, for Krugman at least, Q* is fairly low, even in a course titled 'Swiftonomics'. [**]

Do I need more Taylor Swift examples in my papers? Perhaps my students will tell me.

[HT: Marginal Revolution]

*****

[*] In fact, there are a couple of Taylor Swift examples that I do use in class, one in ECONS101 and one in ECONS102.

[**] Which raises a question about the credibility of titling a course 'Swiftonomics', when it maybe includes one Taylor Swift example in each topic.

Sunday 18 February 2024

Terrorism and international air travel

When a terrorist attack occurs in a country, it seems natural to expect that international tourists would be dissuaded from visiting that country. How big an effect does terrorism have in reducing international tourism arrivals? That is essentially the question addressed in this 2018 article by Devashish Mitra (Syracuse University), Cong Pham (Deakin University), and Subhayu Bandyopadhyay (Federal Reserve Bank of St. Louis), published in the journal The World Economy (ungated version here).

Their data covers the period 2000-2014, with bilateral air passenger travel between 58 source countries and 26 destination countries, drawn from the UN Service Trade Database, and terrorism data from the Global Terrorism Database maintained by the University of Maryland. They limit the terrorism data to:

...all non-state terrorist attacks that the GTD can classify without uncertainty as terrorist incidents to construct the measure of terrorism as our main explanatory variable of interest...

For the analysis, they employ a gravity model approach (which I have used in my own research, and previously described here and here). Mitra et al. find that:

...terrorism adversely and significantly impacts bilateral air passenger travel. What is the economic significance of our estimates? According to the results... a 10% increase in the number of terrorist incidents in the source country and the destination country results in a reduction in bilateral air passenger travel at least approximately by 1% annually for pairs with bilateral distances of 1,000 km or less... It equivalently means that an additional terrorist incident, which usually is of very small scale and non-fatal, can cause average bilateral air passenger travel between those source and destination countries to decrease by at least 1.3% or US$0.9 million approximately... Similarly, for pairs of countries with bilateral distance being 2,000 km or less an additional terrorist incident causes approximately a 0.82% decrease in their bilateral air passenger travel.
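As an aside, for readers who haven't met a gravity model before: the idea is that bilateral flows (trade, migration, or in this case air travel) increase with the economic size of the origin and destination and decrease with the distance between them, with the variable of interest (here, terrorism) added to that baseline. Below is a minimal sketch of a log-linear gravity regression on simulated data; it is not Mitra et al.'s specification or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2_000  # simulated country pairs

df = pd.DataFrame({
    "log_gdp_origin": rng.normal(26, 1, n),
    "log_gdp_dest": rng.normal(26, 1, n),
    "log_distance": rng.normal(8, 1, n),
    "terror_incidents": rng.poisson(3, n),
})

# Simulate bilateral air travel with a built-in negative effect of terrorism
df["log_travel"] = (0.8 * df["log_gdp_origin"] + 0.8 * df["log_gdp_dest"]
                    - 1.0 * df["log_distance"] - 0.05 * df["terror_incidents"]
                    + rng.normal(0, 0.5, n))

# A simple log-linear gravity regression recovers the built-in coefficients
model = smf.ols("log_travel ~ log_gdp_origin + log_gdp_dest + log_distance + terror_incidents",
                data=df).fit()
print(model.params.round(3))
```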

They also find that transnational terrorism has larger effects, and present a number of robustness checks of the results. However, I want to stop things right here, because there are two major problems with their analysis. First, their dependent variable is the dollar value of air passenger travel. That is problematic because the value of air passenger travel is made up of the number of air passengers multiplied by the cost of their travel. Theoretically, we might expect the number of air passengers to decrease due to terrorist attacks. However, the theoretical effect on the cost of travel is indeterminate. If the demand for air travel decreases, prices will decrease. However, if the supply of flights decreases, prices will increase. These effects offset each other. And besides that, I would argue that we should be more interested in the number of air passengers anyway, not the value of air passenger travel.

Second, the key explanatory variable (terrorism) is also problematic, because they measure it as the total number of terrorist attacks in both the origin (where the air passengers are coming from) and the destination (where the air passengers are going to). As noted at the start of this post, terrorism should dissuade international air travel. However, that applies to terrorist attacks at the destination. It doesn't apply to terrorist attacks at the origin. In fact, you could argue that terrorist attacks at the origin should increase international air travel, as people try to escape the risk of terrorism. So, again, the effect of terrorism as Mitra et al. measure it on air passenger travel is theoretically indeterminate.

Combining those two problems, I think the analysis doesn't really tell us much at all about how terrorism affects international air travel, because both the dependent variable and the key explanatory variable are mis-measured. However, there is clearly an opportunity for some follow-up work by a good Honours or Masters student, using more appropriate data to explore the same research question.

Saturday 17 February 2024

The economics of time travel

Have you ever met a time traveller? I haven't. Or at least, not one who identified themselves as a time traveller. Why haven't we met any time travellers? Is it because time travel is impossible? Or, given the many choices of time period where a time traveller might choose to go to, perhaps our time just isn't worth visiting? Perhaps we just aren't that interesting or important to people from the future. These are the questions that Stuart Mills (University of Leeds) grapples with in this recent and interesting article published in Seeds of Science (open access). Mills' argument is simple, and summarised at the end of the introduction:

I argue the main economic benefit which our descendants may receive via time travel is knowledge which we currently possess, but they have lost. Furthermore, this knowledge must be sufficiently critical to our descendants to justify the costs of time travel, which are likely to be dominated by energy costs. I posit that even assuming the energy requirements for time travel are met by a human civilisation in the future, it is highly unlikely that that same civilisation will come to depend on a piece of knowledge which we currently possess, but they have lost and cannot rediscover by other means. In other words, I argue that even assuming time travel is possible, our epoch is unlikely to offer any economic benefit to a future, time travelling civilisation.

In other words, our time simply isn't worth travelling to. Mills supports this argument with a rough cost-benefit analysis. The costs and benefits are largely unknown, but the point is not to perfectly evaluate a benefit-cost ratio for time travel, but to establish an explanation for the empirical observation that time travellers have thus far never been observed. As Mills notes:

...the reader is encouraged to regard the given inequality as a proposal along the lines of the Drake equation for estimating the number of intelligent civilisations in the universe—not necessarily accurate, but sufficient for provoking some thought and discussion.

The paper is interesting, but I wonder if the premise itself is faulty. Just because time travellers have not been observed, that doesn't mean that there have been no time travellers. It just means that any time travellers have not been observed. A civilisation that is advanced enough to have developed time travel is almost certainly also advanced enough to have developed some form of cloaking technology, such that time travellers can avoid detection. Paraphrasing the American astronomer Carl Sagan, we could be over-run with time travellers, and we wouldn't even know it.

To be fair to Mills, Footnote 1 in the paper does make note of a lot of reasons why we might not observe time travellers, but misses Sagan's suggestion. To some extent, I found that footnote to be one of the highlights of the paper, especially the seventh reason (which is something I have wondered about on occasion):

Seventhly, time travel may only affect time, not space. As the Earth is constantly moving around the Sun, and the Solar System shifting around the galaxy, and the galaxy moving throughout the universe, a time traveller may very well travel to attend the party, only to find themselves on the opposite side of the universe.

Time travel is an interesting problem to ponder. I suspect we don't know enough to really answer this question yet. Mills has provided a starting point, but there are a lot of unjustified assumptions, and alternative assumptions would likely be equally unjustified. If you are interested in time travel, you should read the article. But also, read the comments at the end of the article, and Mills' response.

[HT: Marginal Revolution]

Friday 16 February 2024

This week in research #10

Here's what caught my eye in research over the past week:

  • Charness and Rodriguez-Lara (open access) use a simple experiment to show that people are more likely to lie when they disclose non-personal information (a number they thought of) compared with personal information (the last digit of their birth year)
  • Koivuranta, Korhonen, and Lehto (open access) find using Finnish data that PM2.5 ambient air pollution reduces student exam performance in mathematical but not in verbal subjects
  • Rodriguez et al. look at the abstracts of the top five economics journals between 2000–2019, and find that abstracts with a higher proportion of women co-authors are more readable (which might sound like a good thing, but actually just reiterates earlier findings that female economists are held to a higher standard in publications than their male counterparts - see here)
  • Lee, Lee, and Miyamoto (with ungated earlier version here) find a negative association between inflation and the speed of ageing for both Japan and the US
  • Alsultan, Kourtis, and Markellos (open access) investigate the CryptoPunks NFT art collection, and find that buyers prefer NFTs with higher levels of colourfulness and texture complexity and lower levels of saturation and brightness

Finally, you can now watch the recordings of sessions from the New Zealand Economics Forum. The Day One video is here, with the tax session here. And the Day Two video is here. The session I presented in (as part of a panel discussing social investment) is on Day One, starting at 4:55:00 or thereabouts. Maria English (ImpactLab) and Merepeka Raukawa-Tait (Whānau Ora) were the true stars of that session though!

Tuesday 13 February 2024

Willingness-to-pay for working from home

Jobs come with both monetary and non-monetary characteristics. The monetary characteristics include the salary or wage (obviously) and other monetary benefits. The non-monetary characteristics include how pleasant or unpleasant the job is, how clean or dirty, and how safe or risky. An important non-monetary characteristic of jobs that has become particularly important since the pandemic is the flexibility to work from home. However, it isn't clear whether working from home is a positive or negative characteristic. Many people prefer to work from home, but many others don't. Incidentally, I'm in the latter group, because we only have a small house, and working from home entails working at the dining room table.

Now, non-monetary characteristics of jobs can give rise to wage differences between jobs - what economists refer to as compensating differentials. Jobs with desirable non-monetary characteristics tend to have lower wages than jobs with undesirable non-monetary characteristics (for an extreme example, see here). One way of explaining this is that many fewer workers want to work in jobs that have undesirable characteristics, and that lower supply of labour leads to higher wages. In other words, workers are essentially compensated for taking on jobs with undesirable non-monetary characteristics.

What does a compensating differential look like for working from home? I wrote about this last year, but the research I referred to there didn't look specifically at compensating differentials. In contrast, this new article by Akshay Vij (University of South Australia) and co-authors, published in the Journal of Economic Behavior and Organization (open access, with non-technical summary on The Conversation), does. They use data from a survey of 1113 employees conducted in Australia in 2020-21, where:

Respondents with an on-site job that had some ability to be done remotely were presented with multiple stated preference experiments where they were offered a choice between job arrangements with different salaries, and differing degrees of flexibility with regards to when and where job tasks needed to be performed... Each respondent was shown 8 scenarios, and the job attributes were varied systematically across scenarios...

This is what economists refer to as a discrete choice experiment, since research participants are asked to make a discrete choice among alternatives, which have different characteristics. Since each research participant makes many such choices, that data can be used to extract the marginal willingness-to-pay for each of the characteristics. In this case, the characteristics included the 'flexibility to work remotely on some days', and the 'flexibility to work remotely at some hours'. So, this research essentially worked out how much workers were willing to pay (that is, how much salary or wage they were willing to give up) in order to have the ability to work remotely.
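In this kind of model, the marginal willingness-to-pay for a job attribute is simply the ratio of that attribute's estimated coefficient to the salary coefficient. A minimal sketch with hypothetical coefficient values (these are not Vij et al.'s estimates):

```python
# Hypothetical conditional logit coefficients from a stated-preference experiment
beta_salary = 0.00008          # extra utility per additional AUD of annual salary
beta_remote_days = 0.24        # utility of the flexibility to work remotely on some days
beta_remote_hours = 0.12       # utility of the flexibility to work remotely at some hours

# Marginal willingness-to-pay = attribute coefficient / salary coefficient
wtp_days = beta_remote_days / beta_salary
wtp_hours = beta_remote_hours / beta_salary

print(f"WTP for remote days:  AUD${wtp_days:,.0f} per year")    # AUD$3,000
print(f"WTP for remote hours: AUD${wtp_hours:,.0f} per year")   # AUD$1,500
```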

Vij et al. then used a latent class model to identify four different groups of research participants based on their different responses, as shown in Table 2 from the paper:

Each class represents a roughly equal share of the sample of research participants. However, only two of the groups (Class III and Class IV) value the ability to work remotely some of the time. Vij et al. summarise the results as:

Across our sample, the average worker is willing to forego roughly AUD$3000 - AUD$6000 in annual wages to have the ability to work remotely some workdays and/or workhours. Given that the average respondent in our sample earns roughly $73,000 in annual wages, this implies a compensating wage differential of 4 – 8 per cent. However, median values are lower at AUD$1000 - AUD$1800, or roughly 2 per cent of average annual wage, due to considerable heterogeneity across the four classes. Classes 1 and 2 together comprise 54.3 per cent of the sample population, and do not have a statistically significant preference for either the ability to work remotely some workdays and/or workhours, and therefore have a corresponding wage differential of $0. Class 3 is willing to forego roughly 3 - 5 per cent of average annual wages (AUD$2000 - AUD$4000) to have the ability to work remotely some workdays and/or workhours, and Class 4 is willing to forego 16 – 33 per cent (AUD$12,000 - AUD$24,000) for the same.

Interestingly, the different classes differ on their beliefs about remote work, and Vij et al. note that:

...we observe that Class 1 is less optimistic than the other classes about the quality and quantity of work that can be done remotely, explaining their lower marginal willingness to pay for the ability to work remotely, and Class 4 is most optimistic, explaining their higher marginal willingness to pay...

Next, we compare responses to attitudinal indicators measuring impacts on human relations. Here, Class 2 has significantly greater concerns than the other classes, explaining their lower marginal willingness to pay for the ability to work remotely. In particular, workers belonging to Class 2 are more concerned on average about the negative impacts on their relationships with their colleagues, supervisors and the firm as a whole, as well as opportunities for learning and career advancement.

When you think about the types of jobs that each class predominantly engages in (from Table 2 above), this makes a lot of sense. Class 1 is mostly clerical and administrative workers, but Class 2 is mostly managerial workers, and the latter probably rely more on interpersonal relationships in their work that would be negatively impacted by remote work. In contrast, Class 3 and 4 workers are mostly professional workers, likely to be more self-directed and in some cases fairly autonomous.

However, in light of recent increases in remote work, this was interesting:

Interestingly, in terms of experience with remote working, individuals belonging to Class 2 were more likely to have had greater experience with remote working arrangements prior to the pandemic than other classes...

And yet, those workers had a zero compensating differential for remote work. That is, they didn't value working from home. The survey was conducted in 2020/21, when many of us were experiencing large-scale remote work for the first time. As other workers gain more experience with remote work, I wonder whether the Class 3 and Class 4 workers will be as positively inclined towards remote work in the future. This is something that deserves further investigation.

Finally, in terms of demographics, there was little difference between the classes, although:

...we find that women are most likely to belong to Class 4, and have a significantly higher valuation for remote working. This is consistent with previous studies that have found that women value job flexibility more than men, due to greater caregiving and other responsibilities...

This has interesting implications for the gender wage gap. Since women are more likely to choose flexible work arrangements, and are willing to pay (through lower wages) for the flexibility that remote work provides, should a zero gender wage gap be the appropriate goal, or a zero gap accounting for differences in flexible work arrangements? Again, this is something that deserves further consideration.

Remote work isn't going away any time soon. Some of us might think that the flexibility is a good thing for all workers, but it is clear that not all workers themselves feel that way.

Sunday 11 February 2024

Woolworths learns the hard way that incentives change behaviour

In their book Think Like a Freak (which I reviewed here), Steven Levitt and Stephen Dubner give an explanation of how incentives lead to unintended consequences:

  • No individual or government will ever be as smart as all the people out there scheming to take advantage of an incentive plan;
  • It’s easy to envision how people who think like you do will react to an incentive plan, but not everyone thinks like you do; and
  • We assume that people will always behave the same way they do today. But incentives by their very nature change people’s behaviour, sometimes in unexpected ways.

Woolworths learned this the hard way this week, as explained in this New Zealand Herald article:

A loophole in the new Woolworths Everyday Rewards loyalty programme has seen some shoppers create burner accounts and claim hundreds of dollars in points to spend in-store.

A generous 1000 points for downloading the app and registering an account has seen people create multiple accounts to claim the reward.

The points were then shared back to the main account. A $15 voucher to spend in-store or online was given for every 2000 points.

One man who worked with computers had heard through friends about the loophole and was surprised it was so easy.

“I heard about people making multiple burner accounts and stocking them each with $150-plus in rewards, then driving around buying up the sports supplements,” he said.

He said Woolworths had since shut the loophole by disabling the ability to share points between cards.

How did this happen? Woolworths is clearly not as smart as all the people out there scheming to take advantage of the new rewards scheme. It's easy for Woolworths to anticipate how their own marketing team (or whoever devised the rewards scheme) would react to the scheme, but not everyone thinks that way. And, if Woolworths based the scheme on shoppers' past behaviour under the OneCard loyalty scheme, they overlooked the fact that changing the scheme (by giving away 1000 free points) would change that behaviour. This was a classic case of unintended consequences, and an expensive lesson about human behaviour for Woolworths.
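
To see how quickly the arithmetic of the loophole adds up, here is a minimal sketch. The 1000 points per sign-up and the $15 voucher per 2000 points come from the article; the number of burner accounts is a hypothetical illustration:

    # Back-of-the-envelope arithmetic for the Everyday Rewards loophole described above.
    # Figures from the article: 1000 points per new account, $15 voucher per 2000 points.
    # The number of burner accounts is a hypothetical illustration.

    POINTS_PER_SIGNUP = 1000
    POINTS_PER_VOUCHER = 2000
    VOUCHER_VALUE = 15  # dollars

    def voucher_value(burner_accounts: int) -> int:
        """Dollar value of vouchers from pooling sign-up points across burner accounts."""
        total_points = burner_accounts * POINTS_PER_SIGNUP
        vouchers = total_points // POINTS_PER_VOUCHER
        return vouchers * VOUCHER_VALUE

    # 20 burner accounts pooled together yield 20,000 points = 10 vouchers = $150
    # to spend in-store, roughly the scale of rewards mentioned in the article.
    print(voucher_value(20))  # 150

Each extra sign-up costs the shopper only a few minutes, while costing Woolworths $7.50 in vouchers, which is exactly the kind of incentive that attracts the schemers Levitt and Dubner warn about.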

[HT: The incomparable Gemma Piercy-Cameron]

Saturday 10 February 2024

Book review: Cogs and Monsters

Last week, I reviewed What's the Use of Economics?, edited by Diane Coyle, and noted that I wished I had read it much earlier. In part, that's because I felt that economics had moved on a bit from where it was in 2012 when that book was compiled. Diane Coyle's 2021 book Cogs and Monsters updates the situation and simultaneously argues that, while economics has moved on since the Global Financial Crisis, we haven't yet reached the final destination. And this is important. Economics and economists are constantly the target of ill-informed criticism, with which Coyle is clearly annoyed:

This book reflects my frustration with the straw men arguments because, as well as ignoring welcome changes in economics and in the way it is taught, they have allowed economists to overlook or deny some things that are badly wrong with the discipline, both in its intellectual approach and in the ways economists are so unrepresentative of the societies we aim to study.

I also bristle at the ongoing criticisms of economics that don't reflect the reality of modern economics, or that treat all economics as if it is macroeconomics (when microeconomics in particular has seen many important developments in recent times). In contrast, while Coyle outlines a critique of economics, hers comes from a position of greater knowledge and understanding. And her goal is not to simply engage in petty point-scoring, but to have economics incorporate many of the changes in theoretical and empirical understanding, as well as recent changes in the way that the economy works. And it is those two aspects (critique and improvement) that define the cogs and the monsters from the title of the book:

...the cogs are self-interested individuals assumed by mainstream economics, interacting as independent, calculating agents in defined contexts. The monsters are snowballing, socially-influenced, untethered phenomena of the digital economy, the uncharted territory where so much is still unknown (labelled 'Here be monsters', on mediaeval maps). In treating us all as cogs, economics is inadvertently creating monsters, emergent phenomena it does not have the tools to understand.

That last point, on economics creating monsters, is the theme of the first parts of the book. The key message is that, by assuming rational, self-interested agents, economic models can actually create the conditions that lead to more people acting in self-interested ways, and that this is not ultimately for the good of society. Much of this happens through economics' influence on policy. However, Coyle argues that, while economists might believe otherwise, they are unable to truly stand outside of the world that their models are being used to understand, and that the distinction between positive (what is) and normative (what ought to be) is a false dichotomy.

The last part of the book looks at what economics needs to do differently, and concludes with a special focus on the digital economy (where Coyle has a deep and longstanding interest, going back to her 1998 book The Weightless World). Coyle argues that we already have many of the elements for an improved economics, but they need to be systematically incorporated into the mainstream:

We need to build on the work that already exists to incorporate as standard externalities, non-linearities, tipping points, and self-fulfilling (or self-averting) dynamics. We need to revive and rethink welfare economics... We need a modern approach to the public provision and regulation of information goods, applying the rich literature on asymmetric information and older network industries to the non-linearities and externalities of the digital world. And we need to put the social, not the individual, at the heart of the study of economics, taking seriously the line often-stated about the importance of institutions and trust to economic outcomes. This means above all returning to the origins of economics as political economy.

Sadly, the book had less to say about institutions and trust than about the other aspects, which was a bit of a missed opportunity. Certainly, more could have been made of those points. And there were other parts of the book where I disagreed with particular aspects, although the overall message is important and economists should take heed. If I had one specific quibble, though, it was with the part of the book that discussed the 'Close the Door' campaign:

As you walk down a high street in winter, you will find many stores with their doors wide open blasting out heat in the entrance. This is not a desirable state of affairs either in environmental terms or in terms of the stores' energy bills. So why do they continue to do it? Their fear of discouraging ambling shoppers from entering their store, when every competitor's door is open, outweighs their desire to cut the electricity bill, or reduce emissions. No shop can shut the door unless the others do so. It is a classic co-ordination problem...

What Coyle describes here is not a coordination problem (which in game theory has two or more Nash equilibriums, like here or here), but a prisoner's dilemma (where there is a Nash equilibrium that makes all players worse off, even though there is an alternative outcome where, if they agreed to work together, they would all be better off, like here or here). This situation would only be a coordination problem if there were two stable outcomes (all stores keep their doors open, or all stores keep their doors closed). However, in this case, each store would be better off if every store kept their doors closed (since their energy bills would be lower), but every individual store has an incentive to open their door to attract more customers. Opening their door is a dominant strategy (it is a better choice for each store regardless of whether the other stores open their doors or not), leading to a worse outcome for all. Despite that quibble, the policy response is the same (it will require regulation to get the stores to close their doors, since any agreement between the stores may just result in the stores cheating on the agreement). The irony of this story, though, is that it is the rational, individualistic behaviour of the stores that leads to the outcome of all stores having their doors open, when Coyle has already spent most of the book arguing that we shouldn't be modelling decision-makers as rational and individualistic.
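
To make the distinction concrete, here is a minimal sketch of a two-store version of the 'door' game, with hypothetical payoffs chosen only to match the structure described above. It confirms that opening the door is a dominant strategy for each store, and that the only Nash equilibrium is the one where both stores are worse off:

    # Illustrative two-store 'open or closed door' game with hypothetical payoffs,
    # to show why this is a prisoner's dilemma rather than a coordination game.
    # The numbers are made up purely to match the structure described in the text
    # (higher payoffs are better, e.g. weekly profit in arbitrary units).

    from itertools import product

    strategies = ["open", "closed"]

    # payoffs[(A's choice, B's choice)] = (payoff to store A, payoff to store B)
    payoffs = {
        ("open", "open"):     (2, 2),   # both stores pay for heat blasting out the door
        ("open", "closed"):   (4, 1),   # the open store attracts the ambling shoppers
        ("closed", "open"):   (1, 4),
        ("closed", "closed"): (3, 3),   # both save on energy: better for both than (open, open)
    }

    # Check whether 'open' is a dominant strategy for store A:
    # it must give A a higher payoff regardless of what store B does.
    open_dominant_for_A = all(
        payoffs[("open", b)][0] > payoffs[("closed", b)][0] for b in strategies
    )
    print("Opening the door is a dominant strategy for store A:", open_dominant_for_A)

    # Find the Nash equilibria by brute force: outcomes where neither store can do
    # better by unilaterally changing its own choice.
    for a, b in product(strategies, repeat=2):
        a_best = payoffs[(a, b)][0] >= max(payoffs[(x, b)][0] for x in strategies)
        b_best = payoffs[(a, b)][1] >= max(payoffs[(a, y)][1] for y in strategies)
        if a_best and b_best:
            print("Nash equilibrium:", (a, b))
    # Only ('open', 'open') is a Nash equilibrium, even though ('closed', 'closed')
    # would give both stores a higher payoff, which is the hallmark of a prisoner's dilemma.

If we instead chose payoffs so that ('closed', 'closed') was also a Nash equilibrium (say, because an open door attracts nobody when the rival's door is closed), the same code would find two equilibria, and the situation would then be the coordination problem that Coyle describes.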

Overall, I did really enjoy this book, and it offers much more for both the general reader and the economist than What's the Use of Economics?, and not only because it is of more recent vintage. Economics students would also likely benefit from the broader perspective on the discipline that they are about to join.

Friday 9 February 2024

This week in research #9

It's been a hectic week this week, so I'm a bit behind and I don't have as much to share as usual. Nevertheless, here's what caught my eye in research over the past week:

  • Fiva and King (with ungated earlier version here) look at whether there are 'child penalties' for women in politics using data from Norway, and find women are less likely than men to secure elected office after their first child is born, and mothers receive less favourable rankings on party lists relative to comparable fathers
  • Madsen, Robertson, and Ye (open access) find that outbreaks of plague had a statistically significant, but relatively modest, impact on local variations in wheat prices between the 14th and 17th Centuries C.E.

Finally, you can see me presenting next week at the New Zealand Economics Forum. I understand that the in-person event has sold out, but you can watch the livestream using this link. I'm part of a panel discussing social investment, alongside Maria English (ImpactLab) and Merepeka Raukawa-Tait (Whānau Ora). It's a little beyond my direct research experience, and so I've spent a lot of time preparing. I'm looking forward to it!

There's a stellar line-up of speakers (in keynotes and other sessions) at the Forum as well. So, if you're at all interested in economics in New Zealand, you should definitely tune in for both days of this event. The theme for the Forum is 'A Briefing to the Incoming Government'. I hope that they're listening.

Tuesday 6 February 2024

Electric vehicles, relative prices, and changing consumer behaviour

My ECONS101 class doesn't start for a couple of weeks, but I figured I would post this now. In yesterday's New Zealand Herald, there were two stories related to changes in electric vehicle policies, and the resulting changes in consumer behaviour.

First, this article talks about the removal of the electric vehicle subsidy scheme (which I discussed earlier here and here and here):

EV sales drove off a cliff in January, as expected, with a carrot gone and stick about to hit.

At the same time, light commercial sales jumped 53 per cent with the abolition of the “ute tax” and “ICE” passenger vehicle sales surged.

With the clean car discount gone, petrol and diesel vehicles - less than half the market during most months of 2023 - accounted for 96 per cent of new vehicle registrations in January 2024, according to Motor Industry Association (MIA) figures.

There were just 244 new registrations of new battery electric light vehicles during the month compared to 3469 during December - when sales spiked in the final month of the CCD - and 448 in January 2022.

Consider the change in relative prices here. When one good (Good A) gets more expensive relative to some other good (Good B), consumers will tend to buy less of Good A and more of Good B. There is no surprise here. Removing the 'clean car discount' subsidy from electric vehicles makes them more expensive to buy, relative to petrol and diesel vehicles. Consumers respond by buying more petrol and diesel vehicles, and fewer electric vehicles.

The figures here probably overstate the impact of removing the subsidy. Some consumers who were thinking about buying an electric vehicle this year probably brought their purchase forward to the end of last year instead (and there is some evidence for that).
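
Putting rough numbers on the size of the fall, using only the registration figures quoted from the article above:

    # Percentage changes in new battery electric vehicle registrations,
    # using the figures quoted in the article above.

    jan_2024 = 244    # registrations in January 2024 (subsidy removed)
    dec_2023 = 3469   # registrations in December 2023 (final month of the clean car discount)
    jan_prior = 448   # registrations in the earlier January quoted in the article

    month_on_month = 100 * (jan_2024 - dec_2023) / dec_2023
    january_on_january = 100 * (jan_2024 - jan_prior) / jan_prior

    print(f"Change from December (inflated by the pre-removal spike): {month_on_month:.0f}%")
    print(f"Change from the earlier January (a cleaner comparison):   {january_on_january:.0f}%")

The December comparison (a fall of about 93 per cent) is inflated by the pre-removal spike, while the January-on-January comparison still shows a fall of roughly 46 per cent.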

Removing the clean car discount wasn't the only recent policy change that will affect electric vehicle owners though. As this second New Zealand Herald article reported:

Kevin Parker’s Outlander plug-in hybrid vehicle is getting on in age, and its electric battery is down to around 15km of driving range even when fully charged.

Since Parker lives in rural Marlborough, he said, the battery “only gets him to the end of the road” - meaning, for most of every journey, he uses petrol.

Under changes to road user charges, this means he faces paying both petrol taxes (on his fuel) and road user charges (for driving an EV) for most of every journey - a change he said made his vehicle “not economically viable”.

Like some other plug-in hybrid owners, he wants to remove the electric plug, to avoid the Government’s new road user charges.

As I note in my ECONS101 class, rational decision-makers tend to try to capture benefits, and avoid costs. This is a clear case of avoiding costs. Since plug-in hybrid vehicles will now attract additional costs, it makes sense for some drivers to remove the plug from their vehicle. That will especially be the case where the benefits of having the plug are low. If you only get 15 kilometres of travel from the battery, the costs in additional road user charges are going to far outweigh the benefits of having the battery.
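
As a stylised illustration of that cost-benefit comparison, here is a minimal sketch. Only the 15 kilometre battery range comes from the article; every other number (the road user charge rate, fuel price, fuel economy, charging cost, and kilometres driven) is a hypothetical assumption for illustration, not the actual figures:

    # A stylised comparison of the costs and benefits of keeping the plug on an
    # ageing plug-in hybrid, in the spirit of the example above. Only the 15 km
    # battery range comes from the article; the RUC rate, fuel price, fuel economy,
    # charging cost, and kilometres driven are all hypothetical assumptions.

    battery_range_km = 15        # usable electric range per charge (from the article)
    annual_km = 12_000           # assumed total annual driving
    electric_km = 500            # assumed km actually driven on battery per year
                                 # (small, since the battery "only gets him to the end of the road")

    ruc_per_1000km = 53          # assumed road user charge, $ per 1,000 km (hypothetical)
    petrol_price = 2.80          # assumed $ per litre
    litres_per_100km = 7.0       # assumed fuel economy when running on petrol
    electricity_per_km = 0.06    # assumed charging cost, $ per electric km

    # Cost of keeping the plug: road user charges now apply to ALL kilometres driven
    extra_ruc = annual_km / 1000 * ruc_per_1000km

    # Benefit of keeping the plug: petrol avoided on the (few) battery-powered km,
    # net of the cost of the electricity used to charge it
    fuel_saved = electric_km / 100 * litres_per_100km * petrol_price
    net_benefit = fuel_saved - electric_km * electricity_per_km

    print(f"Extra road user charges from keeping the plug: ${extra_ruc:,.0f} per year")
    print(f"Net fuel saving from the battery:              ${net_benefit:,.0f} per year")

Under assumptions like these, the extra road user charges run to several hundred dollars a year, while the battery saves well under a hundred dollars, so removing the plug is the rational choice for a driver in Parker's situation.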

None of these changes in behaviour should be unexpected though. When the costs and/or benefits associated with decisions change, some decision-makers will change their behaviour.

Monday 5 February 2024

Lifespans of the rich and famous, from 800-1800 C.E.

Life expectancy is one of the key statistics in human wellbeing. However, we know surprisingly little about life expectancy prior to the systematic recording of births, deaths, and marriages, which began in England in 1538 with the establishment of parish registers. Many other countries started recording this data as well, but only later in the 16th Century (or later still).

So, I was really interested to read this 2017 article by Neil Cummins (London School of Economics and Political Science), published in the Journal of Economic History (ungated earlier version here). He uses family tree data on the European elite across multiple countries collected by the LDS church. As Cummins explains:

The church has been at the frontier of the application of information technology to genealogy and has digitized a multitude of historical records. Today they make their research available online at familysearch.org. The records number in the billions. The source of the family trees used here are the online databases at histfam.familysearch.org, a collaboration between the LDS church (familysearch) and individual genealogical experts.

The sample size is large:

The family tree records used here contain 402,204 unique date descriptions... Of the 1,329,466 individual records, 167,266 have a birth year between 800 and 1800 with an associated age at death. Of those, 115,650 have an age at death over 20 and 76,403 have a specific date of death.

Cummins uses the data to estimate the length of life of the European elite, after dealing with some tricky data issues (such as 'heaping' of dates in years that end in '0'). The resulting dataset does a good job of picking up changes in lifespan resulting from plague years and from violent battle deaths. On those points:

First, plague, which afflicted Europe 1348-1700, killed nobles at a much lower rate than it did the general population. Second there were significant declines in the proportion of male deaths from battle violence, mostly before 1550... Before 1550, 30 percent of noble men died in battle. After 1550, it was less than 5 percent.

However, more interesting are the trends in lifespan over time, where Cummins finds that:

...there was a common upwards trend in the adult lifespan of nobles even before 1800. But this improvement was concentrated in two periods. Around 1400, and then again around 1650, there were relatively sudden upwards movements in longevity.

Those changes are captured in panel (a) of Figure 8 from the article:

The figure plots average life expectancy in 50-year bins. Life expectancy hovered around 50 years (or slightly higher) from 800 to 1350 C.E., then jumped up to about 52 years in 1400 C.E., then to over 55 years in 1700 C.E. Cummins isn't able to offer a reason as to why those increases happened at those times, and notes that:

No conclusions can be drawn as to why adult noble lifespan increased so much after 1400. No known medical innovations in Europe before 1500 could be responsible... Nutrition, in terms of calories consumed, also cannot explain this rise. These elites could be expected to have always filled their bellies.
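
Coming back to the figure itself, the 50-year binning behind it is straightforward to sketch. The records below are made-up placeholders rather than the actual histfam.familysearch.org data, and serve only to show the shape of the calculation:

    # A minimal sketch of the binning behind the figure described above:
    # average age at death by 50-year birth cohort, for adults (age 20+).
    # The records here are hypothetical placeholders, not the real data.

    from collections import defaultdict

    # (birth_year, age_at_death) pairs -- made-up examples only
    records = [(810, 54), (845, 61), (1205, 49), (1398, 55), (1402, 58), (1710, 63)]

    bins = defaultdict(list)
    for birth_year, age_at_death in records:
        if age_at_death >= 20:                   # adult lifespan only, as in the paper
            bin_start = (birth_year // 50) * 50  # e.g. 1398 -> 1350, 1402 -> 1400
            bins[bin_start].append(age_at_death)

    for bin_start in sorted(bins):
        ages = bins[bin_start]
        print(f"{bin_start}-{bin_start + 49}: mean adult age at death = {sum(ages) / len(ages):.1f}")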

Finally, Cummins shows a clear geographical gradient in lifespan:

I find that there were regional differences in elite adult lifespan favoring Northwest Europe, that emerged around 1000 AD. While average lifespan in England in 1400 was 54, in Southern Europe, as well as in Central and Eastern Europe, it was only 50. The cause of this geographic “effect” is unknown.

So, while interesting, this paper leaves a lot to explore in terms of the reasons for changes in the lifespan of the European elite over this time period. There are also open questions about the lifespan differences between the elite and the rest of the population, since Cummins' results imply that the Black Death had a much lower impact on the elite than on the general population, which differs from the conclusions of much past research. As I've noted before, we need to understand more about lifespan inequality. Clearly, there is more work necessary in this area.

Sunday 4 February 2024

Book review: What's the Use of Economics?

The textbook The Economy, by CORE Economics (which I use in my ECONS101 class), is marketed as teaching economics "as if the last three decades had happened". The genesis of that textbook project, with the first edition released in 2014 (as I noted here), was a conference hosted by the Bank of England and the Government Economic Service in February 2012, that "brought employers and academics together to discuss the state of economics and the state of economics education". This was particularly important in the wake of the Global Financial Crisis.

One other outcome of that conference was the book What's the Use of Economics?, edited by Diane Coyle and featuring contributions from a number of conference attendees. The book is divided into four parts. The first part looks at what employers of economists want. The second part covers economic methodology and the implications for teaching, while the third part looks more specifically at macroeconomics - especially important after the financial crisis. Finally, the last part covers reform of the undergraduate economics curriculum in more detail, focusing on the UK (of course).

So, what do employers of economists want? The authors in this part of the book highlighted a number of things that are missing from the curriculum at present, from the perspective of employers. Coyle summarised these missing things well in the introductory chapter, noting that:

The missing ingredients listed here are

  • greater awareness of history or real-world context,
  • practical knowledge of data handling,
  • the ability to communicate technical results to non-economists,
  • understanding of the limitations of modelling or of economic methodology,
  • a more pluralistic approach to teaching the subject, and
  • a combination of inductive and deductive reasoning.

I think there would have been (and probably still would be) agreement among many economists that more economic history and more history of economic thought would be welcome additions to the economics curriculum, and indeed the second part of the book confirms that. The challenge, as always, is what to omit in order to fit the additional content and context in existing courses. Stand-alone courses on economic history or history of economic thought are unlikely to prove popular enough to remain sustainable.

One of the highlights of the second part was the chapter by John Kay, which uses video games as an analogy for economic models, then notes that:

...it obviously cannot be inferred that policies that work in Grand Theft Auto are appropriate policies for governments and businesses.

Andrew Lo also offered an amusing, but entirely true, quip:

We economists wish to explain 99% of all observable phenomena using three simple laws, like the physicists do, but we have to settle, instead, for ninety-nine laws that explain only 3%, which is terribly frustrating!

I didn't get much out of the second or third parts of the book, to be honest, but this was more than made up for by the final part. I particularly enjoyed the chapters by Michael McMahon and John Sloman. McMahon used an analogy of student learning being like going to the gym, which I am almost certain to use with my students. The more they go to class, or engage in learning activities outside of class, the more 'econ fit' they will get. Lectures and tutorials are like a Zumba or spin class, while office hours are like a session with a personal trainer. There's a lot of potential in that analogy. Sloman's chapter was a great summary of recent developments in pedagogy, especially as applied to economics. I particularly liked the short section on problem-based learning. Interestingly, we had already decided to adopt a lite version of problem-based learning in ECONS101 this year.

Overall, the book was interesting, but I kind of wished I had read it much earlier (it has been sitting on my bookshelf for many years). However, it was good to see some of the things that I do (or will do) in my classes affirmed, and it is clear to see how the textbook The Economy grew out of the discussions at that conference. This is a book that would only be of value to economics teachers, but many of them should read it (and I will be circulating a copy of Sloman's chapter to my colleagues tomorrow!).