Monday, 24 June 2019

Book review: Economics Rules

Economics and economists come in for a fair amount of criticism from those outside the discipline. Not all of that criticism is for good reason, but at least some of it is. And there is also a fair amount of criticism of economics and economists from within the discipline. Dani Rodrik's book, Economics Rules, fits into the latter category. However, it isn't all negative. As Rodrik notes in the introduction, "this book both celebrates and critiques economics".

At the heart of economics lie models. Rodrik spends much of the early chapters describing what economic models are, and what makes them useful. The usefulness of models is that they capture aspects of reality. The multiplicity of different models in economics exists because they capture different aspects, relying on different simplifying assumptions to do so. However, this also causes a problem because:
...very few of the models that economists work with have ever been rejected so decisively that the profession discarded them as clearly false.
Despite this problem, Rodrik is clearly in favour of a diversity of models, and advocates for this with the caveat that economists need to recognise that each model is a model, not the model. This is fair criticism - too often economists shoehorn reality into their preferred model, rather than recognising that different situations or contexts call for different models. The latter is more akin to the approach of Nobel Prize winner Jean Tirole.

When economists confuse a model for the model, Rodrik explains that this leads to errors of omission (where economists fail to see troubles looming ahead, such as the Global Financial Crisis), and errors of commission (where economists become complicit in policies whose failure might have been predicted in advance, such as the Washington Consensus). He goes on to note that:
Because economists go through a similar training and share a common method of analysis, they act very much like a guild. The models themselves may be the product of analysis, reflection, and observation, but practitioners' views about the real world develop much more heuristically, as a by-product of informal conversations and socialization among themselves. This kind of echo chamber easily produces overconfidence...
Rodrik sees this as leading to two weaknesses in modern economics:
...the lack of attention to model selection and the excessive focus at times on some models at the expense of others.
Alluding back to his earlier book, The Globalization Paradox (which I reviewed here), he argues that economists need to be 'Foxes' (holding many different views about the world, based on different models), rather than 'Hedgehogs' (who are captivated by a single big idea, such as 'markets work best').

Overall, this book is a good read for both economists and non-economists alike. Economists who read the book with an open mind may be persuaded to be a little more open to alternative models, or at least they might apply themselves more thoughtfully to the task of model selection. Non-economists who read the book may gain a better appreciation for what underlies the perceived arrogance of economists in defending their policy prescriptions based on particular models. Recommended!

Sunday, 23 June 2019

Crack cocaine and the gun violence equilibrium in the U.S.

In economics, we often recognise that there may be multiple equilibriums, and that even a relatively small shock may be enough to cause the economy to move from one equilibrium to another. Consider gun violence as an example. If gun violence is low, people feel safe and therefore don't feel the need to carry a gun for self-defence purposes. Therefore there exists a low gun violence equilibrium. [*] However, if some shock occurs, and gun violence increases, then people will feel less safe. They will be more likely to carry a gun for self-defence purposes, and therefore more likely to use a gun, perpetuating the level of gun violence. Therefore there also exists a high gun violence equilibrium. However, once a society is in a high gun violence equilibrium, it is going to be very difficult to reverse things.

What would cause society to move from a low gun violence equilibrium to a high gun violence equilibrium? A 2018 NBER Working Paper by William Evans (University of Notre Dame), Craig Garthwaite (Northwestern University), and Timothy Moore (Purdue University) provides evidence that the rise of the crack cocaine market in the U.S. in the 1980s and early 1990s caused a shift from a lower gun violence equilibrium to a higher gun violence equilibrium. Their argument is that:
...the daily experiences of young black males were fundamentally altered by the emergence of violent crack cocaine markets in the United States. We demonstrate that the diffusion of guns both as a part of, and in response to, these violent crack markets permanently changed the young black males’ rates of gun possession and their norms around carrying guns. The ramifications of these changes in the prevalence of gun possession among successive cohorts of young black males are felt to this day in the higher murder rates in this community.
They use city-level data from the largest 57 metropolitan areas (in 1980) on age-, sex- and race-specific murder rates, over the period:
...from eight years prior to the arrival of crack and 17 years after, for a total of 26 years for each city. As the earliest date of crack’s arrival is 1982 and the latest is 1994, our data set spans from 1974 through 2011.
Essentially, they use a difference-in-differences approach that compares the change in murder rate for young black males (aged 15-24 years) before and after the introduction of crack into their city, with the same change for black males aged 35 years and over. They find that:
...the emergence of crack cocaine markets is associated with an increase in the murder rate of young black males that peaks at 129 percent in the decade after these markets first emerge...
...17 years after crack markets arrived, the murder rates for young black males were 70 percent higher than they would have been had they followed the trends of older black males. 
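For readers unfamiliar with the method, here's a minimal sketch of that difference-in-differences comparison, run on simulated data. The effect sizes, noise, and variable names below are my own made-up illustration, not the authors' actual specification:

```python
# A minimal sketch of the difference-in-differences comparison described
# above, on simulated data. All numbers are made-up illustrations.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for city in range(57):
    arrival = rng.integers(1982, 1995)       # year crack 'arrives' in the city
    for year in range(arrival - 8, arrival + 18):
        for young in (0, 1):                 # 1 = males 15-24; 0 = males 35+
            post = int(year >= arrival)
            # assumed data-generating process: murders jump for young males
            # after crack arrives, with a smaller common shock to both groups
            rate = 20 + 10 * young + 5 * post + 15 * young * post \
                   + rng.normal(0, 3)
            rows.append(dict(city=city, young=young, post=post,
                             murder_rate=rate))
df = pd.DataFrame(rows)

# The coefficient on young:post is the difference-in-differences estimate:
# the extra change in young males' murder rates, over and above the change
# experienced by older males in the same cities.
model = smf.ols("murder_rate ~ young * post + C(city)", data=df).fit()
print(model.params[["young", "post", "young:post"]])
```
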
The key figure from the paper is this one, which plots the change in murder rate for young black males:


The x-axis tracks years before (negative numbers) and after (positive numbers) the introduction of crack cocaine into the city. There is a clear and statistically significant increase in murders among young black males, compared with older black males. Evans et al. then provide three pieces of evidence that this increase likely arose from greater gun violence, showing:
...in the six years after crack markets emerged, the share of all murders attributable to young black males increased by 75 percent. Seventeen years after crack markets emerged, young black males still accounted for a 45 percent greater share of all murders than they had in the years before the arrival of crack markets...
...these murders [between family members] increase markedly in the years after crack markets and remain elevated over the next sixteen years. This increase is driven entirely by murders involving guns, with no detectable change in the non-gun domestic violence murder rate over this time period...
...we further show that there is a strong correlation between ten-year changes in gun ownership and changes in the fraction of suicides involving guns among 15-19 year olds.
These results are all consistent with a story in which the arrival of crack cocaine in a city increases murders primarily through an increase in gun-related violence. I also liked this paper because it gave a detailed account of the development of crack cocaine markets in the U.S. Here are the highlights:
In the early 1970s, much of the cocaine shipped to the U.S. originated in Chile. After the 1973 military coup by Augusto Pinochet in Chile that toppled the administration of Salvador Allende, Pinochet initiated a military crackdown on cocaine smuggling operations. Many smugglers moved to Colombia with the goal of using established marijuana smuggling routes as a way of getting cocaine to the United States...
As these organizations [the Colombian drug cartels] grew, an informal agreement was struck where the Medellin cartel would primarily control supply into Miami and Los Angeles, while Cali would concentrate its operations in New York...
The large-scale entrance of the Colombian cocaine cartels into Miami, New York and Los Angeles meant that by the early 1980s, these areas had relatively high cocaine supply leading to falling prices. Despite the downward pressure on prices, many low-income consumers remained priced out of the market...
Crack cocaine was an innovation that provided a safer way to smoke cocaine... This new product has two attractive properties. First, it produced an instant high, and its users could quickly become addicted. Second, an intense high could be produced with a minimal amount of cocaine, meaning that the profit-maximizing per-dose price was a fraction of the price per high for powder cocaine...
Crack was first introduced to the market by innovative retail organizations in New York, Miami and Los Angeles, which had a large supply of powder cocaine. It then spread from those cities...
The combination of a liquidity-constrained customer base and the short-lived high offered by the product meant many customers purchased multiple times a day... crack cocaine was sold in small doses, often in open-air drug markets where the dealer and the customer had no pre-existing contact to arrange that particular sale (though may have participated in a similarly anonymous sale at that location before)...
The lack of preexisting arrangements with buyers meant that geography was a key determinant of a crack dealer’s revenue...
The violence associated with establishing and defending a market from entry was a key reason for a substantial amount of drug-related violence.
If you need to fight a turf war to protect your market (or gain access to a market), guns are an efficient way to do so. Crack cocaine was a key driver that moved the U.S. from a lower gun violence equilibrium to a higher gun violence equilibrium.

[HT: Marginal Revolution, last year]

*****

[*] However, this equilibrium is very unstable. Readers who understand some game theory will probably recognise this as a form of the 'arms race' game, which is itself a type of prisoners' dilemma. Everyone would be safe(r) if no one carried a gun. However, if no one else carries a gun, you can be both safe and powerful by carrying a gun. So, there are incentives to carry a gun, regardless of whether everyone else is, or no one else is. The low gun violence 'equilibrium' is not actually a Nash equilibrium in this game. It is unstable, but may be kept in place by cultural norms against carrying guns, or high penalties for doing so.
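
To make the game concrete, here's a minimal sketch with illustrative payoff numbers (my assumptions, not from any source). It confirms that carrying is a dominant strategy, so the only Nash equilibrium is for both players to carry:

```python
# A concrete version of the footnote's 'arms race' game, with illustrative
# payoffs (assumed numbers, not from any source).
choices = ["no gun", "gun"]
payoffs = {
    # (row choice, column choice): (row payoff, column payoff)
    ("no gun", "no gun"): (3, 3),   # everyone is safe(r)
    ("no gun", "gun"):    (0, 4),   # the unarmed player is vulnerable
    ("gun",    "no gun"): (4, 0),   # the armed player is safe and powerful
    ("gun",    "gun"):    (1, 1),   # the high gun violence equilibrium
}

def best_response(player, other_choice):
    """The choice that maximises a player's payoff, given the other's choice."""
    if player == 0:   # row player
        return max(choices, key=lambda c: payoffs[(c, other_choice)][0])
    return max(choices, key=lambda c: payoffs[(other_choice, c)][1])

# A profile is a Nash equilibrium if each choice is a best response to the other
nash = [(r, c) for r in choices for c in choices
        if best_response(0, c) == r and best_response(1, r) == c]
print(nash)   # [('gun', 'gun')] -- (no gun, no gun) is not an equilibrium
```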

Saturday, 22 June 2019

Retractions hurt academic careers, and may be worst for senior researchers

In modern academic publishing, retractions (where a published article is removed from the academic record) have become a fairly regular occurrence (a quick read of Retraction Watch will show you just how often this occurs). Articles may be retracted for many reasons, from simple mistakes in analyses or contaminated lab samples, to fabrication of data and results. A reasonable question to ask, then, is to what extent a retraction impacts on an academic's career. Oftentimes, the retraction comes years after publication of the article, and in the meantime the author has used the article to build their reputation. Is their reputation damaged by the retraction, and if so, by how much? And, does the type of retraction (simple mistake, or serious misconduct) matter?

A 2017 article by Pierre Azoulay, Alessandro Bonatti (both MIT), and Joshua Krieger (Harvard), published in the journal Research Policy (and not retracted, ungated earlier version here), provides some answers. First, they note that the number of retractions has increased over time, as shown in their Figure 1:


You can see that the problem is getting worse over time. Or at least, you can see that the number of retractions is increasing over time. Maybe we have become more vigilant at recognising mistakes and misconduct, and ensuring those articles are retracted? It is difficult to say.

In any case, Azoulay et al. then looked at data from 376 US-based biomedical researchers with at least one article that was published between 1977 and 2007 and retracted before 2009. They compared those authors with a control group of 759 authors with no retractions, made up of authors whose articles appeared immediately after the retracted ones in the same journals. They focus on the impact on citations to the authors' published articles that are unrelated to the retracted one, because a retraction might also drag down citations to the entire related line of inquiry. They find that:
...the rate of citation to retracted author's unrelated work published before the retraction drops by 10.7% relative to the citation trajectories of articles published by control authors.
Azoulay et al. also find evidence that the citation penalty increases over time. In the sample of retractions as a whole, they don't find differences between the impact on high status (those in the top quartile of researchers in terms of the number of citations to their previous research) researchers and low status researchers (those in the bottom three quartiles). However, when they look at different types of retraction, they find:
...a much stronger market response when misconduct or fraud are alleged (17.6% vs. 8.2% decrease).
You might wonder why a simple mistake would have a negative impact on researchers. This arises because no one can be certain of a researcher's quality, and if a researcher has made a mistake in a published article, then the perception of their quality as a researcher is reduced (and with it, citations of their other work).

When it comes to mistakes and misconduct, there are differences in their impact between high status and low status researchers. Retractions due to mistakes have a greater impact on low status researchers than high status researchers (about a 9.7% reduction in citations for low status researchers, but a 7.9% reduction for high status researchers). However, retractions due to misconduct have a much larger impact on high status researchers (19.1% reduction in citations) than on low status researchers (10% reduction).

Across all their results, the impacts on research funding follow a similar pattern. Junior researchers face greater career penalties for mistakes, but senior researchers face greater penalties for serious misconduct. However, since their sample was limited to researchers who were still employed after the retraction, their results may be biased if junior researchers are more likely to exit the profession than senior researchers, in response to a retraction (or before the retraction). Perhaps junior researchers whose careers would be most negatively affected are most likely to exit? Some additional work in this area is definitely warranted.

Despite that caveat, the overall story is somewhat comforting. The research community does punish researchers for their malpractices, and more severely than for genuine mistakes. However, in order for that process to be effective, the community needs to know the circumstances surrounding each retraction. Indeed, Azoulay et al. conclude that:
...the results highlight the importance of transparency in the retraction process itself. Retraction notices often obfuscate the difference between instances of “honest mistake” and scientific misconduct in order to avoid litigation risk or more rigorous fact-finding responsibilities. In spite of this garbled information, our study reveals that the content and context of retraction events influences their fallout.

Wednesday, 19 June 2019

Auckland as an internal migration donor to the rest of New Zealand is nothing new

Newsroom reported a couple of weeks ago:
A growing number of people are turning their back on Auckland for greener and cheaper pastures of the regions.
A study by independent economist Benje Patterson indicates 33,000 left the super city in the four years to 2017, when its overall population grew by nearly 200,000 to nearly 1.7 million.
Patterson's study is available here. He makes use of a cool new dataset from Statistics New Zealand on internal migration, based on linked administrative data from the Integrated Data Infrastructure (IDI). However, even though the data he uses are new, the story is not. Auckland has long been an internal migration donor to the rest of New Zealand. This is a point that Jacques Poot and I have made at numerous conferences and seminars over the years.

In each Census (until the 2018 Census), people were asked where they were living five years previously (including in the 2013 Census, even though it was seven years after the 2006 Census). We can use that data to construct a matrix of flows from each region or territorial authority (TA) to every other region or TA. This essentially captures the number of people who changed the region or TA they lived in over a five-year period. It is different from the annual change data that Patterson uses, and in comparison the summed annual flows should be larger (because a person who migrates from Auckland to somewhere else, and then back to Auckland, within the five-year period, would not be counted as a migrant in the Census data).
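
For the interested, here's a minimal sketch of how such an origin-destination flow matrix can be built and net internal migration computed. The variable names and counts are hypothetical stand-ins, not the actual Census data:

```python
# A minimal sketch of building the five-yearly origin-destination matrix
# described above. Column names and counts are hypothetical stand-ins.
import pandas as pd

census = pd.DataFrame({
    "region_5yr_ago": ["Auckland", "Auckland", "Waikato", "Northland"],
    "region_now":     ["Waikato",  "Auckland", "Auckland", "Auckland"],
    "people":         [23000, 900000, 14000, 8000],
})

# Flow matrix: rows are origins, columns are destinations
flows = census.pivot_table(index="region_5yr_ago", columns="region_now",
                           values="people", aggfunc="sum", fill_value=0)

def net_internal_migration(region):
    """Inflows minus outflows, excluding people who stayed put (the diagonal)."""
    stayers = flows.loc[region, region]
    inflows = flows[region].sum() - stayers
    outflows = flows.loc[region].sum() - stayers
    return inflows - outflows

print(net_internal_migration("Auckland"))   # -1000: a net loss in this toy data
```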

Now, even though (in the Newsroom article) Patterson describes the five-yearly Census as "clunky", it is this Census data that shows Auckland's net out-migration to the rest of New Zealand is not a new phenomenon, and has been ongoing since the mid-1990s. Here's the data from the last four Censuses for which data are available (not the 2018 Census, as we are still waiting on that) [*]:


The blue bars are the number of in-migrants to Auckland (from elsewhere in New Zealand) over each five-year period based on the Census data. The orange bars are the number of out-migrants from Auckland (to other places in New Zealand) over the same period. The smaller grey bar is the net internal migration to or from Auckland. Notice that for the last three periods (1996-2001, 2001-2006, and 2008-2013), net migration is negative. That means more out-migrants from Auckland to the rest of New Zealand than in-migrants from the rest of New Zealand to Auckland.

In other words, the new Statistics New Zealand data are not showing a trend that is new at all. It's something that has been going on for a long time. Which also puts the shallowness of the analysis in Patterson's report into context, such as this:
Auckland’s regional migration losses to the rest of New Zealand are not surprising when one considers the deterioration to housing affordability in Auckland that occurred over the period. Data from interest.co.nz shows that in April 2017, the median Auckland house was estimated to cost about 9.5 times the median household income. By comparison this ratio was 6.2 nationally.
The largest net out-migration from Auckland was in the 2001-2006 period (-18,000; or 3600 per year). Was Auckland housing affordability declining the fastest during that period? The truth is, the data don't provide an answer as to why on net people are moving away from Auckland.

Even the locations where they are moving to are not new. Newsroom notes that:
The regions closest to Auckland attracted two thirds of the exodus, with Tauranga proving to be the most popular, attracting an average 1144 people a year.
Waikato District on the southern fringe of Auckland gained an average of 3381 Aucklanders over the period, while Hamilton gained just over 1500 residents from Auckland.
The data indicates nearly 6000 Aucklanders moved to Northland over the four years, with gains spread evenly across Whangarei District, Far North and Kaipara.
Looking at the Census data for 2001 (so, the 1996-2001 period), the regions that Auckland lost (on net) the largest number of migrants to were (in order, and to the nearest 10 people) Bay of Plenty (-2800), Waikato (-2340), and Northland (-1600).

So, really there is nothing new here, other than the (albeit very useful, and more timely than the Census) dataset.

*****

[*] I'm using inter-regional migration flows here, rather than inter-TA flows. However, the story is very similar if I use inter-TA flows, because the Auckland region is the Auckland TA.

Monday, 17 June 2019

Book review: Nudge Theory in Action

Richard Thaler and Cass Sunstein's book Nudge set policymakers on a path to taking advantage of the insights of behavioural economics to modify our behaviour, in areas such as retirement planning, nutrition, tax payments, and so on. It spawned the Behavioural Insights Team (otherwise known as the 'Nudge Unit') in the U.K., and similar policy units in other countries. However, it also caused a lot of controversy, particularly from libertarian groups that would prefer less government intervention into private decision-making.

I recently finished reading the book Nudge Theory in Action, a volume edited by Sherzod Abdukadirov. I have to say it was not at all what I expected. I thought I was going to get a lot of examples of nudges applied by governments and the private sector, and hopefully with some explanations of the underlying rationales and maybe some evaluations of their impact. The book does contain some examples, but mostly they are examples that have already been widely reported, and not all of them would necessarily qualify as 'nudges', under the definition originally proposed by Thaler and Sunstein.

Essentially, most of the chapters in this book are libertarian critiques of nudges in theory and in practice. Richard Williams sums up the underlying premise of the book well in the concluding chapter:
The purpose of this book is to demonstrate that there is a strong private sector that helps people's decision making and that stringent criteria ought to be met before governments attempt to improve on private decision making, whether through structuring information to "nudge" people into making the government-preferred decision or using more stringent measures to achieve the same thing. Where people have difficulty matching their inherent preferences into real life decisions that satisfy those preferences, a private market will almost always arise that can help to match decisions with preferences.
Thaler and Sunstein defined a nudge as "any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives". The second part of that definition is important, and most of the chapters pay lip service to it, while at the same time ignoring it in favour of critiquing almost any government policy proposal that would restrict decision-making. The most cited example is the failed attempt by former New York mayor Michael Bloomberg to ban sales of large sodas. Given that it involves a ban, and therefore does forbid an option, under the original Thaler and Sunstein definition it is not a nudge.

However, policy makers do themselves no favours in this case by referring to policies like the New York large soda ban as a "nudge", and invoking behavioural economics principles in favour of all sorts of policies that are not, in fact, nudges. So, a more reasonable critique would be directed at policy makers' incorrect usage of the term 'nudge', rather than damning all nudges using examples that are not even nudges.

Having said that, there are some good and thought-provoking chapters. Mario Rizzo has an excellent theoretical chapter, and while I don't buy into the arguments he made, it definitely made me think more deeply about what we mean when we refer to rational behaviour. Jodi Beggs (of Economists Do It With Models fame) presents a great framework that differentiates private sector nudges into those that improve welfare for consumers (which she terms 'Pareto nudges', invoking the idea of a Pareto improvement from welfare economics), and those that make consumers worse off to the benefit of firms (which she terms 'rent seeking nudges'). Beggs also notes the subversion of the term 'nudge' to mean almost any policy change that aims to change behaviour. Several chapters raised the (very valid) point that not only are consumers (or savers or whoever) subject to the behavioural biases that behavioural economics identifies, but government decision makers are also likely to be subject to those same biases. In that case, we should be cautious about the ability of government to create the 'right' choice architecture to achieve its goals.

However, there are also some notable misses among the chapters. In his chapter, Mark White critiques government attempts to alter the choice architecture to favour some option over others, but never engages with the fact that there will always be some choice architecture in place. In many cases, there simply isn't a way to avoid presenting decision makers with options, and in those cases there has to be a choice architecture of some type in place. Why not attempt to make it one that will steer people to making decisions that improve their long-term wellbeing? Similarly, Adam Thierer argues that nudges prevent 'learning by doing' or learning through making mistakes. That is good in theory, but how many opportunities do we have to make mistakes in our own retirement planning, for instance? He also fails to acknowledge that governments can also learn from nudges that don't work as intended. As Steve Wendel notes in his chapter:
We should be skeptical that behavioral nudges will work the same (or at all) once they are translated from an academic environment into consumer products, because of core lessons in the behavioral literature itself - that the details of implementation and decision-making context matter immensely.
That isn't an argument not to attempt nudges. However, it is an argument that applies equally to government nudges - they may not work as intended, so they should be rigorously evaluated.

Ultimately, if you are looking for ammunition to mount a libertarian counter-attack against nudge theory applied by government, you will find a lot of suitable material in this book. However, as a general guide to 'nudge theory in action', I believe this book falls short.

Saturday, 15 June 2019

Corruption and quid pro quo behaviour in professional soccer

In their book Freakonomics, Steven Levitt and Stephen Dubner described research (from this article by Levitt and Mark Duggan - ungated earlier version here) that showed evidence of rigged matches in sumo wrestling. Specifically, wrestlers who were approaching their eighth win (which comes with much greater earnings and ranking) were more likely to win against those who already had eight or more wins, than would be expected based on their relative ability. And that was in Japan - a country not known for widespread corruption (Transparency International ranks Japan 18th out of 180 countries in its Corruption Perceptions Index). How bad could things be in other, less honest or trustworthy, countries?

A 2018 article by Guy Elaad (Ariel University), Alex Krumer (University of St. Gallen), and Jeffrey Kantor (Ariel University), published in the Journal of Law, Economics, and Organization (ungated version here), provides a window into widespread corruption in domestic soccer. [*] They look at games in the final round of the season, in domestic soccer leagues, where one of the teams needed to win (or draw) in order to avoid relegation, and where the match was inconsequential for the other team. They have a database of 1723 such matches in 75 countries over the period from 2001 to 2013. Most interestingly, they look at how the win probability (controlling for a range of factors, including the strength of the opposition and home advantage) varies with the level of corruption in the country. They find that:
...the more corrupt the country is according to the Corruption Perceptions Index (CPI), the higher is the probability of a team (Team A) to achieve the desired result to avoid relegation to a lower division relative to achieving this result in non-decisive games against the same team (Team B)... This finding is robust when controlling for possible confounders such as differences in abilities, home advantage, and countries’ specific economic, demographic and political features.
Then, there is evidence that the winning team in that game returns the favour (quid pro quo) in the following season, since they find that:
...in more corrupt countries the probability of Team A to reciprocate by losing in the later stages of the following year to Team B is significantly higher than losing to a team that is on average better (stronger) than Team B. This result strengthens the suspicious of corrupt norms, since in the absence of any unethical behavior, we would expect the opposite result, since naturally the probability of losing increases with the strength of the opponent.
There's clearly a lot of mutual back scratching going on in professional soccer. It is worth noting, though, that the top divisions in Europe (Premier League, Ligue 1, Bundesliga, etc.) were not included in the analysis, which focused on the second-tier leagues in Europe, and top leagues outside of Europe.

An interesting follow-up to this study would be to look at betting odds. Do the betting markets price this corruption into their expectations (so that the team needing to avoid relegation has betting odds that suggest a higher probability of winning than would be expected based on home advantage, strength of opponents, etc.)? If not, then there may be opportunities for positive expected gains from betting on those games.
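
To see how that might work, here's a back-of-envelope calculation, where all of the numbers are assumptions for illustration. Suppose the bookmaker's odds imply a 40 percent win probability for the relegation-threatened team, but corruption pushes the true probability to 55 percent:

```python
# A back-of-envelope version of the betting idea above (all numbers assumed).
implied_prob = 0.40              # win probability implied by the betting odds
decimal_odds = 1 / implied_prob  # fair decimal odds of 2.5 (ignoring the bookmaker's margin)
true_prob = 0.55                 # assumed 'corruption-adjusted' win probability
stake = 100

expected_profit = true_prob * (decimal_odds - 1) * stake \
                  - (1 - true_prob) * stake
print(round(expected_profit, 2))   # 37.5 -> a positive expected gain per $100 bet
```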

[HT: Marginal Revolution, last year]

*****

[*] Or football, if you prefer. To me, football involves shoulder pads and helmets.

Wednesday, 12 June 2019

The alcohol made them do it

Alcohol has well-documented effects on a range of harms such as drunk driving (almost by definition), violence, poor health, and mortality. However, the causal evidence for alcohol's effect on a range of less serious harms is less clear - things like risky sexual activity and other substance use. A new article by Jason Fletcher (University of Wisconsin-Madison), published in the journal Contemporary Economic Policy (ungated earlier version here), aims to fill that gap.

It is trivial to show that access to alcohol is correlated with measures of harm. The challenge with any study like this is to show that access to alcohol has a causal effect on the harm. That is, that the observed correlation represents a causal effect, and is not the result of some other factor. Fletcher does this by exploiting the Minimum Legal Drinking Age in the U.S. (of 21 years of age), and using a regression discontinuity approach. Essentially, that involves looking at the measures of harm and how they track with age up to age 21, and then after age 21. If there is a big jump upwards between the time before, and the time after, age 21, then plausibly you could conclude that the sudden jump upwards is due to the onset of access to alcohol at age 21. This approach has previously been used to show the impact of access to alcohol on arrests and on mortality. In this paper, Fletcher instead focuses on:
...drinking outcomes, such as any alcohol use, binge use, and frequency of use as well as drinking-related risky behaviors, such as being drunk at work; drunk driving; having problems with friends, dates, and others while drinking; being hung over; and other outcomes.
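For readers unfamiliar with regression discontinuity, here's a minimal sketch of the idea on simulated data. The outcome, the size of the jump at age 21, and all other numbers are assumptions for illustration only:

```python
# A minimal sketch of the regression discontinuity idea described above,
# on simulated data (all numbers are assumptions for illustration).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
age = rng.uniform(19, 23, 5000)
over21 = (age >= 21).astype(int)
# assumed: a smooth trend in binge drinking with age, plus a discrete
# jump of 0.10 at the age-21 cutoff
binge = 0.25 + 0.02 * (age - 21) + 0.10 * over21 + rng.normal(0, 0.05, age.size)
df = pd.DataFrame({"binge": binge, "age_c": age - 21, "over21": over21})

# Fit separate linear trends on each side of the cutoff; the coefficient
# on over21 estimates the discontinuity (the jump) at age 21.
rd = smf.ols("binge ~ age_c * over21", data=df).fit()
print(rd.params["over21"])   # recovers roughly the assumed jump of 0.10
```
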
He uses data from the third wave of the Add Health survey in the U.S., which occurred when the research participants were aged from 18 to 26 years. He analyses the results for the whole sample, and separately by gender, and finds that:
...on average, access increases binge drinking but has few other consequences. However, the effects vary considerably by gender; where females (but not males) are more likely to initiate alcohol use at age 21, males substantially increase binge drinking at age 21. In addition, males (but not females) face an increased risk of problems with friends and risky sexual activity at age 21. There is also some evidence of an increase in drunk driving and violence.
Interesting results, but not particularly surprising. Fletcher then tries to draw some policy implications about what would happen if the MLDA were reduced, by looking at differences between young people living with their parents and those not living with their parents. He finds that:
...no harm reduction associated with binge drinking for those individuals living with their parents around age 21; in fact, individuals living with their parents (regardless of whether they are in school) have larger increases in alcohol-related risky behaviors than individuals living away from their parents.
He uses that result to suggest that parents are not good at socialising their children into safer drinking behaviours (and the results, on the surface, suggest this, because those living at home engage in more risky behaviour after they attain age 21). However, there is another interpretation that Fletcher doesn't consider. Those who are not living at home might be more likely to be drinking alcohol before age 21, and so experience some of the negative impacts earlier. So, those living at home may simply be catching up to their peers once they are 'allowed' to drink. If anything, that would strengthen his other results.

Overall, the paper doesn't tell us much that wasn't already known, although the causal aspect of the study is a nice touch. The differences by gender were a bit more surprising, and hopefully future studies will test them further.

Monday, 10 June 2019

Why Uber drivers will make no money in the long run

This is the third post in as many days about Uber (see here and here for the earlier installments), all based on this New York Times article. Today, I'm going to focus on this bit of the article:

Drivers, on the other hand, are quite sensitive to prices — that is, their wages — largely because there are so many people who are ready to start driving at any time. If prices change, people enter or exit the market, pushing the average wage to what is known as the “market rate.”
The article is partially right here. It isn't just the price elasticity of supply that is at fault - it is the lack of barriers to entry into (and exit from) the market that create a real problem for drivers. A 'barrier to entry' is something that makes it difficult for other potential suppliers to get into the market. A taxi medallion is one example, if a medallion is required before you can drive a taxi. However, there is nothing special required in order to be an Uber driver, and most people could do it. Similarly, a 'barrier to exit' is something that makes it difficult for suppliers to get out of the market once they are in it, such as a long-term contract. Barriers to exit can create a barrier to entry, because potential suppliers might not want to get into a market in the first place, if it is difficult to get out of later if things go wrong. Again, Uber has no barriers to exit for drivers. These low barriers (to entry and exit) ensure that, in the long run, drivers can't make any more money from driving than they could from their next best alternative.

To see why, consider the diagrams below. The diagram on the left represents the market for Uber rides. For simplicity, I've ignored the 'Uber tax' (that I discussed in yesterday's post). The diagram on the right tracks the profits of Uber drivers over time. The market starts in equilibrium, where demand is D0, supply is S0, and the price of an Uber ride is P0. This is associated with a level of profits for Uber drivers of π0. For reasons we will get to, this is the same earnings that an Uber driver would get in their next best alternative (maybe that's driving for Domino's, or as a taxi driver, or working as a stripper).


Now, say there is a big increase in demand for ride-sharing, from D0 to D1. The price of an Uber ride increases to P1, and the profits for driving increase to π1. Now profits from being an Uber driver are high, but they won't last. That's because many other potential Uber drivers can see these profits, and they enter the market (there are no barriers to entry, remember?). Let's say that lots of drivers enter the market. The supply of Uber drivers increases (from S1 to S2), and as a result the price decreases to P2, and profits for Uber drivers decrease to π2.

Now the profits for Uber drivers are really low. There's no barriers to exit, so some drivers decide they would be better off doing something else (driving for Domino's, etc.). Let's say that a lot of drivers choose to leave, but not all of those who entered the market previously. The supply of Uber drivers decreases (from S2 to S3), and as a result the price increases to P3, and profits for Uber drivers increase to π3.

Now the profits for Uber drivers are high again (but not as high as immediately after the demand increase). Drivers start to enter the market again, and so on, until we end up back at long-run equilibrium, where the price of a ride is back at P0, and driver profits are back at π0. At that point, every driver who is driving makes the same low profit as before. So, in the long run, even if demand for ride-sharing is increasing over time, the drivers are destined not to profit in the long run. [*]
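
If you prefer to see the dynamics play out, here's a minimal simulation of that entry-and-exit story. The demand curve, outside option, and adjustment speed are all assumed numbers; the point is only that free entry and exit push per-driver profit toward zero:

```python
# A minimal simulation of the entry-and-exit story above (assumed numbers).
def simulate(periods=12):
    drivers = 50.0
    for t in range(periods):
        price = 100.0 - 0.5 * drivers   # assumed inverse demand for rides
        profit = price - 40.0           # 40 = earnings in the next best alternative
        print(f"t={t:2d}  drivers={drivers:6.1f}  profit={profit:7.2f}")
        # entry when profit is positive, exit when it is negative
        drivers = max(drivers + 3.0 * profit, 0.0)

simulate()   # profit overshoots, then undershoots, converging toward zero
```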

*****

[*] You might have noticed that the producer surplus is higher after supply increases, which implies that drivers (as a group) are earning higher profits after the market has settled back to long-run equilibrium. However, remember that supply is higher than before - those higher profits are shared among many more drivers, so the profit for each driver individually is the same as before.

[HT: The Dangerous Economist]

Sunday, 9 June 2019

Uber is a tax on ride-sharing

This post follows up on yesterday's post about Uber, where we established that most of the benefit of Uber accrues to passengers, rather than to drivers. This is because of the shape of the demand and supply curves (steep demand, and flat supply). The New York Times article has more of interest though:
Economics says that the likelihood that a person will bear the burden of an increase in profit margins is inversely proportional to their price sensitivity. In other words, because drivers are four times more price sensitive than riders, a reasonable guess is that 80 percent of the price burden will fall on passengers, 20 percent on drivers.
The simple demand-and-supply diagram that I drew yesterday isn't the full story of the market for ride-sharing. It only showed the economic welfare of passengers (consumer surplus) and drivers (producer surplus). However, there is an important third party acting in this market: Uber.

Uber acts as a tax on the market for ride-sharing, because it takes a cut of the value of every ride. This is essentially the same as the government taking a share of every sale (as in a sales tax), except that the money goes to Uber, rather than to the government. Who pays Uber? You might think it is the drivers - after all, the fee to Uber is taken out of what the consumer pays, before the net amount is passed on to drivers. But it turns out that the 'Uber tax' is shared between passengers and drivers, and it is the passengers who pay the larger share.
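
Before we get to the diagram, a quick back-of-envelope check of that claim, using the article's figures and the standard incidence formula for a small tax:

```python
# For a small tax, a standard incidence result is that consumers bear a
# share e_supply / (e_supply + e_demand) of it (elasticities in absolute
# value). Plugging in the article's figures - demand elasticity of about
# 0.5, with drivers four times as price sensitive - recovers the 80/20 split.
e_demand = 0.5
e_supply = 4 * e_demand   # = 2.0

passenger_share = e_supply / (e_supply + e_demand)
driver_share = e_demand / (e_supply + e_demand)
print(passenger_share, driver_share)   # 0.8 0.2
```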

To see why, let's modify the diagram from yesterday's post. This is shown below. If Uber charged a zero percent fee, then the market would operate at equilibrium, and the price would be PE and the quantity of rides would be QE (this is the situation we had yesterday). However, now let's introduce Uber's fees. Since it is the drivers who pay the fees to Uber (it is taken out of their pay), it is like an increase in their costs. It isn't really an increase in their costs, so the supply curve (which is also the marginal cost curve) doesn't shift. Instead, we represent this with a new curve, S+tax. The vertical distance between the supply curve (S) and the S+tax curve is the per-ride value of the tax. [*] The price that consumers pay will increase to PC, where the S+tax curve intersects the demand curve. From that price, the fee to Uber is deducted, which leaves the drivers with the lower price PP. Notice that the passengers' price has gone up by a lot, while the drivers' effective price has dropped by only a little. This tells you that the passengers are paying most of the Uber tax. The quantity of Uber rides falls from QE to QT.


We can also look at this using economic welfare. Without the Uber tax, the market operates in equilibrium. The consumer surplus (as we established yesterday) is the triangle AEPE, while the producer surplus is the triangle PEEC. However, this changes when the Uber tax is introduced. Now the consumer surplus (the difference between the amount that consumers are willing to pay (shown by the demand curve), and the amount they actually pay (the price)) is the smaller triangle ABPC. The passengers have lost the area PCBEPE. The producer surplus (the difference between the amount the sellers receive (the price), and their costs (shown by the supply curve)) is the smaller triangle PPFC. The drivers have lost the area PEEFPP.

The total amount of welfare that Uber gains is the rectangle PCBFPP (this rectangle is the per-unit value of the Uber tax, multiplied by the quantity of rides QT). This is the size of the Uber tax. We can split the Uber tax into the share paid by passengers (PCBGPE), based on the higher price they receive, and the share paid by drivers (PEGFPP), based on the lower effective price they receive. Note that the share of the Uber tax paid by passengers is much larger than the share paid by drivers.
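
For those who like numbers with their diagrams, here's a minimal numeric version of this welfare accounting. All coefficients are made up, with the slopes chosen so that drivers are four times as price sensitive as passengers:

```python
# A minimal numeric version of the diagram, with assumed linear curves and
# the Uber fee modelled as a specific per-ride tax (all numbers made up).
a, b = 100.0, 2.0    # demand: P = a - b*Q (steep)
c, d = 10.0, 0.5     # supply: P = c + d*Q (flatter)
t = 5.0              # per-ride 'Uber tax'

q_e = (a - c) / (b + d)       # equilibrium quantity with no tax (QE)
p_e = a - b * q_e             # equilibrium price (PE)

q_t = (a - c - t) / (b + d)   # quantity with the tax (QT)
p_c = a - b * q_t             # price passengers pay (PC)
p_p = p_c - t                 # effective price drivers receive (PP)

uber_revenue = t * q_t                 # rectangle PC-B-F-PP
passenger_share = (p_c - p_e) * q_t    # rectangle PC-B-G-PE
driver_share = (p_e - p_p) * q_t       # rectangle PE-G-F-PP
print(p_c - p_e, p_e - p_p)            # 4.0 vs 1.0 per ride
print(passenger_share / uber_revenue)  # 0.8: passengers pay 80% of the tax
```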

The New York Times article notes that:
Uber and Lyft, the two leading ride-share companies, have lost a great deal of money and don’t project a profit any time soon.
Yet they are both trading on public markets with a combined worth of more than $80 billion. Investors presumably expect that these companies will some day find a path to profitability, which leaves us with a fundamental question: Will that extra money come mainly from higher prices paid by consumers or from lower wages paid to drivers?
Old-fashioned economics provides an answer: Passengers, not drivers, are likely to be the main source of financial improvement...
And now, hopefully, you can see why. If Uber raises the share of the price paid by consumers that it keeps, then it is passengers that will pay the majority of that higher Uber tax. [**] Which seems fair, since yesterday we established that it is passengers who benefit the most from Uber.

*****

[*] Strictly speaking, the 'Uber tax' is an ad valorem tax. That means that it is a percentage of the price paid by the passengers. That means that the distance between the supply curve and the S+tax curve should get larger when the price is higher. However, for simplicity, I've represented the Uber tax as a specific tax. A specific tax is a constant per-unit dollar amount, which means that the supply curve and the S+tax curve are parallel. It's a simplification, but inconsequential for our purposes here.

[**] If you increase the size of the Uber tax, then the distance between the supply curve and the S+tax curve increases. This further reduces the consumer surplus and producer surplus. The additional revenue for Uber will be predominantly paid by passengers in the form of a higher price. We could show this with an additional diagram that has a small tax, and then a large tax. But, a diagram with a small tax replaced by a large tax is not that different in its effects from a diagram with a zero tax replaced by a small tax. I decided not to go that far. Call me lazy if you want.

[HT: The Dangerous Economist]

Saturday, 8 June 2019

Passengers benefit more from Uber than drivers

When a seller sells a good or service to a buyer, a surplus (economic welfare) is created. The buyer receives something they wanted to buy, usually for a price that is less than the maximum they were willing to pay for it. The seller offloads something they wanted to sell, usually for a price that is more than the minimum they were willing to receive for it (their costs). So, both the buyer and the seller benefit. Who benefits the most though?

The Dangerous Economist points to this New York Times article about Uber:
The most comprehensive study of rider behavior in the marketplace found that riders didn’t change their behavior much when prices surged. (Like most major quantitative studies about Uber, it relied on the company’s data and included the participation of an Uber employee.)
Passengers were what economists call “inelastic,” meaning demand for rides fell by less than prices rose. For every 10 percent increase in price, demand fell by only about 5 percent.
Drivers, on the other hand, are quite sensitive to prices — that is, their wages — largely because there are so many people who are ready to start driving at any time. If prices change, people enter or exit the market, pushing the average wage to what is known as the “market rate.”
In other words, while demand is price inelastic (passengers are relatively insensitive to price changes), supply is price elastic (drivers are very sensitive to price changes). Interestingly, in the case of demand this is the opposite of what I concluded in this 2015 post. [*]

These elasticities are reflected in the diagram below. The demand curve is steep, which reflects that passengers are not very sensitive to prices - a small change in price will lead to almost no change in the quantity demanded. The supply curve, on the other hand, is flat, which reflects that drivers are very sensitive to prices - a small change in price will lead to a large change in the number of rides on offer.

However, that doesn't yet answer the question as to which side of the market (passengers or drivers) benefits the most from Uber. We need to consider their shares of the total welfare created. Consumer surplus is the difference between the amount that consumers are willing to pay (shown by the demand curve), and the amount they actually pay (the price). In the diagram, at the equilibrium price and quantity, consumer surplus is the triangle AEPE. Producer surplus is the difference between the amount the sellers receive (the price), and their costs (shown by the supply curve). In the diagram, at the equilibrium price and quantity, producer surplus is the triangle PEEC.

Notice that, because of the shape of the demand and supply curves, the size of the consumer surplus (AEPE) is much larger than the size of the producer surplus (PEEC). Passengers (as a group) benefit much more than drivers (as a group). Note that this isn't quite the same thing as saying that each passenger benefits more than each driver, because the consumer surplus is split among many more people than the producer surplus. However, it is clear that passengers benefit more from Uber than drivers do.
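
Here's a minimal numeric version of that comparison, with an assumed steep linear demand curve and flat linear supply curve (all coefficients are made up for illustration):

```python
# A minimal numeric version of the welfare comparison above (assumed numbers).
a, b = 100.0, 2.0    # demand: P = a - b*Q (steep: b is large)
c, d = 10.0, 0.1     # supply: P = c + d*Q (flat: d is small)

q_e = (a - c) / (b + d)   # equilibrium quantity
p_e = a - b * q_e         # equilibrium price

consumer_surplus = 0.5 * q_e * (a - p_e)   # triangle AEPE
producer_surplus = 0.5 * q_e * (p_e - c)   # triangle PEEC
print(round(consumer_surplus, 1), round(producer_surplus, 1))
# ~1836.7 vs ~91.8: passengers capture the bulk of the surplus
```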

*****

[*] However, it could be that in 2015, demand was elastic, while demand has become less elastic over time and is now inelastic. It's hard to see why that would be the case. The rise of other substitutes would tend to suggest increasing elasticity for Uber rides. Or, perhaps demand is more elastic in New Zealand (which my 2015 post was based on) than in the U.S. (which this article is based on)? Again, it's hard to see why that would be the case, unless Uber prices in New Zealand in 2015 were much higher than in the U.S. now.

Thursday, 6 June 2019

Medieval church regulations against cousin marriage and modern-day democracy in Europe

I was really interested in this job market paper by Jonathan Schulz (Harvard), the abstract of which ends with this sentence:
Twentieth-century cousin marriage rates explain more than 50 percent of variation in democracy across countries today.
That seems like an extraordinary claim. Do differences in cousin marriage rates really explain differences in democracy?

Essentially, the paper tests two hypotheses about kin-based networks:
First, anthropologist Jack Goody (1983) hypothesized that, motivated by financial gains, the medieval Catholic Church implemented marriage policies—most prominently, prohibitions on cousin marriage—that destroyed the existing European clan-based kin networks. This created an almost unique European family system where, still today, the nuclear family dominates and marriage among blood relatives is virtually absent. This contrasts with many parts of the world, where first- and second-cousin marriages are common... Second, several scholars have hypothesized that strong extended kin networks are detrimental to the formation of social cohesion and affect institutional outcomes...
Schulz tests these hypotheses in several steps. First, he establishes that the Church's Medieval prohibitions against cousin marriage explain the formation of communes (self-governing democratic towns or cities) before 1500 C.E. Using data on exposure to the Church (that is, how long a city was within 100 kilometres of a bishopric), he finds that:
...cities that experienced longer Catholic Church exposure were more likely to adopt inclusive institutions and become communes.
This is supported by analysis that shows that, when cousin marriage prohibitions were extended from second to sixth cousins, cities more exposed to the Church were even more likely to become communes.

In the next step, Schulz shows that exposure to the Church weakens kin-based social networks. To proxy for these networks, he uses cousin marriage rates (since marriage between cousins is an indicator of stronger kin-based networks), and differences in 'cousin terms' in different languages. In the case of the latter:
...in some societies the children of one’s parents’ same-sex siblings are called brothers and sisters — an indication of an incest taboo. Yet, the differently called children of one’s parents’ opposite-sex siblings are often preferred marriage partners. Cousin terms reflect historically distant cousin marriage practices...
Again making use of his measure of Church exposure, he finds that:
Western Church exposure of 100 years longer is associated with a decrease in the percentage of individuals speaking a language that differentiates cousin terms by about 7 to 9 percentage points. Similarly, Western Church exposure reduces the preference for cousin marriage by 0.05 points... and cousin marriages by 38%...
Exposure to the Eastern Church had somewhat lesser effects, although they were still statistically significant. In the third step, Schulz shows that:
...longer Church exposure and low cousin marriage rates are associated with higher contemporary civicness as proxied by voter turnout and self-reported trust in others. The association holds for regions within Italy, Spain and France that have been firmly within the sphere of the Catholic Church for at least half of a millennium but which differ in their previous experience of the Church’s medieval marriage regulations.
This suggests that breaking down the kin-based networks increased trust and civic-mindedness, both essential for the development of democratic norms. Note that these results relate the ancient Church exposure to modern-day effects. There is no possibility of reverse causation here (that is, democratic norms affecting cousin marriage rates). Finally, Schulz goes on to show that:
...countries that differentiate cousin terms have a 7.5 units lower democracy on a 21-item scale compared to countries that do not. At the same time, 20th-century cousin marriage rates account for more than 50% of the cross-country variation in democracy scores today.
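It is worth being clear what that claim means: it is a statement about R-squared from a regression of democracy scores on cousin marriage rates. Here's a minimal sketch of computing such a number, on entirely made-up data:

```python
# 'Explains more than 50 percent of variation' is an R-squared claim.
# The data below are made up purely to illustrate the calculation.
import numpy as np

rng = np.random.default_rng(2)
cousin_marriage = rng.uniform(0, 50, 100)    # % of marriages (made up)
democracy = 18 - 0.3 * cousin_marriage + rng.normal(0, 3, 100)

slope, intercept = np.polyfit(cousin_marriage, democracy, 1)
fitted = intercept + slope * cousin_marriage
r_squared = 1 - ((democracy - fitted) ** 2).sum() \
              / ((democracy - democracy.mean()) ** 2).sum()
print(round(r_squared, 2))   # the share of cross-country variation 'explained'
```
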
So, there you have it. Prohibition of cousin marriage explains modern-day democracy in Europe. Ok, it's probably not that simple (nothing ever is). But certainly, this is a paper that tells an interesting story, backed up by some compelling data analysis.

[HT: Marginal Revolution, last November]

Tuesday, 4 June 2019

Book review: Economics Explained

I just finished an old book, Economics Explained, by Robert Heilbroner and Lester Thurow. This was the "newly revised and updated" edition, from 1998! I can't remember how this book was recommended to me, but the subtitle is "everything you need to know about how the economy works and where it's going". I'm not sure it quite lives up to that billing, and not just because the book is now quite dated. The book's purpose is to "simplify and clarify the vocabularies and concepts you will need to understand what is going on in the economic world and whether it is working smoothly or not". On that score, the book performs much better, not least because much of the core of economic understanding remains much as it was 20 years ago.

However, I found the book to be somewhat unbalanced, though that might simply reflect my personal bias. The authors devote some 139 pages to macroeconomics, and just 32 pages to microeconomics. The macroeconomics section manages without a single diagram (other than some presentations of data), whereas the microeconomics section launches into a diagram on its third page. If it is possible to explain macroeconomics without diagrams, it is certainly possible to do the same for microeconomics.

Setting that gripe (and the obvious datedness of some of the material) aside, the book is a good read. It was interesting to read what Heilbroner and Thurow saw as the big economic problems of the time. The fact that they devoted the first chapter of that section to inequality, including some incredulous references to the ratio of CEO pay to that of the average worker, shows that in some respects we are still trying to solve the same issues that confronted us two decades ago.

Some parts of the book were quite refreshing, including the authors' insistence that the answer to many questions was "both yes and no" (which reminded me of my macro lecturer Brian Silverstone's response to almost any question: "It depends on your model"). They also noted many times that decisions are often political, not economic.

I liked that there was a fair amount of economic history, especially early in the book. However, there is also a lot of assumed knowledge of particular historical details, where it isn't clear that a modern reader would necessarily be able to place all the references (e.g. the New Deal is mentioned only in passing, but without further context). The material is also, as may be expected, very US-centric.

Despite that, there is a lot to like about the book, such as the section on government deficits, where they noted that:
...we should be debating not whether the government may or may not run a deficit, but whether its expenditures in excess of revenue reflect investment or consumption.
There are no doubt lessons there for current debates over 'budget responsibility' rules in New Zealand and elsewhere. However, this isn't so much a book for the modern reader, although those with an interest in how economics was explained in the mid-late 1990s might find it interesting.

Monday, 3 June 2019

What Jeopardy and Junior Jeopardy can tell us about gender differences in risk taking

Game shows are fun, and funny. As a bonus, they can provide a window into the contestants' decision making in a setting where the rules are known (and if you haven't already seen it, you should check out the show Golden Balls that I blogged about here). And they can provide data that economists can exploit to understand that decision-making.

That is exactly what this 2017 article (ungated) by Jenny Säve-Söderbergh and Gabriella Sjögren Lindquist (both of Stockholm University), published in The Economic Journal, does. Säve-Söderbergh and Sjögren Lindquist (hereafter SSSL) use data from the Swedish edition of the game show Jeopardy and Junior Jeopardy to investigate gender differences in risk taking, and the influence of the gender composition of the other contestants. Specifically, they are looking at whether women (and girls) make different decisions when competing against men (and boys) than when competing against other women (and girls). They also look at whether the differences are the same for adults (in Jeopardy), as for 10-11 year old children (in Junior Jeopardy).

Specifically, they look at what happens when the contestants receive a Daily Double, where they have the option to wager some of their current score on getting the answer (or, since this is Jeopardy, the question) right. Using data from 2000 and 2001 (206 shows of Jeopardy, with 449 contestants) and from 1993-2003 (85 shows of Junior Jeopardy, with 222 contestants), they find that:
...there is no gender gap in wagering among children, in contrast to the results for adults. This result is robust to controls for absolute performance, the difficulty level of the questions, experience, relative performance and performance feedback, in addition to whether children shared the game earnings with their classes. Our second finding is that male and female risk taking differ with age in different ways: whereas girls wager more than women, boys wager less than men.
That in itself is interesting. Girls aged 10-11 years take more risks than boys of the same age, but this reverses among adults. Many studies have established that men are less risk averse than women (although those findings are contested), but girls being less risk averse than boys is a surprise.

SSSL then go on to find that:
...female behaviour is sensitive to social context. In particular, despite the high-stakes setting and the lack of strategic advantage created by providing incorrect answers, girls perform worse (answering the Daily Double incorrectly more often and winning less often) and employ less gainful wagering strategies when they are randomly assigned a group of boy opponents compared with when they are randomly assigned a same-gender group of opponents or a mixed-gender group of opponents... Conversely, women wager less if they are randomly assigned a group of male opponents... The performances of boys and men do not change with the social context...
So, essentially boys and men don't seem to care who they are playing against, but girls and women do. And it seems that the social context affects girls more than adult women, since it impacted girls' success in getting the Daily Double question right. SSSL interpret this as potentially showing stereotype threat (where girls perform worse than boys when there is a belief that on average, girls will perform worse than boys). To support this, the authors note that their results may:
...reflect feelings of intimidation in the presence of boys that therefore causes girls to be prone to making mistakes.
It is definitely concerning if girls as young as 10 are being affected by stereotype threat. You could put this down to being just one study in one particular (and fairly unusual) context. However, there is a long history of studies that have identified stereotype threat among children (see here for an overview). If we're concerned about gender gaps among adults, and among university students, then it appears that solutions need to start at a much younger age.

Sunday, 2 June 2019

No, pre-drinking isn't more common among people aged over 30

The New Zealand Herald reported recently on some new (and interesting) cross-country research:
New research has found that New Zealanders increase their pre-drinking after the age of 30, instead of slowing down.
A University of Queensland study explored the pre-drinking habits of people in 27 countries and found in six countries, including New Zealand, pre-drinking appears to increase after the age of 30.
Pre-drinking, also known as pre-loading, pre-partying or pre-gaming, is most commonly defined as the consumption of alcohol in domestic settings prior to attending licensed premises.
The original research paper (ungated), by Jason Ferris (University of Queensland) and co-authors, was published recently in the journal Alcohol & Alcoholism. Having read the paper, I can say it contains some interesting comparisons between countries in terms of reported pre-drinking. However, there is also plenty of reason for skepticism about the claim that New Zealanders increase their pre-drinking after age 30.

First, the data were from the Global Drug Survey, which Ferris et al. describe as:
...a non-probability sample of people who self-select to complete the online survey investigating past use of alcohol and other drugs...
In other words, it's basically an online poll that is not representative of the population. On top of that, they restrict the sample to exclude non-drinkers. So, their results don't show the proportion of people who are pre-drinkers, but instead show the proportion of drinkers who are pre-drinkers. If the proportion of people who drink is lower at higher ages, then their measure of pre-drinking prevalence will be biased upwards at higher ages because of their sample restrictions.
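To see how that restriction biases the comparison, here is a minimal sketch in Python with made-up numbers (none of these figures come from the paper). If drinking is less common at older ages, then conditioning on drinkers flatters the older group:

```python
# Hypothetical illustration: excluding non-drinkers inflates apparent
# pre-drinking prevalence, and by more in groups where drinking is rarer.
# All of the numbers below are invented for the example.

groups = {
    # age group: (population, drinkers, pre-drinkers)
    "18-29": (1000, 900, 450),
    "30+":   (1000, 600, 240),
}

for group, (pop, drinkers, predrinkers) in groups.items():
    of_population = predrinkers / pop       # prevalence among everyone
    of_drinkers = predrinkers / drinkers    # prevalence among drinkers only
    print(f"{group}: {of_population:.0%} of everyone pre-drinks, "
          f"but {of_drinkers:.0%} of drinkers do")

# 18-29: 45% of everyone pre-drinks, but 50% of drinkers do
# 30+: 24% of everyone pre-drinks, but 40% of drinkers do
```

In this made-up example, the population-level gap between the age groups (21 percentage points) shrinks to just 10 percentage points once non-drinkers are excluded, which is exactly the direction of bias described above.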

On a similar note, think about who is going to answer the Global Drug Survey. It seems to me that the sample itself may be biased towards those who drink more, drink more often, and pre-drink more. In that case, there is even more bias in the results.

Second (and pedantically), this is cross-sectional data, so it doesn't say anything about what people do as they get older. The data allow us to compare between different age groups, but are silent as to whether those who are currently aged under 30 will pre-drink more as they get older. So claiming that "New Zealanders increase their pre-drinking after the age of 30" clearly misunderstands how the results work (though that is the journalist's issue, not the researchers').

Third, even saying that people aged over 30 are pre-drinking more than younger people is pretty misleading. Here's the part of Figure 4 from the paper that relates to New Zealand:

[Figure 4 (New Zealand panel) from Ferris et al.: proportion of pre-drinkers by age, with separate lines for men and women]
The blue line tracks the proportion of men who reported being pre-drinkers, by age, while the pink line tracks the same proportion for women. You can probably see the uptick in pre-drinking after age 30 for both men and women, but it never gets back up to the peak of the early 20s. However, the error bars on these estimates get wider in the higher age groups, and they all overlap, so you would clearly be over-interpreting the data to claim there is a strong increase in pre-drinking at higher ages.
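As a rough illustration of why those error bars balloon at older ages, here is a minimal sketch in Python using the standard normal-approximation confidence interval for a proportion. The sample sizes and proportions are invented for the example, not taken from the paper:

```python
import math

# A 95% confidence interval for a proportion p estimated from n responses
# is roughly p +/- 1.96 * sqrt(p * (1 - p) / n), so the interval widens
# quickly as the cell size n shrinks. Both cells use invented numbers.

def ci95(p, n):
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    return (p - half_width, p + half_width)

young = ci95(0.45, 400)  # a large cell of younger respondents
older = ci95(0.35, 40)   # a much smaller cell of older respondents

print(f"younger: {young[0]:.2f} to {young[1]:.2f}")  # ~0.40 to 0.50
print(f"older:   {older[0]:.2f} to {older[1]:.2f}")  # ~0.20 to 0.50
```

The two intervals overlap almost entirely, so the apparent difference between the point estimates tells us very little.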

My own research from 2014 (which I blogged about here) actually shows that, within a representative sample of people in the night-time economy, pre-drinkers are younger (25 on average, compared with 31 on average for non-pre-drinkers), and that pre-drinking was strongly negatively correlated with age. Even when I fit a cubic in age (which is essentially what Ferris et al. did), I still get a downward slope of pre-drinking prevalence across all ages up to at least 40 (and then it's pretty flat). Here's the raw cubic regression line (for both genders combined):

[Figure: cubic regression line of pre-drinking prevalence on age, both genders combined, from my 2014 research]
Notice two things from this graph: (1) there is a massive drop-off in pre-drinking from young ages to older ages (that's because the sample here is representative, and not limited to drinkers only); and (2) there is basically no uptick in pre-drinking at older ages (unless you squint really hard after age 40, and even then the line is basically flat).
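For anyone curious what fitting a cubic looks like in practice, here is a minimal sketch in Python of a linear probability model with a cubic in age, run on simulated stand-in data (with a steep early decline built in), not my actual night-time economy sample:

```python
import numpy as np

# Simulate a stand-in sample: pre-drinking probability declines steeply
# with age and then flattens out. These are NOT the real survey data.
rng = np.random.default_rng(42)
age = rng.uniform(18, 60, size=1000)
true_prob = np.clip(1.2 - 0.025 * age, 0.05, 0.95)
predrink = rng.binomial(1, true_prob)

# Fit p(pre-drink) = b0 + b1*age + b2*age^2 + b3*age^3 by least squares
coefs = np.polyfit(age, predrink, deg=3)
cubic = np.poly1d(coefs)

for a in (20, 30, 40, 50):
    print(f"age {a}: predicted pre-drinking prevalence {cubic(a):.2f}")
```

One caveat worth noting: a cubic can wiggle at the extremes of the age range regardless of what the underlying data are doing, which is one more reason not to read too much into small upticks in the tails.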

Research that Matt Roskruge and I did earlier this year (also based on a representative sample of people in the night-time economy) is also consistent with those 2014 results (and I'll blog on that research a bit later in the year).

Pre-drinking is a problem, because it is a major contributor to intoxication in the night-time economy (see my earlier blog post for more on that point). However, while many people aged over 30 do engage in pre-drinking, it is much more prevalent among younger people.
