Sunday, 21 July 2019

Religious competition and witch trials

Witch trials were a common feature of late medieval and early modern Europe. These trials were rare in the early Middle Ages, and became much less common again as the modern era progressed. Why were witch trials so common during this period?

A recent article by Peter Leeson and Jacob Russ (both George Mason University), published in The Economic Journal (ungated here), provides a compelling answer. Leeson is known for using economics to investigate somewhat unusual research questions (such as those reported in the book WTF?! An Economic Tour of the Weird, which I reviewed here, or The Invisible Hook: The Hidden Economics of Pirates).

In this case, Leeson and Russ investigated the factors associated with European witch trials over the period between 1300 and 1850 C.E. They argue that the witch trials were a response to intensified religious competition (predominantly between the Protestant and Catholic churches) following the Reformation:
Europe’s witch trials reflected non-price competition between the Catholic and Protestant churches for religious market share in confessionally contested parts of Christendom... By leveraging popular belief in witchcraft, witch-prosecutors advertised their confessional brands’ commitment and power to protect citizens from worldly manifestations of Satan’s evil...
The idea here is that the churches were competing for followers. One way that church leaders could encourage followers to join their church was to convince them that their church was more active and successful in banishing evil. To do this, the churches engaged in witch trials. Leeson and Russ argue that witch trials should therefore be more common in areas that were more contested (that is, where there was more religious competition), and should become less common after the Treaty of Westphalia in 1648, because the Treaty permanently fixed the 'confessional geography' (that is, which areas could be claimed by Protestants, and which by Catholics), so that:
After 1648, it was no longer possible for Catholic or Protestant religious suppliers to change the denomination of any of the Empire’s territories, greatly reducing their motivation to compete.
Leeson and Russ assemble an impressive dataset of the dates and locations of European witch trials, covering more than 43,000 people across 21 European countries, as well as the dates and locations of 424 religious conflicts (which they use as their measure of religious competition). They find that:
...each additional confessional battle is associated with an approximately 8% increase in the number of people tried for witchcraft; each additional confessional battle per million, with an approximately 11% increase in the number of people tried for witchcraft per million.
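Percentage effects like this compound across battles. A quick back-of-the-envelope illustration (the baseline of 100 trials is an assumed number, not from the paper):

```python
# Back-of-the-envelope compounding of an "8% per additional battle" effect.
# The baseline of 100 trials is an assumed number, purely for illustration.
baseline_trials = 100
per_battle_multiplier = 1.08  # an approximately 8% increase per confessional battle

def expected_trials(extra_battles):
    """Expected trials if the percentage effect compounds across battles."""
    return baseline_trials * per_battle_multiplier ** extra_battles

assert expected_trials(0) == 100
assert round(expected_trials(5)) == 147  # five extra battles -> roughly 47% more trials
```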
They then go on to test other theories from the literature. These include the idea that weather shocks (such as the 'Little Ice Age') or shocks to incomes created incentives for scapegoating. Witches make handy scapegoats, so witch trials could increase when scapegoats are needed. Leeson and Russ are able to show that these alternative explanations are not as strong in predicting witch trials as religious competition is.

This is a really nice paper, underpinned by simple economic theory. For instance:
Of course, prosecuting witches was not free; it could be very expensive... The intensity of a religious supplier’s witch-trial activity thus depended on its benefit, which depended on the intensity of the religious market contestation he faced. The more intense the contestation, the higher the benefit of conducting witch trials, hence the more he would conduct.
However, one aspect was missing from the theoretical underpinning of the paper. Religion is, by its very nature, what economists call a credence good. With credence goods, some of the characteristics of the good (the 'credence characteristics') are not known to the consumer before they purchase, and they are still not known to the consumer even after they have consumed the good. [*] Health care is another example (because consumers can't know what the outcome would have been if they hadn't been treated, so they can't evaluate how much better the treatment is than the alternative).

In this case, it can be very difficult for a potential church follower to evaluate which church they should follow. Presumably, they want to follow whichever church will make them safer, 'in this life and the next'. The church leaders try to signal that their church is better through witch trials, but their signals cannot be credible, even to highly superstitious potential followers. To be credible, a signal must be costly (witch trials are costly to the church), but it must be costly in a way that makes it unattractive for the lower-quality church to attempt. Witch trials are not credible as a signal, because both churches presumably face the same costs of engaging in them. Instead, we see an escalation in the intensity of witch trials, since whichever church engages in the most intense trial behaviour will be the church that appears to be the 'highest quality' church. Some further exploration of this point would have been interesting in the paper.
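Leeson and Russ don't model this formally, but the standard signalling logic can be sketched with some hypothetical payoffs (the numbers below are assumptions for illustration, not from the paper):

```python
# Illustrative costly-signalling check (all payoffs are hypothetical numbers).
# A signal separates high-quality from low-quality senders only if its cost
# exceeds the benefit for the low type, but not for the high type.

def signal_is_credible(benefit, cost_high, cost_low):
    """A signal is credible (separating) if only the high type gains by sending it."""
    high_sends = benefit > cost_high
    low_sends = benefit > cost_low
    return high_sends and not low_sends

# Classic case: the signal is cheaper for the high-quality sender, so it separates.
assert signal_is_credible(benefit=10, cost_high=4, cost_low=15) is True

# Witch-trial case: both churches face the same trial costs, so either both
# send the signal or neither does - there is no separation, only escalation.
assert signal_is_credible(benefit=10, cost_high=6, cost_low=6) is False
```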

Leeson and Russ also note that other periods of religious competition have led to increases in witch trials, such as the Salem witch trials in seventeenth-century Massachusetts, where there was competition between Puritan ministers. Given that we are currently in a new period of religious competition, it makes you wonder whether we might see witch trials reappear. And indeed, that does appear to be the case, as The Economist notes (in an article about the same Leeson and Russ paper):
The persecution of vulnerable folk on trumped-up allegations of witchcraft may sound like a horror story from a history book, but the practice is on the rise in modern-day Africa. The prime victims are now children, with orphans, the disabled and albinos particularly at risk. In 2010 Unicef, a charity, estimated that 20,000 children accused of witchcraft lived on the streets of Congo’s capital, Kinshasa. Areas of intense religious competition between Christians and Muslims are hot spots. In Nigeria, for instance, Pentecostal Christian preachers fight for converts by offering protection from child witches.
The witch trials in Europe only ended after a treaty between the competing religious factions. What hope for such a treaty today?

[HT for The Economist article: Marginal Revolution]


[*] Credence characteristics are therefore different from search characteristics, which the consumer can find out before purchase, or experience characteristics, which the consumer finds out after they purchase the good.

Saturday, 20 July 2019

Criminalising prostitutes, or their clients?

One of the examples that I have used for many years in illustrating supply and demand is the different impacts of enforcing penalties on the sellers, vs. the buyers, of illegal drugs. Last month, The Economist had an article (maybe paywalled for you) on a similar point, related to the market for sex services:
In 1999 Sweden banned the purchase—but not the sale—of sex. A curious coalition of feminists and Christians backed the law. They argued that it would wipe out prostitution by eliminating demand, and that this would be a good thing because all sex work is exploitative...
Over the past two decades the Swedish model has been taken up by nearby Norway and Iceland, and beyond, by Canada, France, Ireland, Israel and Northern Ireland. In 2014 the European Parliament urged EU members to adopt it. Spanish lawmakers are in the process of doing so. In America politicians in Maine and Massachusetts are calling for a similar approach.
In areas where prostitution is illegal, the supply of sex services is lower than in areas where it is legal. That is because the sellers in this market face higher costs, such as the costs of fines or other penalties for engaging in an illegal activity. This is shown in the diagram below, where supply is lower in the market where prostitution is illegal (S1, compared with S0). The price of sex services will be higher in the market where prostitution is illegal (P1, compared with P0), and the quantity of sex services traded will be lower (Q1, compared with Q0).

If instead the purchase, but not the sale, of sex services is illegal, then the penalties are imposed on the buyers of sex services. This reduces the demand, because the net benefit of sex services to the buyer is lower (because of the risk of fines or other penalties). This is shown in the diagram below, where the demand curve is lower (DB, compared with DA). The price of sex services will be lower in the market where buying sex services is illegal (PB, compared with PA), and the quantity of sex services traded will be lower (QB, compared with QA).

Notice that the key difference in these two markets is what happens to the price. The price is higher when selling sex is illegal, while the price is lower when buying sex is illegal (if both of these activities are illegal, then the overall effect on price will be ambiguous).
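A minimal linear model makes the asymmetry concrete. All of the intercepts, slopes, and the penalty size below are assumed numbers, purely for illustration:

```python
# Minimal comparative-statics sketch with linear curves (all numbers assumed).
# Demand: P = a - b*Q (buyers' willingness to pay)
# Supply: P = c + d*Q (sellers' marginal cost)

def equilibrium(a, b, c, d):
    """Solve a - b*Q = c + d*Q for the market-clearing price and quantity."""
    q = (a - c) / (b + d)
    p = a - b * q
    return p, q

penalty = 30  # assumed per-unit cost of fines and other penalties

p0, q0 = equilibrium(a=100, b=1, c=20, d=1)              # legal baseline
p_s, q_s = equilibrium(a=100, b=1, c=20 + penalty, d=1)  # penalise sellers: supply shifts
p_b, q_b = equilibrium(a=100 - penalty, b=1, c=20, d=1)  # penalise buyers: demand shifts

assert q_s < q0 and q_b < q0  # quantity traded falls either way
assert p_s > p0               # seller penalties raise the price
assert p_b < p0               # buyer penalties lower the price
```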

If your goal is to "wipe out prostitution" (as per The Economist article), which of these approaches is better depends on what you think happens next. If the price is higher, then the potential income from sex services is higher, so that might encourage more sellers to enter the market. That would simply shift the supply curve back towards where it started, meaning that the policy of penalising sellers has little overall impact. That suggests that penalising buyers may be a better approach to eliminating sex services. However, as The Economist article notes, that doesn't mean it is all good news:
Supporters of the Swedish model claim it protects prostitutes by giving them some power over clients, who will be worried about being shopped to the police. Prostitutes say it has the opposite effect. Face-to-face negotiations are more hurried. Kate McGrew of Sex Workers Alliance Ireland says that fewer sex workers are heeding what used to be red flags. For example, a trans woman was beaten up after taking on a client who asked if she was alone. Clients are more likely to insist on assignations in remote places. And because men refuse to reveal identifying information, prostitutes have little recourse if they are attacked.
In a study of more than 500 sex workers in France, nearly 40% said their power to negotiate prices and insist on condoms had diminished since buying sex was banned in 2016. Nearly 80% said their earnings had fallen, and almost 90% did not support the law. In Ireland violence against prostitutes shot up by almost 80% in the year after buying sex was banned, according to Ugly Mugs, a group that encourages sex workers to report attacks.
Yet the number of sex workers in Ireland who tell the police about such crimes has fallen. France has seen similar shifts.
If buyers are being penalised, then they become more cautious. They want to meet in more secluded locations, where the sellers of sex services are more isolated and unable to easily summon help if something goes wrong. This increases the risk of harm to the sellers. Penalising the buyers also reduces the price, which makes the sellers of sex services more willing to take risks (to their safety, but also to their health) to maintain their income.

Perhaps the best approach is actually to decriminalise (or legalise) both sides of the market:
Advocates of a more liberal approach point to New Zealand, which treats selling sex like any other job. An official report says that “the vast majority” of sex workers are safer and healthier since prostitution was decriminalised in 2003. Those working on the streets report that their relationship with the police has improved. Likewise, in the Australian state of New South Wales, where selling sex is legal, prostitutes’ use of condoms is higher than in other Australian states where it is banned.
If the market isn't 'hidden', then sellers of sex services can more easily be reached by public health services, and they can have better relationships with the police. Buyers and sellers are safer as a result. And police resources can be diverted from policing the sex services market to other tasks. This seems like an easy win-win for all parties, and indeed that has proven to be the case in New Zealand. The main argument for penalties on the buyers (or sellers) in this market is ideological.

Friday, 19 July 2019

Who are the happy economists in the UK?

In a new article published in the journal Economics Letters (gated; I don't see an ungated version online), Karen Mumford (University of York) and Cristina Sechel (University of Sheffield) used data from 443 academic economists in the UK to investigate the factors associated with job satisfaction. The method was similar to that of happiness studies, with respondents asked: "Overall how satisfied are you with your job these days?", and responses measured on a scale of 1 (low) to 10 (high). If you substitute the word "life" for "job", then you have one of the standard questions for measuring life satisfaction.

Anyway, they found that:
Our most consistent results occur with the workplace characteristics. Job satisfaction is significantly related to working with proportionately more women (negative); working in London (positive); or working in a co-operative environment (positive). The latter relationship is particularly substantial. Never having had a mentor is negatively related to job satisfaction for these academics. Having a network available for professional advice is positively associated with satisfaction for women, but not for men.
The last points, about mentoring and professional networks, are important and accord with other research that has suggested the importance of mentoring for emerging female economists (for example, see this post from earlier this year). However, the professional networks result (one of the few differences between men and women in this study) becomes statistically insignificant if you control for self-reported salary (though this reduces the sample size to 306, and results are not shown separately by gender for this reduced sample).

Overall, it seems that the characteristics of the workplace dominate in terms of their association with job satisfaction. Academic economists in the UK are most satisfied in a collegial and cooperative environment, with access to a supportive mentor. Who would have guessed that?

Thursday, 18 July 2019

If you think you can score a point off Serena Williams, you're not purely rational

In the first week of my ECONS102 class, we discuss behavioural economics. In particular, we discuss a range of behavioural biases and heuristics that create deviations from 'purely rational' decision-making, and lead to what 2017 Nobel Prize winner Richard Thaler has termed 'quasi-rational' decision-making. One of those biases is positivity bias, or the Dunning-Kruger effect (both related to what some psychologists call self-enhancement), where people overestimate their ability.

There are lots of real-world examples of positivity bias. If you've ever watched professional darts or poker and thought, 'that doesn't look too hard; I could totally do that', then you've been subject to it. And there was an excellent example reported in Newsweek last week:
A recent YouGov poll suggests 12 percent of men, or about one in eight, think they could score a point off Serena Williams.
The poll, conducted of 1,732 adults in the UK on Friday, revealed that only 3 percent of women thought they could get one past the 23-time Grand Slam champion.
Maybe women are more realistic about their chances, but I don't think even 3 percent of anyone is going to score a point in a game against Serena Williams. According to this article, her average first serve speed at the 2014 US Open was 108 mph (174 km/h) - that's the average serve speed. And her return game is pretty good too. Unless you're a top professional tennis player, you're not scoring a point. Although, maybe if you played the game on a dark night, with no lights, and matt black tennis balls...

Positivity bias makes us more likely to believe that we can achieve things, whether or not those things are realistically achievable. You might think that would be pretty benign in its effect. So what - we have good feelings about ourselves? However, this bias can lead us to invest in activities that are underproductive (because we aren't as talented, or productive, as we think we are), wasting resources in the process. In other words, it may cause us to underestimate the opportunity costs of some activities, making us potentially worse off than if we had never attempted them.

Purely rational decision-makers have realistic understandings of their strengths and weaknesses, and can accurately judge their ability to successfully undertake activities. Unfortunately, we're not purely rational decision-makers, but that doesn't mean that Serena Williams will be any easier to take a point off.

Monday, 15 July 2019

The adjustment of the egg market to increasing supplier costs

This week in my ECONS102 class, we are covering supply and demand. Understanding how the market adjusts from one equilibrium to another (which we call comparative statics) is an important component of that topic. There are loads of real-world examples. For instance, in a very timely article, the New Zealand Herald reported last week:
Gilmour's, the country's largest supplier of wholesale food and beverages, is warning that the price of eggs is set to increase and the breakfast favourite may be harder to come by as egg farmers move to meet changes to the law.
In an email sent to customers today, the retailer owned by supermarket giant Foodstuffs, said "huge investment" was required by the industry to meet the Animal Welfare Code of Practice for Layer Hens which in turn would drive up the price of eggs.
"There is currently uncertainty around supply as farms struggle to gain resource consent for new production whilst other suppliers exit the supermarket sector and/or industry altogether. This is resulting in a shortage of eggs which is expected to continue over the short to medium term as the industry readjusts," the notice outlined.
Gilmours said due to higher production-related costs colony eggs would be sold at a premium as cage eggs are phased out.
Egg producers are facing increasing production costs. When costs of production increase, that results in a decrease in supply. As shown in the diagram below, the supply curve shifts up and to the left, from S0 to S1. If egg prices were to remain at the original equilibrium price (P0), then the quantity of eggs demanded (Q0) would exceed the quantity of eggs supplied (QS) at that price, because egg producers are only willing to produce QS eggs at the price of P0, after the supply curve shifts. There would be a shortage of eggs.

When there is a shortage, we expect the equilibrium price to increase. This is because some buyers, who are willing to pay the going price (P0), are missing out. Some of them will find a willing seller, and offer the seller a little bit more, in order to avoid missing out. In other words, buyers bid up the price. The result is that the price increases, until the price is restored to equilibrium, at the new (higher) equilibrium price of P1. At the new equilibrium price of P1, the quantity of eggs demanded is equal to the quantity of eggs supplied (both are equal to Q1). We can say that the market clears.
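The same adjustment can be traced with a simple linear sketch (the demand and supply curves, and the size of the cost increase, are all assumed for illustration):

```python
# A sketch of the egg-market adjustment with linear curves (all numbers assumed).
# Demand: Qd = 100 - P; Supply: Qs = P - 20 (before the cost increase).

def qd(p):
    """Quantity of eggs demanded at price p."""
    return 100 - p

def qs(p, cost_shift=0):
    """Quantity of eggs supplied at price p; higher costs shift supply left."""
    return (p - 20) - cost_shift

# Original equilibrium: Qd = Qs -> 100 - P = P - 20 -> P0 = 60.
p0 = 60
assert qd(p0) == qs(p0)

# Production costs rise by 20. At the old price there is now a shortage:
shift = 20
shortage = qd(p0) - qs(p0, shift)
assert shortage > 0  # quantity demanded exceeds quantity supplied at P0

# Buyers bid the price up until the market clears again:
p1 = 70  # solves 100 - P = (P - 20) - 20
assert qd(p1) == qs(p1, shift)
assert p1 > p0 and qd(p1) < qd(p0)  # higher price, lower quantity traded
```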

Saturday, 13 July 2019

Saving the elephants, only for the hippos to be hunted

When two goods are substitutes, if the price of one of them increases or it becomes less available, consumers switch to the other. So, this report from The Telegraph (gated, ungated version from the New Zealand Herald here) last week should come as no surprise:
The elephant ivory ban is killing hippos, conservationists have said, as poachers and hunters take advantage of a loophole in the new law.
The Ivory Act, which will come into force later this year, was championed by Michael Gove, the British Environment Secretary, but conservationists argue that it puts hippos at grave risk as the import of their tusks will still be legal.
Hippo ivory, which resembles that of an elephant, is being increasingly traded globally with 12,847 hippo teeth and tusks, weighing 3,326kg, bought and sold in 2018. Trade increased from 273 items in 2007 to 6,113 in 2011...
Campaigners have called on the Government to close the loophole to ensure the ban applies to all ivory-bearing animals. They have also warned that it is nearly impossible to tell whether a tusk is from a hippopotamus that was slaughtered recently or many years ago, and whether it was poached or legally killed.
Will Travers, president of the Born Free foundation, said authorities were "shifting pressure" on to hippos by only banning ivory from elephants.
Banning elephant ivory doesn't completely shut off the supply of elephant ivory, but it does decrease it. This is because the costs of supplying ivory have increased (due to the penalties for supplying an illegal product). This decrease in supply is shown in the diagram below, by the shift from S0 to S1. The equilibrium price of elephant ivory increases from P0 to P1.

Elephant ivory and hippo ivory are close substitutes (in the article, Will Travers says that "I sometimes can't tell the difference between different types of ivory and I've been in this for 35 years"). Elephant ivory has now become relatively more expensive, so more price-sensitive consumers switch to hippo ivory. This increases the demand for hippo ivory, as shown in the diagram below, from DA to DB. This increases the price of hippo ivory (from PA to PB) but importantly, it also increases the quantity of hippo ivory traded from QA to QB.
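The two linked markets can be sketched with linear curves (every number below is assumed, purely for illustration):

```python
# Two linked ivory markets with linear curves (all numbers assumed).

def equilibrium(demand_intercept, supply_intercept, slope=1):
    """Solve (demand_intercept - Q) = (supply_intercept + Q) for price and quantity."""
    q = (demand_intercept - supply_intercept) / (2 * slope)
    p = demand_intercept - slope * q
    return p, q

# Elephant ivory: the ban raises sellers' costs, shifting supply left.
p_eleph_before, _ = equilibrium(100, 20)
p_eleph_after, _ = equilibrium(100, 20 + 40)
assert p_eleph_after > p_eleph_before

# Hippo ivory: a close substitute, so the higher elephant-ivory price
# shifts hippo-ivory demand to the right.
p_hippo_before, q_hippo_before = equilibrium(60, 10)
p_hippo_after, q_hippo_after = equilibrium(60 + 20, 10)
assert p_hippo_after > p_hippo_before
assert q_hippo_after > q_hippo_before  # more hippo ivory traded, so more hippos hunted
```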

I have to admit, I hadn't realised the number of animals that produce ivory, including walrus, wart hogs, and narwhals (as well as elephants and hippos). But if you're going to ban ivory to save the elephants, it probably pays to ban all of it.

Wednesday, 10 July 2019

NZ ranks 18th in new ridiculous healthcare ranking

Last month, the New Zealand Herald reported:
New Zealand's healthcare system is ranked 18th place out of 24 countries - lagging behind a number of countries including Japan, Germany and even Australia.
UK healthcare recruiter Medical ID has ranked 24 OECD countries on their healthcare systems.
The ranking was based on the amount of GDP spent on healthcare, the number of doctors and nurses, how many hospital beds they have and the average life expectancy.
New Zealand was ranked 18th with a score of 60/100 and spending 9 per cent of its GDP on healthcare. It has 12,821 hospital beds and 62,843 doctors and nurses and the average life expectancy is 81.45.
It shared its 18th placed ranking with the UK which, despite having the 13th highest spending on healthcare, was brought down by being placed 22nd for the number of doctors and beds per capita.
This shouldn't even be news. Why? Because the ranking method is ridiculous, for a couple of reasons.

First, it is based on a mashup of both inputs (healthcare spending, doctors/nurses, and hospital beds) and outputs (life expectancy). This is basically double-counting, since inputs get turned into outputs. But it also double-counts some inputs, since spending on doctors/nurses and hospital beds depends on the number of doctors/nurses and hospital beds.

When you want to know if a health system is good or not, it matters most to you what the output of the system is - does it keep people healthier, for longer? It doesn't matter so much how the health system achieves those outcomes. That is, it doesn't matter how much the health system spends, or what inputs it uses, if all you care about is whether it keeps people healthier. In that case, why not simply look at life expectancy, or even better healthy life expectancy, to work out which health system is better?

Alternatively, you might be interested in how much health outcome you get per unit of inputs (which might be per doctor/nurse hour, per dollar of spending, per hospital bed, etc.), or the amount of health inputs per unit of health outcome (for example, the cost per year of additional life expectancy). In the latter, you would be measuring the cost-effectiveness of the health system. In both cases, more output (using the same amount of inputs) is good, but more inputs (to get the same amount of output) is bad.

In contrast, in this ranking system, if two countries have the same number of doctors/nurses and hospital beds, and the same life expectancy, but one spends more than the other, the country that spends more is ranked higher. WTF? If Country A is spending more but only achieving the same outcome (the same life expectancy) as Country B, then Country A has got a worse health system. It is wasting healthcare resources, relative to Country B.
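The flaw is easy to demonstrate with a toy version of the ranking (the scoring rule and country numbers below are made up for illustration; they are not Medical ID's actual weights):

```python
# Toy version of an input-plus-output ranking (all numbers and weights assumed).

def naive_score(spending, doctors, beds, life_exp):
    """Mashes inputs and outputs together, as the criticised ranking does."""
    return spending + doctors + beds + life_exp

def cost_effectiveness(spending, life_exp):
    """Output per unit of input: life expectancy per unit of spending."""
    return life_exp / spending

# Two countries: identical inputs and outcomes, except Country A spends more.
a = dict(spending=12, doctors=50, beds=30, life_exp=81)
b = dict(spending=9, doctors=50, beds=30, life_exp=81)

# The naive score ranks the bigger spender higher...
assert naive_score(**a) > naive_score(**b)

# ...even though Country A is strictly less cost-effective.
assert cost_effectiveness(a["spending"], a["life_exp"]) < cost_effectiveness(b["spending"], b["life_exp"])
```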

Second, and related to the previous point, higher health spending is not necessarily better. If that were true, a country could improve its ranking by simply contacting Big Pharma, and offering to pay them double for medicines.

So, it should be easy to see that this is a ranking system that is complete rubbish. I guess that's what happens when it is produced by "UK healthcare recruiter Medical ID", which has a vested interest in having a ranking system where the number of doctors and nurses, and healthcare spending, are indicators of a better healthcare system. In fact, they could instead be indicators that the healthcare system is simply wasteful.

Sunday, 7 July 2019

Rational avocado thefts are on the rise

Last month the New Zealand Herald reported:
Tauranga growers Liz Pratt and Neville Cooper have caught the latest thieves on film after two separate night break-ins on June 8 and June 12.
Cooper said the sole thief caught on film on June 8 stole avocados worth about $1250...
The couple believes the thieves are selling the fruit on the black market to sushi shops, where the fruit is used immediately and can't be traced...
Police said provisional figures recorded 130 avocado thefts in the six months to last December, up from 110 on the same period of 2017.
There were 210 reported thefts in the full year to December, mainly in the Bay of Plenty, Northland and Eastern districts...
Police Senior Sergeant Alasdair Macmillan said police "have seen a rise in reporting these types of thefts in recent months".
"Avocados are a target for thieves due to availability and price," he said.
This relates to previous posts of mine on honey thefts and onion thefts. At the risk of repeating myself, Gary Becker (the 1992 Nobel Prize winner) identified that rational criminals would weigh up the benefits and costs of their actions, in his economic theory of crime (see the first chapter in this pdf).

A similar way of thinking about it is represented in the diagram below, where Q is the quantity of avocado thefts. Marginal benefit (MB) is the additional benefit of engaging in one more avocado theft. In the diagram, the marginal benefit of avocado thefts is downward sloping - the more avocado thefts a criminal engages in, the less they can sell their stolen avocados for (because it is harder to 'fence' greater quantities of stolen avocados - there are only so many sushi shops that will accept them). Marginal cost (MC) is the additional cost of engaging in one more avocado theft. The marginal cost of avocado theft is upward sloping - the more avocado thefts a criminal engages in, the higher the opportunity costs (they have to give up more valuable alternative activities), and the more likely they are to get caught. The 'optimal quantity' of avocado thefts (from the perspective of the thief!) occurs where MB meets MC, at Q* avocado thefts. If the criminal engages in more than Q* thefts (e.g. at Q2), then the extra benefit (MB) is less than the extra cost (MC), making them worse off. If the criminal engages in fewer than Q* thefts (e.g. at Q1), then the extra benefit (MB) is more than the extra cost (MC), so conducting one more theft would make them better off.

Now consider what happens in this model when the value of avocados increases. The benefits of avocado crime increase. As shown in the diagram below, this shifts the MB curve to the right (from MB0 to MB1), and increases the optimal quantity of avocado thefts by criminals from Q0 to Q1. Avocado thefts increase.

It is incentives that lead to an increase in avocado theft, and avocado theft can also be reduced by changing the incentives. The New Zealand Herald article gives some suggestions:
Pratt and Cooper had four thefts last year, prompting them to spend about $2500 on security cameras and $1700 on electric fences along all their road frontages.
"Up to then we hadn't had any trouble at all," Cooper said.
"Other growers have had similar experiences. I was just talking to one the other day, he's just bought 10 of them [cameras] to put around his orchard. He's had a lot of break-ins."
He said he had heard that at least one sushi shop owner had been prosecuted for receiving stolen avocados. But that didn't seem to have stopped the practice...
[Police recommended that] "Orchardists can help prevent thefts by taking action to secure their properties and crops. Measures include installing boundary fences, and CCTV and hidden cameras to catch offenders.
"Such measures can be highly effective, and the information captured through CCTV can be extremely helpful as the more information residents can pass on to police, the more likely it is that we can make an arrest."
Installing CCTV and security fences increases the marginal cost of engaging in avocado theft, shifting the MC curve up and to the left. The optimal quantity of avocado theft will fall. Prosecuting sushi shop owners who receive stolen avocados will make shop owners less willing to receive stolen avocados, reducing the marginal benefit of avocado theft. This shifts the marginal benefit curve down and to the left, and decreases the optimal quantity of avocado theft.
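All of these comparative statics can be captured in a simple linear MB-MC sketch (every number below is assumed, purely for illustration):

```python
# A Becker-style optimum for a rational thief, with linear MB and MC curves.
# All numbers are assumed, purely for illustration.
# MB(Q) = mb_intercept - mb_slope*Q  (falls: more stolen fruit is harder to fence)
# MC(Q) = mc_intercept + mc_slope*Q  (rises: opportunity costs and capture risk grow)

def optimal_thefts(mb_intercept, mb_slope, mc_intercept, mc_slope):
    """Solve MB(Q) = MC(Q) for the thief's 'optimal' quantity of thefts."""
    return (mb_intercept - mc_intercept) / (mb_slope + mc_slope)

q_star = optimal_thefts(60, 2, 0, 1)
assert q_star == 20

# Avocado prices rise: MB shifts out, and the optimum increases.
assert optimal_thefts(90, 2, 0, 1) > q_star

# CCTV and fences raise MC at every quantity: the optimum falls.
assert optimal_thefts(60, 2, 15, 1) < q_star

# Prosecuting receivers of stolen fruit lowers MB at every quantity: the optimum also falls.
assert optimal_thefts(45, 2, 0, 1) < q_star
```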


Friday, 5 July 2019

For economists, talking with sociologists

Have you ever (as an economist or economics student) found yourself talking to sociologists, and wondering what on earth they are talking about? Sometimes they seem to be speaking a foreign language. Maybe they are. Or maybe, they can't understand you and you wish you could say things in a way that the sociologists could understand?

Fortunately, there is a solution. Back in 1990, Jeffrey Smith and Kermit Daniel (both PhD students at the University of Chicago) compiled the Economics/Sociology Phrase Book, to help economists adjust their way of speaking in a manner that will make it comprehensible to sociologists.
Why sociologists? Smith and Daniel explain that:
We chose Sociologists rather than Political Scientists because the latter tend to be unpleasant, emaciated people with glazed eyes, while Sociologists are often entertaining and cute. Unlike Anthropologists, they can be invited to parties without much worry for the safety of the silverware, and their rhetoric, when treated like background music, has a pleasant, lyrical rhythm.
The phrase book contains very helpful translations, such as the sociologists' use of "is correlated with", "determines", and "is caused by", all of which translate to economists as "is correlated with". Harsh, but fair. Enjoy!

Monday, 24 June 2019

Book review: Economics Rules

Economics and economists come in for a fair amount of criticism from those outside the discipline. Not all of that criticism is for good reason, but at least some of it is. And there is also a fair amount of criticism of economics and economists from within the discipline. Dani Rodrik's new book, Economics Rules, fits into the latter category. However, it isn't all negative. As Rodrik notes in the introduction, "this book both celebrates and critiques economics".

At the heart of economics lie models. Rodrik spends much of the early chapters describing what economic models are, and what makes them useful. The usefulness of models is that they capture aspects of reality. The multiplicity of different models in economics exist because they capture different aspects, relying on different simplifying assumptions to do so. However, this also causes a problem because:
...very few of the models that economists work with have ever been rejected so decisively that the profession discarded them as clearly false.
Despite this problem, Rodrik is clearly in favour of having diversity of models, and clearly advocates for this, with the caveat that economists need to recognise that each model is a model, not the model. This is fair criticism - too often economists rely on shoehorning reality into their preferred model, rather than recognising that in different situations or contexts, different models will be called for. This is something more akin to the approach of Nobel Prize winner Jean Tirole.

When economists confuse a model for the model, Rodrik explains that this leads to errors of omission (where economists fail to see troubles looming ahead, such as the Global Financial Crisis), and errors of commission (where economists become complicit in policies whose failure might have been predicted in advance, such as the Washington Consensus). He goes on to note that:
Because economists go through a similar training and share a common method of analysis, they act very much like a guild. The models themselves may be the product of analysis, reflection, and observation, but practitioners' views about the real world develop much more heuristically, as a by-product of informal conversations and socialization among themselves. This kind of echo chamber easily produces overconfidence...
Rodrik sees this as leading to two weaknesses in modern economics:
...the lack of attention to model selection and the excessive focus at times on some models at the expense of others.
Alluding back to his earlier book, The Globalization Paradox (which I reviewed here), he argues that economists need to be 'Foxes' (holding many different views about the world, based on different models), rather than 'Hedgehogs' (who are captivated by a single big idea, such as 'markets work best').

Overall, this book is a good read for both economists and non-economists alike. Economists who read the book with an open mind may be persuaded to be a little more open to alternative models, or at least they might apply themselves more thoughtfully to the task of model selection. Non-economists who read the book may gain a better appreciation for what underlies the perceived arrogance of economists in defending their policy prescriptions based on particular models. Recommended!

Sunday, 23 June 2019

Crack cocaine and the gun violence equilibrium in the U.S.

In economics, we often recognise that there may be multiple equilibria, and that even a relatively small shock may be enough to cause the economy to move from one equilibrium to another. Consider gun violence as an example. If gun violence is low, people feel safe and therefore don't feel the need to carry a gun for self-defence purposes. Therefore there exists a low gun violence equilibrium. [*] However, if some shock occurs, and gun violence increases, then people will feel less safe. They will be more likely to carry a gun for self-defence purposes, and therefore more likely to use a gun, perpetuating the higher level of gun violence. Therefore there also exists a high gun violence equilibrium. And once a society is in a high gun violence equilibrium, it is going to be very difficult to reverse things.

What would cause society to move from a low gun violence equilibrium to a high gun violence equilibrium? A 2018 NBER Working Paper by William Evans (University of Notre Dame), Craig Garthwaite (Northwestern University), and Timothy Moore (Purdue University) provides evidence that the rise of the crack cocaine market in the U.S. in the 1980s and early 1990s caused a shift from a lower gun violence equilibrium to a higher gun violence equilibrium. Their argument is that:
...the daily experiences of young black males were fundamentally altered by the emergence of violent crack cocaine markets in the United States. We demonstrate that the diffusion of guns both as a part of, and in response to, these violent crack markets permanently changed the young black males’ rates of gun possession and their norms around carrying guns. The ramifications of these changes in the prevalence of gun possession among successive cohorts of young black males are felt to this day in the higher murder rates in this community.
They use city-level data from the largest 57 metropolitan areas (in 1980) on age-, sex- and race-specific murder rates, over the period:
...from eight years prior to the arrival of crack and 17 years after, for a total of 26 years for each city. As the earliest date of crack’s arrival is 1982 and the latest is 1994, our data set spans from 1974 through 2011.
Essentially, they use a difference-in-differences approach that compares the change in murder rate for young black males (aged 15-24 years) before and after the introduction of crack into their city, with the same change for black males aged 35 years and over. They find that:
...the emergence of crack cocaine markets is associated with an increase in the murder rate of young black males that peaks at 129 percent in the decade after these markets first emerge...
...17 years after crack markets arrived, the murder rates for young black males were 70 percent higher than they would have been had they followed the trends of older black males. 
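The difference-in-differences comparison described above can be sketched with toy numbers (the murder rates below are invented for illustration, not the paper's data):

```python
# Toy difference-in-differences: compare the change in murder rates for
# young black males (treated group) with the change for older black males
# (comparison group), before vs. after crack markets arrive in a city.
rates = {  # murders per 100,000 (all numbers invented)
    ("young", "before"): 40.0,
    ("young", "after"):  80.0,
    ("older", "before"): 20.0,
    ("older", "after"):  25.0,
}

def did_estimate(rates):
    """(Change for treated group) minus (change for comparison group)."""
    change_young = rates[("young", "after")] - rates[("young", "before")]
    change_older = rates[("older", "after")] - rates[("older", "before")]
    return change_young - change_older

print(did_estimate(rates))  # 35.0: the excess change attributed to crack markets
```

The older group's change (here +5) nets out city-wide trends that affect everyone, so the remaining +35 is the estimate of the effect on young black males specifically.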
The key figure from the paper is this one, which plots the change in murder rate for young black males:

The x-axis tracks years before (negative numbers) and after (positive numbers) the introduction of crack cocaine into the city. There is a clear and statistically significant increase in murders among young black males, compared with older black males. Evans et al. then go on to show that this likely arose from increases in gun violence, in three ways, by showing that: ...in the six years after crack markets emerged, the share of all murders attributable to young black males increased by 75 percent. Seventeen years after crack markets emerged, young black males still accounted for a 45 percent greater share of all murders than they had in the years before the arrival of crack markets...
...these murders [between family members] increase markedly in the years after crack markets and remain elevated over the next sixteen years. This increase is driven entirely by murders involving guns, with no detectable change in the non-gun domestic violence murder rate over this time period...
...we further show that there is a strong correlation between ten-year changes in gun ownership and changes in the fraction of suicides involving guns among 15-19 year olds.
These results are all consistent with a story in which the arrival of crack cocaine in a city increases murders primarily through an increase in gun-related violence. I also liked this paper because it gives a detailed account of the development of crack cocaine markets in the U.S. Here are the highlights:
In the early 1970s, much of the cocaine shipped to the U.S. originated in Chile. After the 1973 military coup by Augusto Pinochet in Chile that toppled the administration of Salvador Allende, Pinochet initiated a military crackdown on cocaine smuggling operations. Many smugglers moved to Colombia with the goal of using established marijuana smuggling routes as a way of getting cocaine to the United States...
As these organizations [the Colombian drug cartels] grew, an informal agreement was struck where the Medellin cartel would primarily control supply into Miami and Los Angeles, while Cali would concentrate its operations in New York...
The large-scale entrance of the Colombian cocaine cartels into Miami, New York and Los Angeles meant that by the early 1980s, these areas had relatively high cocaine supply leading to falling prices. Despite the downward pressure on prices, many low-income consumers remained priced out of the market...
Crack cocaine was an innovation that provided a safer way to smoke cocaine... This new product has two attractive properties. First, it produced an instant high, and its users could quickly become addicted. Second, an intense high could be produced with a minimal amount of cocaine, meaning that the profit-maximizing per-dose price was a fraction of the price per high for powder cocaine...
Crack was first introduced to the market by innovative retail organizations in New York, Miami and Los Angeles, which had a large supply of powder cocaine. It then spread from those cities...
The combination of a liquidity-constrained customer base and the short-lived high offered by the product meant many customers purchased multiple times a day... crack cocaine was sold in small doses, often in open-air drug markets where the dealer and the customer had no pre-existing contact to arrange that particular sale (though may have participated in a similarly anonymous sale at that location before)...
The lack of preexisting arrangements with buyers meant that geography was a key determinant of a crack dealer’s revenue...
The violence associated with establishing and defending a market from entry was a key reason for a substantial amount of drug-related violence.
If you need to fight a turf war to protect your market (or gain access to a market), guns are an efficient way to do so. Crack cocaine was a key driver that moved the U.S. from a lower gun violence equilibrium to a higher gun violence equilibrium.

[HT: Marginal Revolution, last year]


[*] However, this equilibrium is very unstable. Readers who understand some game theory will probably recognise this as a form of the 'arms race' game, which is itself a type of prisoners' dilemma. Everyone would be safe(r) if no one carried a gun. However, if no one else carries a gun, you can be both safe and powerful by carrying a gun. So, there are incentives to carry a gun, regardless of whether everyone else is, or no one else is. The low gun violence 'equilibrium' is not actually a Nash equilibrium in this game. It is unstable, but may be kept in place by cultural norms against carrying guns, or high penalties for doing so.
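The footnote's arms-race game can be illustrated with a toy 2x2 payoff matrix (the payoff numbers are invented): carrying a gun is a dominant strategy, so the armed outcome is the only Nash equilibrium, even though mutual disarmament is better for everyone.

```python
# Hypothetical payoffs (higher is better) for the arms-race game in the footnote.
# Keys are (player 1's choice, player 2's choice); values are (payoff 1, payoff 2).
payoffs = {
    ("no gun", "no gun"): (3, 3),   # everyone safest
    ("no gun", "gun"):    (0, 4),   # unarmed against an armed rival
    ("gun",    "no gun"): (4, 0),   # safe AND powerful
    ("gun",    "gun"):    (1, 1),   # the high gun violence equilibrium
}

def best_response(opponent_action):
    """Player 1's best response given player 2's action (the game is symmetric)."""
    return max(["no gun", "gun"],
               key=lambda a: payoffs[(a, opponent_action)][0])

# Carrying a gun is a best response to either choice (a dominant strategy),
# so ('gun', 'gun') is the only Nash equilibrium of this one-shot game.
print(best_response("no gun"), best_response("gun"))  # gun gun
```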

Saturday, 22 June 2019

Retractions hurt academic careers, and may be worst for senior researchers

In modern academic publishing, retractions (where a published article is removed from the academic record) have become a fairly regular occurrence (a quick read of Retraction Watch will show you just how often this occurs). Articles may be retracted for many reasons, from simple mistakes in analyses or contaminated lab samples, to fabrication of data and results. A reasonable question to ask, then, is to what extent a retraction impacts on an academic's career. Oftentimes, the retraction comes years after publication of the article, and in the meantime the author has used the article to contribute to their reputation. Is their reputation damaged by the retraction, and if so, by how much? And, does the type of retraction (simple mistake, or serious misconduct) matter?

A 2017 article by Pierre Azoulay, Alessandro Bonatti (both MIT), and Joshua Krieger (Harvard), published in the journal Research Policy (and not retracted, ungated earlier version here), provides some answers. First, they note that the number of retractions has increased over time, as shown in their Figure 1:

You can see that the problem is getting worse over time. Or at least, you can see that the number of retractions is increasing over time. Maybe we have become more vigilant at recognising mistakes and misconduct, and ensuring those articles are retracted? It is difficult to say.

In any case, Azoulay et al. then looked at data from 376 US-based biomedical researchers with at least one article that was published between 1977 and 2007, and retracted before 2009. They compared those authors with a control group of 759 authors with no retractions, made up of authors who published the article that appeared immediately after the retracted one in the same journal. They focus on the impacts on citations of the authors' published articles that are unrelated to the retracted one, because citations to the entire related line of inquiry might legitimately fall after a retraction. They find that:
...the rate of citation to retracted author's unrelated work published before the retraction drops by 10.7% relative to the citation trajectories of articles published by control authors.
Azoulay et al. also find evidence that the citation penalty increases over time. In the sample of retractions as a whole, they don't find differences between the impact on high status (those in the top quartile of researchers in terms of the number of citations to their previous research) researchers and low status researchers (those in the bottom three quartiles). However, when they look at different types of retraction, they find:
...a much stronger market response when misconduct or fraud are alleged (17.6% vs. 8.2% decrease).
You might wonder why a simple mistake would have a negative impact on researchers. This arises because no one can be certain of a researcher's quality, and if a researcher has made a mistake in a published article, then the perception of their quality as a researcher is reduced (and with it, citations of their other work).

When it comes to mistakes and misconduct, there are differences in their impact between high status and low status researchers. Retractions due to mistakes have a greater impact on low status researchers than high status researchers (about a 9.7% reduction in citations for low status researchers, but a 7.9% reduction for high status researchers). However, retractions due to misconduct have a much larger impact on high status researchers (19.1% reduction in citations) than on low status researchers (10% reduction).

Across all their results, the impacts on research funding follow a similar pattern. Junior researchers face greater career penalties for mistakes, but senior researchers face greater penalties for serious misconduct. However, since their sample was limited to researchers who were still employed after the retraction, their results may be biased if junior researchers are more likely to exit the profession than senior researchers, in response to a retraction (or before the retraction). Perhaps junior researchers whose careers would be most negatively affected are most likely to exit? Some additional work in this area is definitely warranted.

Despite that caveat, the overall story is somewhat comforting. The research community does punish researchers for their malpractices, and more severely than for genuine mistakes. However, in order for that process to be effective, the community needs to know the circumstances surrounding each retraction. Indeed, Azoulay et al. conclude that:
...the results highlight the importance of transparency in the retraction process itself. Retraction notices often obfuscate the difference between instances of “honest mistake” and scientific misconduct in order to avoid litigation risk or more rigorous fact-finding responsibilities. In spite of this garbled information, our study reveals that the content and context of retraction events influences their fallout.

Wednesday, 19 June 2019

Auckland as an internal migration donor to the rest of New Zealand is nothing new

Newsroom reported a couple of weeks ago:
A growing number of people are turning their back on Auckland for greener and cheaper pastures of the regions.
A study by independent economist Benje Patterson indicates 33,000 left the super city in the four years to 2017, when its overall population grew by nearly 200,000 to nearly 1.7 million.
Patterson's study is available here. He makes use of a cool new dataset from Statistics New Zealand on internal migration, based on linked administrative data from the Integrated Data Infrastructure (IDI). However, even though the data he uses are new, the story is not. Auckland has long been an internal migration donor to the rest of New Zealand. This is a point that Jacques Poot and I have made at numerous conferences and seminars over the years.

In each Census (until the 2018 Census), people were asked where they were living five years previously (including in the 2013 Census, even though it was seven years after the 2006 Census). We can use that data to construct a matrix of flows from each region or territorial authority (TA) to every other region or TA. This essentially captures the number of people who changed the region or TA they lived in over a five-year period. It is different from the annual change data that Patterson uses; in comparison, the annual flows should be larger (because a person who migrates from Auckland to somewhere else, and then back to Auckland, within the five-year period, would not count as a migrant in the Census data).
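Computing net internal migration from such an origin-destination flow matrix is straightforward. A minimal sketch, with invented flows (not the actual Census figures):

```python
# flows[(origin, destination)] = number of migrants over a five-year period.
# All numbers below are invented for illustration.
flows = {
    ("Auckland", "Waikato"): 12000,
    ("Waikato", "Auckland"): 9000,
    ("Auckland", "Bay of Plenty"): 10000,
    ("Bay of Plenty", "Auckland"): 7000,
}

def net_migration(region, flows):
    """In-migrants to the region minus out-migrants from the region."""
    inflow = sum(n for (o, d), n in flows.items() if d == region)
    outflow = sum(n for (o, d), n in flows.items() if o == region)
    return inflow - outflow

print(net_migration("Auckland", flows))  # -6000: a net internal migration donor
```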

Now, even though (in the Newsroom article) Patterson describes the five-yearly Census as "clunky", it is this Census data that shows Auckland's net out-migration to the rest of New Zealand is not a new phenomenon, and has been ongoing since the mid-1990s. Here are the data for the last four Censuses (not including the 2018 Census, for which we are still waiting) [*]:

The blue bars are the number of in-migrants to Auckland (from elsewhere in New Zealand) over each five-year period based on the Census data. The orange bars are the number of out-migrants from Auckland (to other places in New Zealand) over the same period. The smaller grey bar is the net internal migration to or from Auckland. Notice that for the last three periods (1996-2001, 2001-2006, and 2008-2013), net migration is negative. That means more out-migrants from Auckland to the rest of New Zealand than in-migrants from the rest of New Zealand to Auckland.

In other words, the new Statistics New Zealand data are not showing a new trend at all. It's something that has been going on for a long time. That also puts the shallowness of the analysis in Patterson's report into context, such as this passage:
Auckland’s regional migration losses to the rest of New Zealand are not surprising when one considers the deterioration to housing affordability in Auckland that occurred over the period. Data from shows that in April 2017, the median Auckland house was estimated to cost about 9.5 times the median household income. By comparison this ratio was 6.2 nationally.
The largest net out-migration from Auckland was in the 2001-2006 period (-18,000; or 3600 per year). Was Auckland housing affordability declining the fastest during that period? The truth is, the data don't provide an answer as to why on net people are moving away from Auckland.

Even the locations where they are moving to are not new. Newsroom notes that:
The regions closest to Auckland attracted two thirds of the exodus, with Tauranga proving to be the most popular, attracting an average 1144 people a year.
Waikato District on the southern fringe of Auckland gained an average of 3381 Aucklanders over the period, while Hamilton gained just over 1500 residents from Auckland.
The data indicates nearly 6000 Aucklanders moved to Northland over the four years, with gains spread evenly across Whangarei District, Far North and Kaipara.
Looking at the Census data for 2001 (so, the 1996-2001 period), the regions that Auckland lost (on net) the largest number of migrants to were (in order, and to the nearest 10 people) Bay of Plenty (-2800), Waikato (-2340), and Northland (-1600).

So, really there is nothing new here, other than the (albeit very useful, and more timely than the Census) dataset.


[*] I'm using inter-regional migration flows here, rather than inter-TA flows. However, the story is very similar if I use inter-TA flows, because the Auckland region is the Auckland TA.

Monday, 17 June 2019

Book review: Nudge Theory in Action

Richard Thaler and Cass Sunstein's book Nudge set policymakers on a path to taking advantage of the insights of behavioural economics to modify our behaviour, in areas such as retirement planning, nutrition, tax payments, and so on. It spawned the Behavioural Insights Team (otherwise known as the 'Nudge Unit') in the U.K., and similar policy units in other countries. However, it also caused a lot of controversy, particularly from libertarian groups that would prefer less government intervention into private decision-making.

I recently finished reading the book Nudge Theory in Action, a volume edited by Sherzod Abdukadirov. I have to say it was not at all what I expected. I thought I was going to get a lot of examples of nudges applied by governments and the private sector, and hopefully with some explanations of the underlying rationales and maybe some evaluations of their impact. The book does contain some examples, but mostly they are examples that have already been widely reported, and not all of them would necessarily qualify as 'nudges', under the definition originally proposed by Thaler and Sunstein.

Essentially, most of the chapters in this book are libertarian critiques of nudges in theory and in practice. Richard Williams sums up the underlying premise of the book well in the concluding chapter:
The purpose of this book is to demonstrate that there is a strong private sector that helps people's decision making and that stringent criteria ought to be met before governments attempt to improve on private decision making, whether through structuring information to "nudge" people into making the government-preferred decision or using more stringent measures to achieve the same thing. Where people have difficulty matching their inherent preferences into real life decisions that satisfy those preferences, a private market will almost always arise that can help to match decisions with preferences.
Thaler and Sunstein defined a nudge as "any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives". The second part of that definition is important, and most of the chapters pay lip service to it, while at the same time ignoring it in favour of critiquing almost any government policy proposal that would restrict decision-making. The most cited example is the failed attempt by former New York mayor Michael Bloomberg to ban sales of large sodas. Given that it involves a ban, and therefore does forbid an option, under the original Thaler and Sunstein definition it is not a nudge.

However, policy makers do themselves no favours in this case by referring to policies like the New York large soda ban as a "nudge", and invoking behavioural economics principles in favour of all sorts of policies that are not, in fact, nudges. So, a more reasonable critique would be directed at policy makers' incorrect usage of the term 'nudge', rather than damning all nudges using examples that are not even nudges.

Having said that, there are some good and thought-provoking chapters. Mario Rizzo has an excellent theoretical chapter, and while I don't buy into the arguments he made, it definitely made me think more deeply about what we mean when we refer to rational behaviour. Jodi Beggs (of Economists Do It With Models fame) presents a great framework that differentiates private sector nudges into those that improve welfare for consumers (which she terms 'Pareto nudges', invoking the idea of a Pareto improvement from welfare economics), and those that make consumers worse off to the benefit of firms (which she terms 'rent seeking nudges'). Beggs also notes the subversion of the term 'nudge' to mean almost any policy change that aims to change behaviour. Several chapters raised the (very valid) point that not only are consumers (or savers or whoever) subject to the behavioural biases that behavioural economics identifies, but government decision makers are also likely to be subject to those same biases. In that case, we should be cautious about the ability of government to create the 'right' choice architecture to achieve their goals.

However, there are also some notable misses among the chapters. In his chapter, Mark White critiques government attempts to alter the choice architecture to favour some option over others, but never engages with the fact that there will always be some choice architecture in place. In many cases, there simply isn't a way to avoid presenting decision makers with options, and in those cases there has to be a choice architecture of some type in place. Why not attempt to make it one that will steer people to making decisions that improve their long-term wellbeing? Similarly, Adam Thierer argues that nudges prevent 'learning by doing' or learning through making mistakes. That is good in theory, but how many opportunities do we have to make mistakes in our own retirement planning, for instance? He also fails to acknowledge that governments can also learn from nudges that don't work as intended. As Steve Wendel notes in his chapter:
We should be skeptical that behavioral nudges will work the same (or at all) once they are translated from an academic environment into consumer products, because of core lessons in the behavioral literature itself - that the details of implementation and decision-making context matter immensely.
That isn't an argument not to attempt nudges. However, it is an argument that applies equally to government nudges - they may not work as intended, so they should be rigorously evaluated.

Ultimately, if you are looking for ammunition to mount a libertarian counter-attack against nudge theory applied by government, you will find a lot of suitable material in this book. However, as a general guide to 'nudge theory in action', I believe this book falls short.

Saturday, 15 June 2019

Corruption and quid pro quo behaviour in professional soccer

In their book Freakonomics, Steven Levitt and Stephen Dubner described research (from this article by Levitt and Mark Duggan - ungated earlier version here) that showed evidence of rigged matches in sumo wrestling. Specifically, wrestlers who were approaching their eighth win (which comes with much greater earnings and ranking) were more likely to win against those who already had eight or more wins, than would be expected based on their relative ability. And that was in Japan - a country not known for widespread corruption (Transparency International ranks Japan 18th out of 180 countries in its Corruption Perceptions Index). How bad could things be in other, less honest or trustworthy, countries?

A 2018 article by Guy Elaad (Ariel University), Alex Krumer (University of St. Gallen), and Jeffrey Kantor (Ariel University), published in the Journal of Law, Economics, and Organization (ungated version here), provides a window into widespread corruption in domestic soccer. [*] They look at games in the final round of the season, in domestic soccer leagues, where one of the teams needed to win (or draw) in order to avoid relegation, and where the match was inconsequential for the other team. They have a database of 1723 such matches in 75 countries over the period from 2001 to 2013. Most interestingly, they look at how the win probability (controlling for a range of factors, including the strength of the opposition and home advantage) varies by the level of corruption of the country. They find that:
...the more corrupt the country is according to the Corruption Perceptions Index (CPI), the higher is the probability of a team (Team A) to achieve the desired result to avoid relegation to a lower division relative to achieving this result in non-decisive games against the same team (Team B)... This finding is robust when controlling for possible confounders such as differences in abilities, home advantage, and countries’ specific economic, demographic and political features.
Then, there is evidence that the winning team in that game returns the favour (quid pro quo) in the following season, since they find that: more corrupt countries the probability of Team A to reciprocate by losing in the later stages of the following year to Team B is significantly higher than losing to a team that is on average better (stronger) than Team B. This result strengthens the suspicious of corrupt norms, since in the absence of any unethical behavior, we would expect the opposite result, since naturally the probability of losing increases with the strength of the opponent.
There's clearly a lot of mutual back scratching going on in professional soccer. It is worth noting, though, that the top divisions in Europe (Premier League, Ligue 1, Bundesliga, etc.) were not included in the analysis, which focused on the second-tier leagues in Europe, and top leagues outside of Europe.

An interesting follow-up to this study would be to look at betting odds. Do the betting markets price this corruption into their expectations (so that the team needing to avoid relegation has betting odds that suggest a higher probability of winning than would be expected based on home advantage, strength of opponents, etc.)? If not, then there may be opportunities for positive expected gains from betting on those games.
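The expected-gain calculation behind that suggestion is simple. A sketch with hypothetical odds and probabilities (none of these numbers come from the paper):

```python
# Expected profit from a one-unit bet at given decimal odds, using your own
# estimate of the true win probability. All numbers are hypothetical.
def expected_profit(decimal_odds, true_win_prob, stake=1.0):
    win = true_win_prob * (decimal_odds - 1) * stake   # profit if the bet wins
    lose = (1 - true_win_prob) * stake                 # loss if the bet loses
    return win - lose

# If the market prices the relegation-threatened team at decimal odds of 2.5
# (implied win probability 0.40), a bettor who agrees breaks even on average:
print(expected_profit(2.5, 0.40))  # approximately zero: a fair bet
# But if corruption pushes the true win probability up to 0.50, the bet has value:
print(expected_profit(2.5, 0.50))  # 0.25 expected gain per unit staked
```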

[HT: Marginal Revolution, last year]


[*] Or football, if you prefer. To me, football involves shoulder pads and helmets.

Wednesday, 12 June 2019

The alcohol made them do it

Alcohol has well-documented effects on a range of harms such as drunk driving (almost by definition), violence, poor health, and mortality. However, the causal evidence for alcohol's effect on a range of less serious harms is less clear - things like risky sexual activity and other substance use. A new article by Jason Fletcher (University of Wisconsin-Madison), published in the journal Contemporary Economic Policy (ungated earlier version here), aims to fill that gap.

It is trivial to show that access to alcohol is correlated with measures of harm. The challenge with any study like this is to show that access to alcohol has a causal effect on the harm - that is, that the observed correlation represents a causal effect, and is not the result of some other factor. Fletcher does this by exploiting the Minimum Legal Drinking Age in the U.S. (of 21 years of age), using a regression discontinuity approach. Essentially, that involves looking at how the measures of harm track with age up to age 21, and then after age 21. If there is a big jump upwards at age 21, then plausibly you could conclude that the jump is due to the onset of legal access to alcohol at that age. This approach has previously been used to show the impact of access to alcohol on arrests and on mortality. In this paper, Fletcher instead focuses on:
...drinking outcomes, such as any alcohol use, binge use, and frequency of use as well as drinking-related risky behaviors, such as being drunk at work; drunk driving; having problems with friends, dates, and others while drinking; being hung over; and other outcomes.
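The regression discontinuity logic can be sketched with toy data (the ages and binge-drinking rates below are invented): compare the average outcome just below and just above the age-21 cutoff.

```python
# Toy regression discontinuity: the jump in the outcome at the cutoff is
# estimated as the difference in means within a narrow bandwidth either side.
# Data points are (age in years, binge-drinking rate); all invented.
data = [
    (20.2, 0.30), (20.5, 0.31), (20.8, 0.32),   # just under age 21
    (21.1, 0.42), (21.4, 0.43), (21.7, 0.44),   # just over age 21
]

def rd_jump(data, cutoff=21.0, bandwidth=1.0):
    below = [y for x, y in data if cutoff - bandwidth <= x < cutoff]
    above = [y for x, y in data if cutoff <= x < cutoff + bandwidth]
    return sum(above) / len(above) - sum(below) / len(below)

print(round(rd_jump(data), 2))  # 0.12: the jump attributed to legal access
```

Real RD designs fit trends on each side rather than simple means, but the identifying idea is the same: a discontinuous jump exactly at the cutoff is attributed to the policy, since everything else should vary smoothly with age.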
He uses data from the third wave of the Add Health survey in the U.S., which occurred when the research participants were aged from 18-26 years old. He analyses the results all together, and separately by gender, and finds that:
...on average, access increases binge drinking but has few other consequences. However, the effects vary considerably by gender; where females (but not males) are more likely to initiate alcohol use at age 21, males substantially increase binge drinking at age 21. In addition, males (but not females) face an increased risk of problems with friends and risky sexual activity at age 21. There is also some evidence of an increase in drunk driving and violence.
Interesting results, but not particularly surprising. Fletcher then tries to draw some policy implications on what would happen if the MLDA was reduced, by looking at differences between young people living with their parents and those not living with their parents. He finds that: harm reduction associated with binge drinking for those individuals living with their parents around age 21; in fact, individuals living with their parents (regardless of whether they are in school) have larger increases in alcohol-related risky behaviors than individuals living away from their parents.
He uses that result to suggest that parents are not good at socialising their children into safer drinking behaviours (and on the surface, the results suggest this, because those living at home engage in more risky behaviour after they attain age 21). However, there is another interpretation that Fletcher doesn't consider. Those who are not living at home might be more likely to be drinking alcohol before age 21, and so experience some of the negative impacts earlier. Those living at home may simply be catching up to their peers once they are 'allowed' to drink. If anything, that alternative interpretation would strengthen his other results.

Overall, the paper doesn't tell us much that wasn't already known, although the causal aspect of the study is a nice touch. The differences by gender were a bit more surprising, and hopefully other studies will follow up in this area to test them further.

Monday, 10 June 2019

Why Uber drivers will make no money in the long run

This is the third post in as many days about Uber (see here and here for the earlier installments), all based on this New York Times article. Today, I'm going to focus on this bit of the article:

Drivers, on the other hand, are quite sensitive to prices — that is, their wages — largely because there are so many people who are ready to start driving at any time. If prices change, people enter or exit the market, pushing the average wage to what is known as the “market rate.”
The article is partially right here. It isn't just the price elasticity of supply that is at fault - it is the lack of barriers to entry into (and exit from) the market that creates a real problem for drivers. A 'barrier to entry' is something that makes it difficult for potential suppliers to get into the market. A taxi medallion is one example, if a medallion is required before you can drive a taxi. However, nothing special is required in order to become an Uber driver, and most people could do it. Similarly, a 'barrier to exit' is something that makes it difficult for suppliers to get out of the market once they are in it, such as a long-term contract. Barriers to exit can create a barrier to entry, because potential suppliers might not want to get into a market in the first place if it would be difficult to get out later if things go wrong. Again, Uber has no barriers to exit for drivers. These low barriers (to entry and exit) ensure that, in the long run, drivers can't make any more money from driving than they could from their next best alternative.

To see why, consider the diagrams below. The diagram on the left represents the market for Uber rides. For simplicity, I've ignored the 'Uber tax' (that I discussed in yesterday's post). The diagram on the right tracks the profits of Uber drivers over time. The market starts in equilibrium, where demand is D0, supply is S0, and the price of an Uber ride is P0. This is associated with a level of profits for Uber drivers of π0. For reasons we will get to, this is the same as what an Uber driver could earn in their next best alternative (maybe that's driving for Domino's, or as a taxi driver, or working as a stripper).

Now, say there is a big increase in demand for ride-sharing, from D0 to D1. The price of an Uber ride increases to P1, and the profits for driving increase to π1. Now profits from being an Uber driver are high, but they won't last. That's because many other potential Uber drivers can see these profits, and they enter the market (there are no barriers to entry, remember?). Let's say that lots of drivers enter the market. The supply of Uber drivers increases (from S0 to S1), and as a result the price decreases to P2, and profits for Uber drivers decrease to π2.

Now the profits for Uber drivers are really low. There are no barriers to exit, so some drivers decide they would be better off doing something else (driving for Domino's, etc.). Let's say that a lot of drivers choose to leave, but not all of those who entered the market previously. The supply of Uber drivers decreases (from S1 to S2), and as a result the price increases to P3, and profits for Uber drivers increase to π3.

Now the profits for Uber drivers are high again (but not as high as immediately after the demand increase). Drivers start to enter the market again, and so on, until we end up back at long-run equilibrium, where the price of a ride is back at P0, and driver profits are back at π0. At that point, every driver who is driving makes the same low profit as before. So, in the long run, even if demand for ride-sharing is increasing over time, the drivers are destined not to profit in the long run. [*]
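The entry-and-exit story above can be sketched as a toy simulation (all numbers are hypothetical, chosen purely for illustration; 'earnings' here is just total fares shared equally among drivers):

```python
# Toy dynamics: demand for rides jumps, driver earnings rise above the
# outside option, new drivers enter, and earnings get competed back down.
outside_option = 100.0   # weekly earnings in the next best alternative

def earnings(n_drivers, demand):
    # Hypothetical: total weekly fares shared equally among drivers
    return demand / n_drivers

n, demand = 100.0, 10000.0   # initial long-run equilibrium: earnings = 100
demand = 15000.0             # a big increase in demand (D0 to D1)
print(earnings(n, demand))   # 150.0 - driving is temporarily very profitable

# Each period, drivers enter if earnings beat the outside option, exit if not
for _ in range(100):
    n += 0.5 * (earnings(n, demand) - outside_option)

print(round(n, 1), round(earnings(n, demand), 2))  # ~150 drivers; earnings back at 100.0
```

However large the demand increase, entry continues until per-driver earnings are driven back down to the outside option - which is exactly the long-run result in the diagrams.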


[*] You might have noticed that the producer surplus is higher after supply increases, which implies that drivers (as a group) are earning higher profits after the market has settled back to long-run equilibrium. However, remember that supply is higher than before - those higher profits are shared among many more drivers, so the profit for each driver individually is the same as before.

[HT: The Dangerous Economist]


Sunday, 9 June 2019

Uber is a tax on ride-sharing

This post follows up on yesterday's post about Uber, where we established that most of the benefit of Uber accrues to passengers, rather than to drivers. This is because of the shape of the demand and supply curves (steep demand, and flat supply). The New York Times article has more of interest though:
Economics says that the likelihood that a person will bear the burden of an increase in profit margins is inversely proportional to their price sensitivity. In other words, because drivers are four times more price sensitive than riders, a reasonable guess is that 80 percent of the price burden will fall on passengers, 20 percent on drivers.
The simple demand-and-supply diagram that I drew yesterday isn't the full story of the market for ride-sharing. It only showed the economic welfare of passengers (consumer surplus) and drivers (producer surplus). However, there is an important third party acting in this market: Uber.

Uber acts as a tax on the market for ride-sharing, because it takes a cut of the value of every ride. This is essentially the same as the government taking a share of every sale (as in a sales tax), except that the money goes to Uber, rather than to the government. Who pays Uber? You might think it is the drivers - after all, Uber's fee is taken out of what the passenger pays, before the net amount is passed on to the driver. But it turns out that the 'Uber tax' is actually shared between passengers and drivers, and it is the passengers who pay the larger share.

To see why, let's modify the diagram from yesterday's post, as shown below. If Uber charged a zero percent fee, then the market would operate at equilibrium, the price would be PE, and the quantity of rides would be QE (this is the situation we had yesterday). However, now let's introduce Uber's fee. Since it is the drivers who pay the fee to Uber (it is taken out of their pay), it acts like an increase in their costs. However, drivers' actual costs haven't changed, so the supply curve (which is also the marginal cost curve) doesn't shift. Instead, we represent the fee with a new curve, S+tax. The vertical distance between the supply curve (S) and the S+tax curve is the per-ride value of the tax. [*] The price that consumers pay increases to PC, where the S+tax curve intersects the demand curve. From that price, the fee to Uber is deducted, which leaves the drivers with the lower price PP. Notice that the passengers' price has gone up by a lot, while the drivers' effective price has dropped by only a little. This tells you that the passengers are paying most of the Uber tax. The quantity of Uber rides falls from QE to QT.

We can also look at this using economic welfare. Without the Uber tax, the market operates in equilibrium. The consumer surplus (as we established yesterday) is the triangle AEPE, while the producer surplus is the triangle PEEC. However, this changes when the Uber tax is introduced. Now the consumer surplus (the difference between the amount that consumers are willing to pay (shown by the demand curve), and the amount they actually pay (the price)) is the smaller triangle ABPC. The passengers have lost the area PCBEPE. The producer surplus (the difference between the amount the sellers receive (the price), and their costs (shown by the supply curve)) is the smaller triangle PPFC. The drivers have lost the area PEEFPP.

The total amount of welfare that Uber gains is the rectangle PCBFPP (the per-ride value of the Uber tax, multiplied by the quantity of rides QT). We can split the Uber tax into the share paid by passengers (PCBGPE), based on the higher price they pay, and the share paid by drivers (PEGFPP), based on the lower effective price they receive. Note that the share of the Uber tax paid by passengers is much larger than the share paid by drivers.
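The incidence split can be illustrated with a pair of hypothetical linear demand and supply curves, chosen so that supply is four times as price-responsive as demand (matching the New York Times article's elasticity comparison); the curves and the $5 fee are made-up numbers, not Uber's actual fee:

```python
# Hypothetical linear curves:
#   Demand: Qd = 100 - 2 * P
#   Supply: Qs = 8 * (P - tax) - 200   (drivers respond to P minus the fee)

def equilibrium(tax=0.0):
    """Solve Qd = Qs for the price passengers pay, then net off the tax
    to get the effective price drivers receive."""
    p_consumer = (300 + 8 * tax) / 10
    return p_consumer, p_consumer - tax

p0, _ = equilibrium(0.0)    # no fee: the market price PE
pc, pp = equilibrium(5.0)   # a hypothetical $5-per-ride 'Uber tax'

print(f"Passengers' share of the tax: {(pc - p0) / 5.0:.0%}")  # 80%
print(f"Drivers' share of the tax:    {(p0 - pp) / 5.0:.0%}")  # 20%
```

With supply four times as responsive as demand, the passengers' price rises by four times as much as the drivers' effective price falls - the 80/20 split the article describes.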

The New York Times article notes that:
Uber and Lyft, the two leading ride-share companies, have lost a great deal of money and don’t project a profit any time soon.
Yet they are both trading on public markets with a combined worth of more than $80 billion. Investors presumably expect that these companies will some day find a path to profitability, which leaves us with a fundamental question: Will that extra money come mainly from higher prices paid by consumers or from lower wages paid to drivers?
Old-fashioned economics provides an answer: Passengers, not drivers, are likely to be the main source of financial improvement...
And now, hopefully, you can see why. If Uber raises the share of the price paid by consumers that it keeps, then it is passengers that will pay the majority of that higher Uber tax. [**] Which seems fair, since yesterday we established that it is passengers who benefit the most from Uber.


[*] Strictly speaking, the 'Uber tax' is an ad valorem tax. That means that it is a percentage of the price paid by the passengers. That means that the distance between the supply curve and the S+tax curve should get larger when the price is higher. However, for simplicity, I've represented the Uber tax as a specific tax. A specific tax is a constant per-unit dollar amount, which means that the supply curve and the S+tax curve are parallel. It's a simplification, but inconsequential for our purposes here.
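The difference between the two kinds of tax comes down to a trivial calculation (the 25 percent fee and $2.50 amount here are hypothetical illustrations, not Uber's actual rates):

```python
# The wedge between S and S+tax: a specific tax is a constant dollar
# amount per ride, while an ad valorem tax (a percentage fee) grows
# with the price, so the two curves diverge at higher prices.

def specific_wedge(price, amount=2.50):
    return amount            # constant: S and S+tax are parallel

def ad_valorem_wedge(price, rate=0.25):
    return rate * price      # grows with price: the curves fan apart

for price in (10.0, 20.0, 40.0):
    print(price, specific_wedge(price), ad_valorem_wedge(price))
```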

[**] If you increase the size of the Uber tax, then the distance between the supply curve and the S+tax curve increases. This further reduces the consumer surplus and producer surplus. The additional revenue for Uber will be predominantly paid by passengers in the form of a higher price. We could show this with an additional diagram that has a small tax, and then a large tax. But, a diagram with a small tax replaced by a large tax is not that different in its effects from a diagram with a zero tax replaced by a small tax. I decided not to go that far. Call me lazy if you want.

[HT: The Dangerous Economist]


Saturday, 8 June 2019

Passengers benefit more from Uber than drivers

When a seller sells a good or service to a buyer, a surplus (economic welfare) is created. The buyer receives something they wanted to buy, usually for a price that is less than the maximum they were willing to pay for it. The seller offloads something they wanted to sell, usually for a price that is more than the minimum they were willing to receive for it (their costs). So, both the buyer and the seller benefit. Who benefits the most though?

The Dangerous Economist points to this New York Times article about Uber:
The most comprehensive study of rider behavior in the marketplace found that riders didn’t change their behavior much when prices surged. (Like most major quantitative studies about Uber, it relied on the company’s data and included the participation of an Uber employee.)
Passengers were what economists call “inelastic,” meaning demand for rides fell by less than prices rose. For every 10 percent increase in price, demand fell by only about 5 percent.
Drivers, on the other hand, are quite sensitive to prices — that is, their wages — largely because there are so many people who are ready to start driving at any time. If prices change, people enter or exit the market, pushing the average wage to what is known as the “market rate.”
In other words, while demand is price inelastic (passengers are relatively insensitive to price changes), supply is price elastic (drivers are very sensitive to price changes). Interestingly, in the case of demand this is the opposite of what I concluded in this 2015 post. [*]

These elasticities are reflected in the diagram below. The demand curve is steep, which reflects that passengers are not very sensitive to prices - a small change in price will lead to almost no change in the quantity demanded. The supply curve, on the other hand, is flat, which reflects that drivers are very sensitive to prices - a small change in price will lead to a large change in the number of rides on offer.

However, that doesn't yet answer the question of which side of the market (passengers or drivers) benefits the most from Uber. We need to consider their shares of the total welfare created. Consumer surplus is the difference between the amount that consumers are willing to pay (shown by the demand curve), and the amount they actually pay (the price). In the diagram, at the equilibrium price and quantity, consumer surplus is the triangle AEPE. Producer surplus is the difference between the amount the sellers receive (the price), and their costs (shown by the supply curve). In the diagram, at the equilibrium price and quantity, producer surplus is the triangle PEEC.

Notice that, because of the shape of the demand and supply curves, the size of the consumer surplus (AEPE) is much larger than the size of the producer surplus (PEEC). Passengers (as a group) benefit much more than drivers (as a group). Note that this isn't quite the same thing as saying that each passenger benefits more than each driver, because the consumer surplus is split among many more people than the producer surplus. However, it is clear that passengers benefit more from Uber than drivers do.
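To make this concrete, here is a toy calculation with hypothetical linear curves - a steep demand curve and a flat supply curve - showing the consumer surplus dwarfing the producer surplus:

```python
# Hypothetical inverse demand and supply curves:
#   Demand: P = 100 - 4 * Q     (steep: inelastic passengers)
#   Supply: P = 20 + 0.5 * Q    (flat: elastic drivers)

q_e = 80 / 4.5          # equilibrium quantity: solve 100 - 4Q = 20 + 0.5Q
p_e = 100 - 4 * q_e     # equilibrium price

consumer_surplus = 0.5 * q_e * (100 - p_e)   # triangle AEPE
producer_surplus = 0.5 * q_e * (p_e - 20)    # triangle PEEC

print(round(consumer_surplus, 1), round(producer_surplus, 1))  # 632.1 79.0
```

With these slopes, the consumer surplus triangle is eight times the size of the producer surplus triangle (the ratio of the two slopes, 4 to 0.5), even though both groups face the same equilibrium price and quantity.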


[*] However, it could be that in 2015, demand was elastic, while demand has become less elastic over time and is now inelastic. It's hard to see why that would be the case. The rise of other substitutes would tend to suggest increasing elasticity for Uber rides. Or, perhaps demand is more elastic in New Zealand (which my 2015 post was based on) than in the U.S. (which this article is based on)? Again, it's hard to see why that would be the case, unless Uber prices in New Zealand in 2015 were much higher than in the U.S. now.
