Saturday, 31 March 2018

Medical marijuana laws decrease violent crime in the U.S.

Continuing this week's theme (e.g. see yesterday's post on medical marijuana and opiate harms), I recently read this new paper by Evelina Gavrilova (NHH Norwegian School of Economics), Takuma Kamada (Pennsylvania State), and Floris Zoutman (NHH Norwegian School of Economics), published in The Economic Journal (ungated earlier version here). In the paper, the authors look at the effect of medical marijuana laws on violent crime. They argue that medical marijuana laws lower the barrier to entry into the marijuana market for domestic firms, and that this reduces the drug profits of Mexican drug trafficking organisations (DTOs), which perpetrate a significant amount of violent crime in order to protect those profits. The authors justify this by characterising DTOs as rational decision-makers weighing up costs and benefits:
The investment in violent activity can only be justified if the benefits exceed the costs. The benefits of violence depend on the revenue DTOs obtain when selling drugs. Intuitively, if revenues are high, the corresponding benefits of contesting those revenues through violence will also be high. Hence, for profit-maximising DTOs, optimal investment in violent capacity increases with the aggregate revenue DTOs make in the drug market.
Their assertion that reduced profits for DTOs arise from the entry into the market by smallholder marijuana farmers is backed up by the fact that prior to medical marijuana laws (MMLs) "[m]ost marijuana consumed in the US originates in Mexico", and a very simple demand-and-supply model that would be recognisable to my ECONS101 students (at least, in a couple of weeks, once we have covered demand and supply):
Figure 2 represents the market for marijuana. For simplicity we assume that illicit and medical marijuana are perfect substitutes in consumption, such that the supply and demand of both substances can be represented in a single figure. SDTO represents the supply curve for marijuana by DTOs. S0 represents the combined supply of marijuana by DTOs and local farmers that were already active prior to the introduction of an MML. An MML allows for entry of additional local farmers and thus shifts the combined supply to the right to S1. This results in a reduction in the price of the drug, an increase in the overall quantity and a reduction in the quantity sold by DTOs. The shaded area in the graph depicts the aggregate loss in revenues for DTOs.
Here is the Figure that they refer to:

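The logic of the quoted passage can also be sketched numerically. In the toy model below, every curve and number is made up purely for illustration (none of it is from the paper): demand is linear, each supplier has an identical upward-sloping supply curve, and an MML simply adds one more supplier. Entry shifts combined supply right, the price falls, and the DTO's revenue shrinks, which is exactly the shaded area in their Figure 2.

```python
# Demand: P = 100 - Q. Each supplier is willing to supply
# Q_i = P - 10, so with n identical suppliers total supply is
# Q = n * (P - 10). All numbers are invented for illustration.
def equilibrium(n_suppliers):
    # Solve 100 - Q = 10 + Q / n for the market-clearing quantity.
    quantity = 90 * n_suppliers / (n_suppliers + 1)
    price = 100 - quantity
    return price, quantity

def dto_revenue(n_suppliers):
    price, _ = equilibrium(n_suppliers)
    q_dto = price - 10          # the single DTO's supply at that price
    return price, q_dto * price

# S0: the DTO plus one incumbent local farmer (n = 2).
# S1: an MML lets a third supplier enter (n = 3).
for n in (2, 3):
    price, revenue = dto_revenue(n)
    print(n, round(price, 2), round(revenue, 2))
# Entry lowers the price (40.0 -> 32.5) and shrinks the DTO's
# revenue (1200.0 -> 731.25), mirroring the shaded area.
```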
Overall, they find that:
...MMLs lead to a strong reduction of 12.5% in the violent crime rate for counties close to the Mexican border. Moreover, within Mexican-border states, we find that the strongest decrease in the violent crime rate occurs in counties in close proximity to the border while the effect weakens with the distance of a county from the border. MMLs do not have a significant effect on crime in counties in inland states. 
When we conduct a spillover analysis we find that when a neighbour to a Mexican border state passes a MML, this results in a significant reduction in violent crime rates in the border state. More generally, we find that when a state passes a MML this reduces crime rates in the state in which the nearest Mexican border crossing is located. This evidence is consistent with our hypothesis that MMLs lead to a reduction in demand for illegal marijuana, followed by a reduction in revenue for Mexican DTOs, and, hence, a reduction in violence in the Mexican-border area.
Their results have clear policy implications:
The case of MMLs provides an important lesson for policymakers. Drug markets are well known for their violence. However, in the case of marijuana, when the supply chain of the drug is legalised, or at least decriminalised, a lot of the violence disappears and the business of organised crime structures is hurt.
Unlike the research I referred to yesterday, this paper didn't distinguish between when the MML was passed and when dispensaries became available. However, the results seem quite robust to a variety of alternative specifications, as detailed in the paper. The case for decriminalised marijuana seems to be strengthening with every new piece of research.


Friday, 30 March 2018

Medical marijuana dispensaries decrease opiate harm, but only if they're not too heavily regulated

A new paper by David Powell (RAND), Rosalie Liccardo Pacula (RAND), and Mireille Jacobson (University of California, Irvine), published in the Journal of Health Economics (ungated earlier version here), caught my attention this week, not least because it continues the theme of this week's posts (see here and here and here). In the paper, the authors investigate the impact of medical marijuana laws on opioid death rates, treatment admissions, the volume of legally distributed opioids, and non-medical use of pain relievers. In case you're unaware, opioid use has become a serious (and deadly) epidemic in the U.S. over the last decade. Figure 1 from the Powell et al. paper paints the picture, with opioid distribution and harms (treatment admissions and mortality) all climbing substantially since the late 1990s:

In their analysis, Powell et al. use data from the U.S. over the period from 1999 to 2013. Over this period, some states introduced medical marijuana laws, while others did not. The key difference between this study and earlier studies that have looked at similar questions is that this study correctly recognises that medical marijuana laws by themselves are unlikely to have any effect, so they look at changes when medical marijuana dispensaries became operative as well (rather than just when the State medical marijuana legislation passed).

They find:
...fairly strong and consistent evidence using difference-in-differences and event study methods that states providing legal access to marijuana through dispensaries reduce deaths due to opioid overdoses, particularly prior to the October 2009 Ogden memo when dispensary systems were not tightly regulated by the states. We provide complementary evidence that dispensary provisions lower treatment admissions for addiction to pain medications. We find in all cases that the effectiveness of having any medical marijuana law completely disappears when data after 2010 are included. Furthermore, while we show that legally protected and active dispensaries remain an important factor in reduced opioid harm, the magnitude of even this component of the policy has changed since 2010, when states more actively and tightly regulated marijuana dispensaries and as the opioid epidemic has shifted toward heroin consumption.
The final key point that makes this of relevance to my earlier posts on substitutes and complements is this:
In short, our findings that legally protected and operating medical marijuana dispensaries reduce opioid-related harms suggest that some individuals may be substituting towards marijuana, reducing the quantity of opioids they consume or forgoing initiation of opiates altogether. 
In other words, the paper provides evidence that opioids and marijuana are substitutes, and making marijuana more easily available (and therefore, less costly) reduces demand for and use of opioids, and consequently reduces opioid-related harm. However, the most negative part of these results is that the effect disappears after dispensaries became more tightly regulated in 2010. Presumably, after that point it may have become too difficult for those at risk of opioid harm to obtain marijuana as a substitute. Maintaining looser regulation may have been a better option.

Wednesday, 28 March 2018

Are alcohol and cannabis substitutes or complements?

Continuing the theme for this week on alcohol and cross-price elasticities (see here and here), last month Eric Crampton wrote a post on the relationship between alcohol and cannabis:
It's been a bit of an open question whether legalised marijuana would lead to more or less alcohol use.  If the two goods are complements, say if people liked drinking while consuming cannabis, then any increase in cannabis use could yield greater alcohol use. If they were substitutes and people smoked instead of drinking, alcohol use could drop.
RAND surveys some of the more recent evidence. 
I won't replicate all of Eric's post here, but he quotes three of the key paragraphs from the RAND survey of the evidence. The short version is that the evidence increasingly suggests that cannabis and alcohol are substitutes. Substitutes are pairs of goods where an increase in the price of one good increases demand for the other good (and similarly, a decrease in the price of one good decreases demand for the other).
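The distinction comes down to the sign of the cross-price elasticity of demand, which is easy to sketch. The numbers below are invented for illustration; they are not estimates from the RAND survey:

```python
def cross_price_elasticity(pct_change_qty_b, pct_change_price_a):
    """% change in quantity of good B divided by % change in the
    price of good A."""
    return pct_change_qty_b / pct_change_price_a

def classify(elasticity):
    if elasticity > 0:
        return 'substitutes'   # dearer A pushes buyers towards B
    if elasticity < 0:
        return 'complements'   # dearer A drags B down with it
    return 'unrelated'

# e.g. suppose a 10% rise in the price of alcohol raised cannabis
# purchases by 3% (an invented number, purely for illustration):
e = cross_price_elasticity(3.0, 10.0)
print(e, classify(e))   # 0.3 substitutes
```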

So, legalising cannabis (which would lower the market price of cannabis) would have the effect of increasing cannabis use, and decreasing alcohol use. This is an important result (if backed up in other studies), given the increasing calls for legalisation in New Zealand and elsewhere.


Tuesday, 27 March 2018

Would a soda tax increase alcohol sales?

Soda taxes (or taxes on sugar-sweetened beverages (SSB), if you prefer) have been in the news quite a bit recently. You can read some of the debate over whether soda taxes would be effective in fighting obesity here (Laurie Kubiak) and here (Boyd Swinburn). Eric Crampton also has a number of related blog posts (see here and here and here and here). The short version is that NZIER's report for the Ministry of Health suggests that soda taxes would be pretty ineffective.

In this post though, I want to focus on something else. Let's say a soda tax is effective, and people consume less soda. What will they consume instead? Will they switch to bottled water? Or to sugar-free beverages? Or maybe they will switch to alcohol? In other words, what will consumers substitute towards if soda is made more expensive with a tax?

A new paper in the Journal of Epidemiology and Community Health (ungated), by Diana Quirmbach (London School of Hygiene and Tropical Medicine) and others looks at exactly this question. Or rather, they look at whether consumers would purchase more alcohol if the price of soda increased. They used data on 6 million beverage purchases by nearly 32,000 UK households in 2012 and 2013, and estimate own-price elasticities for soda (separated into high-sugar, medium-sugar, and low-sugar varieties), and cross-price elasticities between soda (by type) and alcohol (beer, lager, cider, and spirits). They found that:
...own-price elasticities for non-alcoholic drinks are lower than for alcoholic beverages (that is, alcoholic drinks are more sensitive to price change), and that elasticities for all three SSB groups are inelastic (ie, smaller than 1), which means that there is a less than proportionate decrease in purchase following a price rise. This also compares with relatively inelastic (ie, insensitive) reactions to changes in the price of alcoholic drinks (except for lager for low- and medium income groups, and cider and wine for the high-income group)...
Increases in the price of high-sugar SSBs are associated with increased purchases of diet drinks, juice and lager (ie, they act as substitutes), whereas they decrease purchases of medium-sugar SSBs and spirits (ie, they act as complements). Increases in the price of medium-sugar SSBs impacts across a wider range (high-sugar SSBs, juice, water, beer, lager, wines and spirits), although all categories affected witness reduced purchasing (ie, a consistent complementary relationship of ~0.1% for a 1% price increase). Increases in the price of diet/low sugar SSBs increases the purchases for all other categories (with the exception of the two other SSB categories), ranging from 0.1% for juice to 0.7% for milk-based drinks and spirits per 1% price increase. 
In other words, a tax on soda (especially on high-sugar and diet/low-sugar sodas) would induce consumers to consume more alcohol. So, the overall effect of a soda tax on calories consumed (and hence, obesity and other health problems) is somewhat ambiguous, especially when you consider that alcohol is more energy-dense than sugary sodas. If people consume more alcohol as a result of a soda tax, the net impact on society may well be negative. Of course, this assumes that a soda tax is effective at all, which the NZIER report on the evidence thus far indicates is far from clear.

The Quirmbach et al. study does have some issues, which are common to most studies that try to estimate elasticities when prices are not directly observed. The main problems are well summarised in this article in The Conversation by Robin Room and Heng Jiang, and their main criticism is:
In principle, elasticity is about what happens over time when there is a change – such as a new tax – which results in a higher price.
But the study was not actually measuring the effects of change in price over time. Rather, it correlated how much one family bought of each beverage type when faced with a particular set of prices against how much another family bought of each beverage type with a different set of prices.
But because the study isn’t actually measuring and correlating the change that elasticities would measure – a new tax and the change in consumption over time – it offers no direct evidence of what would happen in case of a change like a new tax, and should not be interpreted as having done so.
In spite of the problems with the study, it does raise a valid question that requires further investigation. If we tax soda, should we be concerned about the negative impacts of what consumers would purchase instead?

Monday, 26 March 2018

Why the alcohol industry is a big supporter of self-driving cars

The Washington Post reports:
Automakers and tech firms have long been the ones hustling to get self-driving cars on the street. But they’ve lately been joined by a surprise ally: America’s alcohol industry.
In recent weeks, two industry groups -- one representing wine and liquor wholesalers, and another representing large producers -- have thrown their weight behind coalitions lobbying to get autonomous vehicles on the road faster.
Inherent in their support, analysts say, is an understanding that self-driving cars could revolutionize the way Americans drink. Brewers and distillers say autonomous vehicles could reduce drunk driving.
Without the need to drive home after a night at the bar, drinkers could also consume far more. And that will boost alcohol sales, one analysis predicts, by as much as $250 billion.
This week in ECONS101, we are talking about elasticities. On Wednesday, we will discuss cross-price elasticities, so this WaPo story is certainly relevant, because it suggests that alcohol and self-driving cars are complements. Complements are pairs of goods where the demand for one good is negatively related to the price of the other good. In other words, if Good A decreases in price, consumers will demand more of Good A (because of the Law of Demand), as well as more of Good B (the complementary good to Good A).

How does this relate to alcohol and self-driving cars? If self-driving cars are only lightly regulated (or left unregulated), then the cost of using them decreases (compared with if they were more heavily regulated). Consumers will be more likely to buy a self-driving car if it costs less, so light regulation of self-driving cars will increase the number of consumers who purchase them. Consumers with self-driving cars will also be able to drink more and still use their own vehicle to get home (rather than public transport or a taxi or Uber). This suggests that alcohol and self-driving cars are complements.

Not all consumers will drink more of course, but at least some who would previously have had little to drink (knowing they had to drive home) will instead drink more. More drinking means more alcohol sales, and greater profits for the alcohol industry. And that is why it makes sense for the alcohol industry to be a big supporter of self-driving cars. Forget "autonomous vehicles could reduce drunk driving": this is about the alcohol industry's potential for greater sales and profits.

So, the next obvious question is: how long will it be before the anti-alcohol public health lobby becomes anti-self-driving vehicles as well?

Sunday, 25 March 2018

Market power, mark-ups and school uniforms

Late last year, I posted about school uniform monopolies:
With school uniforms, there are few substitutes. If your child is going to School A, you need the appropriate uniform for School A. This gives the school considerable market power (the ability for the seller to set a price above the marginal cost of the uniform). Since most schools are not uniform producers or sellers themselves, they instead transfer that market power to a uniform provider. Usually this takes the form of an exclusive deal with the uniform provider, where that provider is the only one that can sell the school's uniforms, and in exchange the school receives some share of the profits. This creates a monopoly seller of the uniforms, and the monopoly maximises its uniform profits by raising the price. The result is that parents must pay higher prices for uniforms, which must be purchased from the exclusive uniform provider.
It seems opportune that the cost of school uniforms is back in the news this week, given that my ECONS101 class will be talking about price elasticities and mark-ups this week:
A Kiwi social media personality best known for his videos about hunting and fishing has created a flurry on Facebook after taking a stab at the price of school uniforms.
Josh James, better known as Josh James Kiwi Bushman, posted a video on Wednesday afternoon which expressed his shock at the cost of his son's school uniform...
"It's bloody ridiculous," he said. "$14 for a pair of socks, and how much is it for a jersey? Holy crap $40 and $53.99 for a basic, cheap fleece jersey.
"Somebody is making a killing on this. I don't know who is making the money but someone is making an absolute killing making school jerseys. What a rip off," he said...
"I think it is ludicrous. They are selling basic fleece jerseys for $53 which can be made for cheap as chips, in NZ or China, where they bulk buy them from.
"Yet several aisles away there were other fleece jerseys, that were a different colour than the school uniform ones, for cheaper - around $9 to $15."...
James said he felt school uniform companies were marking up prices on purpose.
"They are marking the prices up and taking away all items that are similar - that people could buy for cheaper - so they have to buy the ones that are marked up.
"The kids are not allowed to wear any other clothes apart from their school uniform, and they get told off and sent home if they do. I understand the whole uniform thing, that it is conformity and kids don't bicker and fight, but I think the price is ludicrous.
"We ended up paying in excess of $350 just for one child's uniform." 
Of course the firms are marking up prices on purpose (they would hardly do so accidentally). As my earlier post quoted above notes, this is a story about market power, and when schools give a single seller the rights to sell uniform items that creates a great deal of market power. Firms with market power can raise the price above their costs - the difference between price and cost is their mark-up. The size of the mark-up depends on the price elasticity of demand, which in turn depends on a number of factors, one of which is the number of available close substitutes for the good. In the case of school uniforms, they are mandated to be a certain colour (and often mandated to have the school crest or logo printed or embossed on them). There are few substitutes for clothing of the right colour and type, and this makes demand more inelastic. Inelastic demand means that when price goes up, the quantity demanded changes only a little, in this case because parents can't easily buy substitute clothing. Inelastic demand also allows the firms selling the uniforms to charge a higher mark-up and make greater profits.
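The link between inelastic demand and the mark-up can be made concrete with the standard Lerner rule for a firm with market power: (P - MC)/P = 1/|elasticity|, which rearranges to P = MC x elasticity/(elasticity - 1). The cost and elasticity numbers below are mine, chosen only to show how a compulsory uniform can support jersey-sized mark-ups:

```python
def profit_max_price(marginal_cost, elasticity):
    """Lerner rule: the profit-maximising price for a firm with
    market power, given |price elasticity of demand| > 1."""
    return marginal_cost * elasticity / (elasticity - 1)

mc = 9.0   # suppose a plain fleece jersey costs about $9 to supply
# Elastic demand (lots of close substitutes) keeps the mark-up small:
print(round(profit_max_price(mc, 4.0), 2))   # 12.0
# Near-inelastic demand (a mandated uniform) supports a big one:
print(round(profit_max_price(mc, 1.2), 2))   # 54.0
```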

The response from the Warehouse chief executive Pejman Okhovat was hilarious:
"The cost is derived from a number of factors - including ensuring that the uniforms are good quality fabric designed to last, as well as the not insignificant cost of customising the uniform as per each school's specific requirements.
"Also when you consider their longevity they offer very good value over time."
There is zero chance that the difference in price between the school uniform items priced at $53 (to take the value from the story) and similar non-uniform items priced at $9 to $15 is due to cost differences. This is purely a story about market power.


Thursday, 22 March 2018

Big data, student dropouts, and failing fast

Amy Wang at Quartz reports:
At the University of Arizona, school officials know when students are going to drop out before they do.
The public college in Tucson has been quietly collecting data on its first-year students’ ID card swipes around campus for the last few years. The ID cards are given to every enrolled student and can be used at nearly 700 campus locations including vending machines, libraries, labs, residence halls, the student union center, and movie theaters.
They also have an embedded sensor that can be used to track geographic history whenever the card is swiped. These data are fed into an analytics system that finds “highly accurate indicators” of potential dropouts, according to a press release last week from the university. “By getting [student’s] digital traces, you can explore their patterns of movement, behavior and interactions, and that tells you a great deal about them,” Sudha Ram, a professor of management systems, and director of the program, said in the release. “It’s really not designed to track their social interactions, but you can, because you have a timestamp and location information,” Ram added...
The University of Arizona currently generates lists of likely dropouts from 800 data points, which do not yet include Ram’s research but include details like demographic information and financial aid activity. Those lists, made several times a year, are shared with college advisers so they can intervene before it’s too late. The school says the lists are 73% accurate and Ram’s research yields 85% to 90% accuracy, though it did not give details on how those rates are measured.
This sort of story isn't anything new. I blogged on a story about the University of Maryland doing something quite similar back in 2016. Student retention is a big issue, and it's often presented as such because losing students results in lost custom for universities. However, there is another aspect of students dropping out that is more than a little problematic for me as an economist.

On the one hand, as a teacher I don't like to see students' efforts go to waste. And signing up for a degree programme that you don't complete really is a waste (and you'll be paying off those student loans for a while, for little to no benefit). On the other hand, as an economist I recognise that past costs that cannot be recovered are sunk costs, and shouldn't be relevant to a student's decision today about whether they complete their degree (those past study costs have already happened - you won't get them back if you drop out, but you also won't get them back if you continue to study either). The decision about whether to continue to study should come down to a dispassionate analysis of the future costs and benefits of continuing to study, and not be affected by things that have already happened and can't be changed.
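That dispassionate comparison is simple enough to write down. Here is a toy version (the dollar figures and the net-benefit framing are mine, for illustration only):

```python
def should_continue(future_benefits, future_costs, sunk_costs=0):
    """Continue studying only if the future gains outweigh the
    future costs. Note that sunk_costs never enters the comparison:
    fees already paid are the same whether you stay or go."""
    return future_benefits > future_costs

# Two students with identical futures should make the same choice,
# no matter how much they have already spent on study:
print(should_continue(50_000, 30_000, sunk_costs=5_000))    # True
print(should_continue(50_000, 30_000, sunk_costs=45_000))   # True
```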

How do I reconcile those two views? If we can identify at-risk students, perhaps we can help to find ways to ensure they succeed. Or, maybe we can counsel them on alternative options that would avoid wasting future study costs. In the latter case, I'm sure there are at least some students (hopefully few) for whom university study is probably not the best option, or at least not the best option for them at this time. For instance, I have a current research project that is looking into the reasons why students (specifically management students) drop out of university, and in many cases it is non-student life intervening that makes study difficult (more on that in a future post).

In the case of these students, I would expect that the Silicon Valley mantra of 'failing fast and failing cheap' applies, even though it has come under a lot of fire of late (see here or here, for example). If a student is going to fail anyway, wouldn't it be better to fail at low cost after one or two semesters than to fail at much higher cost after many semesters of low performance? Or, in the case of university students, perhaps the mantra should be failing fast and coming back later when life is no longer getting in the way? (That was my road through university study, after all.)

[HT: Marginal Revolution]


Wednesday, 21 March 2018

Solving the pirate riddle

Following up on yesterday's post on game theory, the excellent Cameron Hays (HOD Commerce at Tauranga Boys College) shared the following video with me, on "the pirate riddle". It involves a game with five pirates, but you can work out the answer using a type of backward induction.

Backward induction is how we usually solve sequential games (games where one player chooses first, and then the second player chooses, knowing what the first player has already chosen). It involves working out what the second player would do in response to each possible choice for the first player, and then using those outcomes to work out what is best for the first player to choose.

In the case of the pirate riddle, the process is somewhat similar. First, consider what will happen if there are only two pirates, then what will happen if there are three, then four, and finally five pirates. Pause the video before it gives you the answer, and see how you go. I bet you'll be surprised by what the Nash equilibrium is!

There are two problems with the pirate riddle game though. First, it assumes that the order of succession is known. In reality, most pirate crews would elect a new captain once one was overthrown, so the order of who would be the next captain is unknown and the game becomes more complex. Second, the pirate riddle presented in the video is a non-repeated game - that is, the pirates play this game only once. In reality, the pirates will play the game many times - once for every time they capture some booty. If this game is a repeated game, and who would be the next captain is uncertain, then it is much more likely that the pirates would share the treasure.
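For anyone who wants to check their answer after watching, the backward induction can be coded up directly. This sketch assumes the rules as I understand them from the video: the proposer votes too, a tied vote passes, and a pirate votes yes only when the proposal makes them strictly better off than the subgame without the proposer:

```python
def split(n_pirates, coins=100):
    """Backward-induction solution to the pirate riddle.

    Pirates are indexed 0 (most senior, the proposer) to n-1.
    A proposal passes if at least half of all pirates (counting
    the proposer's own yes vote) support it; otherwise the
    proposer is thrown overboard and the next pirate proposes.
    """
    if n_pirates == 1:
        return [coins]
    # What each voter would get if this proposal fails:
    fallback = split(n_pirates - 1, coins)
    # Voter i (i >= 1) becomes pirate i-1 in that subgame.
    extra_votes_needed = (n_pirates + 1) // 2 - 1
    # Buy the cheapest votes: one coin more than each fallback.
    voters = sorted(range(1, n_pirates), key=lambda i: fallback[i - 1])
    alloc = [0] * n_pirates
    for i in voters[:extra_votes_needed]:
        alloc[i] = fallback[i - 1] + 1
    alloc[0] = coins - sum(alloc)   # the proposer keeps the rest
    return alloc

for n in range(2, 6):
    print(n, split(n))
# With five pirates the proposer keeps 98 coins, buying just two
# votes for one coin each: [98, 0, 1, 0, 1].
```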

Believe it or not, there is actually an academic literature on the economics of pirates, and an excellent book by Peter Leeson (The Invisible Hook: The Hidden Economics of Pirates) summarises some of the key points of the literature (including how pirates divided their booty up, and how captains were selected). It is worth a read - you'd be surprised by how democratic (and rational) pirates were.

[HT: Cameron Hays]

Tuesday, 20 March 2018

The evolution of trust

One of my ECONS101 students shared a very cool animated online tool or game, called "The Evolution of Trust". The site was created by Nicky Case, and it explains the importance of trust and cooperation in repeated prisoners' dilemma games. And bonus, there is plenty of opportunity to play the games as well, and guess the outcomes.

Trust and cooperation are important in the repeated prisoners' dilemma, because the dominant strategy in the prisoners' dilemma is to 'defect' (not to cooperate with the other player), even though if both players cooperated they would both be better off (see here or here for more on the prisoners' dilemma). The only way to ensure that both players cooperate is to develop a reputation for cooperating, and to trust that the other player will also cooperate.
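To see how a reputation for cooperating pays off, here is a minimal simulation of the repeated game. The payoff numbers are standard textbook values that I have chosen for illustration, not anything from the site:

```python
# Payoffs to (row, column) for each pair of choices:
# C = cooperate, D = defect.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'   # the dominant strategy in the one-shot game

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pay_a, pay_b = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (30, 30): mutual trust
print(play(always_defect, always_defect))  # (10, 10): mutual defection
```

Two trusting players end up far better off than two defectors, which is the site's central point.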

I would have explained the initial game differently, as it seems to imply that the payoffs to the other player impact on your decision, when really you should only be concerned about the payoffs to yourself (although, as I used to show in ECON100, if players care about the payoffs to the other player, the prisoners' dilemma can turn into a coordination game, with a Schelling point where both players cooperate). However, there is plenty to like about the site, including my quote of the day: "Look around. The world's full of total jerkwads." Yes indeed.

Anyway, enough from me. Try it out for yourself!

[HT: Sam from my ECONS101 class]

Monday, 19 March 2018

Stocks vs. flows... Billionaire edition

Regular readers will know that one of my pet peeves is the media not knowing the difference between stocks and flows. Here's the latest example, from Paul Little in yesterday's New Zealand Herald:
From memory the income sections in the Census didn't include a $1,000,000,000 or more a year bracket, so if the results ever come in they won't tell us how many billionaires are in the population.
The Census won't tell us how many billionaires there are, because the Census question asks about income (a flow, because it is an amount that is paid to you over time) and billionaires are defined by their net wealth (a stock, because it is a total that is measured at a single point in time). If a person has a billion dollars in net wealth in cash, and invests it in a low-yield term deposit earning 2 percent per year (after tax), they'll be earning $20 million per year (ignoring compounding interest). But they would still be a billionaire.

Looking for people with a billion-dollar annual income is a bit of a lost cause. There are no CEOs earning that much (top earners are in the low hundreds of millions - see here or here), no sports stars earning that much (top earners are in the mid tens of millions - see here), and no celebrities earning that much (see here, although dead celebrities may be earning more - the late Michael Jackson's earnings peaked at $825 million, according to Guinness). I imagine that some of the highest-wealth entrepreneurs will be earning a billion dollars plus per year, but most of their earnings are on paper, and if we're counting paper changes in wealth as income, then some of them have negative income so far this year.

So, I wonder who Paul Little thinks would fill out the box in the Census claiming an income of $1,000,000,000 or more?

Thursday, 15 March 2018

Coke Zero, Coke No Sugar, and the problem of getting customers to change products

The Morning Bulletin reported recently:
In a suburban Sydney supermarket, a woman approaches the wall of red that is the Coke aisle.
She picks up a bottle of Coke No Sugar, the new brand the US giant is hoping will win over consumers wary of calorific carbonated drinks.
After a few seconds she puts it down and picks up a Coke Zero instead, the very product Coke No Sugar was supposed to replace. The shelves are stacked with Zero - it's hard to even see No Sugar.
This one interaction, witnessed by, illustrates the big problem Coke has - persuading fussy shoppers to forsake Zero in favour of No Sugar.
While in some overseas markets, Zero is no more, in Australia it's stubbornly hanging on taking up far more shelf space than its successor.
Indeed, Woolworths has told they want to continue stocking Coke Zero "due to customer demand".
Coke wants us to stop drinking Coke Zero, and switch to Coke No Sugar. However, that creates a problem. To see why, consider a very simple consumer choice model, where the consumers can choose to consume Coke Zero (on the x-axis) or Coke No Sugar (on the y-axis), as in the diagram below. [*] The prices of Coke Zero and Coke No Sugar are the same, so the budget constraint has a slope equal to one (actually -1, since it is downward sloping). Assume the price of both drinks is Pc (though the actual price is not important to this example). The consumer is currently buying the bundle of goods that is on their highest indifference curve (I1), and that bundle is the corner solution E1, where the consumer buys only Coke Zero and none of Coke No Sugar. Coke wants the consumer to buy the bundle E0, which is at the other corner (where the consumer only buys Coke No Sugar, and no Coke Zero). The problem is that the bundle E0 is on a lower indifference curve (I0). Forcing consumers to switch to Coke No Sugar would make them worse off (by making them consume on a lower indifference curve).

The problem is that the consumer in the diagram above views Coke Zero and Coke No Sugar as substitutes, maybe even close substitutes, but not perfect substitutes. Perfect substitutes are goods that are, in the consumer's view, identical. They don't care which of them they have. An example that I use in my ECONS101 class is red M&Ms and blue M&Ms. The consumer probably doesn't care at all about the difference in the colour of M&Ms (unless they're Van Halen); they only care about the total number of M&Ms. With perfect substitutes, the indifference curves become straight lines. In the case of the consumer in the diagram above, the indifference curve would be identical to the budget constraint, and the consumer would be equally happy with any bundle of goods on the budget constraint, including both E0 and E1. In other words, the consumer would be perfectly happy to switch from only drinking Coke Zero to only drinking Coke No Sugar.
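To make the comparison concrete, here is a small sketch in Python. The utility functions are my own illustrative assumptions (nothing in the model above pins down particular numbers): a consumer whose preferences favour Coke Zero sticks to the corner E1, while a perfect-substitutes consumer is equally happy at either corner.

```python
# Sketch of the corner-solution logic above. The utility weights are
# invented for illustration; only their relative sizes matter.

def best_bundle(utility, budget=10.0, price=1.0):
    """Compare the two corner bundles on the budget constraint."""
    q = budget / price                   # quantity affordable at either corner
    only_zero = (q, 0.0)                 # E1: all Coke Zero
    only_no_sugar = (0.0, q)             # E0: all Coke No Sugar
    u1, u0 = utility(*only_zero), utility(*only_no_sugar)
    if u1 > u0:
        return "E1 (only Coke Zero)"
    if u0 > u1:
        return "E0 (only Coke No Sugar)"
    return "indifferent between E0 and E1"

# Imperfect substitutes: this consumer likes Zero a bit more (assumed weights),
# so they stay at the corner E1.
print(best_bundle(lambda zero, ns: 1.2 * zero + 1.0 * ns))  # E1 (only Coke Zero)

# Perfect substitutes: identical weights, so every bundle on the budget
# constraint gives the same utility, including both corners.
print(best_bundle(lambda zero, ns: zero + ns))  # indifferent between E0 and E1
```

Coke's marketing problem, in these terms, is to move consumers from the first utility function to the second.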

So, Coke's job is to convince consumers that Coke Zero and Coke No Sugar are perfect substitutes. That seems to me to be a difficult task, since if the two products were identical, why replace the old one with the new one at all? And many consumers probably feel the same way. So Coke is probably left in the uncomfortable position of having to make its customers a little bit less happy, if it really wants to get rid of Coke Zero.


[*] In this very simple model, we are ignoring that the consumer can buy other products as well as Coke Zero and/or Coke No Sugar. You can think of it as assuming that consumers have a fixed budget for those two varieties of Coke.

Wednesday, 14 March 2018

Taxing robots may be a good idea, but it won't keep people employed

Right now, we barely go a week without another article about robots (or algorithms) taking our jobs. One of the latest is this article from the South China Morning Post, which quotes Cai Fang (vice-president of the Chinese Academy of Social Sciences) as advocating for taxing robots:
For years, we have been warned that the day will come when machines will be able to do our jobs better than we can. Now a leading Chinese economist is offering a time frame.
Cai Fang, vice-president of the Chinese Academy of Social Sciences, the country’s top think tank, and former head of its Population and Labour Economic Research Institute, said robots will “definitely” surpass humans in many job skills in 10 to 20 years.
Like Microsoft founder Bill Gates and other technology titans, Cai is an advocate of tax policies and other measures to keep robots from putting human workers out of jobs.
In February, Gates said governments should levy a tax on the use of robots to fund retraining of those who lose their jobs and to slow down automation.
“For a human worker who does US$50,000 worth of work in a factory, the income is taxed,” Gates said. “If a robot comes in to do the same thing, you’d think that we’d tax the robot at a similar level.”
Cai, a delegate to the National People’s Congress in Beijing, said the idea made sense.
In the first week of ECONS101 this semester, we used a very simple production model to explain how rising wages in England provided incentives for automation and primed the Industrial Revolution. Now we're seeing something very similar, except it isn't rising wages but the falling cost of robots (and algorithms). However, the effect in terms of the cost of wages relative to capital is the same.

Consider a simple production model, as in the diagram below, with capital (robots) on the y-axis and labour on the x-axis. Let's say that there are only two production technologies available to firms, A and B, and that both production technologies would produce the same quantity (and quality) of output for the combination of inputs (robots and labour) shown on the diagram. Production technology A is a labour-intensive technology - it uses a lot of labour, and supplements the labour with a few robots. Production technology B is a robot-intensive technology - it uses a lot of robots, and a small amount of labour (perhaps for servicing the robots).

How should a firm choose between the two competing production technologies A and B? If the firm is trying to maximise profits, then given that both production technologies produce the same quantity and quality of output, the firm should choose the technology that is the lowest cost. We can represent the firm's costs with iso-cost lines, which are lines that represent all the combinations of labour and capital that have the same total cost. The iso-cost line that is closest to the origin is the iso-cost line that has the lowest total cost. The slope of the iso-cost line is the relative price between labour and capital - it is equal to -w/p (where w is the hourly wage, and p is the hourly cost of robots).

First, consider the case where labour is relatively cheap and robots are relatively expensive. The iso-cost lines will be relatively flat, since w is small relative to p (so the slope -w/p is small in magnitude). In this case, the iso-cost line passing through A (ICA) is closer to the origin than the iso-cost line passing through B (ICB). So production technology A is the lowest-cost production technology, and firms should use relatively labour-intensive production methods.

However, as the hourly cost of robots (p) falls, the relative price of labour to capital (w/p) increases, and the iso-cost lines get steeper. Eventually, we end up in the situation in the diagram below, where production technologies A and B have the same total cost. At this point, firms are indifferent between the labour-intensive and the robot-intensive production technology. From this point on, if robots become any cheaper, firms would be better off choosing the robot-intensive production technology, as it will be the lowest-cost option.

What happens if we tax robots? Taxing robots has the effect of increasing the hourly cost of robots (p), and flattens the iso-cost lines. That will reduce the incentive for firms to shift to the robot-intensive production technology, but only temporarily. If robots keep getting cheaper relative to wages (which seems likely for now), then the size of the tax necessary to keep labour-intensive production technologies competitive will continue to increase over time. That seems unsustainable in the long term.
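The whole argument can be sketched in a few lines of Python. The input quantities for technologies A and B are made-up numbers for illustration (only the relative costs matter), and the tax is modelled simply as an addition to the hourly cost of robots:

```python
# A sketch of the iso-cost comparison above, with invented input quantities.

def cost(labour, robots, w, p, robot_tax=0.0):
    """Total hourly cost of a production technology, with an optional robot tax."""
    return labour * w + robots * (p + robot_tax)

A = dict(labour=10, robots=2)   # labour-intensive technology (assumed numbers)
B = dict(labour=2, robots=10)   # robot-intensive technology (assumed numbers)
w = 20.0                        # hourly wage

# When robots are expensive (p = 30), A lies on the lower iso-cost line:
print(cost(**A, w=w, p=30.0), cost(**B, w=w, p=30.0))   # 260.0 340.0

# When robots become cheap (p = 10), B is now the lowest-cost technology:
print(cost(**A, w=w, p=10.0), cost(**B, w=w, p=10.0))   # 220.0 140.0

# A per-robot tax of 15 restores A's advantage, but only so long as the tax
# keeps pace with any further fall in p:
print(cost(**A, w=w, p=10.0, robot_tax=15.0),
      cost(**B, w=w, p=10.0, robot_tax=15.0))           # 250.0 290.0
```

With these numbers the break-even robot cost is p = 20; every further fall in p below that requires a bigger tax to keep technology A competitive, which is exactly the sustainability problem described above.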

Fortunately, the most thoughtful advocates of taxing robots are not arguing for the tax in order to keep humans employed and competitive with robots in terms of cost. Instead, they are arguing for the tax to raise revenue to fund a universal basic income to ensure a minimum standard of living for all, or they are arguing that the tax could pay for re-training or up-skilling workers who lose their jobs to the robots.

Many people worried at the time that the first Industrial Revolution would lead to widespread job dislocation and poverty, and there were certainly many workers who were negatively affected. This time will not be different. However, once we came out the other side of the first Industrial Revolution, there turned out to be just as many (if not more) jobs available, as new opportunities for workers opened up. My crystal ball is offline at the moment, so I'm unsure if that will be the case this time. But why take the risk? If we can think through the (probably many) practical issues, taxing robots might be one way for the winners from this transition to robot-intensive production technologies to fairly compensate the losers.

Monday, 12 March 2018

Jim Rose on free trade agreements

When teaching my students about the gains from trade, one of the points I like to make is to draw the distinction between free trade and free trade agreements. The takeaway from that distinction is that it is perfectly reasonable to be in favour of free trade, and yet simultaneously be against a given free trade agreement. There is no contradiction in taking those two positions, and you wouldn't understand that by reading most of what is written in the media about free trade agreements. All too often, in the debates around free trade agreements, the two sides are completely talking past each other. The pro-free-trade camp uses the 'gains from trade' argument, but essentially ignores the fact that free trade agreements include a bunch of other clauses that have little to do with trade. The anti-free-trade camp focuses on those other clauses, but ignores the fact that there are gains from reducing tariffs and non-tariff trade barriers.

Obviously, I'm not the only one who feels that way. Whether we should support a free trade agreement (including the recently-signed CPTPP) should come down to evaluating the costs and the benefits of the agreement for New Zealand. This point was made in an excellent Jim Rose article in the New Zealand Herald last week. It's difficult to excerpt without losing the key points, so I quote it at length:
Free trade agreements are at best suspect. That is not me saying this, it is Paul Krugman, his generation's leading trade theorist.
Krugman argues you should start as a mild opponent of any free trade agreement. Closely inspect the baggage they carry — environment and labour chapters, intellectual property, investor state dispute settlement (ISDS) and government procurement such as Pharmac. Start with a sceptical eye.
These add-on chapters are the costs of free trade agreements that are relatively obvious to the untrained eye. No technical economics is yet required to suspect that any trade agreement will be an opportunity for special interests on the right and left, unions and big corporations, to feather their own nest. Longer patent lives, more stringent enforcement of overseas copyrights, Pharmac buying more expensive drugs and so on, in return for tariff cuts in export markets.
But let us start with what is claimed as the benefits by the Government. In a TPP without the US, in about 30-years' time, as little as 0.3 per cent extra GDP and at most 1 per cent more GDP in sum will be generated. Less than one quarter of these modest gains over 30 years come from tariff cuts.
The rest of the gains are from behind-the-border changes from streamlining customs to investor state dispute settlement — never easy to quantify because even the most impartial spectators can disagree on whether these regulations are a plus or minus to begin with...
With the US out of the TPP we still have all the baggage from environment and labour chapters, intellectual property, threats to Pharmac, and ISDS. The costs have not gone down but the benefits have, because of the loss of the single biggest market.
Investor state dispute settlement has no place in trade agreements between democracies that have the rule of law where investors can take their chances in domestic politics just like the rest of us. Yes, there will be breathless populism from the left or right from time to time, such as recently over foreign land sales, but by and large foreign investment is welcome and gets a fair deal.
Developing countries offered to sign on to investor state dispute settlement because their own courts are corrupt.
Maybe investor state dispute settlement worked 50 years ago when investment in developing countries was tiny and handled by a few big players who might get picked on by politicians looking for a few cheap votes or more likely, a backhander to the Swiss bank account...
Most of these points were lost in the debate on the TPP because too many of its opponents are driven by anti-capitalist or anti-foreign sentiments rather than cost benefit analysis. They would oppose a trade agreement solely about tariffs that lowered prices to New Zealand consumers.
Not every trade negotiation is successful. For some, you reach the point where you must walk away. More so because of all the baggage loaded into trade agreements in the last few decades.
There should be a hard-nosed benefit cost analysis. When the US was in, the TPP might have been worth the risk, just. More access to the US market may have made up for all the other baggage. Now the price has gone up, so much so we probably should not be signing it today.
It's pretty clear what conclusion Rose is drawing on the CPTPP. Whether you agree with him or not should come down to how you value the benefits in terms of increased trade, against the costs in terms of loss of sovereignty to the investor state dispute settlement process, stricter intellectual property enforcement (which some might see as a benefit) and so on. As I've said before, I'm agnostic and will remain so until I see some defendable analysis of the impacts of ISDS, which has been notably absent from the economic analyses of this agreement.


Thursday, 8 March 2018

Careful conclusions needed about the relationship between taverns and assault

In research, it is not uncommon for two different researchers, faced with the same research question and the same dataset, to come to very different conclusions. They may simply have used different research methods. Two researchers using the same methods and the same dataset could similarly come to different conclusions if they have included different variables in their analyses, excluded or included different subgroups or outlying observations, and so on. It is less common for two researchers to look at the same analysis and come to very different conclusions (differences in interpretation are, however, fairly common).

So, with that in mind I was interested to read this week new research by Adam Ward, Paul Bracewell, and Ying Cui (all from the Wellington-based Dot Loves Data), published in the journal Kotuitui: New Zealand Journal of Social Sciences Online (ungated). The paper generated a fair amount of media interest (see here and here), because it investigated the relationship between tavern locations and assaults. Those who are familiar with my research will recognise that this is one of my keen areas of research interest. Adam Ward actually sent me an early copy of this paper last year, but I have to admit I didn't have time to read it and that's a real shame, as will become clear below.

The authors used police data on assaults and Ministry of Justice data on the location of taverns for the 2016 calendar year. They essentially tested whether meshblocks (in urban areas, a meshblock is around the size of a city block, and they are bigger in rural areas) that had at least one assault differed from those that had no assaults, in terms of the distance to the nearest tavern, and the density of taverns within 500 metres. They tested this separately for 'peak assaults' (those occurring on Friday or Saturday nights) and 'off-peak assaults' (those occurring at other times). They also tested separately for differences between meshblocks that had multiple (more than one) assault and those that had one or fewer assaults.

One really cool thing they did, and which other studies have not done, is run a placebo test. They created a variable based on a sample of fast-food outlets, shopping centres, supermarkets, and petrol stations (which they called 'traffic generators'), and also used that variable in place of the tavern variables in some models, to see what happened. I had a Summer Research Scholarship student do something similar for Hamilton data a few years ago, using hairdressers, bakeries, service stations, and fast food outlets - hopefully, I'll find time to properly write up that research this year!

Anyway, coming back to the Ward et al. paper, they concluded that:
...our results show that whilst tavern density and proximity are more strongly associated to assault occurrence at peak times compared to traffic generator density and proximity, the reverse is true at off-peak times. It is not surprising that tavern density and proximity should be more strongly associated with assault occurrence at peak times compared to traffic generators given that the majority of the traffic generators within our sample are unlikely to open throughout peak hours.
They are suggesting that their results show that taverns are associated with peak assaults more than 'traffic generators' (their placebo). However, here's their Table 2, with the key rows highlighted (you might need to zoom in to see it clearly):

The top two highlighted rows show the results for peak assaults (comparing meshblocks that had any assault in 2016 with those that had none). The top row (model 1) uses tavern variables, and the second row (model 2) uses 'traffic generator' variables. Comparing the two rows, the standardised coefficient for proximity is clearly much larger for traffic generators than it is for taverns (and the odds ratio is larger for traffic generators than for taverns). This suggests that the distance to the closest traffic generator has a much greater effect on peak assaults than the distance to the closest tavern. Similarly, the standardised coefficient for density is clearly smaller for taverns than it is for traffic generators (and the odds ratio is smaller for taverns than for traffic generators). This suggests that the number of taverns within 500m has a smaller effect on peak assaults in a meshblock than the number of traffic generators within that distance. If you look at the second pair of highlighted rows (for multiple assaults), you'll notice the same conclusions can be reached. You've probably also noticed that these conclusions are the opposite of Ward et al.'s conclusions from above - taverns aren't associated with more peak assaults than traffic generators; they're associated with fewer peak assaults.

What explains the difference? Ward et al. focus their attention on the last column: the Gini coefficient. In the context of a logistic regression model, the Gini coefficient is simply a measure of how good the overall model is at classifying meshblocks, in this case classifying them into those meshblocks with assaults and those without assaults. You shouldn't interpret the Gini coefficient as telling you anything about the individual variables in the model, in the same way that you can't infer anything about individual variables from looking at the R-squared in a linear regression model. Just because the overall model provides a better fit (according to the Gini coefficient), it doesn't mean that the size of the effects demonstrated by the coefficients in the model are larger (which you can only determine by looking at the coefficients or odds ratios)!

Aside from the conclusions, the choice of a logistic regression model is somewhat odd. It treats a meshblock that had two assaults in a year the same as a meshblock that had twenty assaults. A count-based (e.g. Poisson) model would have made more sense, and to be honest their results are probably driven by their choice of model. A Poisson model would have better accounted for the greater incidence of assaults (when there was more than one) in the vicinity of taverns, and Poisson models have become increasingly common in this literature (and are the appropriate models to use in this context for theoretical reasons, which are explained in an article I have written with Bill Cochrane and Michael Livingston that is nearly complete, and which I'll blog about in the future).
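A quick simulation shows the information loss from binarising counts. The data-generating process here is entirely made up (it is not the Ward et al. data): I simply assume meshblocks near a tavern draw more assaults, then show that the binary any-assault outcome compresses the difference that a count-based model would see.

```python
# Illustration, with simulated (not real) data, of why binarising assault
# counts discards information: a meshblock with 20 assaults collapses to the
# same "1" as a meshblock with 2.

import random

random.seed(42)

def mean(xs):
    return sum(xs) / len(xs)

# Assumed data-generating process: 100 meshblocks near a tavern draw between
# 2 and 20 assaults; 100 meshblocks far from a tavern draw between 0 and 3.
near_tavern = [random.randint(2, 20) for _ in range(100)]
far_from_tavern = [random.randint(0, 3) for _ in range(100)]

# A count outcome (what a Poisson model uses) preserves the large difference...
print(mean(near_tavern), mean(far_from_tavern))

# ...but the binary outcome (any assault vs. none) compresses it: every
# near-tavern meshblock is coded 1, and so are most far-from-tavern ones.
any_near = mean([1 if n > 0 else 0 for n in near_tavern])
any_far = mean([1 if n > 0 else 0 for n in far_from_tavern])
print(any_near, any_far)
```

In the simulated counts the near-tavern mean is several times the far-from-tavern mean, but in the binarised version both groups sit close to 1, which is the sense in which a logistic model can understate the tavern-assault relationship relative to a Poisson model.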

Having said that, the paper isn't all bad. As I said above, I liked the placebo approach. Also, my quote of the day (which I am sure to refer to in future work) is:
...31% of peak time assaults occur within 100 m of a tavern whereas this geometric region constitutes only 2.3% of New Zealand’s land mass and contains only 6.3% of the population.
That's a conclusion I can't disagree with.

Tuesday, 6 March 2018

The boundaries between economics, psychology and biology are narrowing

Have you ever wondered how we can see a dinner purchased at Wendy's (which might cost us $20 per person) as relatively cheap, yet simultaneously see spending $20 per person on ingredients for a dinner we cook for ourselves at home as lavishly expensive? Ok, maybe you haven't experienced that particular conundrum, but certainly there are some things that are cheap in dollar terms that you view as relatively expensive, while simultaneously seeing some things that are expensive in dollar terms as a bargain.

The answer lies in the complex workings of the human brain. Understanding our decision-making at a very fundamental level is the role of the exciting emerging field of neuroeconomics. Neuroeconomics is blurring the boundaries between economics, psychology and biology, and offering new insights into how we make decisions.

So, on that note, I found this news release from Washington University School of Medicine from last November interesting:
Researchers at Washington University School of Medicine in St. Louis have found that when monkeys are faced with a choice between two options, the firing of neurons activated in the brain adjusts to reflect the enormity of the decision. Such an approach would explain why the same person can see 20 cents as a lot one moment and $5,000 as a little the next, the researchers said.
“Everybody recognizes this behavior, because everybody does it,” said senior author Camillo Padoa-Schioppa, PhD, an associate professor of neuroscience, of economics and of biomedical engineering. “This paper explains where those judgments originate. The same neural circuit underlies decisions that range from a few dollars to hundreds of thousands of dollars. We found that a system that adapts to the range of values ensures maximal payoff.”...
While you are contemplating whether to order a scoop of vanilla or strawberry ice cream, a part of your brain just above the eyes is very busy. Brain scans have shown that blood flow to a brain area known as the orbitofrontal cortex increases as people weigh their options.
Neurons in this part of the brain also become active when a monkey is faced with a choice. As the animal tries to decide between a sip of, say, apple juice or grape juice, two sets of neurons in its orbitofrontal cortex fire off electrical pulses. One set reflects how much the monkey wants apple juice; the other set corresponds to the animal’s interest in grape juice. The faster the neurons fire, the more highly the monkey values that option.
A similar process likely occurs as people make decisions, the researchers said. But what happens to firing rates when a person stops thinking about ice cream and starts thinking about houses? A house might be hundreds of thousands of times more valuable than a cup of ice cream, but neurons cannot fire pulses 100,000 times faster. The speed at which they can fire maxes out at about 500 spikes per second.
How does this explain the example I started this blog post with? It appears (to me, at least) that this research (which you can read here, but beware - it's very technical) is showing that there are biological underpinnings for why relative comparisons matter more than absolute comparisons. The research was on rhesus monkeys rather than humans, but human brain functioning probably works in very similar ways. The firing of neurons associated with different options depends on what the other options are and their relative value to us, rather than their absolute value in terms of the overall universe of things that we could receive. In other words, our preferences adjust to the scale of values in the choices we are making:
The researchers concluded that making a choice between two juices is not a simple matter of comparing the firing rates of the apple-juice neurons to the firing rates of the grape-juice neurons. Instead, neurons pegged to each option feed into a neural circuit that processes the data and corrects for differences in scale.
It’s a system optimized for making the best possible choice – the one that reflects true preferences over a vast range of values, even though some detail gets lost at the higher end.
“It was a puzzle: How does the brain handle this enormous variability?” said Padoa-Schioppa. “We showed that a circuit that has adaptation and corrects for it ensures maximal payoff. And these findings have implications for understanding why people make the choices they do. There’s a good neurological reason for behavior that might seem illogical.”
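One way to think about this range adaptation is as a simple rescaling of subjective values onto the feasible firing-rate range. To be clear, this is my own toy simplification for illustration, not the model in the paper:

```python
# A toy sketch of range adaptation: firing rates are capped (roughly 500
# spikes per second, per the press release), so the circuit rescales values
# to the range of options currently on offer. The linear rescaling is an
# assumption made for simplicity.

R_MAX = 500.0  # approximate maximum firing rate, spikes per second

def firing_rate(value, value_min, value_max):
    """Map a subjective value onto the feasible firing-rate range."""
    if value_max == value_min:
        return 0.0
    return R_MAX * (value - value_min) / (value_max - value_min)

# Choosing between ice creams (values in dollars): $5 fills the whole range.
print(firing_rate(5.0, value_min=0.0, value_max=5.0))             # 500.0

# Choosing between houses: the same $5 difference is barely encoded at all.
print(firing_rate(300_005.0, value_min=300_000.0, value_max=500_000.0))
```

The same $5 that saturates the circuit in the ice cream decision produces an almost imperceptible difference in firing in the house decision, which is the sense in which "some detail gets lost at the higher end".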
Neuroeconomics is pretty cool. Expect to see a lot more of this type of research in the future, especially combined with machine learning to deal with the complexity of brain scan data.

Monday, 5 March 2018

The price of cauliflower hits $10

From today's New Zealand Herald:
Hankering for cauliflower in your meat and two vege?
Well be prepared to shell out for it as some are $10 a head.
Cauliflower was photographed priced at $9.99 yesterday at New World, Wellington.
A Herald photographer this morning found a cauliflower for $9 at Farro Fresh and an out of stock sign for the vegetable at a Ponsonby Countdown.
My ECONS101 class won't cover supply and demand for a few weeks yet (perhaps that seems odd, but it's a consequence of adopting the CORE textbook), but that simple model does an excellent job of explaining what is going on. From the article:
Annette Laird, of Foodstuffs NZ, which owns the New World, Pak'n'Save and Four Square brands, said recent wet weather had affected the quality and reduced supply of cauliflower.
"It's been raining a lot. Cauliflower don't like the rain..."
Consider the market for cauliflower, shown in the diagram below. Initially the market is at equilibrium, with the price of cauliflower at P0 and the quantity traded at Q0. Then, adverse weather hits and farmers are unable to supply as much cauliflower (regardless of the price). This shifts the supply curve up and to the left (a decrease in the supply of cauliflower from S0 to S1). The equilibrium (where the demand curve meets the supply curve) shifts along the demand curve up to the left. The equilibrium price increases from P0 to P1, while the equilibrium quantity of cauliflower traded decreases from Q0 to Q1.
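The equilibrium shift can be worked through with linear demand and supply curves. The curve parameters below are invented for illustration; the point is just the direction of the changes (price up, quantity down) when supply decreases:

```python
# Sketch of the cauliflower market, with made-up linear curves.
# Demand: Qd = a - b*P.  Supply: Qs = c + d*P; the wet weather lowers c,
# shifting supply to the left.

def equilibrium(a, b, c, d):
    """Solve a - b*P = c + d*P for the equilibrium price and quantity."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

p0, q0 = equilibrium(a=100, b=10, c=20, d=10)   # before the wet weather
p1, q1 = equilibrium(a=100, b=10, c=-20, d=10)  # after supply shifts left

print(p0, q0)  # 4.0 60.0
print(p1, q1)  # 6.0 40.0
```

The decrease in supply raises the equilibrium price (from 4 to 6 here) and lowers the equilibrium quantity (from 60 to 40), just as in the diagram.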

Which shows why you're paying $10 for cauliflower this week. But don't blame the farmers. It is important to note that, in a market like this, the sellers have no direct control over the price. Woodhaven managing director John Clarke is quoted in the article:
"We are purely price takers, what the wholesale market pays is what we get and they are paying more for what we don't have much of."
The supermarkets of course have more control over the retail price than farmers do over the wholesale price, but remember that supermarkets are paying the (higher) wholesale price for cauliflower, and they will be passing that on to the consumer.

Friday, 2 March 2018

Relative wages and the teacher shortage

Jack Boyle (president of the Post-Primary Teachers Association) wrote in the New Zealand Herald today:
Another way to look at it is that relativity matters. It's not the absolute value of the reward that's going to determine whether a task (or job) is seen as worth it, but how it compares with what others are getting too.
What's this got to do with teachers?
Over the past 10 years teachers have kept on getting cucumber while other workers are getting grapes. Relativity between what teachers earn and other workers has seriously declined.
A simple measure of this is the comparison between secondary teacher salaries and the median earnings of all employees. Since the early 2000s we've dropped significantly, from 1.8 times the median (for experienced teachers on the top of the pay scale) to just over 1.5. In effect, this means jobs that used to earn less than teachers now earn more, and jobs that require less than our four years' study are catching up.
Boyle is right that relativity matters for decision-makers, including students who are deciding whether to study to become teachers or something else. In this case, the relative wage between teaching and other alternatives is one of the important determinants of students' decisions. If teachers' salaries have risen less than the salaries that graduates in other disciplines earn (which is what the fall from 1.8 times the median wage to 1.5 times implies), then the relative wage has shifted in favour of those other occupations. As a result, fewer students will choose to study to become teachers.
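The arithmetic of that decline in relativity is worth making explicit. The median earnings figures below are invented for illustration (the post only gives the 1.8 and 1.5 multiples); even so, the point survives any choice of numbers in which the median grows: teachers' pay can rise in absolute terms while falling well behind in relative terms.

```python
# Illustrative (invented) median earnings, then and now.
median_then, median_now = 40_000, 52_000

# Top-of-scale teacher salaries implied by the multiples quoted in the post.
teacher_then = 1.8 * median_then   # 72,000
teacher_now = 1.5 * median_now     # 78,000

# Teachers' pay grew in absolute terms, but by far less than median pay:
teacher_growth = teacher_now / teacher_then - 1
median_growth = median_now / median_then - 1
print(round(teacher_growth, 3), round(median_growth, 3))  # 0.083 0.3
```

With these assumed numbers, teachers' nominal pay rose about 8% while median pay rose 30% - which is exactly the relative-wage shift that makes alternative occupations more attractive to graduates.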

The relative wage between teachers and other occupations is made even worse by changes in the non-monetary characteristics of teaching (which should also be relevant to students' decisions about whether to study to be a teacher):
The work that secondary teachers do has become far more complex. Just in the past few years paper-work demands have exploded. There are now forms to fill out when you confiscate a student's phone, or go on a class trip, not to mention the planning, marking and moderation requirements of NCEA.
The relative wage between teaching and alternative occupations is a problem, and will be especially acute in Auckland, as I've noted before. If we really want to address the teacher shortage, then the government needs to step up and make the occupation more attractive for graduates, while at the same time heading off ill-conceived proposals that would make the problem worse.
