Thursday, 19 October 2017

The evidence favours paying people to stop smoking

Back in July I wrote a post about the effectiveness of monetary incentives to quit smoking:
Rational (and quasi-rational) decision-makers weigh up the costs and benefits of their actions (as we discussed in my ECON110 class today). If the student nurses didn't give up smoking, they faced a financial penalty relative to if they had given up smoking (they missed out on the scholarship). This creates an additional opportunity cost of smoking, raising the cost of smoking. When you increase the costs of an activity, people will do less of it. So, less smoking as a result of the incentive.
A couple of weeks ago, Mai Frandsen (University of Tasmania) picked up on the same issue in The Conversation, arguing in favour of paying people to stop smoking, and helpfully linking to a lot of the latest research that demonstrates that this works:
One evidence-based approach that has not received much attention in Australia is using financial incentives. Incentives programs reward quitters for not smoking by giving them a monetary voucher. The quitter’s abstinence is verified using biochemical tests of either their saliva, urine or breath...
Financial incentive programs are one of the most effective and cost effective strategies for getting people to quit. They are considered the most effective strategy for pregnant smokers. They are also cost effective, with the calculated net benefit (after taking into account the incentives used) being around A$4,300 per smoker, per attempt to quit. There have been a number of studies showing their benefits.
Using a multinational company as a test site, a team of US researchers found people who were offered US$750 (A$938) to quit smoking were three times more successful than those who were not given any incentives. Even six months after the vouchers had stopped, previously incentivised quitters were 2.6 (21.9% vs 11.8%) times more likely to still be smoke-free compared to non-incentivised quitters.
A team of UK researchers randomised over 300 pregnant women to receive up to £400 (A$661) worth of shopping vouchers if they quit during the pregnancy. Again, women in the incentives group were 2.6 (22.5% vs 8.6%) times more likely to have stopped smoking at the end of pregnancy, compared to the women who had received counselling and nicotine replacement therapy.
A Swiss program, offering low-income smokers up to US$1,650 (A$2,063) worth of quit-contingent vouchers staggered over six months, found smokers were 1.6 (18.2% vs 11.4%) times more likely to be smoke-free at 18 months compared to non-incentivised smokers.
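For what it's worth, the multipliers quoted above are just ratios of the two quit rates. A quick sketch, using the percentages reported for the UK and Swiss studies:

```python
# Risk ratio = quit rate in the incentivised group / quit rate in the
# comparison group, using the percentages quoted above.
def risk_ratio(incentivised_pct, comparison_pct):
    return incentivised_pct / comparison_pct

print(round(risk_ratio(22.5, 8.6), 1))   # 2.6 (UK pregnancy trial)
print(round(risk_ratio(18.2, 11.4), 1))  # 1.6 (Swiss low-income program)
```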
It's past time that we gave up on outdated views about avoiding monetary incentives for promoting health behaviours. In the case of smoking, the benefits accrue to the individual who is quitting smoking, their unborn children, their families, and to the community that doesn't face covering the healthcare costs associated with smoking. If we're looking for cost-effective ways to improve health, this should be high on the list.

Tuesday, 17 October 2017

Regulation of charlatans in high-skill professions

The title of this post is also the title of a recent working paper by Jonathan Berk (Stanford) and Jules van Binsbergen (University of Pennsylvania). The paper is theoretical and quite technical, but can be fairly easily summarised (I think) as follows. In many professions, it is possible for unskilled producers (charlatans) to pass themselves off as skilled, and essentially sell a worthless service to consumers. Consider an unskilled real estate agent, or financial advisor, for instance. To protect consumers from this possibility, governments might require certification (that professionals in these fields must pass some sort of certification test in order to practice), or licensing (that professionals must hold some licence in order to practice). Berk and van Binsbergen show that these requirements may make consumers worse off, because even though the quality of services they receive would increase (by preventing charlatans from providing services), this would be more than offset by reductions in competition between professionals. I'm probably oversimplifying their arguments, which require some deeper thinking to get your head around. Here's what the authors say in the paper:
It is often argued that informing consumers better about the products they buy, leaves them better off, because they can make better choices. However, this logic ignores the equilibrium effect on prices that such information revelation has. Consider the case where the government is perfectly informed about who the charlatans are, and instead of setting standards, simply communicates this information to consumers. It is tempting to conclude that by providing this information, the government will make consumers better off. We prove the opposite. Even though the information fully drives charlatans out of the market, consumers are left worse off. The reason is that once again, there are two price adjustments that follow from such information revelation. First, prices of the remaining professionals will go up to reflect that consumers now have a zero probability of dealing with a charlatan. This leaves consumers indifferent, they pay more, but they get better service. But second, because competition amongst providers is reduced, prices will go up further, and this second effect reduces consumer surplus...
These insights also have important implications for the debate in the economics literature on certification and licensing. The difference between these two regulations is that a professional cannot practice without a license, but can practice without being certified. Consequently, we can interpret a licensing requirement as a minimum standard and certification as requiring information disclosure. The existing literature comes down in favor of certification because it provides consumers with information they otherwise do not have access to... Although the observation that certification reduces competition less than licensing is correct, the argument misses the fact that certification also reduces competition and as a result it also reduces consumer surplus... In summary, in our setting, certification is never preferred, not by producers nor by consumers.
Professionals prefer licensing to certification, because it allows them to increase their own profits (although it reduces total welfare - a classic result of increasing market power). There isn't much in the way of empirical support in the paper, but I expect this is something that others will follow up on in the future. The paper left me wondering whether having risk-averse consumers would make a big difference, and in that case whether licensing or certification may be welfare enhancing given that consumers would be willing to pay to avoid the risk of dealing with a charlatan. The authors allude to this in the final sections of the paper, but I don't think they have fully dealt with this issue.

One last point: I really loved this footnote from the paper:
Even homeopaths and psychic readers have certification boards. One Chief Examiner of The National Certifying and Testing Board of the American Federation of Certified Psychics and Mediums Inc. is specialized in pet communication. On a more existential note, one wonders why a psychic examiner would even need to administer an examination to determine whether a candidate is qualified.

[HT: Marginal Revolution, back in June]

Monday, 16 October 2017

Call to ban alcohol sales in supermarkets is jumping the gun

Stuff reported last Wednesday:
New Zealand children are being exposed to alcohol nearly every time they go to the supermarket, sparking a call from researchers to have it banned from such stores. 
The over-exposure of alcohol to children put it on par with everyday products such as bread and milk, causing children to drink alcohol earlier in their life, Tim Chambers from Otago University's Department of Public Health said. 
The department's research found that 85 per cent of children were exposed to alcohol in Wellington supermarkets. 
The original research paper, by Chambers et al. and published in the journal Health & Place, is available here (sorry I don't see an ungated version anywhere).  The idea behind the research is kind of cool (albeit a bit Big Brother-ish) - the researchers set up children with GPS trackers that tracked location every five seconds and a wearable camera that took a picture every seven seconds, over a four-day period between July 2014 and June 2015. They then geo-located those images to supermarkets and coded the captured images in supermarkets as to whether or not they contained exposure to alcohol promotions. It's a pretty cool idea, but there are a number of problems with the analysis.

First, about 23% of the GPS location data was missing, so had to be imputed - that means that they replaced the missing values with a non-missing value, based on a Python script that used information "on spatial and temporal parameters". In spite of that, they still had to impute some of the missing data manually using "identifiable information in the images".

Second, the children were not very representative of the population, with 40% European, 36% Maori, and 25% Pacific (and no Asian children), and 44% from the lowest three school deciles. You can probably expect these children to have somewhat different experiences in terms of exposure to alcohol from Asian children and possibly from those in the middle four deciles (that made up just 19% of the sample).

Third, the sample size was small - only 168 children, of which only 56 children made at least one trip to a supermarket. So, to say that "85 per cent of children were exposed to alcohol in Wellington supermarkets" (the quote from the Stuff story) is just plain wrong. It was only 85 percent of 56 of the 168 children, or 29 percent of the full sample.
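To make the arithmetic explicit (assuming the reported 85 per cent corresponds to roughly 48 of the 56 supermarket visitors):

```python
# Of 168 children, only 56 visited a supermarket during the study period.
# If 85 per cent of those 56 were exposed (roughly 48 children), the share
# of the full sample that was exposed is far below the headline figure.
total_children = 168
supermarket_visitors = 56
exposed = round(0.85 * supermarket_visitors)

share_of_full_sample = exposed / total_children
print(exposed)                            # 48
print(round(share_of_full_sample * 100))  # 29 (per cent)
```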

They then base a quantitative analysis on only a subset of this already small dataset, which unsurprisingly shows nothing statistically significant. The small sub-sample means that any statistical analysis is going to be underpowered.

However, the biggest problem with the study is their misunderstanding of the practical aspects of the legal changes to the way supermarkets sell alcohol, after the implementation of the Sale and Supply of Alcohol Act 2012 (SSAA). The Act came into force on 19 December 2013, and restricted supermarkets and grocery stores to selling alcohol in a single area. The appropriate section of the Act is s112(5):
The authority or committee must describe an alcohol area within the premises only if, in its opinion,—
(a) it is a single area; and
(b) the premises are (or will be) so configured and arranged that the area does not contain any part of (or all of)—
(i) any area of the premises through which the most direct pedestrian route between any entrance to the premises and the main body of the premises  passes; or
(ii) any area of the premises through which the most direct pedestrian route between the main body of the premises and any general point of sale passes.
The authors are correct when they note that supermarkets had a period of up to eighteen months in which to implement such single areas. However, that period didn't commence when the legislation became operative, but would commence on the date when each supermarket was granted a licence renewal. To extend this further, under the direction of the Alcohol Regulatory and Licensing Authority, all District Licensing Committees (the initial decision-makers on licensing issues) were asked to delay the hearing of any supermarket licence applications until after the appeal of the first such decision (known as the Vaudrey decision) was determined. That did not happen until earlier this year, and so the first supermarkets subject to single alcohol areas would not have been operating these areas until earlier this year (although some may have voluntarily imposed such conditions on themselves earlier, in practice few have done so).

So effectively, none of the supermarkets that the children in this study visited in 2014-15 were subject to the new laws in terms of single alcohol areas. That means that the analysis in the research paper, where the authors compared exposure to alcohol in one supermarket, before and after a change in the supermarket's configuration, tells us absolutely nothing about whether the new law has had any effect on exposure of children to alcohol in supermarkets. That analysis was based on only sixteen visits (eleven in 2014, and just five in 2015). Which means it is a huge over-step for the authors to conclude that:
In a case study within this research, the current 2012 SSAA was ineffective at preventing children's exposure to alcohol marketing at supermarkets.
You simply can't tell whether that is the case based on this research. They could re-run their research now that many supermarkets are actually implementing these areas, and see if things are better (but in that case they should use a bigger sample size than a few children if they are really genuine about quantitatively testing whether there is any impact). However, the authors clearly had their conclusions in mind before they began the study, because they also conclude that:
Banning alcohol sales in supermarkets appears to be the only way to prevent such exposure.
If your goal is to prevent any exposure of children to alcohol, then why bother going through the expense of doing this research? You could simply say: (1) Any exposure to alcohol is bad; (2) there is alcohol in supermarkets; therefore (3) to prevent any exposure of children to alcohol, you must ban alcohol sales from supermarkets. You wouldn't even need to dress it up as a research paper, since it is so obvious.

[HT: Marcus]

Sunday, 15 October 2017

You might be an economist if... Butter price edition

You might be an economist if a headline like this one from the New Zealand Herald on Thursday, "NZ butter now so expensive Kiwis are turning to French butter for baking", makes you angry. But not because you're asking "how dare they raise the price of New Zealand butter so that French butter is cheaper?", but because you're asking "how can anyone think that this is newsworthy enough to turn into a headline?".

New Zealand butter and French butter are substitutes. When one substitute increases in price, consumers will buy less of the now-relatively-more-expensive substitute (New Zealand butter), and more of the now-relatively-less-expensive substitute (French butter). Simple, yes. Headline stuff, not even on a slow news day.
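The substitution story can be sketched with a simple linear demand function. All the coefficients and prices below are made-up numbers, chosen only to illustrate the cross-price effect:

```python
# Illustrative linear demand for French butter with a cross-price term:
#   q_french = a - b * p_french + c * p_nz
# A positive c marks the two butters as substitutes: a higher price for
# NZ butter raises the quantity of French butter demanded.
def q_french(p_french, p_nz, a=100, b=8, c=5):
    return a - b * p_french + c * p_nz

# NZ butter price rises from $5 to $7 (French price unchanged at $4):
print(q_french(4.0, 5.0))  # 93.0
print(q_french(4.0, 7.0))  # 103.0
```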

Are we New Zealand butter consumers supposed to rise up against the oppression of New Zealand butter sellers as a result of this article? You know what? The dairy companies selling New Zealand butter don't care. They're receiving a higher price for selling their butter on the world market. If New Zealand consumers want to buy French butter instead because it's cheaper, then that's fine by the dairy companies.

The article does leave one mystery open though, when it explains why New Zealand butter has increased in price:
According to ASB economist Nathan Penny, demand for butter skyrocketed worldwide after scientists debunked the link between animal fats and heart disease.
That makes sense. When consumer preferences shift towards a good (like butter), demand increases, and the equilibrium price increases. What makes less sense is: why hasn't the increase in global demand for butter led to an increase in the price of French butter as well?

Friday, 13 October 2017

Nobel Prize for Richard Thaler

The 2017 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (aka Nobel Prize in Economics) has been awarded to Richard Thaler of the University of Chicago, "for his contributions to behavioural economics". Some excellent summaries of Thaler's work can be found in this New York Times article, or this Economist article. Marginal Revolution has much more detail (here and here).

I think this award was very well-deserved. I've been using Thaler's work in the first topic of my ECON110 class for many years, especially his characterisation of decision-makers as quasi-rational, rather than purely rational. We even refer to some of his really early research, on mental accounting among buyers of pizza, in the first tutorial.

His book, "Misbehaving: The Making of Behavioral Economics" is on my list of books to read before the end of this summer, so you can expect a book review on that in due course.

Finally, I really liked this bit from the New York Times article:
“In order to do good economics, you have to keep in mind that people are human,” Professor Thaler said at a news conference after the announcement.
Asked how he would spend the prize money of about $1.1 million, Professor Thaler replied, “This is quite a funny question.” He added, “I will try to spend it as irrationally as possible.”

Tuesday, 10 October 2017

This couldn't backfire, could it?... Europe-Libya refugee agreement edition

There's some classic unintended consequences coming soon from the new agreement between Italy and Libya on preventing refugee migration to Europe. Jalel Harchaoui and Matthew Herbert report in the Washington Post:
The seas off western Libya have been quiet since late July. Before that, they swarmed with smugglers’ boats overfilled with migrants, mostly sub-Saharan Africans heading for Europe. From 23,000 migrants per month, the flow of arrivals has slowed to a trickle.
The migrants are accumulating on Libya’s coast and many are incarcerated in opaque circumstances. Their movement has been stymied by militias, who have turned on the northbound flow of migrants they once profited from. Deep in the southern desert, emergent militia groups evince the goal of closing the border with Niger and Chad to migrants moving north — attempting to patrol areas that none of Libya’s three rival governments ever secured.
Motivating the Libyan militias’ newfound zeal for blocking migrant movement is a new policy spearheaded by the Italian government and embraced by the European Union. The approach relies on payment to militias willing to act as migrant deterrent forces. Italian government representatives use intermediaries such as mayors and other local leaders to negotiate terms of the agreements with the armed groups. They also build local support in the targeted areas by distributing humanitarian aid.
What happens when you offer to pay Libyan militias for detaining migrant refugees before they reach Europe? Harchaoui and Herbert suggest that it empowers nonstate armed groups, stunts efforts at building credible security for the future, and may breed conflict. All of that is probably true. But they fail to recognise one of the other unintended consequences, which relates to the incentives the payments create.

When you offer to pay Libyan militias for detaining migrant refugees, you get more detention of 'migrant refugees'. That doesn't sound bad - it sounds like what you intended. Until you take a closer look at who the 'migrant refugees' are who are being detained. Maybe some of them are genuine refugees who were intending to migrate to Europe. But catching those refugees may be costly for the militia (especially if the militia are being paid by the refugees to help them get to Europe). Better then to find some non-paying disadvantaged group and round them up to be detained. Or perhaps strike a deal with some group who can pose as refugees and split the payment with the militia? There are many ways for the militia to game this system, where the refugee flows towards Italy would barely fall, while the payments to the militias greatly increase. This is Sudanese slave redemption all over again (see my post on that topic here). Or Project Phoenix (but probably without the killing).

One alternative may be not to pay the militias for the number of migrants they detain, but for the reduction in the number of migrants actually reaching Italy. Although, if you do that you might not want to find out how the militia reduce the number of refugees reaching Italy. And you'd have to find some way of allocating the payment between competing militias. And there would be a free riding problem, since militias might then get paid even though they were doing nothing to reduce refugee flows. Aligning incentives without unintended consequences is hard!

Monday, 9 October 2017

David Roberts on why he avoids talking about overpopulation

In the last week of ECON100 lectures, we talked about economic sustainability and, among other things, the economics of climate change. So, this recent article by David Roberts caught my attention. In the article, Roberts (an environmental journalist) explains why he avoids talking about population:
Anyone who’s ever given a talk on an environmental subject knows that the population question is a near-inevitability (second only to the nuclear question). I used to get asked about it constantly when I wrote for Grist — less now, but still fairly regularly.
I thought I would explain, once and for all, why I hardly ever talk about population, and why I’m unlikely to in the future...
It is high risk — very, very easy to step on moral landmines in that territory — with little reward.
And where talk of population control is rarely popular (for good reason), female empowerment and greater equality are a) goals shared by powerful preexisting coalitions, b) replete with ancillary benefits beyond the environmental, and c) unquestionably righteous.
So why focus on the former when the latter gets you all the same advantages with none of the blowback? That’s how I figure it anyway.
I've blogged before about my research on climate change and population, but from the alternative angle - how climate change affects the population distribution, and not how population affects climate change. Roberts operates in a more fraught space, and appears to negotiate it in an intelligent way. I encourage you to read his article, and follow up especially on the Drawdown Project (which Roberts has written about here), which "maps, measures, models, and describes the 100 most substantive solutions to global warming". You can jump straight to the rankings here. Spoiler: You may be surprised at which solutions rank #6 and #7, but potentially more surprised by the solutions that rank in the bottom five.

[HT: Marginal Revolution]

Sunday, 8 October 2017

Hospital emergency departments follow Goodhart's law

Goodhart's law states that "When a measure becomes a target, it ceases to be a good measure". In other words, when you reward (or punish) behaviour based on some targeted measure, people will game the system to ensure you get more (or less) of whatever is being measured, regardless of whether it is what you intended. Which brings us to this story on New Zealand hospital emergency departments from earlier this week:
Wait times dropped after emergency department time targets were introduced but a report has found some hospitals shuffled patients around just to meet the target.
A University of Auckland-led study published in BioMed Central Health Services Research studied emergency department (ED) waiting times at four New Zealand hospitals between 2006 and 2012.
With hospitals under pressure, a target measure was introduced in an effort to minimise crowding, which left some patients in hospital corridors.
Official DHB reports found most EDs met the 95 per cent target to be seen, treated or discharged within six hours. However, the introduction of "short-stay units" in the last 10 years has seen researchers question those reports.
Hospitals record the length of stay in EDs, but shifting patients into the short-stay units isn't counted in reported ED figures...
Associate Professor of the University of Auckland's School of Population Health Dr Tim Tenbensel said moving patients to the short-stay-units was reasonable in most cases...
"Having patients in these short-stay units is certainly preferable to having them wait in hospital corridors as was common before 2009.
"However, we know from our interviews that there were some instances where the only reason patients were transferred to short-stay was to avoid breaching the target." 
The most surprising thing about this was that no one appears to have foreseen this possibility before the target was introduced. Every time a target is introduced, someone really needs to think about the answer to the question, "If I had to meet this target, what would be the lowest-cost way for me to do so?", since that's effectively what the decision-makers faced with meeting the target are going to do. I guess we are fortunate that the unintended consequence in this case didn't make things any worse.

Thursday, 5 October 2017

CORE on 'missing women in economics'

Homa Zarghamee (Barnard College) recently posted two interesting articles on the CORE Economics blog, on what CORE are calling 'missing women in economics' (if you don't know about the CORE Project, you can read more about it here). In the first article, Zarghamee essentially outlines the state of knowledge on the gender gap in economics, and in the second article, she makes some excellent suggestions on changes to the way economics is taught that might attract more women to the discipline. These included:

  1. Start with social issues and use theory as a tool
  2. Incorporate behavioural and experimental economics
  3. Don’t assume your treatment of students is unbiased!
  4. Highlight the achievements of female economists
Both articles contain more detail than it is feasible to excerpt here. If you're interested in the gender gap in economics, I recommend reading both of them, as well as their earlier article in The Conversation, which starts with their estimate of 300,000 missing women in economics (in the U.S. alone).

As I've mentioned before, I have a Summer Research Scholarship student working on this topic over the summer, and I hope to share some of their results early next year.

Wednesday, 4 October 2017

Natural disasters and news media bias

In ECON110, we discuss the economics of news media bias. In discussing bias, I explicitly discuss sensationalism (bias of the exceptional over the ordinary), recency bias (bias of the new over the status quo), and exaggerated influence of minority views (bias by “fair” representation of both sides of the argument). We then go on to discuss how the 'normal' operation of the media market introduces bias into the way news stories are reported. However, we don't really go into much detail on the consequences of news media bias.

A recent paper by Thomas Eisensee and David Strömberg (both University of Stockholm), published in the Quarterly Journal of Economics (ungated version here), provides an excellent example of the consequences of media coverage. In the paper, Eisensee and Strömberg look at how media coverage of natural disasters outside the U.S. affects the likelihood of disaster relief being given, using a dataset of 5,212 disasters over the period from 1968 to 2002. Specifically, they look at how news coverage of natural disasters is affected by coverage of other news stories (e.g. Olympics coverage), and how that affects whether or not the U.S. declares a natural disaster and provides aid. The news data is based on television evening news stories (ABC, CBS, NBC, and CNN).

They found that:
...2.4 extra minutes spent on the first three news segments (two standard deviations) decrease the probability that a disaster is covered in the news by four percentage points and the probability that the disaster receives relief by three percentage points. Recall that around 10 percent of all disasters are covered in the news and that 20 percent receive relief, and so the effects are sizeable... The estimated coefficients imply that a disaster occurring during the Olympics is 5 percent less likely to be in the news and 6 percent less likely to receive relief, on average.
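To see why the authors call these effects sizeable, it helps to express the percentage-point drops relative to the baseline rates they quote (around 10 per cent of disasters covered, and 20 per cent receiving relief):

```python
# A 4-percentage-point fall in coverage probability against a ~10% baseline,
# and a 3-point fall in relief probability against a ~20% baseline
# (figures quoted from Eisensee and Stromberg above).
coverage_base, coverage_drop = 0.10, 0.04
relief_base, relief_drop = 0.20, 0.03

print(round(coverage_drop / coverage_base, 2))  # 0.4 -> a 40% relative fall in coverage
print(round(relief_drop / relief_base, 2))      # 0.15 -> a 15% relative fall in relief
```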
The effects are especially large for disasters that are "marginally newsworthy" (those that would appear in the news if and only if there was little else happening that was newsworthy). More severe disasters (measured by the number of people killed or 'affected') are both more likely to receive news coverage and more likely to receive relief. However, the effects differ by the type and location of the disaster:
...we have computed the casualties ratio that would make media coverage equally likely, all else equal (controlling for the same factors as in the fixed effects regression). For example, for every person that dies in a volcano disaster, 38,920 people must die of food shortage to receive the same expected media coverage. The conclusion is that media induces extra relief to volcano and earthquake victims, at the expense of victims of epidemics, droughts, cold waves and food shortages...
The estimates suggest that it requires 45 times as many killed in an African disaster to achieve the same probability of media coverage as for a disaster in Europe. We conclude that media coverage induces extra U. S. relief to victims in Europe and on the American continent, at the expense of victims elsewhere.
In the Pacific, it requires 91 times as many killed to achieve the same probability of media coverage as for a disaster in Europe. In ECON110, we conclude that news media bias reflects the underlying bias in the news-consuming public (that prefers to consume news that is consistent with their own preferences and biases). I guess the U.S. news-consuming public cares less about people who die in famines or from natural disasters in the Pacific?

[HT: Marginal Revolution, but also see the excellent write-up in Our World in Data]

Tuesday, 3 October 2017

Trade and the Atlas of Economic Complexity

Last week in ECON100 we covered the gains from trade. The simple model we employ is essentially a model based on Ricardian trade, which assumes that each country specialises in producing (and exporting) goods that they have comparative advantage in producing (goods that they can produce at a lower opportunity cost), and imports goods that they have comparative disadvantage in producing (goods that they produce at a higher opportunity cost). However, the real world is significantly more nuanced than this simple model, as Noah Smith noted recently:
Most academic models of international trade are pretty simplistic. Some of these models are surprisingly effective for making certain types of predictions -- for example, economists are very good at predicting how much different countries will trade with each other. But they’re not so good at predicting what kind of things the countries will specialize in, which country will have a trade deficit or surplus, how trade will affect growth, or which workers and businesses will benefit from trade...
Now, a number of economists are working on new empirical approaches that take into account the huge variety and complicated connections between the products and services that get traded across international borders.
Two such economists are Ricardo Hausmann of Harvard’s Kennedy School and Cesar Hidalgo of Massachusetts Institute of Technology. They and their research team have a theory that the more different products a country makes, the better positioned it is to grow. This idea runs counter to the conventional wisdom -- and the predictions of many standard models -- that different countries hyperspecialize in only a few goods and services. According to Hausmann and Hidalgo, countries are better off when they can make a multitude of things. Countries such as Saudi Arabia that rely on a single product will perform worse, all else equal, than countries such as Japan that can make almost anything they want.
The economists claim that their so-called economic complexity index is much better at predicting long-term economic growth than other forecasting methods based on things like the level of regulation or the amount of investment in education. They recently put out a report predicting that China’s growth will slow over the next decade, while India’s will remain rapid.
Hausmann and Hidalgo's Atlas of Economic Complexity is well worth looking at. There is a wealth of trade data, and excellent visualisations (if you click on 'Visualizations' in the top bar). For instance, here's New Zealand's exports by category for 2015 (it's much easier to see at the website):

I was surprised that raw aluminium was as much as 1.7% of exports. And here's a similar visualisation of export destinations (again for New Zealand in 2015; here's the direct link):

No surprises about China, Australia, the United States and Japan being the biggest export destinations, but Algeria (1.2%) and Egypt (1.0%) were a bit surprising to me. Anyway, there's lots more to explore on the site, and lots of surprises (try playing the 'which country is the biggest exporter of *some random product*?' game with your family or friends). Minutes of fun, guaranteed. Enjoy!

Sunday, 1 October 2017

The magicians' dilemma and repeated games

Marginal Revolution University's latest video covers game theory, which is timely given that we covered this only a couple of weeks ago in my ECON100 class:

Unfortunately, like most treatments of game theory in principles of economics classes, it stops well short of what is possible. So, I want to take it further. The video does a good job of explaining dominant strategies, and correctly identifies the one Nash equilibrium. To confirm that this is the only Nash equilibrium, we should use the 'best response method'. To do this, we track: for each player, for each strategy, what is the best response of the other player. Where both players are selecting a best response, they are doing the best they can, given the choice of the other player (this is the definition of Nash equilibrium). Here's the game from the video (but note that I've made the payoffs easier to track by making more explicit which player gets which payoff):

And here are the best responses:
  1. If Al cheats, Bob's best response is to cheat (since $6000 is better than $1000) [we track the best responses with ticks, and not-best-responses with crosses; Note: I'm also tracking which payoffs I am comparing with numbers corresponding to the numbers in this list];
  2. If Al promises, Bob's best response is to cheat (since $15,000 is better than $10,000);
  3. If Bob cheats, Al's best response is to cheat (since $6000 is better than $1000); and
  4. If Bob promises, Al's best response is to cheat (since $15,000 is better than $10,000).
Note that Al's best response is always to cheat. This is her dominant strategy. Likewise, Bob's best response is always to cheat, which makes it his dominant strategy as well. The single Nash equilibrium occurs where both players are playing a best response (where there are two ticks), which is where both magicians choose to cheat.
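The best response method is easy to mechanise. Here's a minimal sketch that enumerates best responses using the payoffs listed above, and confirms that (cheat, cheat) is the only profile where both magicians are playing a best response:

```python
# Payoffs from the magicians' game: (Al's payoff, Bob's payoff)
# for each (Al's strategy, Bob's strategy) profile.
payoffs = {
    ("cheat",   "cheat"):   (6000, 6000),
    ("cheat",   "promise"): (15000, 1000),
    ("promise", "cheat"):   (1000, 15000),
    ("promise", "promise"): (10000, 10000),
}
strategies = ["cheat", "promise"]

def best_responses_al(bob_strategy):
    """Al's best response(s) to a given Bob strategy."""
    best = max(payoffs[(a, bob_strategy)][0] for a in strategies)
    return {a for a in strategies if payoffs[(a, bob_strategy)][0] == best}

def best_responses_bob(al_strategy):
    """Bob's best response(s) to a given Al strategy."""
    best = max(payoffs[(al_strategy, b)][1] for b in strategies)
    return {b for b in strategies if payoffs[(al_strategy, b)][1] == best}

# A Nash equilibrium is a profile where both players are playing a best response.
nash = [(a, b) for a in strategies for b in strategies
        if a in best_responses_al(b) and b in best_responses_bob(a)]
print(nash)  # [('cheat', 'cheat')]
```

Because each player's best response is "cheat" against either choice by the other player, cheating is a dominant strategy for both, and the single Nash equilibrium pops out.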

However, that still isn't the end of this. That solution is for a non-repeated game. A non-repeated game is played once only, after which the two players go their separate ways, never to interact again. Most games in the real world are not like that - they are repeated games. In a repeated game, the outcome may be different from the equilibrium of the non-repeated game. In this case, the two magicians probably make their choices every week, interacting with each other many times.

The 'best' choice for each magician in a repeated game may be to promise. That makes both magicians better off. However, this outcome relies on each magician being able to trust the other. How would they ensure trust? By agreeing to play the promise strategy, and then following through on the agreement. Each magician would develop a reputation for cooperation, and the other magician would then trust them. However, if either magician were to cheat, that trust would then be broken.

Robert Axelrod wrote, in The Evolution of Cooperation, about repeated prisoners' dilemmas (like the game presented in the MRU video). He found that the most successful strategy for sustaining cooperation was tit-for-tat. That involves cooperating in the first play of the game, then copying the other player's previous move from then onwards. So, if the other player cheats, you would then cheat in the next play of the game, thereby punishing them. And if they cooperate, you cooperate in the next play of the game, thereby rewarding them.

If you don't think rewards provide enough incentive, you might try an alternative - the grim strategy. This starts off the same as tit-for-tat (with cooperation), but when the other player cheats, you start cheating and never go back to cooperating again. This maximises the punishment for cheating. Of course, it only works if the other player knows (and credibly believes) that you are playing the grim strategy.
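Both strategies are easy to simulate. This sketch replays the magicians' game for ten rounds, using the payoffs from the game above ('promise' is cooperation, 'cheat' is defection):

```python
# Simulating the repeated magicians' game. Payoffs are keyed as
# (my move, their move) -> my payoff, taken from the game in the post.
PAYOFF = {("cheat", "cheat"): 6000, ("cheat", "promise"): 15000,
          ("promise", "cheat"): 1000, ("promise", "promise"): 10000}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the other player's previous move.
    return their_history[-1] if their_history else "promise"

def grim(my_history, their_history):
    # Cooperate until the other player cheats once, then cheat forever.
    return "cheat" if "cheat" in their_history else "promise"

def always_cheat(my_history, their_history):
    return "cheat"

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        total_a += PAYOFF[(move_a, move_b)]
        total_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return total_a, total_b

# Two tit-for-tat magicians sustain cooperation and earn $10,000 per round...
print(play(tit_for_tat, tit_for_tat, 10))   # (100000, 100000)
# ...while a habitual cheater facing grim gains once, then both are punished.
print(play(always_cheat, grim, 10))         # (69000, 55000)
```

Over ten rounds, the cheater's one-off gain of $15,000 is swamped by the $6,000-per-round punishment that follows, which is exactly why reputation and trust can sustain the cooperative outcome.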

So, while the MRU video alludes to more nuance in game theory, you can now see there is a lot more to it than a simple Nash equilibrium.

Saturday, 30 September 2017

Extrapolating linear time trends... Male teacher edition

I recently read this article from The Conversation, by Kevin McGrath and Penny Van Bergen (both Macquarie University). In the article, the first two sentences read:
Male teachers may face extinction in Australian primary schools by the year 2067 unless urgent policy action is taken. In government schools, the year is 2054.
This finding comes from our analysis of more than 50 years of national annual workplace data – the first of its kind in any country.
Take a look at The Conversation article. It's hilarious. The authors take time series data on the proportion of teachers in Australia who are male, and essentially fit a linear time trend to the data (and in some cases a quadratic time trend also), then extrapolate. I took a look at the paper, which was just published in the journal Economics of Education Review. Any half-decent second-year quantitative methods student would be able to do the analysis from that paper, but most would not then extrapolate and conclude:
Looking forward, it is not possible to determine whether the decreasing representation of male teachers in Australia will continue unabated. If so, however, the situation is dire. In primary schools Australia-wide, for example, male teachers were 28.49% of the teaching staff in 1977. Taking the negative linear trend observed in male primary teaching staff and extrapolating forward, it is possible to determine that Australian male primary teachers will reach an ‘extinction point’ in the year 2067. In Government primary schools, where this decline is sharpest, this ‘extinction point’ comes much sooner – in the year 2054.
There is nothing to suggest a linear time trend is going to continue into the future. Certainly, when you have a variable that is bounded by 0% and 100% (like the proportion of teachers who are male), it seems unlikely that it will behave in any way linearly as it gets close to the extremes, even if it has behaved linearly in past data. Here's the key data from their paper (there's also a more interactive version at The Conversation):

If you're set on trying out polynomial time trends, why stop at a quadratic? The primary school data (the lower line in the diagram above) looks like it might be a cubic since it starts off upward sloping then starts going downwards, but at a decreasing rate. I manually scraped their data from the article in The Conversation for male primary teachers, then ran different polynomial time trends through it. The linear time trend had an R-squared of 0.939 (close to the 0.95 they report in the paper), a quadratic had an R-squared of 0.939, and a cubic increased this to 0.982. In the cubic, all three variables (time, time-squared, and time-cubed) are highly statistically significant. Moreover, this model has a much higher R-squared than their quadratic, so it is more predictive of the actual data. In the cubic model (shown below), the forecast shows an increase in male teacher proportions from about now!
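As an aside, if you want to replicate this kind of exercise yourself, here's a minimal sketch that fits linear and cubic time trends by ordinary least squares, using only the standard library. The data are made up to mimic the shape described above (rising, then falling); they are not McGrath and Van Bergen's actual series, so the R-squared values won't match those reported.

```python
# Fitting polynomial time trends by least squares (normal equations), pure Python.
# The data are invented to mimic a share that rises then falls - NOT the real series.
xs = list(range(11))  # years 0..10
ys = [28.0, 29.0, 29.3, 28.9, 27.8, 26.3, 24.6, 22.9, 21.5, 20.6, 20.3]

def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations X'Xb = X'y."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):  # Gaussian elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coefs = [0.0] * n  # back-substitution; coefficients from constant term up
    for r in range(n - 1, -1, -1):
        coefs[r] = (b[r] - sum(A[r][c] * coefs[c] for c in range(r + 1, n))) / A[r][r]
    return coefs

def r_squared(xs, ys, coefs):
    preds = [sum(c * x ** i for i, c in enumerate(coefs)) for x in xs]
    mean = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - mean) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

r2_linear = r_squared(xs, ys, polyfit(xs, ys, 1))
r2_cubic = r_squared(xs, ys, polyfit(xs, ys, 3))
print(round(r2_linear, 3), round(r2_cubic, 3))
```

Note that a cubic will always fit at least as well as a linear trend on the same data (the linear model is nested inside it), which is part of why a higher R-squared alone is no licence to extrapolate.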

Now, I'm not about to use my model to suggest that the proportion of male primary school teachers would accelerate to 100% (if you extrapolate the cubic model, this happens by 2049, which is sooner than the proportion reaches zero under McGrath and Van Bergen's model extrapolation!), but I could. And then I would be just as wrong as McGrath and Van Bergen. The British statistician George Box once said: "All models are wrong, but some are useful". In this case, the linear time trend model of McGrath and Van Bergen and the cubic model I've shown here are both wrong and not useful. Hopefully people haven't taken McGrath and Van Bergen's results too seriously. The gender gap in teaching is potentially a problem, but trying to show this by extrapolating time trends using a very simple model is not helpful.

Thursday, 28 September 2017

Pharmac vs. Keytruda - The sequel

Back in 2015 I wrote a post about Pharmac's decision not to fund the drug Keytruda for melanoma patients. Keytruda is back in the news this week:
A 44-year-old father of four given six to nine months to live when he was diagnosed with lung cancer has seen his tumour halve in size thanks to a new treatment he describes as a "miracle drug".
Patients and advocates are calling on Keytruda to be publicly funded for lung cancer, the country's biggest form of cancer death which claims five lives a day, because many patients could not afford the tens of thousands of dollars required to pay for it...
Pharmac director of operations Sarah Fitt said they had received funding applications for Keytruda, also known as pembrolizumab, for the first and second-line treatment of advanced non-small cell lung cancer and would continue to review evidence.
Clinical advisers would now review extra information requested to decided (sic) on funding for it as a first-line treatment.
I'll simply reiterate some of the points that I made in that 2015 post (and note that this issue is quite timely given that my ECON110 class covered the health economics topic just this week).

It is worth starting by noting that Pharmac has a fixed budget to pay for pharmaceuticals. If it agrees to pay for Keytruda for lung cancer, at a cost of tens of thousands of dollars per patient, then that is tens of thousands of dollars that cannot be spent on pharmaceuticals for other patients. There is an opportunity cost to funding this treatment.

Now, that problem could be mitigated by the government increasing Pharmac funding by enough to pay for the Keytruda costs. But if Pharmac receives additional funding, is Keytruda the best use of that funding? Are there other treatments that could be funded instead? Even with extra resources, Pharmac's budget would still be limited, so how should we decide whether Keytruda is the best use of that additional funding?

Fortunately, there is a solution to these tricky questions: work out which treatments are most cost-effective and fund those first. Health economists use cost-effectiveness analysis to measure the cost of providing a given amount of health gains. If the health gains are measured in a common unit called the Quality-Adjusted Life Year (QALY), then we call it cost-utility analysis (you can read more about QALYs here, as well as DALYs - an alternative measure). QALYs are essentially a measure of health that combines length of life and quality of life.

Using the gain in QALYs from each treatment as our measure of health benefits, a high-benefit treatment is one that provides more QALYs than a low-benefit treatment, and we can compare them in terms of the cost-per-QALY. The superior treatment is the one that has the lowest cost-per-QALY.
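To make the cost-per-QALY logic concrete, here's a minimal sketch with entirely made-up treatment costs and QALY gains (these are not actual Pharmac figures). With a fixed budget, funding the most cost-effective treatments first maximises the total health gain:

```python
# Ranking treatments by cost-per-QALY. All numbers are invented for illustration.
treatments = {
    "Treatment A": {"cost": 40000, "qalys": 2.0},   # $20,000 per QALY
    "Treatment B": {"cost": 90000, "qalys": 1.5},   # $60,000 per QALY
    "Treatment C": {"cost": 15000, "qalys": 1.5},   # $10,000 per QALY
}

def cost_per_qaly(name):
    return treatments[name]["cost"] / treatments[name]["qalys"]

def fund(budget):
    """Fund the most cost-effective treatments first, until the budget runs out."""
    funded, remaining = [], budget
    for name in sorted(treatments, key=cost_per_qaly):
        if treatments[name]["cost"] <= remaining:
            funded.append(name)
            remaining -= treatments[name]["cost"]
    return funded

# With a $60,000 budget: C ($10k/QALY) and A ($20k/QALY) are funded; B is not.
print(fund(60000))
```

A treatment like "B" here is not funded, not because it provides no benefit, but because the same dollars buy more QALYs elsewhere - which is the opportunity cost argument in miniature.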

You might disagree that cost-effectiveness is a suitable way to allocate scarce health funding resources. I refer you to the Australian ethicist Toby Ord, who makes an outstanding moral argument in favour of allocating global health resources on the basis of cost-effectiveness (I recommend this excellent essay).

Finally, here's what I wrote about funding Keytruda in 2015 (for melanoma, but the same points apply in terms of Pharmac funding the drug for lung cancer):
Of course, it would be good for the melanoma patients who would receive Keytruda free or heavily subsidised. But, in the context of a limited funding pool for Pharmac, forcing the funding of Keytruda might mean that savings need to be made elsewhere [*], including treatments that provide a lower cost-per-QALY. So at the level of the New Zealand population, some QALYs would be gained from funding Keytruda, but even more QALYs would be lost because of the other treatments that would no longer be able to be funded.
Unfortunately, New Zealand doesn't have an equivalent of the UK's National Institute for Health and Care Excellence (NICE), which calculates cost-effectiveness of potential treatment options for the National Health Service and ranks them against an objective standard cost-per-QALY (of £30,000) to work out which options should or should not be funded. That makes so many of Pharmac's decisions subject to political interference, which really could end up costing us in terms of overall health and wellbeing.

Tuesday, 26 September 2017

Tesla, damaged goods and price discrimination

I'm a bit late to this story, as reported in TechCrunch:
Tesla has pushed an over-the-air update to some of its vehicles in Florida that lets those cars go just a liiiittle bit farther, thus helping their owners get that much farther away from the devastation of Hurricane Irma.
Wondering how that’s even possible?
Up until a few months ago, Tesla sold a 60kWh version of its Model S and Model X vehicles — but the battery in those cars was actually rated at 75kWh. The thinking: Tesla could offer a more affordable 60kWh version to those who didn’t need the full range of the 75kWh battery — but to keep things simple, they’d just use the same 75kWh battery and lock it on the software side. If 60kWh buyers found they needed more range and wanted to upgrade later, they could… or if Tesla wanted to suddenly bestow owners with some extra range in case of an emergency, they could.
And that’s what’s happening here.
Price discrimination occurs when different consumers (or groups of consumers) are charged different prices for the same good or service, and where the difference in price does not arise because of a difference in cost. This is clearly price discrimination, since the same battery was selling for different prices to different customers (even though the customers didn't know this!). The excellent Jodi Beggs also covered the story at Economists Do It With Models:
First, I guess I should point out that this [the update that increased battery capacity for Tesla owners in Florida] is a nice thing to do. But…you mean to tell me this whole time you were just sandbagging some of the batteries???? That’s…bold, among other things. I hope the warm fuzzies you get for this gesture outweigh whatever customer fury may be heading in your direction…(personally, I can’t decide whether I would be more irritated if I had or hadn’t paid for the better battery)...
People typically aren’t thrilled when they hear the phrase “price discrimination,” since they seem to assume it’s just another fun way for a company to rip them off. Not all of these customers are wrong- it’s entirely possible that some customers pay higher prices than they would otherwise if a company decides to price discriminate. That said, it’s almost always the case that price discrimination results in lower prices for some customers, and it’s even possible that price discrimination results in lower prices for some customers without subjecting any customers to higher prices.
The interesting thing about the story is that Tesla was effectively selling the same product to both groups of customers, but purposely degrading the performance of the battery for the cheaper version of the cars. Alex Tabarrok on Marginal Revolution notes:
But why would Tesla damage its own vehicles?
The answer to the second question is price discrimination! Tesla knows that some of its customers are willing to pay more for a Tesla than others. But Tesla can’t just ask its customers their willingness to pay and price accordingly. High willing-to-pay customers would simply lie to get a lower price. Thus, Tesla must find some characteristic of buyers that is correlated with high willingness-to-pay and charge more to customers with that characteristic...
The classic paper in this literature is Damaged Goods by Deneckere and McAfee who write:
"Manufacturers may intentionally damage a portion of their goods in order to price discriminate. Many instances of this phenomenon are observed. It may result in a Pareto improvement." 
Note the last sentence–damaging goods can be beneficial to everyone!
It makes sense for firms to damage their own goods, if that allows them to effectively price discriminate - to charge high prices to those who are willing to pay a high price (or who have relatively inelastic demand) for the undamaged good, and charge low prices to those who are not willing to pay a high price (or who have relatively elastic demand) for the undamaged good. But this will only work if those with high willingness-to-pay (or relatively inelastic demand) would not be attracted by the lower-priced damaged goods. Which it appears was the case for Tesla. It remains to be seen what post-Irma fallout, if any, Tesla will face.
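A stylised numerical example shows why 'damaging' the cheaper version can pay. The willingness-to-pay figures below (in $000s) are invented for illustration, not Tesla's actual prices, and marginal cost is treated as zero since both versions use the same battery. The point is that the damaged version lets each buyer type self-select:

```python
# A stylised 'damaged goods' screening example. Willingness-to-pay (in $000s)
# is invented for illustration; marginal cost is zero (it's the same battery).
wtp = {
    "high": {"full": 80, "damaged": 55},
    "low":  {"full": 55, "damaged": 50},
}

def buys(buyer, prices):
    """Each buyer picks the version giving the highest surplus, if non-negative."""
    surplus, version = max((wtp[buyer][v] - p, v) for v, p in prices.items())
    return version if surplus >= 0 else None

def profit(prices):
    total = 0
    for buyer in wtp:  # one buyer of each type
        version = buys(buyer, prices)
        if version is not None:
            total += prices[version]
    return total

# Versioning: full battery at 74, damaged at 50. The high type strictly prefers
# the full version (surplus 6 vs 5); the low type takes the damaged one (0 vs -19).
versioned = profit({"full": 74, "damaged": 50})
single_high = profit({"full": 80})  # one price, set high: only the high type buys
single_low = profit({"full": 55})   # one price, set low: both buy, but cheaply
print(versioned, single_high, single_low)  # 124 80 110
```

Versioning beats either single price here ($124k vs $80k or $110k), but only because the damaged version is unattractive enough that the high-willingness-to-pay buyer doesn't switch down - exactly the incentive-compatibility condition described above.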

Sunday, 24 September 2017

Why study economics? Economics and computer science edition...

MIT is offering a new degree in economics and computer science, which again illustrates that there are lots of jobs available for economics graduates in the tech sector:
The new major aims to prepare students to think at the nexus of economics and computer science, so they can understand and design the kinds of systems that are coming to define modern life. Think Amazon, Uber, eBay, etc.
“This area is super-hot commercially,” says David Autor, the Ford Professor of Economics and associate head of the Department of Economics. “Hiring economists has become really prominent at tech companies because they’re filling market-design positions.”
Because these companies need analysts who can decide which objectives to maximize, what information and choices to offer, what rules to set, and so on, “companies are really looking for this skill set,” he says...
Asu Ozdaglar, the Joseph F. and Nancy P. Keithley Professor of Electrical Engineering and acting head of the Department of Electrical Engineering and Computer Science (EECS), says...
“If you’re thinking about humans making decisions in large-scale systems, you have to think about incentives,” she says. “How, for example, do you design rewards and costs so that people behave the way you desire?”
These issues will be familiar to any Uber user caught in a downpour. Suddenly, the cost of getting anywhere increases dramatically, which is also an incentive for Uber drivers to move toward the storm of demand. Surge pricing may be a scourge to customers, but it's also a way to match supply with demand — in this case, cars with riders.
Read more about the new degree at the link above. Fortunately, you don't need to go all the way to MIT to study economics and computer science. The University of Waikato has one of the top two economics departments in New Zealand, as well as a highly regarded computer science department.

[HT: Marginal Revolution]

Friday, 22 September 2017

Elections, temperature and the irony of the 2000 US presidential election

Last month, Jamie Morton wrote in the New Zealand Herald about this article (open access), by Jasper van Assche (Ghent University) and others, published in the journal Frontiers in Psychology back in June. In the article, van Assche et al. look at data from US presidential elections and temperature (specifically, they look at changes between elections in both those variables). They found that:
For each increase of 1°C (1.8°F), voter turnout increased by 0.14%.
Importantly though, there was also an effect on which party voters voted for. Specifically:
...although positive changes in temperature motivate some citizens to cast their votes for the non-system parties, they are an even stronger motivator for some citizens to vote for the incumbent government.
I found this bit from the final paragraph of the paper laughably ironic:
Another example concerns the 2000 presidential election. Based on our model, an increase of only 1°C (1.8°F) may have made Al Gore the 43rd United States President instead of George W. Bush, as Gore would have won in Florida.
That's right. There wasn't nearly enough climate change to make Al Gore president in 2000.

New Zealand goes to the polls tomorrow (although in reality, many voters have made their choice already). Will the incumbent National government be worrying about the weather forecast?

Thursday, 21 September 2017

Gary Becker on human capital

In ECON110 this week, we've been covering the economics of education. In this topic we theorise that, from the perspective of the individual, education is an investment in human capital. This 'human capital theory' comes from the work of Jacob Mincer and of 1992 Nobel Prize winner Gary Becker, who sadly passed away in 2014 (although it was Arthur Pigou who much earlier coined the term human capital). So it is timely that The Economist has had two excellent articles on Gary Becker and human capital over the last couple of months. It was the second one, from The Economist Explains, that caught my attention this week, but I think the earlier article from August is better, so I'll quote from that one:
...human capital refers to the abilities and qualities of people that make them productive. Knowledge is the most important of these, but other factors, from a sense of punctuality to the state of someone’s health, also matter. Investment in human capital thus mainly refers to education but it also includes other things—the inculcation of values by parents, say, or a healthy diet. Just as investing in physical capital—whether building a new factory or upgrading computers—can pay off for a company, so investments in human capital also pay off for people. The earnings of well-educated individuals are generally higher than those of the wider population...
Becker observed that people do acquire general human capital, but they often do so at their own expense, rather than that of employers. This is true of university, when students take on debts to pay for education before entering the workforce. It is also true of workers in almost all industries: interns, trainees and junior employees share in the cost of getting them up to speed by being paid less.
Becker made the assumption that people would be hard-headed in calculating how much to invest in their own human capital. They would compare expected future earnings from different career choices and consider the cost of acquiring the education to pursue these careers, including time spent in the classroom. He knew that reality was far messier, with decisions plagued by uncertainty and complicated motivations, but he described his model as an “economic way of looking at life”. His simplified assumptions about people being purposeful and rational in their decisions laid the groundwork for an elegant theory of human capital, which he expounded in several seminal articles and a book in the early 1960s.
His theory helped explain why younger generations spent more time in schooling than older ones: longer life expectancies raised the profitability of acquiring knowledge. It also helped explain the spread of education: advances in technology made it more profitable to have skills, which in turn raised the demand for education. It showed that under-investment in human capital was a constant risk: young people can be short-sighted given the long payback period for education; and lenders are wary of supporting them because of their lack of collateral (attributes such as knowledge always stay with the borrower, whereas a borrower’s physical assets can be seized).
So many of the things we covered in class this week are found there, including the decision about private investment in education, the credit constraints that low-income students face in borrowing towards their education costs (which is part of the rationale for a system of student loans), and one of the rationales for government involvement (that students would under-invest in their own education). Even though, as the article notes, behavioural economics has been used to attack the foundations of Becker's theories, on that last point I think behavioural economics actually makes the case stronger. One of the biases that behavioural economics has identified is present bias - quasi-rational decision-makers heavily discount the future (much more so than a standard time-value-of-money treatment would). Since the benefits of education happen in the future, those benefits are discounted greatly compared with the costs of education that occur in the present. So, quasi-rational people would tend to under-invest in education because they under-weight the value of the future benefits relative to the current costs.
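Present bias is often modelled with 'beta-delta' (quasi-hyperbolic) discounting, where all future payoffs get an extra one-off down-weighting by a factor beta. Here's a minimal sketch of the education decision with made-up numbers for the cost and benefits:

```python
# Beta-delta discounting of an education investment. The numbers are invented:
# pay 20,000 now, receive 2,000 a year for 40 years, starting next year.
COST_NOW = 20000
ANNUAL_BENEFIT = 2000
YEARS = 40
DELTA = 0.95  # standard per-period discount factor

def net_present_value(beta):
    """beta = 1 gives the standard exponential discounter; beta < 1 is
    present-biased (every future payoff gets an extra down-weighting)."""
    future = sum(ANNUAL_BENEFIT * DELTA ** t for t in range(1, YEARS + 1))
    return -COST_NOW + beta * future

print(round(net_present_value(1.0)))  # positive: the standard agent invests
print(round(net_present_value(0.5)))  # negative: the present-biased agent does not
```

With these (hypothetical) numbers, the same investment that looks clearly worthwhile to a standard discounter looks like a loss to a sufficiently present-biased one - which is the under-investment result in the paragraph above.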

The whole article (or both articles) is a good introduction to Becker's work on human capital. For a broader perspective on Becker's work, from the man himself, I highly recommend his 1992 Nobel lecture.

Tuesday, 19 September 2017

Book Review: Inequality - What Can Be Done?

Earlier this month, I finished reading "Inequality - What Can Be Done?" by the late Tony Atkinson, who sadly died at the start of the year. This book is thoroughly researched (as one might expect given it was written by one of the true leaders of the field) and well written, although the generalist reader might find some of it pretty heavy going. The book is also fairly Britain-centric, which is to be expected given that it has a policy focus, although there is plenty for U.S. readers as well. Unfortunately for those closer to my home, New Zealand rates only a few mentions.

Atkinson uses the book to outline his policy prescription for dealing with inequality (hence the second part of the title: "What can be done?"). This involves fifteen proposals, and five 'ideas to pursue'. I'm not going to go through all of the proposals, but will note that many of them are unsurprising. Others are clearly suitable for Britain, but would take much more work to be implemented in a different institutional context (that isn't to say that they wouldn't work in other contexts, only that they would be even more difficult to implement).

Atkinson also isn't shy about the difficulties with his proposals and the criticisms they might attract, and he addresses most of the key criticisms in the later chapters of the book. However, in spite of those later chapters, I can see some problems with some of the proposals that make me doubt whether they are feasible (individually, or as part of an overall package). For instance, Proposal #3 is "The government should adopt an explicit target for preventing and reducing unemployment and underpin this ambition by offering guaranteed public employment at the minimum wage to those who seek it". These sorts of guaranteed employment schemes sound like a good solution to unemployment on the surface, but they don't come without cost. I'm not just talking about the monetary cost to government of paying people the guaranteed wage. This guaranteed employment offer from the government might crowd out low-paying private sector employment, depending on the jobs that are on offer. Minimum-wage-level jobs are already unattractive for many people to work in (consider the shortage of workers willing to work in the aged care sector, even though there are many unemployed people available for such jobs). So in order to encourage the unemployed to take up the guaranteed work offer, these jobs would need to be more attractive than existing minimum-wage-level jobs in other ways. Maybe they will require less physical or mental effort, or maybe they will have hours of work that are more flexible or suitable for parents with young children. These non-monetary characteristics would encourage more of the unemployed to take up the guaranteed employment offer, but they might also induce workers in other minimum-wage-level jobs to become 'unemployed' in order to shift to the more attractive guaranteed work instead. Maybe. The system would need to be very carefully designed, and I don't think Atkinson fully worked through the incentive effects on this one.

Proposal #4 advocates for a living wage, which I've already pointed out only works well when not all employers offer the living wage, whereas a higher minimum wage would simply lower employment, as the latest evidence appears to show. Proposal #7 is to set up a public 'Investment Authority' (that is, a sovereign wealth fund) to invest in companies and property on behalf of the state, but this proposal's link to inequality reduction is pretty tenuous. In his justification for this proposal, I felt the focus on net public assets as a problem ignores the (very valuable) ability of the government to levy future taxes. So, it's not clear to me that low (or negative) net public assets are necessarily a problem that needs solving.

Finally, it is Proposal #15 that is most problematic for the book: "Rich countries should raise their target for Official Development Assistance to 1 per cent of Gross National Income". I'm not arguing against the proposal per se (in fact, I agree that rich country governments should be providing more development aid to poorer countries). But if the goal of these proposals is to reduce inequality in Britain, this proposal would have at best no effect. If the goal instead is to reduce global inequality, the policy prescription is quite different, and could be more effectively achieved by avoiding most (if not all) of the other proposals put forward in the book, and simply raising the goal in Proposal #15 from 1 percent to 2 percent of Gross National Income, or 3 percent, or 5 percent. None of the other proposals would be as cost-effective in reducing global inequality as would increasing development aid.

That's about all my gripes about the book (note that they only relate to four of the fifteen proposals). Overall it is worth reading, and I'm sure most people will find some things to take away from it. I certainly have a big page of notes that I'll be using to revise the inequality and social security topic for my ECON110 class that's coming up in a couple of weeks. In particular, there is an excellent discussion that explains changes in inequality over time, especially the increases that have happened across many countries since 1980 (an interesting place to start, since that period covers the 1980s through to the mid-1990s, when inequality really was increasing in New Zealand).

If you're looking for an easy introduction to the economics of inequality, this probably isn't the book for you. But if you're looking for a policy prescription, or ideas on policy, to deal with the problems of inequality, then this may be a good place to start.

Sunday, 17 September 2017

The Greens vs. Labour on carbon emissions, taxes and permits

Brian Fallow wrote an interesting article in the New Zealand Herald on Friday, contrasting the climate policies of Labour and the Greens. It was doubly interesting given that we had just covered this topic in ECON110 last week. Here's what Fallow wrote:
The climate change policies the two parties have recently released overlap a lot, in ways that distinguish them from National and the status quo.
But they are also at odds over which is the better way to put a price on emissions that will influence behaviour in the economy.
Labour wants to restore the emissions trading scheme (ETS), as designed by David Parker and enacted by the fifth Labour Government in the last few weeks of its ninth year in power, then promptly gutted by the incoming National Government.
But the Greens favour a tax on emissions, the proceeds of which would be used to plant trees on erosion-prone land, and the rest (most of it) recycled as an annual payment to everyone over the age of 18.
Pigovian taxes (e.g. a tax on carbon emissions) and tradeable pollution permits (e.g. the emissions trading scheme) are essentially two ways of arriving at the same destination - a reduction in emissions. Consider the diagram below, which represents a simple model of the optimal quantity of pollution (or carbon emissions). The MDC curve is the marginal damage cost (the cost to the environment of each additional unit of carbon emitted) and is upward sloping - at low levels of carbon emissions there is relatively little damage, because the environment is able to absorb them. The capacity of the environment to do this is limited, so as carbon emissions increase, the damage increases at an accelerating rate. The MAC curve is the marginal abatement cost (the cost to society of each unit of carbon emissions abated, or reduced) and is upward sloping from right to left. This is because, as more resources are applied to reducing carbon emissions, the opportunity costs increase - less suitable (meaning more costly) resources have to be applied to pollution reduction.

The optimal quantity of carbon emissions occurs where the MDC and MAC curves intersect - at Q*. Having fewer carbon emissions than Q* (such as at Q1) means that MAC is greater than MDC. In other words, the cost to society of reducing that last unit of carbon emissions was greater than the cost of the environmental damage it would have caused. Having emissions at Q1 must make us worse off when compared with Q*.

The diagram illustrates that there are two ways of arriving at the optimal quantity of carbon emissions. One way is to regulate the quantity of emissions to be equal to Q*, as you would in an emissions trading scheme. You allocate carbon permits equal to exactly Q*, and legislate that no one is allowed to emit carbon unless they have permits (and have appropriately large penalties in place for those that break the rules).

An alternative is to price emissions at P*, as you would through a carbon tax. If the price of emissions is P*, there will be exactly Q* emissions. No one would want to emit more than Q*, because above Q* the MAC is lower than the tax they would have to pay (it is cheaper to abate a unit of carbon emissions than it is to pay the tax, so the quantity of emissions would fall back to Q*). Similarly, no one would want to emit less than Q*, because below Q* the MAC is greater than the tax (it is cheaper to emit one more unit of carbon and pay the tax than to pay the cost of abating that unit).
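The equivalence of the two instruments can be sketched with a couple of lines of arithmetic. The linear MDC and MAC curves below are purely illustrative (the coefficients are made up for the example), but they show that capping the quantity at Q* and taxing each unit at P* land in the same place:

```python
# Illustrative linear curves (made-up coefficients):
#   MDC(Q) = a * Q        damage rises as emissions rise
#   MAC(Q) = c - d * Q    abatement cost rises as emissions fall
a, c, d = 2.0, 120.0, 4.0

# Optimal emissions: MDC(Q*) = MAC(Q*)  =>  a*Q* = c - d*Q*
q_star = c / (a + d)        # 20.0 units of emissions
p_star = a * q_star         # 40.0, the price at the intersection

# Instrument 1: an ETS simply caps emissions at Q*.
cap = q_star

# Instrument 2: a tax of P* per unit. Emitters abate every unit whose
# abatement cost is below the tax, so they emit down to where MAC(Q) = P*:
q_under_tax = (c - p_star) / d   # also 20.0 - the same destination
```

The same logic works for any downward-sloping MAC and upward-sloping MDC; the linear forms just make the algebra transparent.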

Which should we prefer - an emissions tax, or an emissions trading scheme? There are arguments for and against either (as I have noted before). Neither system is particularly flexible if new cleaner technology becomes available. Both provide incentives to reduce carbon emissions to Q* (and no further). Taxes may be less subject to corrupt practices (such as in deciding who would get any initial allocation of permits). Permits may be more efficient in the economic sense, since the emitters who can reduce their emissions at the lowest cost would sell their permits to those who can only reduce emissions at high cost.

Fallow doesn't conclude that either system is better though. However, one thing is clear, and that is that all countries doing nothing about carbon emissions is unambiguously worse than either system. And both emissions taxes and emissions trading schemes are better than old-school command-and-control regulation.

Saturday, 16 September 2017

Should trade unions be subsidised?

An externality is defined as an uncompensated impact of the actions of one person on the wellbeing of a third party. A positive externality is an externality that makes the third party better off. This creates a problem, because the person creating the positive externality has no incentive to take into account the fact that they also create benefits for other people. This leads to a situation where the market produces too little of a good, relative to the quantity that would maximise total wellbeing (or total welfare).

This is illustrated in the diagram below, which shows a positive consumption externality. The marginal social benefits (MSB - the benefits that society receives when people consume the good) are greater than the marginal private benefits (MPB - the benefits that the individual receives themselves by consuming the good). The difference is the marginal external benefits (MEB - the benefits that others receive when a person consumes the good). The market will operate at the quantity where supply meets demand, which is QM on the diagram. However, total welfare is maximised at the quantity where marginal social benefit is equal to marginal social cost, which is QS on the diagram. The market produces too little of this good, because every unit beyond QM (and up to QS) would provide more additional benefit for society (MSB) than what it costs society to produce (MSC). However, the buyers have no incentive to take into account those external benefits, so they don't consume enough.

What does that have to do with trade unions (as in the title of this post)? When a person belongs to a trade union, that provides them with some private benefit (MPB) - they can call on the union if they have a problem with their employer, they can use the union to negotiate for better pay and conditions on their behalf, and so on. However, a person's union membership also creates benefits for others (MEB), because the more people who are union members, the more negotiating power the union will have. So, it seems clear that in the case of unions, the marginal social benefits exceed the marginal private benefits, and the market for union membership will lead to too few people being members of trade unions (just as in the diagram above).

When a positive externality leads a market to produce too little of a good, we could rely on the Coase Theorem, which suggests that when private parties can bargain without cost over the allocation of resources, they can solve the problem of externalities on their own. In the case of unions, the Coase Theorem suggests that employees should be able to develop a solution to the externality that leads to the 'right' number of people becoming union members. However, the Coase Theorem relies on transaction costs being small, which is not the case when there are a large number of parties involved (which is the case when there are many employees). If the Coase Theorem fails, then that leaves a role for government.

Public solutions to a positive externality problem could be based on a command-and-control policy. That is, a policy that regulates the quantity. Compulsory union membership would be a potential command-and-control solution to the positive externality problem, but it seems unlikely that the quantity QS in the diagram above is equal to (or more than) every person belonging to a union.

In most cases, the government relies on a market-based solution to positive externality problems, such as providing a subsidy. The effect of a subsidy on the market is illustrated in the diagram below. The curve S-subsidy illustrates the effect of paying a subsidy to the trade union for every union member (you could achieve the same effect by partially reimbursing every union member - a subsidy on the demand side of the market). This lowers the price of union membership for members to PC (which incentivises more people to join the union), and raises the effective price received by the unions to PP. The quantity of union membership increases to QS, which is the optimal quantity of union membership.
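A small numerical sketch makes the mechanics concrete. All the numbers below are invented for illustration; the point is that a per-member subsidy equal to the marginal external benefit moves the market from QM to QS:

```python
# Illustrative linear model (made-up numbers):
#   MPB(Q) = 100 - 2Q   private benefit of the Q-th union membership
#   MEB    = 20         external benefit each member creates for others
#   MSC    = 40         constant marginal cost of providing membership
meb, msc = 20.0, 40.0

def mpb(q):
    return 100 - 2 * q

q_market = (100 - msc) / 2          # MPB = MSC        -> 30 members (QM)
q_social = (100 + meb - msc) / 2    # MPB + MEB = MSC  -> 40 members (QS)

# A per-member subsidy equal to MEB closes the gap: members now face an
# effective price of MSC - subsidy, and join until MPB equals that price.
subsidy = meb
q_with_subsidy = (100 - (msc - subsidy)) / 2   # 40 members, matching QS
```

Setting the subsidy equal to the marginal external benefit is exactly the standard Pigovian prescription: it makes each member face the full social benefit of joining.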

Many governments like to subsidise their favoured sectors of the economy, even though that lowers total welfare. Perhaps they should be looking to subsidise trade unions instead?

Friday, 15 September 2017

Lottery tickets and the endowment effect

Behavioural economics teaches us that people are not purely rational. The behavioural economist Richard Thaler notes that people are quasi-rational, which means that they are affected by heuristics and biases. One of the biases that we are affected by is loss aversion - we value losses much more than otherwise-equivalent gains. That makes us unwilling to give up something that we already have, or in other words we require more in compensation to give it up than what we would have been willing to pay to obtain it in the first place. So if we buy something for $10 that we were willing to pay $20 for, we may choose not to re-sell it even if someone offers us $30 for it. We call this an endowment effect.

As a graphic illustration of endowment effects, and timely given that Lotto Powerball in New Zealand jackpots to $30 million this weekend, take this recent video from Business Insider:

Most of the people in the video were unwilling to give up their Powerball tickets for what they paid for them, or even for double what they paid for them (after which, they could have bought twice as many Powerball tickets and doubled their chances of winning). Crazy.
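For the sceptical, the "doubled their chances" arithmetic checks out. Assuming the standard NZ Powerball format (six numbers drawn from 40, plus one Powerball from 10 - an assumption worth verifying against the current rules), a quick calculation:

```python
from math import comb

# Assumed NZ Powerball format: match 6 of 40 numbers plus 1 of 10 Powerballs
combos = comb(40, 6) * 10            # 38,383,800 possible tickets

p_one_ticket = 1 / combos
p_two_tickets = 2 / combos           # two distinct tickets: exactly double

assert p_two_tickets == 2 * p_one_ticket
```

Doubling a roughly 1-in-38-million chance still leaves a roughly 1-in-19-million chance, of course - which is rather the point about the ticket-holders' reluctance being driven by the endowment effect, not the odds.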

[HT: Marginal Revolution]

Thursday, 14 September 2017

How airlines use extra charges to boost their profits

Grant Bradley wrote in the New Zealand Herald back in July:
Airline revenue from frequent flier schemes, charging for bags and food has grown more than 10 times in the past decade to nearly $40 billion.
A study of 10 airlines which are among the biggest ancillary earners show that in 2007 it generated US$2.1 billion ($2.87b).
Last year the top 10 tally has leapt to more than US$28 billion.
While base air fares are near historic lows, if passengers want extras they are increasingly being forced to pay for them, especially on budget carriers...
"Low cost carriers rely upon a la carte activity by aggressively seeking revenue from checked bags, assigned seats, and extra leg room seating. Some of the best in this category have extensive holiday package business with route structures built upon leisure destinations," the report says.
None of this should be terribly surprising. The airlines are making use of a simple business strategy that we discuss in ECON100: taking advantage of customer lock-in.

In the usual discussion of customer lock-in, customers become locked into buying from a particular seller if they find it difficult (costly) to change to an alternative seller once they have started purchasing a particular good or service. Switching costs (like contract termination fees) typically generate customer lock-in, because a high cost of switching can prevent customers from changing to substitute products.

In this case, once the airline customer has purchased a ticket from an airline, they are locked into travelling with that airline (and often, they are locked into a particular flight, if they have selected a ticket type that is non-transferable). The airline knows that the customer won't switch to another airline (or flight) if they charged additional fees for complementary services [*], such as for checked bags, in-flight meals, selecting their own seat, and so on.

This is a highly profitable proposition for the airlines (see Bradley's figures above), because customer demand for those extra services is relatively inelastic. Once you have purchased a plane ticket for a given flight, there are few (if any) substitutes that allow you to get your checked baggage to the same destination as you are going. So your demand for checking a bag onto your own flight (if you have a bag that needs checking in) is probably very inelastic. Similarly, if you haven't prepared for your flight by buying some snacks to take onto the plane with you (and/or you don't have a meal before boarding and are unwilling to wait until you land to eat), there are no substitutes for buying a meal while in the air. When there are few substitutes for a good or service, demand will be relatively more inelastic, and the optimal mark-up over marginal cost is high. As many of you will have observed, the mark-up on in-flight snacks and meals is very high. It is these high mark-ups that make these extra charges so profitable for the airlines.
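The link between few substitutes, inelastic demand, and high mark-ups can be made precise with the standard profit-maximising pricing rule, (P - MC)/P = 1/|e|, where |e| is the absolute price elasticity of demand. The cost and elasticity figures below are invented purely to show the mechanics:

```python
def optimal_price(mc, elasticity_abs):
    """Profit-maximising price from the standard mark-up rule:
    (P - MC) / P = 1 / |e|,  so  P = MC * |e| / (|e| - 1)."""
    assert elasticity_abs > 1, "the rule requires elastic demand (|e| > 1)"
    return mc * elasticity_abs / (elasticity_abs - 1)

# Made-up numbers: an in-flight meal that costs the airline $4 to provide.
captive_price = optimal_price(4.0, 1.25)   # |e| barely above 1: price $20
contested_price = optimal_price(4.0, 5.0)  # many substitutes: price $5
```

As |e| falls toward 1 (demand becoming less elastic, as for a locked-in passenger at 35,000 feet), the optimal mark-up explodes; with plenty of substitutes, it shrinks toward marginal cost.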

While the extra charges have been increasing, ticket prices have been declining. Airlines can afford to lower ticket prices if they know they will more than make up for the lost profits on tickets with the additional profits from these extra charges. In fact, they could (and may yet) go as far as using economy-class tickets as a loss-leading product! Economy-class tickets will be effective as a loss leader if demand for tickets is relatively elastic (so that lowering the price leads to a large increase in the number of ticket buyers), and where there are many close complements (so that the airline will sell a lot of the extra services, which are highly profitable). Both conditions appear to be met, so airline economy-class ticket prices may have further to fall, but don't expect those extra charges to disappear any time soon.


[*] Note that this is complementary, meaning services that are consumed along with the airline ticket, and not complimentary, meaning free!