Saturday 30 September 2017

Extrapolating linear time trends... Male teacher edition

I recently read this article from The Conversation, by Kevin McGrath and Penny Van Bergen (both Macquarie University). In the article, the first two sentences read:
Male teachers may face extinction in Australian primary schools by the year 2067 unless urgent policy action is taken. In government schools, the year is 2054.
This finding comes from our analysis of more than 50 years of national annual workplace data – the first of its kind in any country.
Take a look at The Conversation article. It's hilarious. The authors take time series data on the proportion of teachers in Australia who are male, and essentially fit a linear time trend to the data (and in some cases a quadratic time trend also), then extrapolate. I took a look at the paper, which was just published in the journal Economics of Education Review. Any half-decent second-year quantitative methods student would be able to do the analysis from that paper, but most would not then extrapolate and conclude:
Looking forward, it is not possible to determine whether the decreasing representation of male teachers in Australia will continue unabated. If so, however, the situation is dire. In primary schools Australia-wide, for example, male teachers were 28.49% of the teaching staff in 1977. Taking the negative linear trend observed in male primary teaching staff and extrapolating forward, it is possible to determine that Australian male primary teachers will reach an ‘extinction point’ in the year 2067. In Government primary schools, where this decline is sharpest, this ‘extinction point’ comes much sooner – in the year 2054.
There is nothing to suggest that a linear time trend will continue into the future. Certainly, when a variable (like the proportion of teachers who are male) is bounded by 0% and 100%, it seems unlikely that it will behave in any way linearly as it approaches the extremes, even if it has behaved linearly in the past. Here's the key data from their paper (there's also a more interactive version at The Conversation):


If you're set on trying out polynomials of time trends, why stop at a quadratic? The primary school data (the lower line in the diagram above) looks like it might be a cubic, since it starts off upward sloping then starts going downwards, but at a decreasing rate. I manually scraped their data from the article in The Conversation for male primary teachers, then fitted different polynomial time trends to it. The linear time trend had an R-squared of 0.939 (close to the 0.95 they report in the paper), a quadratic did no better (also 0.939), and a cubic increased this to 0.982. In the cubic, all three variables (time, time-squared, and time-cubed) are highly statistically significant. Moreover, this model has a much higher R-squared than their quadratic, so is more predictive of the actual data. In the cubic model (shown below), the forecast shows an increase in male teacher proportions from about now!


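If you want to try this kind of curve-fitting yourself, here is a minimal sketch in Python. The data below are invented placeholders, not the series I scraped from The Conversation, so the R-squared values will differ:

```python
import numpy as np

np.random.seed(0)

# Illustrative data only: a made-up series for the proportion (%) of male
# primary teachers by year, NOT the series from The Conversation.
years = np.arange(1965, 2017)
male_pct = 30 - 0.25 * (years - 1965) + np.random.normal(0, 0.5, years.size)

t = years - years.min()  # time trend starting at zero

for degree in (1, 2, 3):  # linear, quadratic, cubic time trends
    coefs = np.polyfit(t, male_pct, degree)
    fitted = np.polyval(coefs, t)
    ss_res = np.sum((male_pct - fitted) ** 2)
    ss_tot = np.sum((male_pct - male_pct.mean()) ** 2)
    print(f"degree {degree}: R-squared = {1 - ss_res / ss_tot:.3f}")

# Extrapolating any of these fits far beyond the sample, e.g.
# np.polyval(coefs, 2067 - years.min()), is exactly the mistake at issue.
```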
Now, I'm not about to use my model to suggest that the proportion of male primary school teachers would accelerate to 100% (if you extrapolate the cubic model, this happens by 2049, which is sooner than the proportion reaches zero under McGrath and Van Bergen's model extrapolation!), but I could. And then I would be just as wrong as McGrath and Van Bergen. The British statistician George Box once said: "All models are wrong, but some are useful". In this case, the linear time trend model of McGrath and Van Bergen and the cubic model I've shown here are both wrong and not useful. Hopefully people haven't taken McGrath and Van Bergen's results too seriously. The gender gap in teaching is potentially a problem, but trying to show this by extrapolating time trends using a very simple model is not helpful.

Thursday 28 September 2017

Pharmac vs. Keytruda - The sequel

Back in 2015 I wrote a post about Pharmac's decision not to fund the drug Keytruda for melanoma patients. Keytruda is back in the news this week:
A 44-year-old father of four given six to nine months to live when he was diagnosed with lung cancer has seen his tumour halve in size thanks to a new treatment he describes as a "miracle drug".
Patients and advocates are calling on Keytruda to be publicly funded for lung cancer, the country's biggest form of cancer death which claims five lives a day, because many patients could not afford the tens of thousands of dollars required to pay for it...
Pharmac director of operations Sarah Fitt said they had received funding applications for Keytruda, also known as pembrolizumab, for the first and second-line treatment of advanced non-small cell lung cancer and would continue to review evidence.
Clinical advisers would now review extra information requested to decided (sic) on funding for it as a first-line treatment.
I'll simply reiterate some of the points that I made in that 2015 post (and note that this issue is quite timely given that my ECON110 class covered the health economics topic just this week).

It is worth starting by noting that Pharmac has a fixed budget to pay for pharmaceuticals. If it agrees to pay for Keytruda for lung cancer, at a cost of tens of thousands of dollars per patient, then that is tens of thousands of dollars that cannot be spent on pharmaceuticals for other patients. There is an opportunity cost to funding this treatment.

Now, that problem could be mitigated by the government increasing Pharmac funding by enough to pay for the Keytruda costs. But if Pharmac receives additional funding, is Keytruda the best use of that funding? Are there other treatments that could be funded instead? Even with extra resources, Pharmac's budget would still be limited, so how should we decide whether Keytruda is the best use of that additional funding?

Fortunately, there is a solution to these tricky questions: work out which treatments are most cost-effective and fund those first. Health economists use cost-effectiveness analysis to measure the cost of providing a given amount of health gains. If the health gains are measured in a common unit called the Quality-Adjusted Life Year (QALY) then we call it cost-utility analysis (you can read more about QALYs here, as well as DALYs - an alternative measure). QALYs are essentially a measure of health that combines length of life and quality of life.

Using the gain in QALYs from each treatment as our measure of health benefits, a high-benefit treatment is one that provides more QALYs than a low-benefit treatment, and we can compare them in terms of the cost-per-QALY. The superior treatment is the one that has the lowest cost-per-QALY.
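To make the mechanics concrete, here is a toy calculation in Python. The treatments, costs, and QALY gains are entirely invented (they are not Pharmac figures); the point is just the ranking by cost-per-QALY:

```python
# Hypothetical treatments with invented costs and QALY gains, purely to
# illustrate how cost-utility ranking works.
treatments = {
    "Treatment A": {"cost": 60000, "qalys": 1.5},   # $40,000 per QALY
    "Treatment B": {"cost": 9000, "qalys": 0.9},    # $10,000 per QALY
    "Treatment C": {"cost": 25000, "qalys": 2.0},   # $12,500 per QALY
}

# Rank from lowest to highest cost-per-QALY; with a fixed budget,
# fund from the top of this list down.
ranked = sorted(treatments.items(), key=lambda kv: kv[1]["cost"] / kv[1]["qalys"])

for name, data in ranked:
    print(f"{name}: ${data['cost'] / data['qalys']:,.0f} per QALY")
```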

You might disagree that cost-effectiveness is a suitable way to allocate scarce health funding resources. I refer you to the Australian ethicist Toby Ord, who makes an outstanding moral argument in favour of allocating global health resources on the basis of cost-effectiveness (I recommend this excellent essay).

Finally, here's what I wrote about funding Keytruda in 2015 (for melanoma, but the same points apply in terms of Pharmac funding the drug for lung cancer):
Of course, it would be good for the melanoma patients who would receive Keytruda free or heavily subsidised. But, in the context of a limited funding pool for Pharmac, forcing the funding of Keytruda might mean that savings need to be made elsewhere [*], including treatments that provide a lower cost-per-QALY. So at the level of the New Zealand population, some QALYs would be gained from funding Keytruda, but even more QALYs would be lost because of the other treatments that would no longer be able to be funded.
Unfortunately, New Zealand doesn't have an equivalent of the UK's National Institute for Health and Care Excellence (NICE), which calculates the cost-effectiveness of potential treatment options for the National Health Service and ranks them against an objective standard cost-per-QALY (of £30,000) to work out which options should or should not be funded. That leaves many of Pharmac's decisions subject to political interference, which really could end up costing us in terms of overall health and wellbeing.

Tuesday 26 September 2017

Tesla, damaged goods and price discrimination

I'm a bit late to this story, as reported in TechCrunch:
Tesla has pushed an over-the-air update to some of its vehicles in Florida that lets those cars go just a liiiittle bit farther, thus helping their owners get that much farther away from the devastation of Hurricane Irma.
Wondering how that’s even possible?
Up until a few months ago, Tesla sold a 60kWh version of its Model S and Model X vehicles — but the battery in those cars was actually rated at 75kWh. The thinking: Tesla could offer a more affordable 60kWh version to those who didn’t need the full range of the 75kWh battery — but to keep things simple, they’d just use the same 75kWh battery and lock it on the software side. If 60kWh buyers found they needed more range and wanted to upgrade later, they could… or if Tesla wanted to suddenly bestow owners with some extra range in case of an emergency, they could.
And that’s what’s happening here.
Price discrimination occurs when different consumers (or groups of consumers) are charged different prices for the same good or service, and where the difference in price does not arise because of a difference in cost. This is clearly price discrimination, since the same battery was selling for different prices to different customers (even though the customers didn't know this!). The excellent Jodi Beggs also covered the story at Economists Do It With Models:
First, I guess I should point out that this [the update that increased battery capacity for Tesla owners in Florida] is a nice thing to do. But…you mean to tell me this whole time you were just sandbagging some of the batteries???? That’s…bold, among other things. I hope the warm fuzzies you get for this gesture outweigh whatever customer fury may be heading in your direction…(personally, I can’t decide whether I would be more irritated if I had or hadn’t paid for the better battery)...
People typically aren’t thrilled when they hear the phrase “price discrimination,” since they seem to assume it’s just another fun way for a company to rip them off. Not all of these customers are wrong- it’s entirely possible that some customers pay higher prices than they would otherwise if a company decides to price discriminate. That said, it’s almost always the case that price discrimination results in lower prices for some customers, and it’s even possible that price discrimination results in lower prices for some customers without subjecting any customers to higher prices.
The interesting thing about the story is that Tesla was effectively selling the same product to both groups of customers, but purposely degrading the performance of the battery for the cheaper version of the cars. Alex Tabarrok on Marginal Revolution notes:
But why would Tesla damage its own vehicles?
The answer to the second question is price discrimination! Tesla knows that some of its customers are willing to pay more for a Tesla than others. But Tesla can’t just ask its customers their willingness to pay and price accordingly. High willingness-to-pay customers would simply lie to get a lower price. Thus, Tesla must find some characteristic of buyers that is correlated with high willingness-to-pay and charge more to customers with that characteristic...
The classic paper in this literature is Damaged Goods by Deneckere and McAfee who write:
"Manufacturers may intentionally damage a portion of their goods in order to price discriminate. Many instances of this phenomenon are observed. It may result in a Pareto improvement." 
Note the last sentence - damaging goods can be beneficial to everyone!
It makes sense for firms to damage their own goods, if that allows them to effectively price discriminate - to charge high prices to those who are willing to pay a high price (or who have relatively inelastic demand) for the undamaged good, and charge low prices to those who are not willing to pay a high price (or who have relatively elastic demand) for the undamaged good. But this will only work if those with high willingness-to-pay (or relatively inelastic demand) would not be attracted by the lower-priced damaged goods. Which it appears was the case for Tesla. It remains to be seen what post-Irma fallout, if any, Tesla will face.
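To see why the damaged-goods strategy can pay, here is a stylised calculation in Python. All of the willingness-to-pay figures, customer counts, and costs are invented for illustration, and are not Tesla's actual numbers:

```python
# A stylised damaged-goods calculation. Two customer types value the full
# (75kWh) and software-limited (60kWh) cars differently; the hardware cost
# is the same either way. All figures are invented, not Tesla's numbers.
n_high, n_low = 100, 200                    # customers of each type
wtp = {                                     # willingness to pay
    "high": {"full": 100_000, "limited": 80_000},
    "low":  {"full": 70_000,  "limited": 65_000},
}
cost = 50_000                               # same battery in both versions

# Option 1: one price for the full version, low enough that everyone buys.
profit_one_price = (n_high + n_low) * (wtp["low"]["full"] - cost)

# Option 2: version the product. Price the limited car at the low type's
# WTP, and the full car just low enough that high types still prefer it
# (their surplus from the full car must match their surplus from the
# limited one - the incentive-compatibility constraint).
p_limited = wtp["low"]["limited"]
p_full = wtp["high"]["full"] - (wtp["high"]["limited"] - p_limited)
profit_versioned = n_high * (p_full - cost) + n_low * (p_limited - cost)

print(f"one price:  ${profit_one_price:,}")   # $6,000,000
print(f"versioned:  ${profit_versioned:,}")   # $6,500,000
```

With these invented numbers, selling a deliberately limited version alongside the full one beats the best single price, and no customer pays more than their willingness to pay.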

Sunday 24 September 2017

Why study economics? Economics and computer science edition...

MIT is offering a new degree in economics and computer science, which again illustrates that there are lots of jobs available for economics graduates in the tech sector:
The new major aims to prepare students to think at the nexus of economics and computer science, so they can understand and design the kinds of systems that are coming to define modern life. Think Amazon, Uber, eBay, etc.
“This area is super-hot commercially,” says David Autor, the Ford Professor of Economics and associate head of the Department of Economics. “Hiring economists has become really prominent at tech companies because they’re filling market-design positions.”
Because these companies need analysts who can decide which objectives to maximize, what information and choices to offer, what rules to set, and so on, “companies are really looking for this skill set,” he says...
Asu Ozdaglar, the Joseph F. and Nancy P. Keithley Professor of Electrical Engineering and acting head of the Department of Electrical Engineering and Computer Science (EECS), says...
“If you’re thinking about humans making decisions in large-scale systems, you have to think about incentives,” she says. “How, for example, do you design rewards and costs so that people behave the way you desire?”
These issues will be familiar to any Uber user caught in a downpour. Suddenly, the cost of getting anywhere increases dramatically, which is also an incentive for Uber drivers to move toward the storm of demand. Surge pricing may be a scourge to customers, but it's also a way to match supply with demand — in this case, cars with riders.
Read more about the new degree at the link above. Fortunately, you don't need to go all the way to MIT to study economics and computer science. The University of Waikato has one of the top two economics departments in New Zealand, as well as a highly regarded computer science department.

[HT: Marginal Revolution]


Friday 22 September 2017

Elections, temperature and the irony of the 2000 US presidential election

Last month, Jamie Morton wrote in the New Zealand Herald about this article (open access), by Jasper van Assche (Ghent University) and others, published in the journal Frontiers in Psychology back in June. In the article, van Assche et al. look at data from US presidential elections and temperature (specifically, they look at changes between elections in both those variables). They found that:
For each increase of 1°C (1.8°F), voter turnout increased by 0.14%.
Importantly though, there was also an effect on which party voters voted for. Specifically:
...although positive changes in temperature motivate some citizens to cast their votes for the non-system parties, they are an even stronger motivator for some citizens to vote for the incumbent government.
 I found this bit from the final paragraph of the paper laughably ironic:
Another example concerns the 2000 presidential election. Based on our model, an increase of only 1°C (1.8°F) may have made Al Gore the 43rd United States President instead of George W. Bush, as Gore would have won in Florida.
That's right. There wasn't nearly enough climate change to make Al Gore president in 2000.

New Zealand goes to the polls tomorrow (although in reality, many voters have made their choice already). Will the incumbent National government be worrying about the weather forecast?

Thursday 21 September 2017

Gary Becker on human capital

In ECON110 this week, we've been covering the economics of education. In this topic we theorise that, from the perspective of the individual, education is an investment in human capital. This 'human capital theory' comes from the work of Jacob Mincer and of 1992 Nobel Prize winner Gary Becker, who sadly passed away in 2014 (although it was Arthur Pigou who coined the term human capital much earlier). So it is timely that The Economist has published two excellent articles on Gary Becker and human capital over the last couple of months. It was the second one, from The Economist Explains, that caught my attention this week, but I think the earlier article from August is better, so I'll quote from that one:
...human capital refers to the abilities and qualities of people that make them productive. Knowledge is the most important of these, but other factors, from a sense of punctuality to the state of someone’s health, also matter. Investment in human capital thus mainly refers to education but it also includes other things—the inculcation of values by parents, say, or a healthy diet. Just as investing in physical capital—whether building a new factory or upgrading computers—can pay off for a company, so investments in human capital also pay off for people. The earnings of well-educated individuals are generally higher than those of the wider population...
Becker observed that people do acquire general human capital, but they often do so at their own expense, rather than that of employers. This is true of university, when students take on debts to pay for education before entering the workforce. It is also true of workers in almost all industries: interns, trainees and junior employees share in the cost of getting them up to speed by being paid less.
Becker made the assumption that people would be hard-headed in calculating how much to invest in their own human capital. They would compare expected future earnings from different career choices and consider the cost of acquiring the education to pursue these careers, including time spent in the classroom. He knew that reality was far messier, with decisions plagued by uncertainty and complicated motivations, but he described his model as an “economic way of looking at life”. His simplified assumptions about people being purposeful and rational in their decisions laid the groundwork for an elegant theory of human capital, which he expounded in several seminal articles and a book in the early 1960s.
His theory helped explain why younger generations spent more time in schooling than older ones: longer life expectancies raised the profitability of acquiring knowledge. It also helped explain the spread of education: advances in technology made it more profitable to have skills, which in turn raised the demand for education. It showed that under-investment in human capital was a constant risk: young people can be short-sighted given the long payback period for education; and lenders are wary of supporting them because of their lack of collateral (attributes such as knowledge always stay with the borrower, whereas a borrower’s physical assets can be seized).
So many of the things we covered in class this week are found there, including the decision about private investment in education, the credit constraints that low income students face in borrowing towards their education costs (which is part of the rationale for a system of student loans), and one of the rationales for government involvement (that students would under-invest in their own education). Even though, as the article notes, behavioural economics has been used to attack the foundations of Becker's theories, on that last point I think behavioural economics actually makes the case stronger. One of the biases that behavioural economics has identified is present bias - quasi-rational decision-makers heavily discount the future (much more so than a standard time-value-of-money treatment would). Since the benefits of education happen in the future, those benefits are discounted greatly compared with the costs of education that occur in the present. So, quasi-rational people would tend to under-invest in education because they under-weight the value of the future benefits relative to the current costs.
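Here is a minimal sketch of that argument using the standard quasi-hyperbolic ('beta-delta') discounting model. The payoffs and parameters are invented for illustration:

```python
# Quasi-hyperbolic ('beta-delta') discounting: a payoff t periods away is
# worth beta * delta**t today (for t >= 1). beta < 1 captures present bias;
# beta = 1 is the standard exponential discounter. All numbers are invented.
def present_value(payoffs, beta, delta):
    """payoffs[t] is the payoff t years from now; payoffs[0] is today."""
    return payoffs[0] + sum(beta * delta**t * x
                            for t, x in enumerate(payoffs[1:], start=1))

# Education: pay 20 today (fees plus forgone earnings), then earn an extra
# 2 per year for the following 30 years.
education = [-20] + [2] * 30

print(f"exponential (beta=1.0):     {present_value(education, 1.0, 0.95):+.1f}")
print(f"present-biased (beta=0.6):  {present_value(education, 0.6, 0.95):+.1f}")
# The exponential discounter invests (+9.8); the present-biased one
# under-weights the future benefits and does not (-2.1).
```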

The whole article (or both articles) is a good introduction to Becker's work on human capital. For a broader perspective on Becker's work, from the man himself, I highly recommend his 1992 Nobel lecture.

Tuesday 19 September 2017

Book Review: Inequality - What Can Be Done?

Earlier this month, I finished reading "Inequality - What Can Be Done?" by the late Tony Atkinson, who sadly died at the start of the year. This book is thoroughly researched (as one might expect given it was written by one of the true leaders of the field) and well written, although the generalist reader might find some of it pretty heavy going. The book is also fairly Britain-centric, which is to be expected given that it has a policy focus, although there is plenty for U.S. readers as well. Unfortunately for those closer to my home, New Zealand rates only a few mentions.

Atkinson uses the book to outline his policy prescription for dealing with inequality (hence the second part of the title: "What can be done?"). This involves fifteen proposals, and five 'ideas to pursue'. I'm not going to go through all of the proposals, but will note that many of them are unsurprising. Others are clearly suitable for Britain, but would take much more work to be implemented in a different institutional context (that isn't to say that they wouldn't work in other contexts, only that they would be even more difficult to implement).

Atkinson also isn't shy about the difficulties with his proposals and the criticisms they might attract, and he addresses most of the key criticisms in the later chapters of the book. However, in spite of those later chapters, I can see some problems with some of the proposals that make me doubt whether they are feasible (individually, or as part of an overall package). For instance, Proposal #3 is "The government should adopt an explicit target for preventing and reducing unemployment and underpin this ambition by offering guaranteed public employment at the minimum wage to those who seek it". These sorts of guaranteed employment schemes sound like a good solution to unemployment on the surface, but they don't come without cost. I'm not just talking about the monetary cost to government of paying people the guaranteed wage. This guaranteed employment offer from the government might crowd out low-paying private sector employment, depending on the jobs that are on offer. Many people already find minimum-wage-level jobs unattractive (consider the shortage of workers willing to work in the aged care sector, even though there are many unemployed people available for such jobs). So in order to encourage the unemployed to take up the guaranteed work offer, these jobs would need to be more attractive than existing minimum-wage-level jobs in other ways. Maybe they would require less physical or mental effort, or maybe they would have hours of work that are more flexible or suitable for parents with young children. These non-monetary characteristics would encourage more of the unemployed to take up the guaranteed employment offer, but they might also induce workers in other minimum-wage-level jobs to become 'unemployed' in order to shift to the more attractive guaranteed work instead. Maybe. The system would need to be very carefully designed, and I don't think Atkinson fully worked through the incentive effects on this one.

Proposal #4 advocates a living wage, which I've already pointed out only works well when not all employers offer the living wage; a higher minimum wage would simply lower employment, as the latest evidence appears to show. Proposal #7 is to set up a public 'Investment Authority' (that is, a sovereign wealth fund) to invest in companies and property on behalf of the state, but the link between this proposal and inequality reduction is pretty tenuous. In his justification for this proposal, I felt the focus on net public assets as a problem ignores the value to the government of the ability to levy future taxes, which is very valuable. So, it's not clear to me that low (or negative) net public assets are necessarily a problem that needs solving.

Finally, it is Proposal #15 that is most problematic for the book: "Rich countries should raise their target for Official Development Assistance to 1 per cent of Gross National Income". I'm not arguing against the proposal per se (in fact, I agree that rich country governments should be providing more development aid to poorer countries). But if the goal of these proposals is to reduce inequality in Britain, this proposal would have at best no effect. If the goal instead is to reduce global inequality, the policy prescription is quite different, and could be more effectively achieved by avoiding most (if not all) of the other proposals put forward in the book, and simply raising the goal in Proposal #15 from 1 percent to 2 percent of Gross National Income, or 3 percent, or 5 percent. None of the other proposals would be as cost-effective in reducing global inequality as would increasing development aid.

That's about all my gripes about the book (note that they relate to only four of the fifteen proposals). Overall it is worth reading and I'm sure most people will find some things to take away from it. I certainly have a big page of notes that I'll be using to revise the inequality and social security topic for my ECON110 class that's coming up in a couple of weeks. In particular, there is an excellent discussion that explains changes in inequality over time, especially the increases in inequality that have happened across many countries since 1980 (this is an interesting place to start, since that period covers the 1980s through to the mid-1990s, when inequality really was increasing in New Zealand).

If you're looking for an easy introduction to the economics of inequality, this probably isn't the book for you. But if you're looking for a policy prescription, or ideas on policy, to deal with the problems of inequality, then this may be a good place to start.

Sunday 17 September 2017

The Greens vs. Labour on carbon emissions, taxes and permits

Brian Fallow wrote an interesting article in the New Zealand Herald on Friday, contrasting the climate policies of Labour and the Greens. It was doubly interesting given that we had just covered this topic in ECON110 last week. Here's what Fallow wrote:
The climate change policies the two parties have recently released overlap a lot, in ways that distinguish them from National and the status quo.
But they are also at odds over which is the better way to put a price on emissions that will influence behaviour in the economy.
Labour wants to restore the emissions trading scheme (ETS), as designed by David Parker and enacted by the fifth Labour Government in the last few weeks of its ninth year in power, then promptly gutted by the incoming National Government.
But the Greens favour a tax on emissions, the proceeds of which would be used to plant trees on erosion-prone land, and the rest (most of it) recycled as an annual payment to everyone over the age of 18.
Pigovian taxes (e.g. a tax on carbon emissions) and tradeable pollution permits (e.g. the emissions trading scheme) are essentially two ways of arriving at the same destination - a reduction in emissions. Consider the diagram below, which represents a simple model of the optimal quantity of pollution (or carbon emissions). The MDC curve is the marginal damage cost (the cost to the environment of each additional unit of carbon emitted) and is upward sloping - at low levels of carbon emissions there is relatively little damage, because the environment is able to absorb some carbon. The capacity for the environment to do this is limited, so as carbon emissions increase, the damage increases at an increasing rate. The MAC curve is the marginal abatement cost (the cost to society of each unit of carbon emissions abated, or reduced) and is upward sloping from right to left. This is because, as more resources are applied to reducing carbon emissions, the opportunity costs increase. This may be because less suitable (meaning more costly) resources have to begin to be applied to pollution reduction. The optimal quantity of carbon emissions occurs where the MDC and MAC curves intersect - at Q*. Having fewer carbon emissions than Q* (such as at Q1) means that MAC is greater than MDC. In other words, the cost to society of reducing that last unit of carbon emissions is greater than the cost of the environmental damage it would have caused. Having pollution at Q1 must make us worse off when compared with Q*.


The diagram illustrates that there are two ways of arriving at the optimal quantity of carbon emissions. One way is to regulate the quantity of emissions to be equal to Q*, as you would in an emissions trading scheme. You allocate carbon permits equal to exactly Q*, and legislate that no one is allowed to emit carbon unless they have permits (and have appropriately large penalties in place for those that break the rules).

An alternative is to price emissions at P*, as you would through a carbon tax. If the price of emissions is P*, you will have exactly Q* emissions. This is because no one would want to emit more than Q*, because the MAC is lower than the tax they would have to pay (it is cheaper to abate one unit of carbon emissions than it is to pay the tax, so at quantities above Q* the quantity of emissions would fall). Similarly, no one would want to emit less than Q*, because the MAC is greater than the tax (it is cheaper to emit one more unit of carbon and pay the tax than to pay the cost of abating that unit).
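A toy numerical example may help. Here is a minimal sketch in Python, with made-up linear MDC and MAC curves, showing that the quantity instrument and the price instrument land in the same place:

```python
# Made-up linear curves. MDC rises with emissions Q; MAC is the cost of
# abating at Q, high when emissions are low and zero at the unregulated
# emissions level (here Q = 50).
def mdc(q):
    return 2.0 * q

def mac(q):
    return max(0.0, 100.0 - 2.0 * q)

# Optimum where MDC(Q) = MAC(Q): 2Q = 100 - 2Q, so Q* = 25 and P* = 50.
q_star = 25.0
p_star = mdc(q_star)
assert mdc(q_star) == mac(q_star)

print(f"Q* = {q_star}, P* = {p_star}")
# An ETS caps emissions at Q* = 25 and lets the permit price settle at 50;
# a carbon tax of 50 per unit leads emitters to choose Q = 25 themselves.
# With known curves, either instrument reaches the same point.
```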

Which should we prefer - an emissions tax, or an emissions trading scheme? There are arguments for and against either (as I have noted before). Neither system is particularly flexible if new cleaner technology becomes available. Both provide incentives to reduce carbon emissions to Q* (and no further). Taxes may be less subject to corrupt practices (such as in deciding who would get any initial allocation of permits). Permits may be more efficient in the economic sense, since the emitters who can reduce their emissions at the lowest cost would sell their permits to those who can only reduce emissions at high cost.

Fallow doesn't conclude that either system is better though. However, one thing is clear, and that is that all countries doing nothing about carbon emissions is unambiguously worse than either system. And both emissions taxes and emissions trading schemes are better than old-school command-and-control regulation.

Saturday 16 September 2017

Should trade unions be subsidised?

An externality is defined as an uncompensated impact of the actions of one person on the wellbeing of a third party. A positive externality is an externality that makes the third party better off. This creates a problem, because the person creating the positive externality has no incentive to take into account the fact that they also create benefits for other people. This leads to a situation where the market produces too little of a good, relative to the quantity that would maximise total wellbeing (or total welfare).

This is illustrated in the diagram below, which shows a positive consumption externality. The marginal social benefits (MSB - the benefits that society receives when people consume the good) are greater than the marginal private benefits (MPB - the benefits that the individual receives themselves by consuming the good). The difference is the marginal external benefits (MEB - the benefits that others receive when a person consumes the good). The market will operate at the quantity where supply meets demand, which is QM on the diagram. However, total welfare is maximised at the quantity where marginal social benefit is equal to marginal social cost, which is QS on the diagram. The market produces too little of this good, because every unit beyond QM (and up to QS) would provide more additional benefit for society (MSB) than what it costs society to produce (MSC). However, the buyers have no incentive to take into account those external benefits, so they don't consume enough.


What does that have to do with trade unions (as in the title of this post)? When a person belongs to a trade union, that provides them with some private benefit (MPB) - they can call on the union if they have a problem with their employer, they can use the union to negotiate for better pay and conditions on their behalf, and so on. However, a person's union membership also creates benefits for others (MEB), because the more people who are union members, the more negotiating power the union will have. So, it seems clear that in the case of unions, the marginal social benefits exceed the marginal private benefits, and the market for union membership will lead to too few people being members of trade unions (just as in the diagram above).

When a positive externality leads a market to produce too little of a good, we could rely on the Coase Theorem, which suggests that when private parties can bargain without cost over the allocation of resources, they can solve the problem of externalities on their own. In the case of unions, the Coase Theorem suggests that employees should be able to develop a solution to the externality that leads to the 'right' number of people becoming union members. However, the Coase Theorem relies on transaction costs being small, which is not the case when there are a large number of parties involved (which is the case when there are many employees). If the Coase Theorem fails, then that leaves a role for government.

Public solutions to a positive externality problem could be based on a command-and-control policy - that is, a policy that directly regulates the quantity. Compulsory union membership would be a potential command-and-control solution to the positive externality problem, but it seems unlikely that the quantity QS in the diagram above is equal to (or more than) every person belonging to a union.

In most cases, the government instead relies on a market-based solution to positive externality problems, such as providing a subsidy. The effect of a subsidy on the market is illustrated in the diagram below. The curve S-subsidy illustrates the effect of paying a subsidy to the trade union for every union member (you could achieve the same effect by partially reimbursing every union member - a subsidy on the demand side of the market). This lowers the price of union membership for members to PC (which incentivises more people to join the union), and raises the effective price received by the unions to PP. The quantity of union membership increases to QS, which is the optimal quantity of union membership.


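For those who like numbers with their diagrams, here is a minimal sketch in Python. The linear curves are invented for illustration; the point is that a per-member subsidy equal to the marginal external benefit moves the market to the optimal quantity:

```python
# Invented linear curves for the 'market' for union membership.
meb = 20.0                          # constant marginal external benefit

def mpb(q): return 100.0 - q        # marginal private benefit (demand)
def msb(q): return mpb(q) + meb     # marginal social benefit
def msc(q): return 20.0 + q         # marginal social cost (supply)

# Market outcome: MPB = MSC  ->  100 - q = 20 + q  ->  QM = 40.
q_market = (100.0 - 20.0) / 2.0

# Social optimum: MSB = MSC  ->  120 - q = 20 + q  ->  QS = 50.
q_social = (120.0 - 20.0) / 2.0

# A per-member subsidy equal to the MEB makes the private decision line up
# with the social optimum: MPB + subsidy = MSC reproduces QS.
subsidy = meb
q_subsidised = (100.0 + subsidy - 20.0) / 2.0

print(q_market, q_social, q_subsidised)  # 40.0 50.0 50.0
```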
Many governments like to subsidise their favoured sectors of the economy, even though that lowers total welfare. Perhaps they should be looking to subsidise trade unions instead?

Friday 15 September 2017

Lottery tickets and the endowment effect

Behavioural economics teaches us that people are not purely rational. The behavioural economist Richard Thaler notes that people are quasi-rational, which means that they are affected by heuristics and biases. One of the biases that we are affected by is loss aversion - we value losses much more than otherwise-equivalent gains. That makes us unwilling to give up something that we already have; in other words, we require more in compensation to give it up than we would have been willing to pay to obtain it in the first place. So if we buy something for $10 that we were willing to pay $20 for, we may choose not to re-sell it even if someone offers us $30 for it. We call this an endowment effect.

As a graphic illustration of endowment effects, and timely given that Lotto Powerball in New Zealand jackpots to $30 million this weekend, take this recent video from Business Insider:


Most of the people in the video were unwilling to give up their Powerball tickets for what they paid for them, or even for double what they paid for them (with which they could have bought twice as many Powerball tickets, doubling their chances of winning). Crazy.

[HT: Marginal Revolution]

Thursday 14 September 2017

How airlines use extra charges to boost their profits

Grant Bradley wrote in the New Zealand Herald back in July:
Airline revenue from frequent flier schemes, charging for bags and food has grown more than 10 times in the past decade to nearly $40 billion.
A study of 10 airlines which are among the biggest ancillary earners show that in 2007 it generated US$2.1 billion ($2.87b).
Last year the top 10 tally has leapt to more than US$28 billion.
While base air fares are near historic lows, if passengers want extras they are increasingly being forced to pay for them, especially on budget carriers...
"Low cost carriers rely upon a la carte activity by aggressively seeking revenue from checked bags, assigned seats, and extra leg room seating. Some of the best in this category have extensive holiday package business with route structures built upon leisure destinations," the report says.
None of this should be terribly surprising. The airlines are making use of a simple business strategy that we discuss in ECON100: taking advantage of customer lock-in.

In the usual discussion of customer lock-in, customers become locked into buying from a particular seller if they find it difficult (costly) to change to an alternative seller once they have started purchasing a particular good or service. Switching costs (like contract termination fees) typically generate customer lock-in, because a high cost of switching can prevent customers from changing to substitute products.

In this case, once the airline customer has purchased a ticket from an airline, they are locked into travelling with that airline (and often, they are locked into a particular flight, if they have selected a ticket type that is non-transferable). The airline knows that the customer won't switch to another airline (or flight) when it charges additional fees for complementary services [*], such as for checked bags, in-flight meals, selecting their own seat, and so on.

This is a highly profitable proposition for the airlines (see Bradley's figures above), because customer demand for those extra services is relatively inelastic. Once you have purchased a plane ticket for a given flight, there are few (if any) substitutes that allow you to get your checked baggage to the same destination as you are going. So your demand for checking a bag onto your own flight (if you have a bag that needs checking in) is probably very inelastic. Similarly, if you are not prepared for your flight and haven't bought snacks to take onto the plane with you (and/or you don't have a meal before boarding and are unwilling to wait until you land to eat), there are no substitutes for buying a meal while in the air. When there are few substitutes for a good or service, demand will be relatively more inelastic, and the optimal mark-up over marginal cost is high. As many of you will have observed, the mark-up on in-flight snacks and meals is very high. It is these high mark-ups that lead these extra charges to be highly profitable for the airlines.
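The relationship between elasticity and mark-up here is the standard inverse-elasticity (Lerner) rule. A quick sketch in Python, with invented marginal costs and elasticities:

```python
# Lerner (inverse-elasticity) rule for a profit-maximising seller:
# (P - MC) / P = 1 / |e|, so P = MC * |e| / (|e| - 1).
# Marginal costs and elasticities below are invented for illustration.
def optimal_price(marginal_cost, elasticity):
    e = abs(elasticity)
    assert e > 1, "a seller with market power prices on the elastic portion of demand"
    return marginal_cost * e / (e - 1)

# Locked-in extras face few substitutes (inelastic demand, big mark-up);
# the ticket itself faces lots of competition (elastic demand, small mark-up).
print(f"in-flight meal (MC $4, e = -1.25): ${optimal_price(4, -1.25):.2f}")
print(f"checked bag    (MC $5, e = -1.5):  ${optimal_price(5, -1.5):.2f}")
print(f"ticket         (MC $80, e = -5):   ${optimal_price(80, -5):.2f}")
```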

While the extra charges have been increasing, ticket prices have been declining. Airlines can afford to lower ticket prices if they know they will more than make up for the lost profits on tickets with the additional profits from these extra charges. In fact, they could (and may yet) go as far as using economy-class tickets as a loss-leading product! Economy-class tickets will be effective as a loss leader if demand for tickets is relatively elastic (so that lowering the price leads to a large increase in the number of ticket buyers), and where there are many close complements (so that the airline will sell a lot of the extra services, which are highly profitable). Both conditions appear to be met, so airline economy-class ticket prices may have further to fall, but don't expect those extra charges to disappear any time soon.

*****

[*] Note that this is complementary, meaning services that are consumed along with the airline ticket, and not complimentary, meaning free!

Tuesday 12 September 2017

Landlords can and will pass the cost increases from Labour's policies onto tenants

Business Editor Hamish Fletcher wrote in the New Zealand Herald on Sunday:
The landlord lobby had a clever response to Labour's plan for rental reform.
Given the complaints of property investors don't garner much public sympathy, it was smart to focus on how Labour's moves would harm the very people they are meant to help: tenants.
Their argument went something like this: Labour's proposed changes - which include extending all notice periods to 90 days, removing landlords' ability to evict tenants without cause and requiring rental properties to "be warm, healthy and dry" - would backfire.
They would deter investors from the rental market and aggravate an existing housing shortage. "It's going to put people off being landlords," said Andrew King, executive officer of the New Zealand Property Investors' Federation.
"People already can't find rentals, and this will push up prices ... why would we restrict supply when what we need is more?" he said...
Applying for a rental can feel like a lottery in some cases and an auction in others, with people offering over the advertised rent to secure the tenancy.
It certainly didn't feel like landlords were the ones on the back foot. And although the concern for tenants is commendable, it is nevertheless misplaced.
That's because Labour's policies won't cause houses to suddenly vanish.
Nor will they remove demand for homes under construction or in the planning stages, given the pace of new builds versus the rate of immigration.
Fletcher is right - houses won't just suddenly vanish if Labour's policy changes come into force. But he is also wrong - the policies will decrease supply and tenants will face higher rents. Here's why.

Consider the market for rental housing operating in equilibrium, as shown in the diagram below. Note that the supply of housing is very inelastic, and there are a couple of reasons for this. First, if a landlord owns a rental property, it doesn't have many alternative uses and if they sell the property it is likely to be bought by another landlord, so as Fletcher notes above, the houses mostly stay in the market. Second, if we include owner-occupiers in this market (who provide house rental services to themselves) then it is easy to see that the supply side of the market doesn't respond much to changes in rents [*]. The market is in equilibrium, with the rent at R1 and the quantity of rental housing at Q1.


Now consider the policy changes proposed by Labour. Requiring rental properties to be "warm, healthy and dry" increases the costs to landlords. Sure, Labour is offering to contribute to one-off costs of making houses warm, healthy and dry, but they won't necessarily cover the full cost, the remainder of which may be capitalised in a mortgage and increase interest costs for the landlord. Also, there will be ongoing costs of ensuring houses stay warm, healthy and dry. Higher costs for landlords shift the supply curve up and to the left. Extending notice periods to 90 days reduces flexibility for landlords, and that inflexibility may make being a landlord less attractive for some landlords. However, that won't affect supply in this market, because we included owner-occupiers (so if landlords sold their properties and they were bought by other landlords or by owner-occupiers, it wouldn't affect the supply). Overall though, supply has shifted from S1 to S2. This increases the equilibrium rent to R2 (and decreases the quantity of rental housing to Q2).

Notice that the increase in rent is quite small. Most of the increase in costs in this market ends up being borne by the landlords. But, we are not in a rental market that looks like the diagram above. As Fletcher himself notes, there is a shortage of rental housing:
I can attest to that shortage. Having moved last year, only to have our landlord take back possession after six months, I know competition can be intense.
A shortage means that the market rent must be below equilibrium (e.g. at R0 in the diagram above), such that the quantity of rental housing demanded (Qd0) is much greater than the quantity of rental housing supplied (Qs0). That changes things dramatically. It gives landlords more leverage in negotiating with tenants. They can ask tenants to pay a higher rent, and to cover additional costs (as I mentioned in this post last week). Landlords can easily pass cost increases onto tenants when there is a shortage, and if tenants don't like it they will miss out on the property and some other tenant (who is less averse to paying the additional costs) will get the rental property. When supply decreases, the size of the shortage (which was previously Qd0-Qs0) increases even further (to Qd0-Qs2), and this increases the power that landlords have to pass cost increases onto tenants.
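Here is a minimal numerical sketch in Python of that last point, with invented linear demand and supply curves: when the rent is stuck below equilibrium, a leftward shift in supply makes the shortage (and landlords' leverage) bigger:

```python
# Invented linear rental market, mirroring the diagram. Quantities are in
# thousands of tenancies, rents in dollars per week.
def q_demand(rent):
    return 500.0 - 0.5 * rent

def q_supply(rent, shift=0.0):      # 'shift' = leftward shift in supply
    return 100.0 + 0.4 * rent - shift

r_current = 300.0                   # market rent stuck below equilibrium (~444)

shortage_before = q_demand(r_current) - q_supply(r_current)
shortage_after = q_demand(r_current) - q_supply(r_current, shift=30.0)

print(f"shortage before the policy: {shortage_before:.0f}")        # 130
print(f"shortage after supply shifts left: {shortage_after:.0f}")  # 160
# The longer the queue of tenants for each property, the easier it is for
# landlords to pass cost increases straight through to rents.
```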

The landlord lobby may well be self-serving in their complaints against Labour's proposed policies, but they are also right. Tenants will end up paying higher rents. That doesn't mean that those are bad policies - only that pretending the policies can be implemented without some (and perhaps most) of the costs being borne by tenants is clearly wrongheaded.

*****

[*] Including owner-occupiers in the market is fairly conventional, but in this case it is also the most advantageous to Fletcher's arguments. If we excluded owner-occupiers, the supply would be more elastic, the leftward shift of supply due to the "warm, healthy and dry" requirements would be bigger (since this doesn't affect owner-occupiers), and the 90-day notice period would reduce supply even further in the market (since those houses sold to owner-occupiers would be removed from the market). This would make Fletcher's assertions even more wrong.

Sunday 10 September 2017

The optimal size of groups in a settlement negotiation

Last week the student-led Waikato Economics Discussion Group (EDG) discussed "Are large natural groupings (iwi) the best people to be negotiating treaty settlements?". It was an interesting discussion, which really boiled down to a debate about whether it was better to negotiate with a small number of large groups, or a larger number of small groups (and you can find some context that precipitated the choice of topic here). At the end of the session I pointed out that there is an existing framework that can be used to understand this decision, which comes from 1986 Nobel Prize winner James Buchanan.

In deciding the optimal number of people to be involved, there are two types of costs that need to be balanced:

  1. Decision-making costs - the costs of coming to an agreement (usually associated with the time and effort required to agree on a decision), which increase when the number of people (or groups) involved in the decision-making increases
  2. External costs - the costs borne by members of society who disagree with the decision (usually because they were not involved in the decision-making), which increase when the costs are borne by a larger number of people (that is, when fewer people, or groups, are involved in the decision-making)
So, if you think about the number of people (or groups) involved in negotiating a settlement, the decision-making costs increase with the number of groups, and the external costs decrease with the number of groups. This is illustrated in the diagram below, where the x-axis (Q) is the number of people (or groups) included in the settlement negotiations.


On the left of the diagram, there are a few large groups included in the negotiations (at the limit, there is just one representative of everyone). The decision-making costs are lowest, since agreement between the Crown and a single representative is relatively easy. However, the external costs are large, because many people may disagree with what the single representative has agreed on their behalf.

On the right of the diagram, there are many small groups included in the negotiations (at the limit, every person is individually included in the negotiations). The decision-making costs are highest, since agreement between the Crown and every person individually is going to take a lot of time and effort (and there may be 'holdout minorities' who hold out for a better deal). However, the external costs are minimised, because no individual will agree to the settlement if it makes them worse off.

The optimal number of groups to be included in the negotiation occurs where the two curves intersect (at Q*). At that point, the total of decision-making costs and external costs is minimised.
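Here is a minimal sketch in Python of Buchanan's framework, with invented cost curves, finding the number of groups that minimises total costs:

```python
import numpy as np

# Invented cost curves in the spirit of Buchanan: decision-making costs
# rise with the number of groups n at the table; external costs fall.
n = np.arange(1, 201)                # number of groups in the negotiation
decision_costs = 2.0 * n             # more parties, harder to agree
external_costs = 5000.0 / n          # fewer parties, more people unheard

n_star = n[np.argmin(decision_costs + external_costs)]
print(f"optimal number of groups: {n_star}")  # 50 with these curves

# Raising external costs (say to 20000/n) pushes n* to the right (more,
# smaller groups); steeper decision costs push it to the left.
```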

This only provides a framework for understanding how to decide how many groups should be included in the negotiations. If decision-making costs are high, then that would favour having fewer groups that are larger in size in the negotiations (because Q* will be further to the left). If external costs are high, then that would favour having many smaller groups in the negotiations (because Q* will be further to the right).

In the case of the Crown negotiating with iwi over Treaty of Waitangi settlements, it seems to me that the external costs are likely to be high. The decision-making costs may be high as well, but Maori culture is much more collective and inclusive than Western culture, and there are likely to be intrinsic costs faced by Maori who feel disenfranchised by being included within a larger grouping. I would suggest that this tends to favour negotiations with smaller groups, where necessary.

Saturday 9 September 2017

The relative (un)certainty of subnational population decline

Over the three years up to March of this year, I was involved in a Marsden Fund project led by Natalie Jackson, looking at subnational depopulation in New Zealand. That is, among other things we were trying to explain why some areas of New Zealand have been declining in population over time, and continue to do so. The outputs of that project have been summarised in a recent issue of the journal Policy Quarterly.

My contribution to that issue of Policy Quarterly (from pages 55-60) is entitled "The relative (un)certainty of subnational population decline", and looks at how certain (or uncertain) population decline is for different territorial authorities in New Zealand. However, the article has a broader purpose, and is worth reading because it outlines some of the key points that decision-makers need to understand about population projections, especially in terms of their uncertainty. If you know nothing about population projections, other than that they are forecasts of the future that can be useful for decision-making, then you should read the article.

The main results categorise New Zealand's territorial authorities (TAs) by the probability that they will experience a decline in population over the decades 2023-2033 and 2043-2053. I won't spoil the results by naming particular TAs, but here's a summary:
...the number of TAs appearing in each category increases between the two periods. More TAs are facing population decline in the 2043–2053 decade than in the 2023–2033 decade. This corroborates recent work that has shown similar results... In the 2023–2033 decade 20 TAs face a 90 percent or greater probability of population decline, compared with 26 TAs in the 2043–2053 decade. Granted, these TAs have relatively small population, representing 12.2 percent of the national population in 2023 (for the 2023–2033 group based on median population size) and 17.2 percent of the national population in 2043 (for the 2043–2053 group).
Unsurprisingly, rural and peripheral areas face the highest probability of future population decline. Other papers in that issue of Policy Quarterly posit some reasons why we observe population decline in particular areas. My paper is descriptive and future-focused, and doesn't explore the institutional or other non-demographic factors that might explain why some rural areas, rather than others, are projected to experience population decline. That's something for future research.

Thursday 7 September 2017

This is what happens when you have shortages of rental property

On Tuesday, Hawkes Bay Today reported:
A Napier woman was shocked to see the price of a rental home increase by $10 in just one day.
Mandy Clayton of Tamatea, who has rented for three decades, fears people are bidding on rental properties out of sheer desperation.
"I saw it advertised in the morning at a price and then later that night when I decided to apply for it, it had gone up - and that's honestly not the first time I'd seen it [a price increase]," she said.
Clayton, suspecting applicants had offered to pay more, asked the landlord about the price increase but said they told her the first price was incorrect.
A Tenancy Services spokesperson said there was nothing in the Residential Tenancies Act 1986 that precluded a tenant indicating what they would be willing to pay for a rental property.
"The process of rental bidding reverses the offer/acceptance role that usually occurs. It is the potential tenants who make the offer - it is then the decision of the landlord which offer they accept.
"This process removes the landlord from the rent-setting equation and allows the prospective tenant to be an active participant in the setting of 'market rent'."
To be clear, this doesn't remove the landlord from the rent-setting equation. However, this is exactly what we would expect to happen if there is a shortage of rental housing, based on the simple model of supply and demand - when there is a shortage of a good, the price should rise.

As shown in the diagram below, if the current rent (R1) is below the equilibrium rent (R0), the quantity of rental properties demanded (QD1) exceeds the quantity of rental properties supplied (QS1). There is a shortage. In other words, at least some of the tenants who want to rent at the current market rent (R1) miss out on a property. So, what do they do? If they are willing and able to pay a higher rent, they could find themselves a willing landlord, and offer to pay slightly more than R1, to ensure they don't miss out. So, tenants will bid the rent up, until eventually the market reaches equilibrium at R0, where the quantity demanded and quantity supplied are both equal to Q0.


Other than tenants bidding up rents, what else might we see? Tenants might be willing to pay 'letting fees' or 'key payments' to secure their rental property. They might pay the landlord to have their references checked, or they may pay more weeks of rent in advance, or pay a higher rent that has a 'discount' after six months. Tenants may agree to cover costs like lawn mowing or simple maintenance that the landlord would often pay for. There are lots of ways that additional payments to secure a rental can be 'hidden', all of which makes Labour's plans to ban letting fees somewhat of a joke. There are so many alternatives, and it isn't only the landlords to blame - they wouldn't charge those fees if tenants were not willing to pay them to secure a rental property. And tenants wouldn't feel the need to pay those fees if there wasn't a shortage of rental housing.

Wednesday 6 September 2017

Free agents, the draft, and the winner's curse

The NBA season is just six weeks away, and the NFL starts this Thursday (or Friday for those of us in New Zealand). So, thinking about this recent article by Ben Falk is timely, and gives an excellent explanation of the winner's curse:
Three real estate investors are bidding on a building of unclear value. The first doesn’t think it’s a great property and enters a low bid. The second decides it has a lot of untapped potential and enters a very high bid. The third can see both pros and cons and offers something in between. The winner, of course, will be the second investor, the one who values it the most.
But here’s the problem: that investor is also the most likely to have overvalued the property. If this second investor is wrong about their guess that the building has untapped potential they’ll be stuck paying too much. This is a phenomenon known as the Winner’s Curse, where the winner of the bid will be the most likely to have priced it incorrectly. It applies to many types of bidding situations, and NBA free agency is no exception.
The team paying the most for a free agent is the team who values that free agent the most — who likely overvalues the player the most. It’s possible that all of the other teams with money undervalue the player and so the winner gets good value. But more likely than not the winning bidder will be the one that overpays the most.
The winner's curse doesn't just apply to free agents, but also to the draft. Teams don't have perfect information about each player available in the draft. Some teams will over-estimate the value of a player (and want to draft that player earlier), and other teams will under-estimate the value of a player (and want to draft that player later). Holding other things constant (like each team's needs for players in particular positions), the team that holds the most positive views of a player (that is, the team that most over-estimates their value) will draft that player. This suggests that many players are drafted too high - the winner's curse. As evidence of this, the NFL in particular has a long list of high draft picks who turned out to be busts (try this list, or this one).
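The winner's curse is easy to see in a simulation. Here is a minimal sketch in Python: every team's estimate of a player's value is unbiased, but the winning (highest) estimate is systematically too high. All numbers are invented:

```python
import random

# Simulating the winner's curse: every team draws a noisy but unbiased
# estimate of a player's (common) true value, and the most optimistic team
# wins the bidding.
random.seed(42)
true_value, noise_sd, n_teams, n_sims = 100.0, 20.0, 10, 100_000

winning_estimates = []
for _ in range(n_sims):
    estimates = [random.gauss(true_value, noise_sd) for _ in range(n_teams)]
    winning_estimates.append(max(estimates))   # highest estimate wins

avg_win = sum(winning_estimates) / n_sims
share_over = sum(w > true_value for w in winning_estimates) / n_sims
print(f"true value: {true_value}, average winning estimate: {avg_win:.1f}")
print(f"winner over-estimates the player in {share_over:.0%} of simulations")
# Each team is right on average, but the winner is selected for optimism,
# so the winning estimate systematically exceeds the true value.
```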

Falk's article is also interesting in that it talks about endowment effects. Quasi-rational decision makers are loss averse - we value losses much more than otherwise-equivalent gains. That makes us unwilling to give up something that we already have; in other words, we require more in compensation to give it up than we would have been willing to pay to obtain it in the first place. Falk notes that:
Some overpay to keep their own free agents, falling victim to the endowment effect and thinking more highly of something they already have.
I'm less sold on the idea that endowment effects are an issue in pro sports. An endowment effect would require the teams to be trading players. With free agency that doesn't happen - they deal directly with the players. However, where teams do trade players (in exchange for draft picks or other players), they may hold out for more than the player is actually worth. Which probably reinforces the winner's curse for the team that receives the traded player.

[HT: Marginal Revolution, back in July]

Tuesday 5 September 2017

The economics of the money-back guarantee

Back in July in The Conversation, Yalcin Akcay (Melbourne Business School) and Tamer Boyaci (ESMT Berlin) wrote an interesting piece on the economics of money-back guarantees:
...retailers are increasingly offering extra services such as warranty plans, free shipping and guarantees to reassure them. Selling with the “money-back guarantee” is a prime example of this.
This is because the economics of the money-back guarantee can work for retailers. These businesses allow customers to return products that do not meet their expectations — as a result of poor quality or a mismatch in taste — for a full or partial refund. Essentially offering their customers an insurance against the perceived risk of the product.
And research shows these retailers make a profit with this type of guarantee, given specific conditions.
For an article on "the economics of the money-back guarantee" though, I think it missed the most important economic aspects of this practice. Many goods are what economists call experience goods - goods where some of the characteristics (such as the quality of the good) are known to the seller, but are unknown to the buyer until after they have completed their purchase and received the good. The quality of the good is private information, and this creates an adverse selection problem which is especially problematic in the case of online sales.

In the absence of any other information (see below), since buyers don't know the quality of the goods they are purchasing online, they will assume that any goods on offer online are low quality. This is a pooling equilibrium - in the buyers' eyes, all goods are the same (low) quality. This lowers the amount that buyers are willing to pay for online purchases. Since the buyers aren't willing to pay much for goods of unknown quality, this drives sellers of high-quality goods out of the marketplace, because they can't receive a high enough price to justify selling their high-quality goods. The market for high-quality goods online collapses - the market fails.

Markets (including online markets) have developed ways to deal with this adverse selection problem, and generate a separating equilibrium (where high-quality and low-quality goods can be separated). One way is through signalling. With signalling, the informed party (the seller in this case) finds a way to credibly reveal the private information (the quality of the good that is being sold) to the uninformed party (the buyer). There are two important conditions for a signal to be effective: (1) it needs to be costly; and (2) it needs to be costly in a way that makes it unattractive for those selling low-quality goods to attempt (such as being more costly to those selling low-quality goods). These conditions are important, because if they are not fulfilled, then those with low quality could signal themselves as having high quality.

Money-back guarantees are a good example of an effective signal of high quality. They are costly - as Akcay and Boyaci note:
Customer returns cost retailers more than US$260 billion (equivalent to 8% of total retail sales) annually in the United States alone.
Those goods that are returned must then be sold at a discount, which is costly to the retailer. Money-back guarantees are also more costly if you are selling low-quality goods, since low-quality goods will be more likely to be returned. This makes it unattractive for sellers of low-quality goods to offer money-back guarantees. So buyers can be more confident that they are buying goods that are high-quality, when a money-back guarantee is offered. They can use this information to separate goods that are more likely to be high quality (those with money-back guarantees) from those that are more likely to be low quality (a separating equilibrium).

It is this signalling aspect that is the most important, when it comes to the economics of the money-back guarantee.