Monday 30 July 2018

Fonterra, monopsony, and market power

This week in ECONS101, we will be discussing firms with market power. Market power refers to the ability of the seller (or sometimes the buyer) to influence market prices. In our case, we will be focusing on firms that have market power in the markets into which they are selling. In those cases, the firm will raise the price above the price that would arise in a more competitive market (and will consequently sell a lower quantity, but earn a higher profit). At the extreme end of market power are monopolies, where there is just a single seller of the product and there are no close substitutes.

Sometimes though, it is the buyers who have market power. Think about the case of a single employer in a small isolated town (like a company town, for instance). If people in the town want a job, they have to work for the single employer. This gives the employer market power, and they can react by lowering wages - after all, the employees aren't going to go work for some other firm, as there are no other employers in the town. This is an example of a monopsony.
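To make the monopsonist's wage-setting concrete, here is a small numerical sketch in Python (the numbers are entirely made up for illustration). A monopsonist recognises that hiring more workers pushes up the wage it must pay everyone, so it hires fewer workers at a lower wage than a competitive labour market would:

```python
# Stylised monopsony example (illustrative numbers only).
# Labour supply: the wage needed to attract L workers is w(L) = 10 + 0.5*L.
# Each worker generates a constant marginal revenue product (MRP) of 30.

MRP = 30.0

def supply_wage(L):
    return 10.0 + 0.5 * L

# Competitive benchmark: firms hire until the supply wage equals MRP.
competitive_L = (MRP - 10.0) / 0.5          # 40 workers
competitive_w = supply_wage(competitive_L)  # wage = 30

# Monopsonist: chooses L to maximise profit = MRP*L - w(L)*L,
# recognising that hiring more workers raises the wage for everyone.
best_L = max(range(0, 81), key=lambda L: MRP * L - supply_wage(L) * L)
monopsony_w = supply_wage(best_L)

print(f"Competitive: {competitive_L:.0f} workers at wage {competitive_w:.0f}")
print(f"Monopsony:   {best_L} workers at wage {monopsony_w:.0f}")
```

With these numbers the monopsonist hires 20 workers at a wage of 20, versus 40 workers at a wage of 30 in the competitive benchmark: both employment and the wage are lower.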

That brings me to Fonterra. It isn't quite a monopsony (there are other dairy companies in New Zealand), but it does have a high degree of market power due to its dominant position as a buyer of milk from farmers. If Fonterra were to act on its market power, it could easily drive down the price of milk paid to farmers. Until relatively recently, this wouldn't have been worthwhile, because the farmers were also the shareholders of Fonterra, so any profit gains obtained from buying milk more cheaply from farmers would have simply been returned to the same farmers in the form of higher profits. However, Fonterra underwent a capital restructure in 2009-2010, so the concordance between farmers as sellers of milk and shareholders as receivers of dividends from Fonterra profits decreased.

Despite that, there are still a couple of things that keep Fonterra's monopsony market power in check. The first is the existence of smaller dairy companies. If Fonterra screwed its farmers over too badly, they could jump ship to the competition. However, those other dairy companies are small and their ability to absorb large numbers of new farmer suppliers is limited. So, Fonterra could get away with offering a slightly lower price than its local competitors do.

The second restriction on Fonterra's market power is the legislated requirement that it must accept all milk that its farmer suppliers offer to it. That is why this proposal should be a worry:
Fonterra, a farmer-owned cooperative with listed units, has long pushed back against the DIRA requirement that it take all milk offered to it, which has resulted in the company having to spend hundreds of millions on new stainless steel processing capability as annual milk production climbed in recent years.
Fonterra argues this capital requirement erodes its strategy to move from processing commodities to value-add products, and is helping its internationally-backed competitors.
DIRA is the Dairy Industry Restructuring Act 2001, which was the legislation that enabled the creation of Fonterra, through the merger of New Zealand Dairy Group and Kiwi Co-operative Dairies, the two largest farmer cooperatives at the time, and the New Zealand Dairy Board, which was the exporting agent for all of the country's dairy cooperatives. DIRA is currently under review, and unsurprisingly Fonterra wants the shackles removed. They are arguing that:
The industry had become "highly competitive" particularly with the relatively large number of new entrants in the past five years.
"These international new entrants are often backed by deep capital and global businesses. They do not need an extra leg-up via milk from New Zealand farmers. Given this new competitive environment, the issue of open entry – which means having to accept all milk from new suppliers – is a critical part of the review," it said.
"Open entry limits our farmer-shareholders and the industry's ability to maximise value for New Zealand. It distorts investment decisions and leaves Fonterra's farmers underwriting risk for competitors who cherry-pick their suppliers."
Fonterra still collects over 80 percent of the milk production in New Zealand. Its competitors are much smaller. If the Government removes the requirement for Fonterra to accept all milk offered to it, then its market power naturally increases. If a farmer wants Fonterra to accept milk, and Fonterra doesn't have to accept it, then Fonterra can say, "We'll take your milk, but only at a discount of X%". How large X% is will depend on whether the competition could feasibly take the milk. In areas where there isn't local collection by Synlait, Westland, Tatua, etc., those farmers are at very real risk of being seriously screwed over by such a change.

There may well be benefits to Fonterra's shareholders from freeing Fonterra up from the requirement to accept all milk offered to it. But that doesn't mean that the proposal won't also come with real costs to farmers attached to it.

Sunday 29 July 2018

How does New Zealand's alcohol control policy regime rate?

Given the fairly constant flow of news stories about New Zealand's drinking environment, you'd be forgiven for thinking that New Zealand is very permissive in terms of alcohol. Take, for instance, this recent New Zealand Herald story:
New Zealand's drinking culture has come under fire following a new study which shows a link between alcohol consumption and trips to the emergency department.
Experts are linking the result to New Zealand's binge drinking culture and easy access to cheap booze.
The study, recently published in Addiction, shows data from 62 emergency departments in the 28 countries, and includes more than 14,000 patients.
It ranks New Zealand second out of 28 countries for the proportion of injury cases presenting to emergency departments where the person had consumed alcohol in the preceding six hours...
[Alcohol Healthwatch director Nicki] Jackson said such injuries were costing New Zealand billions of dollars.
If we want to improve mental health, reduce suicides and shorten hospital waiting lists, government needed to start making changes around accessibility to alcohol, she said.
"We haven't taken any strong measures against the most harmful drug in our society. We will keep paying this cost until we take strong action."
There were issues around easy access to alcohol, increasing affordability, and marketing.
"We need to raise excise tax on alcohol," she said, adding taxes were "nowhere near the level they should be" when it comes to drinking.
I tracked down the research paper mentioned in the article (sorry I don't see an ungated version). It was written by Cheryl Cherpitel (Alcohol Research Group in the U.S.) and others (including Bridget Kool of the University of Auckland). [*]

In the paper, they use a measure called the International Alcohol Policy and Injury Index (IAPII) as an omnibus measure of the policy environment with a range from zero to 100. Higher values of this measure represent a stricter policy environment, and lower values represent a more permissive environment. Based on the 33 studies included in Table 1 of the paper (and treating them all as independent observations [**]), the average value of the IAPII is 59.8. New Zealand's IAPII was 76 in 2000, and 78 in 2015-16. In fact, only Sweden (91 for two observations), Canada (80 for three observations), and Ireland (79 for one observation) have stricter alcohol policy environments than New Zealand. Australia's IAPII in 1997 was also 78.

That doesn't strike me as particularly bad. But equally, I'm not sure that we want a permissive alcohol policy environment, so I would want to know how the policy environment relates to harm. There's another measure in the paper that will help with that question: the detrimental drinking patterns (DDP) measure. DDP is "an indicator of the ‘detrimental impact’ on health and other drinking-related harms at a given level of alcohol consumption", and is measured on a 1-4 scale, "from 1 (the least detrimental pattern of drinking) to 4 (the most detrimental)." New Zealand rates as a 2 on the DDP score, equal with Canada, but better than Sweden or Ireland (both have DDP scores of 3). Only Australia and Switzerland rate a 1 on the DDP measure.

I graphed the relationship between DDP and IAPII for the observations from the Cherpitel et al. paper, and that graph is shown below. The IAPII is on the vertical axis, and the DDP is on the horizontal axis (there are only four values of the DDP). Each blue dot represents one study site from Table 1 of the paper. The two red dots are the observations for New Zealand (the lower dot is the observation for 2000, and the higher dot is the observation for 2015-16). The dotted green line is a linear regression line showing the relationship between the two variables. As you might expect, the relationship is negative, meaning that countries with more permissive alcohol control policies (lower IAPII) have worse detrimental drinking patterns (higher DDP).


One thing to note from this diagram is that the dotted line essentially shows the 'average' IAPII for sites at that level of DDP. Observations above the line are under-achieving - their DDP is higher than would be expected, given the strictness of their alcohol control policies. New Zealand fits into that category.
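For readers who want the mechanics, here is a sketch of fitting that line in Python. The (DDP, IAPII) pairs below are made up to mimic a negative relationship - they are NOT the actual Table 1 values - and the residual shows whether a site sits above the fitted line:

```python
# Hypothetical (DDP, IAPII) pairs chosen to mimic a negative relationship.
ddp =   [1, 1, 2, 2, 2, 3, 3, 4, 4]
iapii = [78, 72, 65, 60, 58, 52, 48, 40, 35]

n = len(ddp)
mean_x = sum(ddp) / n
mean_y = sum(iapii) / n

# OLS slope and intercept for IAPII (y) regressed on DDP (x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(ddp, iapii)) / \
        sum((x - mean_x) ** 2 for x in ddp)
intercept = mean_y - slope * mean_x

print(f"Fitted line: IAPII = {intercept:.1f} + {slope:.1f} * DDP")

# A site 'under-achieves' if it sits above the fitted line: its policies
# are stricter than average for its level of drinking harm.
residual_nz = 77 - (intercept + slope * 2)  # NZ: DDP = 2, IAPII ~ 77
print(f"NZ residual: {residual_nz:+.1f} (positive = above the line)")
```

On this made-up data the slope is negative and New Zealand's residual is positive, matching the pattern described above.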

All of that suggests to me that the problem isn't our alcohol control policies. If we have high alcohol-related emergency department admissions, as the Cherpitel et al. paper contends, then the solution is unlikely to be found in stricter alcohol control policies - at least, based on the comparison with other sites in this study. Our policies are already among the strictest, and are stricter than would be expected given our level of detrimental drinking patterns.

One notable aspect that isn't covered in the IAPII measure is pricing, and that could be a fruitful policy avenue to explore. If we want to reduce alcohol-related harm, we might get more value out of pricing interventions (e.g. minimum pricing, higher excise taxes) than stricter policy controls.

*****

[*] I haven't written too much about the Cherpitel et al. paper itself in this post, because that wasn't my focus. They show that IAPII and DDP have independent and statistically significant relationships with the chance of a person presenting at ED for an injury having consumed alcohol in the previous six hours. In other words, higher IAPII is associated with a lower proportion of injuries that are alcohol-related, and higher DDP is associated with a higher proportion of injuries that are alcohol-related. That isn't unexpected. However, when they include both variables in the same model, only IAPII is statistically significant. This is because of multicollinearity - two of their explanatory variables are closely correlated, which reduces the chance that either of them shows up as statistically significant when both are included in the same model, because they are trying to explain the same part of the variation in the outcome variable. Notice the close relationship between IAPII and DDP in the diagram above.
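To see how multicollinearity plays out, here is a small simulation (simulated data, nothing from the paper): a regressor that is precisely estimated on its own gets a much larger standard error once a near-duplicate regressor is added to the model.

```python
import numpy as np

# Toy illustration of multicollinearity with simulated data.
rng = np.random.default_rng(42)
n = 200
policy = rng.normal(size=n)                   # stand-in for IAPII
drinking = policy + 0.1 * rng.normal(size=n)  # stand-in for DDP, ~99% correlated
outcome = -0.5 * policy + rng.normal(size=n)

def ols_se(X, y):
    """Return coefficient standard errors for OLS of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    return np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

se_alone = ols_se(policy.reshape(-1, 1), outcome)[1]
se_joint = ols_se(np.column_stack([policy, drinking]), outcome)[1]

print(f"SE of policy coefficient, on its own:    {se_alone:.3f}")
print(f"SE of policy coefficient, with drinking: {se_joint:.3f}")
# The joint-model standard error is several times larger, so a coefficient
# that is 'significant' on its own can lose significance in the joint model.
```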

[**] Strictly speaking, we should down-weight multiple observations from the same country (e.g. two observations from New Zealand, and three from Canada). However, I don't think it would make much difference in this case.
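For what it's worth, the down-weighting is easy to do: give each country a total weight of one, split equally across its observations. Illustrating with just the stricter countries mentioned in the post (NZ 76 and 78, Canada 80 three times, Sweden 91 twice, Ireland 79), rather than the full 33 studies:

```python
from collections import Counter

# Give each country a total weight of 1, split across its study sites.
# IAPII values as quoted in the post for the stricter countries only.
sites = [("NZ", 76), ("NZ", 78), ("Canada", 80), ("Canada", 80),
         ("Canada", 80), ("Sweden", 91), ("Sweden", 91), ("Ireland", 79)]

counts = Counter(country for country, _ in sites)

unweighted = sum(v for _, v in sites) / len(sites)
weighted = sum(v / counts[c] for c, v in sites) / len(counts)

print(f"Unweighted mean IAPII: {unweighted:.2f}")  # treats each site equally
print(f"Country-weighted mean: {weighted:.2f}")    # treats each country equally
```

On this subset the two means differ only slightly (81.88 versus 81.75), which is consistent with my suspicion that the adjustment wouldn't make much difference.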

Saturday 28 July 2018

Radiation and house prices after the Fukushima nuclear disaster

How much less would you be willing to pay for a house in an area affected by radiation significantly above background levels, compared with an otherwise-identical house that is unaffected by radiation? It's not a crazy question. In the U.S., hundreds of millions of people (including the populations of 26 of the 100 most populous cities) live within 50 miles of a nuclear reactor. Worldwide, there are 21 nuclear plants that each have more than one million people living within 30 kilometres of them.

Of course, nuclear accidents are thankfully rare. But the risk is not zero, and when an accident does occur, as happened in Fukushima in 2011, people can be understandably reluctant to live in the affected areas due to the risks to their health and wellbeing. Obviously, that has a flow-on impact on house prices, even outside the most heavily affected areas.

Hedonic demand theory (or hedonic pricing), which we discussed in my ECONS102 class last week, recognises that when you buy some (or most?) goods you aren't so much buying a single item as a bundle of characteristics, and each of those characteristics has value. The value of the whole product is the sum of the value of the characteristics that make it up. For example, when you buy a house, you are buying its characteristics (number of bedrooms, number of bathrooms, floor area, land area, location, etc.). When you buy land, you are buying land area, soil quality, slope, location and access to amenities, etc. You are also buying the exposure to current levels of radiation, as well as the risk of future exposures to radiation in the event of a nuclear accident. If each of those characteristics can be separately valued, then you can place a value on how much people are willing to pay to avoid radiation (or alternatively, how much they are willing to accept to live in a radiation-affected area).
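A hedonic value is then just the sum, across characteristics, of quantity times implicit price. Here is a minimal sketch with entirely made-up implicit prices:

```python
# Hedonic pricing sketch: a property's value as the sum of the implicit
# prices of its characteristics. All prices are made-up for illustration.
implicit_prices = {
    "bedroom": 40_000,         # per bedroom
    "bathroom": 25_000,        # per bathroom
    "floor_m2": 1_500,         # per square metre of floor area
    "radiation_risk": -60_000, # discount for elevated radiation exposure
}

def hedonic_value(characteristics):
    return sum(implicit_prices[k] * q for k, q in characteristics.items())

affected   = {"bedroom": 3, "bathroom": 1, "floor_m2": 120, "radiation_risk": 1}
unaffected = {"bedroom": 3, "bathroom": 1, "floor_m2": 120, "radiation_risk": 0}

wtp_to_avoid = hedonic_value(unaffected) - hedonic_value(affected)
print(f"Implied willingness-to-pay to avoid radiation: ${wtp_to_avoid:,}")
```

Comparing two otherwise-identical bundles isolates the implicit price of the radiation characteristic, which is exactly the comparison the hedonic method relies on.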

In a new paper in the Journal of Regional Science (sorry I don't see an ungated version anywhere online), Alistair Munro (National Graduate Institute for Policy Studies, Japan) looks at the impact of the Fukushima disaster on house prices in Fukushima and Miyagi prefectures, using data from 2009 to 2017. Fukushima prefecture was most affected by radiation as well as the tsunami that led to the nuclear disaster, while neighbouring Miyagi prefecture was only affected by the tsunami. So, differences between the two in terms of changes in house prices can be attributed to differences in radiation levels (once you control for other characteristics of the properties, of course). He finds that:
...across the subsample of noncondominium residence types a 1 percent rise in radiation leads to a 0.051 percent drop in values, while for condominiums treated separately the elasticity is also 0.051. For housing land the elasticity is 0.044, and 0.032 for land with existing buildings if the age of the building is controlled for.
In other words, areas more affected by radiation have lower house prices. How much lower? Munro reports that:
...using a variety of methods... the impact of radiation translates into a one to two million Yen (US$10,000–20,000)... reduction in housing prices for average residential properties.
That is quite substantial, but is not terribly surprising. However, the next part of the paper is very cool. Having established how much less people are willing to pay for living in an area with more radiation, Munro then uses that information plus information on the risk of cancer arising from environmental radiation, to estimate the value of a statistical life (or VSL).

As I will discuss with my ECONS102 class later this semester, the VSL can be estimated by taking the willingness-to-pay for a small reduction in risk of death, and extrapolating that to estimate the willingness-to-pay for a 100% reduction in the risk of death, which can be interpreted as the implicit value of a life. Munro estimates VSL to be in the region of US$4.5-6.4 million, which is similar to VSLs estimated in other studies (and other risk contexts). An additional take-away from that analysis is that there isn't a particularly high element of dread associated with avoiding death from radiation (otherwise, people would be willing to pay more to avoid it, and the estimated VSL would be much higher).
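The arithmetic of that extrapolation is simple. Using a price discount in the range Munro reports, and a purely illustrative assumption about the associated increase in lifetime mortality risk:

```python
# Back-of-the-envelope VSL calculation. The price discount is in the range
# Munro reports; the risk increase is a made-up illustrative figure.
price_discount = 15_000       # US$, lower willingness-to-pay for the house
extra_mortality_risk = 0.003  # assumed increase in lifetime risk of death

# Extrapolate the WTP for a small risk reduction to a 100% risk reduction:
vsl = price_discount / extra_mortality_risk
print(f"Implied value of a statistical life: ${vsl:,.0f}")
```

With these assumed numbers the implied VSL is US$5 million, inside the US$4.5-6.4 million range that Munro estimates.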

Next we really need to know whether the Fukushima disaster affected people's perceptions of nuclear risk in other areas that are near nuclear plants but which weren't affected by the disaster. That would be much more difficult to establish, but potentially much more interesting.

Thursday 26 July 2018

Evergreening viagra, revisited

Back in April, I wrote:
Of course, if it turns out that Viagra (or sildenafil) is an effective treatment for babies suffering from stunted growth, then that is a great thing. And not just for the babies. Pfizer (the patent-holder for Viagra) would have protection from generic versions of sildenafil for another twenty years, meaning another twenty years of market power - not just for Viagra used for treating babies, but Viagra used for all treatments (including the highly profitable market for treating erectile dysfunction). A really cynical person would probably recognise that the sudden interest in new uses of Viagra now (the four trials mentioned in the article are not the only trials trying to find new uses for Viagra - see here for another example) is because Viagra comes off patent in April 2020. The clock is ticking for Pfizer, if they want to keep milking their Viagra cash cow.
The context was a drug trial for the use of Viagra to treat babies at risk of stunting. If Pfizer can find a new use for Viagra before it comes off patent in 2020, they can re-start the patent clock and retain their market power and high profits from the drug.

In today's news though:
Overseas deaths of babies involved in a clinical trial has prompted a review from researchers who have been running an aligned study here in New Zealand.
The international research STRIDER consortium has involved four trials - one of them carried out here and in Australia - investigating a possible use for the drug sildenafil in treating fetal growth restriction.
Sildenafil is better known as the drug behind Viagra, which is used to treat male erectile dysfunction by dilating blood vessels in the pelvis.
The researchers drew on a generic version of sildenafil, not manufactured by Viagra's makers Pfizer, to investigate whether it might work the same way in pregnant women by increasing blood supply to the placenta.
But last week, one of the trials, which was being carried out in the Netherlands, was terminated early due to safety concerns.
Results from a planned interim analysis showed more babies in the sildenafil group suffered a serious lung condition, called persistent pulmonary hypertension, which may have led to 11 more liveborn babies dying before hospital discharge.
Are we now seeing the real cost of attempts to evergreen the patent for Viagra? It's hard to say. As Thomas Lumley noted on StatsChat this morning:
It looks as though something might have been different about the Amsterdam study — although it’s also possible they were extremely unlucky.
In other words, it's too early to say, and the other drug trials (in the UK, Canada, and New Zealand/Australia) haven't shown such negative effects (but apart from the New Zealand/Australia study, they also haven't shown positive effects). However, eleven additional deaths as a result of a drug trial is far too many, and it's appropriate that the Dutch researchers have called it off. As Lumley notes, a trial that shows a negative effect is still a positive outcome, if it prevents future deaths from inappropriate treatment.

However, would this trial have gone ahead if Pfizer weren't intent on evergreening their Viagra patent? Extending the cynicism of my April post on evergreening Viagra, this episode demonstrates that the ability to evergreen patents might not only have costs in terms of lost economic welfare (due to the ongoing market power of the patent-holder), but might have real human costs as well.

Wednesday 25 July 2018

Are the days of the A2 milk price premium numbered?

In Dairy News this week, Federated Farmers national dairy chair Chris Lewis is quoted:
“If we are all going to start supplying A2 milk, will there still be a premium when the world is awash with it in five or 10 years time?” he queries.
“That is the question mark I have on it. I am looking at doing it… but knowing full well that if lots of other farmers in New Zealand, Australia and worldwide are doing it, will there be a premium for it if it is a common thing?”.
Plenty of farmers are talking about it and have seen the rise of The a2 Milk Company on the stock exchange, he says. Some worldwide companies are looking at it.
 “The only word of caution: is there going to be a flood of A2 milk and will there be a premium after that?”...
Lewis’s supply company Open Country Dairy is also looking at A2 milk.  “But so is everyone else around the world. With everyone looking at it and looking to breed A2 milk, what is going to happen to the supply and demand graph?” he cites as his main reservation.
Having just covered supply and demand in ECONS102 last week, we can answer that last question. What will happen to the supply and demand graph?

Consider the market diagrams below. On the left is the market for standard milk, and on the right is the market for A2 milk. Assuming the scales on the price axes are the same, you can see that there is a price premium for A2 milk (the farm-gate price of A2 milk, PA, is greater than the farm-gate price of standard milk, P0). Since the costs of running a herd of A2 cows probably aren't too much different from those of running a standard herd, the price premium for A2 milk translates into higher profits for farmers with A2 cows.


Consider what happens next, as shown in the second diagram below. Higher profits for A2 milk attract some farmers to switch from standard cows to A2 cows, as Lewis noted above. The supply of A2 milk increases, from SA to SB. This decreases the price of A2 milk from PA to PB. At the same time, the supply of standard milk decreases, from S0 to S1, which pushes the price of standard milk up from P0 to P1. Notice that the price premium has shrunk, which is obviously what Lewis is worried about.


Of course, the decrease in the price of A2 milk is based on the assumption that demand remains constant, but demand for A2 milk is increasing. The price will fall in absolute terms only if the increase in supply is greater than the increase in demand; otherwise (if the increase in demand is greater than the increase in supply), the price will rise. However, regardless of what happens to the price of A2 milk in absolute terms, the price of standard milk will increase as farmers exit that sub-market and move into A2 milk production. So, relative to the price of standard milk, it does seem likely that the price premium for A2 milk will eventually fall.
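We can put some (made-up) numbers on this story. With linear demand and supply curves for each sub-market, shifting the A2 supply curve out and the standard supply curve back shrinks the premium, just as in the diagrams:

```python
# Stylised linear supply and demand with made-up numbers. Inverse demand is
# P = a - b*Q and inverse supply is P = c + d*Q; they cross at equilibrium.
def equilibrium_price(a, b, c, d):
    q = (a - c) / (b + d)  # quantity where demand meets supply
    return a - b * q

# Before: A2 milk earns a premium over standard milk.
p_a2_before = equilibrium_price(a=12, b=1, c=2, d=1)    # A2 supply SA
p_std_before = equilibrium_price(a=10, b=1, c=2, d=1)   # standard supply S0
premium_before = p_a2_before - p_std_before

# After farmers switch: A2 supply shifts out (lower intercept), and
# standard supply shifts back (higher intercept).
p_a2_after = equilibrium_price(a=12, b=1, c=1, d=1)     # A2 supply SB
p_std_after = equilibrium_price(a=10, b=1, c=2.5, d=1)  # standard supply S1
premium_after = p_a2_after - p_std_after

print(f"A2 price:       {p_a2_before:.2f} -> {p_a2_after:.2f}")
print(f"Standard price: {p_std_before:.2f} -> {p_std_after:.2f}")
print(f"Premium:        {premium_before:.2f} -> {premium_after:.2f}")
```

With these numbers the A2 price falls, the standard price rises, and the premium shrinks from 1.00 to 0.25 - the direction of every change matching the diagrams, even though the specific magnitudes are arbitrary.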

Tuesday 24 July 2018

A stylised model of environment-GDP trade-offs

In ECONS101 last week, we covered constrained optimisation models. I spent most of the week's lectures working through examples with the consumer choice model, then at the end I introduced the worker's labour-leisure trade-off model, and then briefly the constrained optimisation model for savers (deciding between consumption today and consumption in the future). I also mentioned that the constrained optimisation model can be used in lots of other situations.

Over on The Visible Hand of Economics blog (good to see that it is back in business), Matt Nolan wrote a spirited defence of GDP as a measure of production, where at one point he wrote:
GDP is surprisingly nifty for these things … as long as we don’t always start with the unconditional prior “more GDPs are good”.  We need to ask “what is the trade-off that creates this GDP”.
Since trade-offs can often be described using the constrained optimisation models we covered in ECONS101 last week, it got me thinking about what such a model might look like. This is a stylised model so I'm making a number of un-stated assumptions, and there are a number of obvious criticisms of the model that you can read more about if you scroll to the notes at the bottom of the post.

First, we need two goods that our decision-maker has to choose between. In this case, let's consider a model of the trade-off between GDP and environmental quality. Our decision-maker in this case is a benevolent social planner (or the government, you choose). [*]

Next, we need to think about what constrains our decision-maker's decision about the quantity of GDP and quantity of environmental quality. In this case, it's resources. Resources (land, labour, capital variously defined) can be applied to produce stuff (increase GDP) or to increase environmental quality. This sets the trade-off as being more GDP (Y) or more environmental quality (Q) - if we want more production (higher GDP), then we end up with lower environmental quality, and if we want higher environmental quality, we end up with less production (lower GDP).

The constraint can be represented as a straight line on the diagram below. [**] To draw a straight line of course, we only need two points. One is the point where we use all of our resources on environmental quality and have no production at all (the point QMAX on the diagram); the other is the point where we use all of our resources on production and have no environmental quality at all (the point YMAX on the diagram). The trade-off between environmental quality and production is represented by the slope of the constraint - this is the opportunity cost of environmental quality (it is the amount of production we would have to give up to get one more unit of environmental quality, however it is measured). The opportunity cost of environmental quality is related to productivity, which is a point we will come back to a bit later in the post.

Lastly, we need to represent the decision-maker's preferences. As with other constrained optimisation models, we do this with indifference curves. Just like indifference curves in the consumer choice model (and other constrained optimisation models) the decision-maker is trying to get to the highest possible indifference curve, which is up to the right. [***] On the diagram below, the highest possible indifference curve is I0, and the optimal bundle of production and environmental quality is E0 (which includes Y0 production or GDP, and Q0 environmental quality). So, we can see that the optimum point contains some positive level of production, and some positive level of environmental quality.

Now we have constructed our model, we can do something useful with it. Let's consider the question: what happens when productivity increases? An increase in productivity means that we can produce more using the same amount of resources. In that case, the constraint in our model pivots outwards and becomes steeper, as in the diagram below. That's because, if we applied all of our resources to production, we could now produce more (so the constraint moves up on the y-axis from YMAX to YMAX1). The productivity increase doesn't affect the maximum environmental quality we could obtain (which stays at QMAX). The steeper constraint means that the opportunity cost of environmental quality has increased - we now need to give up more production for each unit of environmental quality than we did before.

What will the decision-maker do? They could keep operating at the previous optimum (E0), because it is in the feasible set. But they won't, because they can get to a higher indifference curve (remember that the decision-maker is trying to maximise utility by getting to the highest possible indifference curve). That highest possible indifference curve is now I1, and their optimum is the bundle E1 (which includes Y1 production or GDP, and Q1 environmental quality).

This result is interesting. First, notice that the opportunity cost of environmental quality has gone up. Usually, when the cost of something goes up, we would expect people to want less of it, but our decision-maker wants more (notice that Q1 is bigger than Q0). [****] That's because, when you change the relative price of two 'goods' (in this case, the relative price of environmental quality and production), two effects are simultaneously occurring: (1) a substitution effect; and (2) an income effect.

The substitution effect suggests that the decision-maker will want less environmental quality, because it has now become relatively more expensive (higher opportunity cost). Given that the quantity of environmental quality has actually increased, that tells us that the income effect is working in the opposite direction, and is bigger than the substitution effect. In this case, the income effect works like this: the pivoting outwards of the constraint increases the 'purchasing power' of the decision-maker's resources, in the sense that they can purchase more production and more environmental quality (we can refer to this as an increase in their real income). Both production and environmental quality are normal goods (goods that we want to consume more of when our incomes increase), so because the decision-maker's real income has increased, they want to consume more environmental quality as a result of this income effect.
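We can check this story numerically. The sketch below assumes CES preferences with a low elasticity of substitution (so that the income effect dominates the substitution effect) and grid-searches along the constraint; the functional form and the numbers are my own assumptions for illustration:

```python
# Numerical sketch of the model. Preferences are assumed to be CES with a
# low elasticity of substitution, so the income effect dominates.
Q_MAX = 100.0

def utility(y, q):
    # CES utility with rho = -1 (elasticity of substitution = 0.5)
    return 1.0 / (1.0 / y + 1.0 / q)

def optimum(y_max):
    """Grid-search the best Q on the constraint Y = y_max*(1 - Q/Q_MAX)."""
    best_q = max((q * 0.01 for q in range(1, 10000)),
                 key=lambda q: utility(y_max * (1 - q / Q_MAX), q))
    return y_max * (1 - best_q / Q_MAX), best_q

y0, q0 = optimum(y_max=100.0)  # before the productivity increase
y1, q1 = optimum(y_max=200.0)  # after: the constraint pivots out

print(f"Before: Y = {y0:.1f}, Q = {q0:.1f}")
print(f"After:  Y = {y1:.1f}, Q = {q1:.1f}")
# Both production AND environmental quality rise: the income effect
# outweighs the higher opportunity cost of environmental quality.
```

With these assumed preferences, both Y and Q increase when the constraint pivots out, exactly the pattern described in the text.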

So, that's a simple stylised model of environment-GDP trade-offs, with some illustration of how it can be used to explain decisions. It's an example of a constrained optimisation model, but in quite a different context from how it is usually presented in class.

*****

[*] Aggregating the preferences for many people into a single set of aggregate preferences leads to the problem described in Arrow's Impossibility Theorem. To avoid that problem (or to embrace it), we'll just assume there is a single decision-maker (who Kenneth Arrow labelled a 'dictator'), whose preferences (their indifference curves) represent those of society as a whole.

[**] A more realistic model would recognise that resources are not equally useful for production as for increasing environmental quality, and so the opportunity cost is not constant. That means that the constraint is not a straight line, but a curved line (bowed out away from the origin). Some of you would recognise that as a production possibilities frontier, and you would be right - this is a model of constrained production, not constrained consumption. Simplifying the model by assuming that the opportunity cost of environmental quality is constant just makes the model a bit easier to draw, but is otherwise not consequential in terms of the qualitative results we get from the model.

[***] The indifference curves here are curved because of diminishing marginal utility (just as they are in most constrained optimisation models). We gain extra utility (satisfaction) from more production, and from more environmental quality. However, the extra utility we get from one more unit (of production, or of environmental quality) diminishes as we get more of it. Maybe you would argue that there is not diminishing marginal utility of environmental quality (at least, some people would). However, you only need diminishing marginal utility of one of the goods in order for the indifference curves to be curved.

[****] Of course, it is also possible to draw the diagram in such a way that the quantity of environmental quality decreases after the change. The difference is purely a result of differences in the shape of the indifference curves. The case where environmental quality increases is more interesting, which is why I focus on it here.

Monday 23 July 2018

Population ageing can't be balanced by migration

A headline in The Conversation today attracted my attention for all the wrong reasons:
Migration helps balance our ageing population – we don’t need a moratorium
I'm not sure if the researcher who wrote the article (Liz Allen of Australian National University) should be held responsible for the headline, but the data in the article doesn't support the headline, and neither do years of research in Australia and New Zealand, including by Natalie Jackson and myself.

To illustrate, one of the key figures from the article is reproduced below (you can find the actual data in The Conversation article). The vertical axis shows the 'dependency ratio' (the number of people aged 0-14 years old or 65 years and over, for every 100 people of working age (15-64 years)). The different coloured lines track different population projections for the Australian population, based on different assumptions about annual net international migration (between zero migration - the red line, and 280,000 net international migration per year - the grey line). Notice that the dependency ratio increases regardless of the migration scenario. While the zero net migration scenario is the worst, there is clearly no 'balancing' of population ageing by international migration.


The reason for this is simple. International migrants may be younger (on average) than the domestic population, but migrants get older just like the domestic population does. In order to offset the ageing of the domestic population plus the ageing of the newly arrived migrants, you would need to increase migration even further. In fact, you would need accelerating net international migration in order to offset population ageing. That simply isn't realistic for mathematical reasons (you'd soon run out of young people internationally who wanted to move to your country), if not for political reasons.

This isn't a new insight for Australia or New Zealand. Rebecca Kippen and Peter McDonald wrote a number of papers in the late 1990s and early 2000s on this issue, based on Australian data (see here and here and here). Natalie Jackson and I had a paper published last year in the Journal of Population Ageing (ungated earlier version here), which included a similar analysis for New Zealand. In that paper, we wrote that:
...extremely high migration levels would have only minimal impact on the proportion of the population aged 65 years and over in 2068. Zero net migration (Scenario 8) would see around 28.1% aged 65 years and over in 2068, while net migration of 150,000 per year would reduce that to 23.6% (Scenario 1). The resulting populations would number around 5 million and 16.3 million respectively. Thus, the reduction of 4.6 percentage points in ageing (by comparison with the zero migration scenario) would come at a ‘cost’ of 11 million additional people. Similarly, the addition of 10.4 million migrants over the period 2013–2068 would reduce the proportion under the equivalent of Statistics New Zealand’s medium variant projection (Scenario 7) in 2068 by just 3.5 percentage points.
In other words, it requires unrealistic levels of net international migration to have an appreciable impact on population ageing. And you can see that for yourself if you look back at the diagram from Allen's article I reproduced above. The difference between 200,000 net international migration and 280,000 net international migration per year is almost imperceptible in terms of the impact on the dependency ratio. It would take millions of annual migrants to 'balance' the increasing dependency ratio arising from population ageing. And then, as I note above, millions more to offset the ageing of those migrants. And so on.
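The 'cost' of reducing population ageing through migration can be worked out directly from the scenario figures quoted above:

```python
# Arithmetic from the Jackson & Cameron NZ projections quoted above
# (Scenario 8: zero net migration; Scenario 1: 150,000 per year).
zero_migration = {"share_65plus": 28.1, "population_m": 5.0}
high_migration = {"share_65plus": 23.6, "population_m": 16.3}

ageing_reduction = zero_migration["share_65plus"] - high_migration["share_65plus"]
extra_people_m = high_migration["population_m"] - zero_migration["population_m"]
cost_per_point = extra_people_m / ageing_reduction

print(f"{ageing_reduction:.1f} pp less ageing")     # 4.5 (the paper reports 4.6, using unrounded shares)
print(f"{extra_people_m:.1f} million extra people")  # 11.3, the paper's 'around 11 million'
print(f"{cost_per_point:.1f} million people per percentage point")
```

Every percentage point shaved off the 65+ share 'costs' roughly two and a half million additional people, which is why the migration scenarios converge so quickly on 'unrealistic'.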

Notwithstanding all the analysis in the papers I mentioned above, international migration actually could be a solution to population ageing. However, this solution only presents itself if the migration is the outward migration of older people (not the inward migration of younger people). As far as I know, no one is yet advocating for rounding up oldies and jetting them off overseas to see out their remaining days, in order to lessen the burden on the working age population.

So, forget balancing population ageing with migration - it isn't going to happen.

Sunday 22 July 2018

Digging holes and filling them in again

This week in ECONS102, among other things we will be discussing the diminishing marginal product of labour. The example I use to illustrate this concept is a simple firm that digs holes, transports the dirt to the other side of the site, and fills in the previous day's holes. It's a ludicrous example, but it turns out to be not without precedent, thanks to this example from the clean-up of the California wildfires of 2017:
Over the next seven and a half months, contractors worked across Sonoma, Mendocino, Napa and Lake counties, where they scraped 2 million tons of soil, concrete and burned-out appliances from 4,563 properties, loaded it all into dump trucks, and hauled it away.
In the end, the government-run program was the most expensive disaster cleanup in California history. The project, managed by the Army Corps of Engineers, totaled $1.3 billion, or an average of $280,000 per property. The bulk of that $1.3 billion comes from the Federal Emergency Management Agency (FEMA), but state and local governments are also responsible for about $130 million...
The Army Corps of Engineers said the high cost of the project was necessary to ensure a safe and effective cleanup. But KQED found that these multimillion-dollar federal cleanup contracts actually incentivized unsafe and destructive work...
Critics say many of the problems with the project -- high cost, safety lapses and over-excavation -- are linked to the primary incentive structure that the Army Corps put into place: paying by the ton.
Contracts reviewed by KQED show that the Army Corps of Engineers paid upward of $350 per ton for wildfire debris. Dan’s truck could haul about 15 tons. That’s more than $5,000 per load -- a powerful financial incentive to haul as much heavy material as possible as quickly as possible.
Dan said he saw workers inflate their load weights with wet mud. Sonoma County Supervisor James Gore said he heard similar stories of subcontractors actually being directed to mix metal that should have been recycled into their loads to make them heavier.
“They [contractors] saw it as gold falling from the sky,” Dan said. “That is the biggest issue. They can’t pay tonnage on jobs like this and expect it to be done safely.”...
Paying contractors by the ton incentivizes them to haul away as much dirt, rocks and concrete as they can.
“It's such a needless waste of our society's resources to pay by the ton,” said Sonoma County contractor Tom Lynch, who was an early and vocal critic of the program.
So many sites were over-excavated that the Governor’s Office of Emergency Services recently launched a new program to refill the holes left behind by Army Corps contractors. That’s estimated to cost another $3.5 million.
As Steven Levitt and Stephen Dubner noted in their book Think Like a Freak (which I reviewed here), no individual or government will ever be as smart as all the people out there scheming to take advantage of an incentive plan. So, when you pay contractors for every ton of debris they remove, you create an incentive for contractors to maximise the number of tons of debris they remove. Sounds good in theory, but nobody should be surprised that the contractors 'find' additional tons of 'debris' to remove.

[HT: Marginal Revolution]

Saturday 21 July 2018

The ancestral characteristics of modern populations

Economic development is remarkably persistent. There is plenty of research demonstrating that historical patterns of development are predictive of current patterns of development (for example, refer to the research by Daron Acemoglu and James Robinson, as detailed in their book Why Nations Fail, which is on my long list of books-waiting-to-be-read).

Paola Giuliano (UCLA) and Nathan Nunn (Harvard) have a new dataset that, as far as I can see, has enormous potential for looking at a wide range of questions in development, as well as providing a host of candidate variables for use as instruments in otherwise-unrelated analyses. The development of the dataset is described in an article published earlier this year in the journal Economic History of the Developing Regions (ungated version here). The dataset itself is available from Nathan Nunn's website here.

The journal article by Giuliano and Nunn explains:
We contribute to this line of research by providing a publicly accessible database that measures the economic, cultural, political, and environmental characteristics of the ancestors of current population groups... Specifically, we construct measures of the average pre-industrial characteristics of the ancestors of the populations in each country of the world. The database is constructed by combining preindustrial ethnographic information for approximately 1,300 ethnic groups with information on the current distribution of approximately 7,500 language groups measured at the grid-cell level.
Giuliano and Nunn then go on to describe the dataset, as well as providing illustrations of the data. What particularly caught my eye was a brief analysis they did of the relationship between their historical geographic characteristics (meaning the average ancestral characteristics of populations living in current countries) and current GDP. They find that:
Not surprisingly, being further from the equator is positively associated with real per capita GDP. However, what is more surprising is that the ancestral measure appears to be much more strongly correlated than the contemporary measure. This is particularly striking since we would expect the ancestral measure to be more imprecisely measured than the contemporary measure.
They find similar results for ancestral ruggedness of the land, and ancestral distance from the coastline. The reason these results caught my eye is that they suggest these variables might be suitable instruments for GDP in other analyses (such as when GDP would be endogenous in the particular model you are trying to run). If that was a bit too pointy-headed for you, don't worry. It just suggests that these variables have a lot of potentially cool uses for economists.
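For the pointy-headed, here is a minimal sketch of what 'using ancestral geography as an instrument for GDP' means in practice. The data are simulated and the variable names are hypothetical (this is not the Giuliano-Nunn dataset), but it shows how two-stage least squares can recover a causal effect that naive OLS gets wrong when an unobserved confounder drives both GDP and the outcome:

```python
# Toy two-stage least squares (2SLS) with an ancestral-geography instrument.
# All data are simulated; nothing here comes from the actual dataset.
import numpy as np

rng = np.random.default_rng(0)
n = 500
ancestral_distance = rng.normal(size=n)   # instrument: ancestral distance from the equator
confounder = rng.normal(size=n)           # unobserved factor driving both GDP and the outcome
log_gdp = 0.8 * ancestral_distance + confounder + rng.normal(size=n)
outcome = 1.5 * log_gdp + 2.0 * confounder + rng.normal(size=n)  # true effect of GDP = 1.5

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = ols_slope(log_gdp, outcome)       # biased upward by the confounder
# Stage 1: predict GDP from the instrument. Stage 2: regress the outcome
# on the prediction, which contains only the instrument-driven variation.
gdp_hat = ols_slope(ancestral_distance, log_gdp) * ancestral_distance
iv = ols_slope(gdp_hat, outcome)
print(f"naive OLS: {naive:.2f}, 2SLS: {iv:.2f}")  # 2SLS lands near the true 1.5
```

The instrument only works, of course, if ancestral geography affects the outcome solely through GDP, which is exactly the assumption you would need to defend in a real application.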

[HT: Marginal Revolution]

Friday 20 July 2018

Of mice and men

When considering a decision about whether to do something or not, we are thinking about the future. For example, say that we are managing a firm that has an ongoing project and we are considering whether to persist with the project or to stop the project and divert the resources to an alternative project. In this case, we should only be considering the future. Costs (and benefits) that have already occurred and that cannot be recovered are sunk costs. They should not affect our decision-making. And yet, so often they do.

Richard Thaler, the 2017 Nobel Prize winner whose work is neatly summarised in his book Misbehaving: The Making of Behavioral Economics (which I reviewed here), says that the sunk cost fallacy arises because of a combination of loss aversion and mental accounting.

In general, people are loss averse: we value losses more than we value equivalent gains. Losing $10 makes us unhappier than gaining $10 makes us happier. So, we generally try to avoid losses.
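This asymmetry can be illustrated with the value function from Tversky and Kahneman's prospect theory. The sketch below uses their 1992 parameter estimates (alpha = 0.88, lambda = 2.25); the specific functional form is theirs, not something drawn from Thaler's book:

```python
# Prospect theory value function (Tversky & Kahneman, 1992 parameters).
def value(x, alpha=0.88, loss_aversion=2.25):
    """Felt value of a gain or loss of x dollars."""
    if x >= 0:
        return x ** alpha
    return -loss_aversion * ((-x) ** alpha)

gain = value(10)    # felt value of gaining $10
loss = value(-10)   # felt value of losing $10
print(f"gain: {gain:.2f}, loss: {loss:.2f}")  # the loss looms about 2.25 times larger
```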

Mental accounting suggests that we keep 'mental accounts' associated with different activities. We put all of the costs and benefits associated with the activity into that mental account, and when we stop that activity, we close the mental account associated with it. But if the mental account has more costs in it than benefits, it is a loss. And because we are loss averse, we try to avoid closing the account.

So, you can see why a manager might be reluctant to stop a project that is incomplete, even if (and maybe especially if) it has cost a lot so far. Sunk costs may not affect the decision-making of a purely rational decision-maker, but for someone who is quasi-rational (and therefore affected by loss aversion and mental accounting), the sunk costs are relevant to their decision.
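The purely rational decision rule is easy to state in code. In this sketch (with hypothetical dollar figures), the sunk cost appears as an argument only to emphasise that it never affects the answer:

```python
# Cost-benefit principle applied to the persist-or-stop decision.
def should_continue(future_benefit, future_cost, sunk_cost=0):
    """A rational decision ignores sunk_cost entirely: continue only if
    the remaining benefits cover the remaining costs."""
    return future_benefit - future_cost >= 0

# The project has swallowed $900k so far, needs $300k more to finish,
# and will return $250k once complete. Stop, regardless of the $900k.
print(should_continue(future_benefit=250, future_cost=300, sunk_cost=900))  # False
# The answer is identical with sunk_cost=0; the $900k is irrelevant.
```

The quasi-rational manager, by contrast, behaves as if the $900k belongs in the calculation, because stopping means closing the mental account at a loss.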

Now, it seems that humans are not the only creatures subject to the sunk cost fallacy. New research, reported in the New York Times last week, suggests that mice have the same problem:
This “sunk cost fallacy,” as economists call it, is one of many ways that humans allow emotions to affect their choices, sometimes to their own detriment. But the tendency to factor past investments into decision-making is apparently not limited to Homo sapiens.
In a study published on Thursday in the journal Science, investigators at the University of Minnesota reported that mice and rats were just as likely as humans to be influenced by sunk costs.
The more time they invested in waiting for a reward — in the case of the rodents, flavored pellets; in the case of the humans, entertaining videos — the less likely they were to quit the pursuit before the delay ended.
“Whatever is going on in the humans is also going on in the nonhuman animals,” said A. David Redish, a professor of neuroscience at the University of Minnesota and an author of the study. 
So take heart. You may not be purely rational but, in the animal kingdom, you're not alone.

[HT: Marginal Revolution]

Wednesday 18 July 2018

Perpetual Guardian, four-day workweeks and the Hawthorne effect

Back in February, the trust management company Perpetual Guardian caused a bit of a media stir by announcing that it would trial moving all of its employees to a four-day week (with no salary reductions). Importantly, they also announced that they would evaluate the trial. We were let in on the results of that trial today, as the New Zealand Herald reported:
The Kiwi boss who trialled giving his staff a full salary for four days' work says it was a success and that he wants it to become permanent at his Auckland company.
Andrew Barnes, the chief executive at Perpetual Guardian, says he's already made a recommendation to his board to take the policy beyond the initial eight-week trial...
During March and April, Perpetual Guardian conducted what was essentially a corporate experiment in allowing the company's 240-person staff to retain full pay as well as a three-day weekend.
To ensure an objective analysis, Barnes invited academic researchers Jarrod Haar, a professor of human resource management at AUT, and Dr Helen Delaney, a senior lecturer at the University of Auckland Business School, into the building to observe the impact of the trial on the workforce.
From the outset, there was always the risk that reducing work hours would increase the stress on staff to achieve objectives while also leading to lower levels of output as working time was cut by a fifth.
But, as the trial rolled on, the researchers found quite the opposite to occur. 
"What we've seen is a massive increase in engagement and staff satisfaction about the work they do, a massive increase in staff intention to continue to work with the company and we've seen no drop in productivity," said Barnes.
You can read Jarrod Haar's report (or at least a brief version of it) here. However, the key point that makes me skeptical of taking too much away from this trial is that the data are based on surveys of staff and supervisors.

Think about the incentives here. Your boss offers to reduce your workweek to four days as a trial, with no decrease in your pay, and announces that they will be evaluating the impact of that trial. You're then asked to fill out a survey just before the trial starts, and then again just after the trial ends. The survey asks a bunch of questions about how you feel about your job (and other related stuff).

You know this is just a trial. If the trial goes well, then it's likely your boss will want to make the change permanent. If the trial doesn't go well, then you're probably back to working a five-day week. What would you do?

It doesn't take a PhD to work out that the staff survey data are basically worthless here. You want something that the staff can't game. The supervisors' survey answers are no more valuable, since the supervisors have the same incentives to game their responses as the staff do. Maybe you could observe behaviour, or measure workplace productivity? Nice try, but if the staff know what you're measuring, they will game that too.

This is an example of the Hawthorne effect, which The Economist does a great job of explaining:
The experiments took place at Western Electric's factory at Hawthorne, a suburb of Chicago, in the late 1920s and early 1930s. They were conducted for the most part under the supervision of Elton Mayo, an Australian-born sociologist who eventually became a professor of industrial research at Harvard.
The original purpose of the experiments was to study the effects of physical conditions on productivity. Two groups of workers in the Hawthorne factory were used as guinea pigs. One day the lighting in the work area for one group was improved dramatically while the other group's lighting remained unchanged. The researchers were surprised to find that the productivity of the more highly illuminated workers increased much more than that of the control group.
The employees' working conditions were changed in other ways too (their working hours, rest breaks and so on), and in all cases their productivity improved when a change was made. Indeed, their productivity even improved when the lights were dimmed again. By the time everything had been returned to the way it was before the changes had begun, productivity at the factory was at its highest level. Absenteeism had plummeted.
The experimenters concluded that it was not the changes in physical conditions that were affecting the workers' productivity. Rather, it was the fact that someone was actually concerned about their workplace, and the opportunities this gave them to discuss changes before they took place.
The Perpetual Guardian situation isn't quite the same as the original Hawthorne experiments, but in general we refer to the potential for Hawthorne effects as occurring whenever you conduct an experiment in a workplace and the workers know they are being monitored more closely than usual.

How do you get around this problem? You need to find something that the workers will find more difficult to game. For instance, if you think the four-day week will reduce workplace stress, you could test workers' cortisol levels before and after the trial. It is much more difficult for workers to game a biophysical response than a survey response.

So, the takeaway message here is: don't read too much into the Perpetual Guardian trial. If they roll out the four-day week on a more permanent basis, it will be interesting to see if they still think it's a good idea after a year or two.

[Update: Jarrod Haar wrote an article about the research on The Conversation. No limitations such as those I have highlighted are mentioned.]

Tuesday 17 July 2018

The benefits (or not) of school uniforms

I've blogged a couple of times about school uniforms (see here and here), mostly to highlight the negative impacts of giving firms market power. Those posts focus on the costs of introducing school uniforms (or more accurately, the costs of introducing school uniforms and then having a single monopoly seller of those uniforms). What about the benefits? Many people argue that school uniforms have a number of benefits (see for example here or here). Is there evidence to support those assertions?

A 2012 paper by Elisabetta Gentile and Scott Imberman (both University of Houston), published in the Journal of Urban Economics (ungated earlier version here), provides us with some evidence. Gentile and Imberman use data from "a large urban school district in the southwest United States" with "more than 200,000 students and close to 300 schools", and look at the impacts of school uniform policies on school attendance, the rate of disciplinary infractions, suspensions (in-school and out-of-school), and achievement in maths, reading, and language. They look at both elementary schools and middle/high schools. Importantly, over the period they look at (1993-2006):
Initially, only a handful of schools required uniforms. However, uniform adoption grew substantially over the following 13 years. Of schools that responded to our survey of uniform policies, which we describe in more detail below, only 10% required uniforms in 1993. By 2006, 82% of these schools required uniforms. In addition, no schools abandoned uniforms after adoption.
So, we know there is sufficient variation in school uniform policies that we can essentially make before-and-after comparisons, within each school, of the effects of adopting a school uniform policy. After controlling for student, school, and principal characteristics, they find:
For elementary students we find little evidence of uniforms having impacts on attendance or disciplinary infractions... On the other hand, for middle and high school students, we find significant improvements in attendance rates, particularly for females... female attendance increases by a statistically significant 0.3 percentage points after uniform adoption. This is equivalent to an additional 1/2 day of school per year in a 180 day school-year... For disciplinary infractions estimates for middle/high school students are similar to those for elementary students. 
In other words, there was some evidence that uniforms are associated with greater school attendance (for middle/high school students), but no association with discipline (including suspensions). Interestingly, they also find that:
...attendance improvements mainly accrue to students who are economically disadvantaged, particularly those who are in high poverty schools.
Given that attendance improves, and improves most among disadvantaged students, does this translate into better student achievement? Unfortunately, their:
...results indicate that uniforms have little impact on achievement gains.
There's also no evidence of an impact on students switching schools (either to avoid uniforms or to get into a school that has uniforms) and no association with grade retention. Overall, there is little evidence to support claims that school uniforms reduce bullying or violence. And while it might be good if school uniforms increase school attendance, that isn't much benefit if it isn't reflected in learning gains.

Saturday 14 July 2018

The success of smiling football teams and scientists

Which of these two groups will win on Monday morning (New Zealand time)?



Can you tell just by the photos which team is more likely to win? For instance, if the players are smiling, does that indicate self-confidence and a higher likelihood of victory? If they're striking a more angry facial expression, does that demonstrate strength and determination?

In a new paper published in the Journal of Economic Psychology (ungated earlier version here), Astrid Hopfensitz (University of Toulouse Capitole) and Cesar Mantilla (Universidad del Rosario, Colombia) looked at data from player photos (from the Panini stickers collections) for every World Cup from 1970 to 2014. First, using automated software (FaceReader), they identified:
...the activation level of six basic emotions: anger, happiness, disgust, fear, sadness, and surprise, which are non-exclusive.
They then tested whether those emotions (averaged at the team level, rather than individually) were associated with team success in the World Cup, for the 304 teams that took part in those tournaments. They found that:
...display of anger as well as happiness is positively correlated with a favorable goal difference (i.e. more goals scored than conceded). This correlation is robust to the inclusion of our control variables... We also observe the standardized display of anger and happiness is negatively correlated with the overall ranking in the World Cup... That is, teams that display either more anger or happiness, reach an overall better position in the whole tournament...
We observe a clear difference with respect to the two emotions. While the display of happiness is linked to the scoring of goals... anger is linked to conceding fewer goals...
Interestingly, when they separate the analyses for defensive and offensive players, they find that:
...the display of happiness is still predictive in each sub-group. By contrast, the display of anger remains predictive only for defensive players, and for one of the outcomes.
Teams with happy offensive players do well, and teams with happy (or to a lesser extent, angry) defensive players do well.

Now, I know you're scrolling back up to check the France and Croatia teams to see who is smiling more [*], but before you do, you should know that the links between smiling and success are an example of correlation, not causation. There isn't anything in the study to suggest that smiling causes success, even though you can tell a plausible story about it.

However, you should also know that the correlation between smiling and success isn't limited to football. In another new paper, published in the Journal of Positive Psychology (ungated version here), Lukasz Kaczmarek (Adam Mickiewicz University, Poland) and co-authors looked at the correlation between smiling and success for scientists. Using data for 220 male and 220 female scientists taken from the research social networking site ResearchGate, Kaczmarek et al. first coded whether the researchers were smiling in their profile picture, and then looked at whether that was related to a range of research metrics. They found that:
As expected, smile intensity was significantly related to the number of citations, the number of citations per paper, and the number of followers after controlling for age and sex... Smile intensity was not significantly related to the number of publications produced by the author or the number of publication reads.
It is plausible that there is causality working in two directions here. More successful researchers (those whose papers are cited more often) are more likely to be smiling, happy people (explaining the correlation between citations and smiling), while smiling, happy researchers are more likely to entice other people to follow them on a social network (explaining the correlation between followers and smiling). However, more work would need to be done to establish whether those explanations hold for a larger sample.

Either way, both studies suggest a strong correlation between smiling and success. Go Croatia!

[HT: Marginal Revolution, for the Kaczmarek et al. paper]

*****

[*] Please note that I take no responsibility for the outcomes of any bets you make as a result of your new knowledge about successful smiling footballers.

Thursday 12 July 2018

Oh, the places you’ll go!

In economics, the cost-benefit principle is the idea that a rational decision-maker will take an action if, and only if, the incremental (extra) benefits from taking the action are at least as great as the incremental (extra) costs. We can apply the cost-benefit principle to find the optimal quantity of things (the quantity that maximises the difference between total benefits and total costs). When we do this, we refer to it as marginal analysis.

Marginal analysis challenges the idea that we are always better off with more of things. Yes, we might like there to be more white rhinos, but if there was one living in every front yard, we'd probably regret it. More is not always better.

The easiest way to understand marginal analysis is to see it in action. A recent article in The Economist provides us with a good example:
When it comes to habitat, human beings are creatures of habit. It has been known for a long time that, whether his habitat is a village, a city or, for real globe-trotters, the planet itself, an individual person generally visits the same places regularly. The details, though, have been surprisingly obscure. Now, thanks to an analysis of data collected from 40,000 smartphone users around the world, a new property of humanity’s locomotive habits has been revealed.
It turns out that someone’s “location capacity”, the number of places which he or she visits regularly, remains constant over periods of months and years. What constitutes a “place” depends on what distance between two places makes them separate. But analysing movement patterns helps illuminate the distinction and the researchers found that the average location capacity was 25. If a new location does make its way into the set of places an individual tends to visit, an old one drops out in response. People do not, in other words, gather places like collector cards. Rather, they cycle through them. Their geographical behaviour is limited and predictable, not footloose and fancy-free.
When it comes to the number of locations we visit, there appears to be an optimal number and that optimal number is 25. Why? Consider the costs and benefits of adding one more location to the number that you regularly visit. We can refer to those costs and benefits as marginal costs and marginal benefits. When economists refer to something as marginal, you can think of it as being associated with one more unit (in this case, associated with one more location that you regularly visit).

The marginal benefit of locations declines as you add more locations to your regular routine. Why is that? Not all locations provide you with the same benefit, and you probably go to the most beneficial places most often. So naturally, the next location you add to your regular routine is going to provide less additional benefit (less marginal benefit) than all of the other locations you already visit regularly. So, as shown in the diagram below, the marginal benefit (MB) decreases as you include more locations in your routine.

The marginal cost of locations increases as you add more locations to your regular routine. Why is that? Every location you choose to go to entails an opportunity cost - something else that you have given up in order to go there. When you add a new location to your routine, you are probably giving up spending some time at one of the other locations you were already going to, which provides you with a high benefit. The more locations you add, the more you need to cut into your time at high-benefit locations. So, as shown in the diagram below, the marginal cost (MC) increases as you include more locations in your routine.

The optimal number of locations occurs at the quantity of locations where marginal benefit exactly meets marginal cost (at Q*). If you regularly visit more than Q* locations (e.g. at Q2), then the extra benefit (MB) of visiting those locations is less than the extra cost (MC), making you worse off. If you regularly visit fewer than Q* locations (e.g. at Q1), then the extra benefit (MB) of visiting those locations is more than the extra cost (MC), so visiting one more location would make you better off.
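A simple numerical sketch makes the point. The MB and MC functions below are hypothetical, chosen only so that the optimum lands near the 25 locations found in the study:

```python
# Marginal analysis: find Q*, where marginal benefit meets marginal cost.
# Both functions are illustrative assumptions, not estimates from the paper.
def marginal_benefit(q):
    return 50 - 1.6 * q   # MB declines with each additional regular location

def marginal_cost(q):
    return 2 + 0.32 * q   # MC rises with each additional regular location

# Q* is the last location whose extra benefit still covers its extra cost.
q_star = max(q for q in range(1, 100) if marginal_benefit(q) >= marginal_cost(q))
print(q_star)  # 25
```

Visiting a 26th location would add less benefit than it costs, while stopping at 24 would leave a net benefit on the table.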


And, it turns out, the optimal number of locations (Q*) is limited to roughly 25.

Tuesday 10 July 2018

Congratulations Dr Gemma Piercy-Cameron

Yesterday, my wife had her PhD oral examination. She passed with flying colours, giving a very impressive summary of her in-depth research on baristas' work identity (the title of her thesis is "Baristas: The Artisan Precariat"), as well as a very impressive display of answering the examiners' questions. Her examination was a model of how an excellent student can demonstrate their deep topic knowledge, their understanding of the relevant literature and how it relates to their own work, and the advantages and limitations of their chosen research methods. It was clear throughout the examination that the examiners were very impressed, and I was not surprised to hear that she passed with only minor revisions required to her thesis (and even the minor revisions are very minor, and will take no more than a couple of hours to action).

Gemma has worked incredibly hard to get to this point, not helped by her ongoing chronic health issues. She should be very proud of her achievement, as I am of her.

For the record, here's the abstract from her thesis (noting that it is a thesis in labour studies, not economics):
My research in the work identit(ies) of baristas demonstrates that different workplaces, in conjunction with individual biographies, produce different kinds of work identities. Connected to these differences are the actual and perceived levels of skill and/or social status ascribed to workers within the service work triadic relationship (customers, co-workers/managers and workers). The higher the level of skill or social status, the greater the capacity of workers to experience more autonomy in their work and/or access improved working conditions. These findings are informed by my research approach, which incorporates three key methods: key informant interviews; observation/participant observation; life history interviews; all of which is underpinned by the mystory approach and autoethnography.
My findings from the case study research of the barista work identity are expressed in the following. (1) The different work identities within a specific occupation contribute to the heterogeneity of service workers and service work. This heterogeneity, in turn, obscures the range of skills utilised in the technical and presentational labour mobilised in service work. The skills are obscured by the social and practice-based nature of knowledge transmission in service work like that of baristas, as well as by the dynamic and shifting alliances that may occur in the triadic relationship of customer, workers and employer/manager. (2) Interactive service workers are involved in providing labour or work that is more complex than is socially understood and recognised. This complexity stems from the ways in which presentational labour is commodified, appropriated and mobilised in the workplace within the spaces of the organisational context, internal practices, and the service encounter. (3) I further argue that service workers are also dehumanised as part of the service encounter through the structure of capitalism, specifically the application of commodity fetishism to workers by customers, colleagues, managers, capital and at times themselves. Commodity fetishism dehumanises workers, creating an empathy gap between customers/managers and workers. As such, the commodity fetishisation of service workers also reinforces and promotes compliance with the insecure and precarious employment practices common to occupations in the service sector. (4) As the conditions of precarious work continue to spread, the employment relationship is being altered in relation to consumption practices. Based on this shift in employment relations, I argue that we are moving towards a labour market and society shaped by the practices of the consumer society as well as the traditional production-based economy. 
However, the increasing influence of consumption practices stems from neoliberal inspired changes in employment relationships rather than the consumer society emphasis on agentic identity projects. As such, the self-determining identity projects highlighted by researchers engaged in aspects of service work and consumption-based research also need to be accompanied by an understanding of the political economy and structural forces which shape the labour process. 
Congratulations Dr Piercy-Cameron!

Sunday 8 July 2018

Auckland Airport passport control's accidental nudge fail

I just got back from Europe, where I was attending the EduLearn 2018 conference. On arriving back at Auckland Airport, we were confronted as usual with rows of SmartGate (or eGate) machines, which scan your e-passport and take a photo of you, rather than your passport being physically checked by an officer. The SmartGates at Auckland Airport are now usable by many different passport holders (which caused me some disquiet when we arrived in London to find that we couldn't use the same facilities there, and yet UK citizens can do so in New Zealand - whatever happened to reciprocity?). Anyway, I digress.

Nudge theory was brought to prominence by Richard Thaler and Cass Sunstein's excellent 2008 book Nudge. The idea is that relatively subtle changes to the decision environment can have significant effects on behaviour. If I remember correctly, one of their examples was the difference between opt-in and opt-out retirement savings schemes, where opt-out schemes have much higher enrolment rates compared with otherwise-identical opt-in schemes. One of the most important insights of behavioural economics and nudge theory is the idea that how a decision is framed can make a difference to our decisions. Governments are making increasing use of nudges to modify our behaviour, including the Behavioural Insights Team in the UK, and similar efforts in the U.S. and Australia.

Not all nudges are intentional or helpful though, as we discovered at Auckland Airport passport control. Above the SmartGate machines was a helpful row of the flags representing all the nations whose passport holders could use SmartGate. However, these flags were lined up in groups of three or four (or two, in the case of Australia and New Zealand), with each group of flags located above a corresponding group of SmartGate machines (I'm really sad I can't share a photo, because of laws prohibiting photography in this area, so you'll have to make do with my description). Unsurprisingly, this gave a strong impression to arriving passengers that they should go to the machines corresponding to the flag of their passport. Passport control framed our decision about which machine to choose by making it seem that the flags mattered. In actuality, all SmartGate machines could handle any of the e-passports.

So, when my wife and I arrived at passport control, there was a huge line for the machines with the New Zealand and Australian flags, and virtually no lines at all for the machines with the European flags. We weren't caught out by this unintentional nudge (because we knew that all of the machines worked the same, and we were willing to buck the trend and not line up in the 'New Zealand and Australia' line), and managed to substantially jump the queue.

I wonder how long it will take for Auckland Airport (or Customs or whoever controls that area) to realise their error and correct it? I'm off to Ireland in August for another conference, so I guess I will see then.

Thursday 5 July 2018

This couldn't backfire, could it?... Paying farmers not to grow coca edition

One of my favourite topics in my ECONS102 class is the first topic, in part because we spend a bit of time considering the unintended consequences of (often) otherwise well-meaning policies. And there are so many examples to choose from, I just can't fit them all into that class. Here's a new one from The Economist last week:
The increased cultivation of coca in Colombia defied expectations that the government’s peace deal with the FARC guerrillas, who relied financially on drug trafficking, would curtail the cocaine trade. One explanation is a textbook example of the law of unintended consequences. The peace agreement required the government to make payments to coca farmers who switched to growing other crops. This wound up creating a perverse incentive for people to start planting coca, so they could receive compensation later on for giving it up.
As we discuss in my ECONS102 class, unintended consequences arise because of the incentives that a policy creates. The incentives to change behaviour arise because the benefits and/or costs have changed. In this case, the benefits of planting coca increased, because farmers could then switch to another crop and claim the payment from the government. If they weren't growing coca, they couldn't claim the payment, so an incentive was created to plant more coca. So the payments that were supposed to encourage farmers to plant less coca actually encouraged farmers to plant more coca.
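The perverse incentive can be seen in a stylised payoff calculation (the dollar figures here are invented purely for illustration): as long as the switching payment exceeds the cost of planting, a farmer who was not growing coca does better by planting it first and then claiming the payment.

```python
# Hypothetical figures for illustration only - not actual Colombian numbers.
PLANTING_COST = 500       # cost of establishing a coca plot
SWITCHING_PAYMENT = 2000  # government payment for switching away from coca

def payoff_if_never_plants():
    """A farmer who never grows coca receives no payment."""
    return 0

def payoff_if_plants_then_switches():
    """A farmer who plants coca, then claims the payment for giving it up."""
    return SWITCHING_PAYMENT - PLANTING_COST

# Planting and then switching dominates never planting at all, so the
# policy rewards exactly the behaviour it was meant to discourage.
print(payoff_if_plants_then_switches())  # 1500
print(payoff_if_never_plants())          # 0
```

Whenever the payment is set above the cost of planting (which it must be, or no existing coca farmer would take it), this dominance holds, so the perverse incentive is built into the design of the scheme.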

Of course, this has then led to an increase in the supply of coca, and a consequent increase in the supply of cocaine. Which is exactly the opposite of what was intended.

However, the effect here could be limited to the short run, because farmers can only switch away from planting coca once. So, the incentive to plant coca only occurs until the farmers have switched to something else. Supply should eventually fall. Although it wouldn't surprise me to learn that the incentive payment has been structured in such a way that clever farmers can claim it more than once!
As Steven Levitt and Stephen Dubner noted in their book Think Like a Freak (which I reviewed here), no individual or government will ever be as smart as all the people out there scheming to take advantage of an incentive plan.

Sunday 1 July 2018

New results questioning the beauty premium should be treated with caution

The beauty premium - the empirical finding that people who are more attractive earn higher incomes - seems to be a fairly robust result in labour economics. Daniel Hamermesh's book Beauty Pays: Why Attractive People are More Successful (which I reviewed here) does a great job of summarising the literature, much of which Hamermesh himself has extensively contributed to.

However, in a new paper published in the Journal of Business and Psychology (ungated version here), Satoshi Kanazawa (LSE) and Mary Still (University of Massachusetts - Boston) question the existence of the beauty premium. Using data from several waves of the Add Health survey, and based on a sample size of around 15,000, they look at how attractiveness (as rated by interviewers at ages 16, 17, 22, and 29) is associated with earnings at age 29. Attractiveness is rated on a five-point scale (very unattractive, unattractive, average, attractive, or very attractive). Most previous studies have grouped the bottom two groups (very unattractive and unattractive) into a single category for analysis, due to the small number in the very unattractive category. Kanazawa and Still keep these two groups separate, and it is here that their results differ strikingly from the previous literature:
...it is clear that the association between physical attractiveness and earnings was not at all monotonic, as predicted by the discrimination hypothesis. In fact, while there is some evidence of the beauty premium in Table 4, where attractive and very attractive Add Health respondents earn slightly more than the average-looking respondents, there is no clear evidence for the ugliness penalty, as very unattractive respondents at every age earn more than either unattractive or average-looking respondents.
In other words, there is some evidence for a 'very-unattractive premium' in their data. The results are not driven by outliers, because the same pattern holds for median and mean earnings. They explain this apparent very-unattractive premium as being due to very unattractive Add Health respondents at age 16 being significantly more intelligent, and obtaining more education, than unattractive or average-looking respondents. Once they control in their analysis for education, intelligence, and personality traits, the beauty premium (compared with the very unattractive group) becomes statistically insignificant.

However, there is good reason to treat these results with caution. The sample size was around 15,000 in each year, but only around 200 of those respondents were rated as very unattractive. Kanazawa and Still rightly point out that the small numbers will make us fairly uncertain about the average earnings of that group:
Just like earlier surveys of physical attractiveness, very few Add Health respondents were in the very unattractive category (ranging from 0.9% at 17 to 2.7% at 29). As a result, the standard error of earnings among the very unattractive workers tended to be very large, which prompted earlier researchers in this field to collapse very unattractive and unattractive categories into a below average category. However, the very small number of very unattractive respondents and their large standard errors actually strengthened, rather than weakened, our conclusion because standard errors figured into all the significant tests in the pairwise comparisons. Very unattractive workers earned statistically significantly more than unattractive and average-looking workers despite the large standard errors.
That last point is correct, but you can't have it both ways here. Very unattractive workers may earn statistically significantly more on average than unattractive or average-looking workers in spite of the large standard errors, but one of their other key results is the statistical insignificance of the beauty premium (compared with very unattractive workers) once you control for intelligence, education and personality traits. That null result could be driven by the large standard errors on the very unattractive group. In general, null results are much more difficult to justify because, as in this case, they can be driven purely by a lack of statistical power.
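To see why a group of only ~200 respondents can drive a null result, here is a minimal simulation (with entirely made-up earnings parameters) comparing the standard error of mean earnings for a group of 200 against a group of 15,000:

```python
import math
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def simulated_group_earnings(n, mean=50000, sd=20000):
    """Simulate earnings for a group of n workers (hypothetical parameters)."""
    return [random.gauss(mean, sd) for _ in range(n)]

def standard_error(sample):
    """Standard error of the sample mean."""
    m = sum(sample) / len(sample)
    var = sum((x - m) ** 2 for x in sample) / (len(sample) - 1)
    return math.sqrt(var / len(sample))

# The 'very unattractive' group is tiny (~200) relative to the full sample.
se_small = standard_error(simulated_group_earnings(200))
se_large = standard_error(simulated_group_earnings(15000))

# The small group's standard error is several times larger, so any
# comparison involving it has low power: a real earnings difference can
# easily come out statistically insignificant.
print(se_small > 5 * se_large)
```

Since the standard error shrinks with the square root of the sample size, the group of 200 produces confidence intervals roughly sqrt(15000/200) ≈ 8.7 times wider, which is exactly why a null result involving that group is weak evidence of no difference.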

One other result from the paper concerned me slightly. Kanazawa and Still test whether choice of occupation matters for the beauty premium by including occupation in their regression models. That is fine, but all it does is control for differences in mean earnings between occupations, while assuming that the beauty premium is the same in every occupation. If you actually wanted to test for self-selection into occupations by attractiveness, you would probably expect that attractive people would self-select into occupations where the beauty premium is largest (Hamermesh makes this point in his book). So, to test for self-selection you really need to allow the beauty premium to differ across occupations, which Kanazawa and Still didn't do.
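The distinction can be illustrated with a deliberately stylised numerical example (the occupations and figures are invented): a model that only includes occupation dummies recovers a single average premium, which hides the fact that the premium differs sharply across occupations.

```python
# Hypothetical earnings (in $000s) for attractive vs average-looking workers
# in two invented occupations, with very different beauty premiums.
earnings = {
    ("sales", "attractive"): 70, ("sales", "average"): 50,             # premium = 20
    ("back_office", "attractive"): 52, ("back_office", "average"): 50, # premium = 2
}

def premium_by_occupation(occ):
    """Beauty premium estimated separately within one occupation."""
    return earnings[(occ, "attractive")] - earnings[(occ, "average")]

def pooled_premium():
    """A single premium, as in a model with only occupation dummies:
    with equal group sizes it is just the average of the within-occupation
    premiums, so it hides the variation between them."""
    premiums = [premium_by_occupation(o) for o in ("sales", "back_office")]
    return sum(premiums) / len(premiums)

print(premium_by_occupation("sales"))        # 20
print(premium_by_occupation("back_office"))  # 2
print(pooled_premium())                      # 11.0
```

In regression terms, the dummies-only specification corresponds to earnings on attractiveness plus occupation, whereas testing self-selection requires interacting attractiveness with occupation, so that each occupation gets its own premium.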

So, the results of the Kanazawa and Still paper are interesting, but I don't think they overturn the many previous papers that find a robust beauty premium.

[HT: Marginal Revolution]

Read more: