Monday, 31 July 2017

The latest evidence supports negative employment effects of the minimum wage

A couple of years ago, I wrote a post about some research that updated our understanding of the effects of the minimum wage on employment. In contrast with many well-publicised studies, such as this one by David Card and Alan Krueger (ungated), that research showed that the minimum wage did reduce employment (see here for the Adam Ozimek piece that summarises that work). In the last month, two new studies have further added to this evidence. Both studies are summarised by Bryan Caplan here, who also raises some additional points that I will address in a follow-up post, probably tomorrow.

The first study is reported in this NBER Working Paper (ungated here) by Ekaterina Jardim (University of Washington) and others. The paper investigates the impact of the rising minimum wage in Seattle:
The minimum wage rose from the state’s $9.47 minimum to as high as $11 on April 1, 2015. The second phase-in period started on January 1, 2016, when the minimum wage reached $13.00 for large employer...
Given the starting point was already well above the U.S. federal minimum wage level of $7.25 per hour, this gives some evidence as to what might happen if a much higher minimum wage was introduced (although, as the authors note, their results probably overstate the effect of more modest minimum wage changes). Jardim et al. find that:
...the rise from $9.47 to $11 produced disemployment effects that approximately offset wage effects, with elasticity estimates around -1. The subsequent increase to as much as $13 yielded more substantial disemployment effects, with net elasticity estimates closer to -3...
In other words, although workers earned more per hour worked after the $11 minimum wage was introduced, they worked fewer hours so that the overall impact on their earnings was approximately zero. When the minimum wage increased further to $13, the effect on hours was greater than the effect on hourly earnings, leading to a reduction in overall earnings. Students of ECON100 will recognise that when demand is elastic (here the price elasticity of demand for labour was estimated at -3, i.e. elastic), an increase in price (here, the wage) is more than offset by a decrease in quantity demanded, leading to a reduction in total revenue (in this case, a reduction in total earnings for workers). Jardim et al. estimate that the overall effect included the loss of over 6,300 low-wage jobs at single-site employers (or approximately 10,000 jobs if multi-site employers are also included, although their dataset could not accurately evaluate the impact on multi-site employers). 
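The ECON100 arithmetic can be made concrete with a minimal sketch (the 10 percent wage increase is hypothetical; the elasticities are the paper's estimates) of how the hours response determines what happens to total earnings:

```python
def earnings_change(pct_wage_increase, elasticity):
    """Approximate proportional change in total earnings when hours
    respond to a wage increase with the given elasticity."""
    pct_hours_change = elasticity * pct_wage_increase
    return (1 + pct_wage_increase) * (1 + pct_hours_change) - 1

# Elasticity around -1 (the $11 phase): earnings roughly unchanged
print(round(earnings_change(0.10, -1.0), 2))  # -0.01
# Elasticity around -3 (the $13 phase): earnings fall substantially
print(round(earnings_change(0.10, -3.0), 2))  # -0.23
```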

One of the interesting things about this study is that the authors were able to reconcile their results with those of earlier work, which has generally focused on all employees in one or more low-wage industries (whereas this study limited consideration to only low-wage workers, defined as those earning less than $19 per hour - that is, those most likely to be affected by the increased minimum wage). Often the focal industry of earlier studies has been the restaurant industry. Jardim et al. show that these earlier studies "may have substantially underestimated the impact of minimum wage increases on the target population".

The second study is reported in this CEPR Working Paper by Claus Thustrup Kreiner (University of Copenhagen), Daniel Reck (UC Berkeley), and Peer Ebbesen Skov (Auckland University of Technology). This paper investigates the impact of the Danish youth minimum wage, where:
The average hourly wage rate jumps by DKK46, or about $7, corresponding to a 40 percent change in the wage level at age 18 computed using the midpoint method.
That's quite a substantial change when someone turns 18, and Kreiner et al. demonstrate a substantial disemployment effect, summarised in their Figure 1:

Notice that average hourly wage (in the left panel) increases significantly at age 18, while the employment rate (in the right panel) decreases significantly at that same age (along with a small drop off slightly before age 18, which may be explained by employers anticipating the increase in wage that would occur a few months later). When it comes to the elasticity:
We observe that wages are relatively constant around 90 DKK beforehand, and then increase to about 135 DKK after the wage change... we estimate that this 46 DKK increase constitutes a 40 percent increase in hourly wages.
...In our preferred specification... we estimate a 15 percentage point drop in employment, equivalent to a 33 percent decrease in the number of employed workers. In other words, the presence of the wage hike causes roughly one in three workers employed before 18 to lose their jobs when they turn 18. Combining the percentage change in hourly wages and in employment, we obtain the implied elasticity of -0.82...
That's the elasticity of employment, rather than of hours worked. Once they also account for reductions in hours for those who remain employed, the elasticity increases to -1.1, which is eerily similar to that in the Jardim et al. paper (though in a completely different context). An elasticity of -1 implies that when the wage rate increases, total earnings remain roughly unchanged (because the decrease in hours worked offsets the increase in the wage rate).
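The elasticity arithmetic from the quote can be reproduced directly, using the rounded figures reported in the paper:

```python
def midpoint_pct_change(before, after):
    """Percentage change computed using the midpoint method."""
    return (after - before) / ((before + after) / 2)

# Hourly wages jump from about DKK 90 to about DKK 135 at age 18
wage_change = midpoint_pct_change(90, 135)    # 0.40
employment_change = -0.33                     # a 33 percent fall in employment
elasticity = employment_change / wage_change  # ≈ -0.82
```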

Kreiner et al. then go on to show that the effect at age 18 is almost entirely driven by labour market exits (lost jobs) at, or just before, age 18, and not by a decrease in hiring. And finally, the effects of that job loss are persistent: one year after job separation at age 18, only 40 percent of separated individuals are employed, compared to just over 75 percent of individuals who did not experience a separation. Even two years after turning 18, individuals who kept their job at age 18 are about 20 percent more likely to be employed than individuals who did not...
Both papers have a similar advantage over earlier work, in that they are able to use the observed change in wage rates for workers to compute the elasticity, rather than relying on an implied increase in wage rates proxied by the percentage increase in the minimum wage. Given that many workers affected by the higher minimum wage would have had wage rates above the original minimum wage, the earlier studies overestimate the increase in wages, and so underestimate the magnitude of the elasticity.
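A hypothetical illustration of that bias (all numbers here are made up): suppose hours fall by 8 percent, the statutory minimum rises by 16 percent, but affected workers' wages actually rise by only 8 percent.

```python
hours_change = -0.08        # observed fall in hours worked
statutory_increase = 0.16   # percentage increase in the minimum wage
actual_wage_gain = 0.08     # observed wage gain for affected workers

# Dividing by the statutory increase understates the elasticity...
naive_elasticity = hours_change / statutory_increase  # -0.5
# ...while the observed wage change gives the larger true elasticity
true_elasticity = hours_change / actual_wage_gain     # -1.0
```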

Finally, with increases in minimum wages we can be fairly certain that at least some workers are made better off (those who retain jobs at the new, higher, minimum wage), whereas others are made worse off (those who lose their jobs, or now work substantially fewer hours, at the new minimum wage). Both papers are silent on these distributional impacts, and do not have the data to adequately address them. But those distributional impacts are also important for understanding the impact of the minimum wage.

[HT: Marginal Revolution, here and here]

Sunday, 30 July 2017

More on the gender gap in economics

Last month I wrote a post on the gender gap in economics. A couple of other sources have since come to my attention, starting with this speech by Luci Ellis, who is Assistant Governor (Economic) at the Reserve Bank of Australia (RBA). The RBA has come under fire recently for its own gender gap. Ellis paints a picture that looks very similar to the situation in New Zealand:
Unfortunately, both in general and for female students, economics is not exactly popular in Australia... for economics, the share of female university students has always been much lower and appears to have fallen further more recently. Even more concerning is that total student numbers in economics appear to have fallen at our universities over the past couple of decades, though some data show a small pick-up more recently.
The picture is even worse at school level... From what we understand, when business studies subjects were introduced, they expanded at the expense of economics.
Those trends are very similar in New Zealand, and especially the growth of business studies at the expense of economics at high school. Ellis makes a good point though, which is also true here:
Of course, it is not essential to have studied economics at school to select it as a major at university.
She argues that mathematics is another pathway, but I would say that even mathematics is not a strict requirement (although an aversion to mathematics would be very unhelpful). I can think of many very good economics students who started with no background in economics and no strong background in mathematics or statistics. In any case, Ellis makes many good points, and I encourage you to read the speech in its entirety, especially if you want to understand some of the key points related to the gender gap in occupations more generally.

The second source is this blog post by Leith Thompson, who writes:
In 2016 the Reserve Bank asked me to do some research on how to encourage female students into the field by creating a more inclusive economics...
We don’t just need to encourage female students to study economics, but we also need to adopt innovative, best practice pedagogy that inclusively encourages all students to embrace economics.
Thompson's solutions don't seem to me to be focused specifically on encouraging more female students to study economics, but they do seem to be good practice for all students. Clearly, we still have more work to do, and hopefully my Summer Research Scholarship student this coming summer will help us understand this problem further. We also have a group of very keen students at Waikato who are looking at trialling an intervention with high school students, and I hope to have an update on that sometime in the future as well.


Thursday, 27 July 2017

Infrastructure costs are going to rise

We've been covering the interactions between markets in ECON100 and ECON110 over the last couple of weeks, so I thought an example might be useful. Brian Roche (chair of the Aggregate and Quarry Association) wrote in the New Zealand Herald on Tuesday:
Aggregate makes up 75 to 90 per cent of all the concrete used in buildings, roads and other infrastructure like airport runways or bridges...
Current total annual demand in Auckland is 13 million tonnes. This will rise to 16.5 million tonnes by 2031, assuming medium growth, or as much as 20 million tonnes, assuming high growth. The latter is likely given the Auckland Unitary Plan allows for more than 100,000 new houses to be built, mainly in newly developed areas.
Demand is good but only if it's matching supply. Get that out of whack and the greywacke that's widely quarried and used in aggregates will rise in price, adding more costs to our already high cost of building.
If more infrastructure is to be built, that will increase the demand for aggregate. The effect is pretty straightforward, as shown in the diagram below - demand for aggregate has increased from DB to DA, and price increases from PB to PA.

How does that affect the 'market' for infrastructure though? The demand for infrastructure (to be built in any given period of time) is downward sloping - if costs rise, we'll build less infrastructure (perhaps deferring some of the least essential projects to sometime in the future). The increase in the price of aggregate increases the cost of producing infrastructure, which is essentially the same as a decrease in supply. As shown in the diagram below, supply shifts up and to the left, from S0 to S1. This increases the price of infrastructure from P0 to P1, and because infrastructure is now more expensive, we invest in less of it now (deferring some needed infrastructure to the future).

What if we don't defer investment in infrastructure in response to the increase in the price of aggregate? In that case, the demand curve is vertical (perfectly inelastic) - the quantity demanded doesn't adjust in response to a change in price (that is, the quantity of infrastructure is fixed at Q0). As shown in the diagram below, the decrease in supply from S0 to S1 now leads to a much higher cost of infrastructure (P2), compared with the price if demand were downward sloping (P1).
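The two cases can be sketched with hypothetical linear curves (all numbers assumed): with downward-sloping demand, part of the cost increase is absorbed by a lower quantity, while with perfectly inelastic demand the full cost increase passes into the price.

```python
def equilibrium(a, b, c, d):
    """Equilibrium of linear demand P = a - b*Q and supply P = c + d*Q."""
    q = (a - c) / (b + d)
    return a - b * q, q  # (price, quantity)

# Downward-sloping demand: a cost increase of 20 raises the price by only 10
p0, q0 = equilibrium(a=100, b=1, c=20, d=1)  # price 60, quantity 40
p1, q1 = equilibrium(a=100, b=1, c=40, d=1)  # price 70, quantity 30

# Perfectly inelastic demand: quantity fixed, so the price rises by the full 20
q_fixed = 40
p_before = 20 + 1 * q_fixed  # 60
p_after = 40 + 1 * q_fixed   # 80
```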

Overall, regardless of whether we defer infrastructure spending or not, the increased price of aggregate is going to lead to higher costs of building infrastructure.

Tuesday, 25 July 2017

Sunshine, the value of housing and compensation for externalities

In ECON110 today, we discussed hedonic demand theory (or hedonic pricing). Hedonic pricing recognises that when you buy some (or most?) goods you aren't so much buying a single item but really a bundle of characteristics, and each of those characteristics has value. The value of the whole product is the sum of the value of the characteristics that make it up. For example, when you buy a house, you are buying its characteristics (number of bedrooms, number of bathrooms, floor area, land area, location, etc.). When you buy land, you are buying land area, soil quality, slope, location and access to amenities, etc.

In a new Motu working paper, David Fleming, Arthur Grimes, Laurent Lebreton, Dave Maré, and Peter Nunns show that sunshine is one of the important characteristics that contributes to house values. The New Zealand Herald reported a couple of weeks ago:
Motu Economic and Public Policy Research Trust has released what it calls the first research carried out anywhere in the world to specifically evaluate the extra value house buyers put on extra sunshine hours.
Arthur Grimes, a senior fellow at Motu and co-author of the study, said there was a direct correlation between more sunshine and higher values and the study was precise about how much extra value is added.
"Direct sunlight exposure is a valued attribute for residential property buyers, perhaps especially in a cool-climate city such as Wellington. However, natural and man-made features may block sunlight for some houses, leading to a loss in value for those dwellings," the study said.
The effect is quite large. Quoting from the paper:
...each additional hour of direct sunlight exposure for a house per day (on average across the year) adds 2.4% to a dwelling’s market value.
The paper also has some interesting implications in terms of negative externalities. If a high-rise apartment development will block the sunlight from nearby houses, then it will reduce the value of those houses. This constitutes a negative externality imposed on the affected homeowners. Fleming et al. note that these externalities could be dealt with through compensation:
At a policy level, our estimates may be used to facilitate price-based instruments rather than regulatory restrictions to deal with overshadowing caused by new developments. For instance, consider a new multi-storey development that will block three hours of direct sunlight exposure per day (on average across the year) on two houses, each valued at $1,000,000. The resulting loss in value to the house owners is in the order of $144,000. Instead of regulating building heights or the site envelope for the new development, the developer could be required to reimburse each house owner $72,000. In return, the developer would be otherwise unrestricted (for sunlight purposes) in the nature of development. If the development cannot bear the $144,000 then the efficient outcome is that the development does not proceed. Conversely, if the development can bear that sum, then the socially optimal outcome is for the development to occur and, from an equity perspective, the neighbours are compensated for their loss of sunlight exposure.
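The compensation arithmetic in that example follows directly from the 2.4 percent estimate:

```python
value_per_sun_hour = 0.024   # each daily sunlight hour adds 2.4% to market value
house_value = 1_000_000
hours_blocked = 3
n_houses = 2

loss_per_house = house_value * value_per_sun_hour * hours_blocked   # about $72,000
total_compensation = loss_per_house * n_houses                      # about $144,000
```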
The idea that compensation can be used to deal with externalities relies on the Coase Theorem - the idea that, if private parties can bargain without cost over the allocation of resources, they can solve the problem of externalities on their own (i.e. without government intervention). In the case of a bargaining solution to an externality based on the Coase Theorem, the solution depends crucially on the distribution of entitlements (property rights and liability rules). In this case, the homeowners have existing rights to sunlight and because an apartment development would infringe on those rights, the developer would be expected to pay compensation to the affected homeowners. This will only be viable if the total amount of compensation paid to affected homeowners is not so great that it makes the development unprofitable.

The study was based on data from Wellington. Given that development in Auckland is happening faster and involves increasing density and greater numbers of taller mixed-use buildings, it would be interesting to see if the results hold there as well. As noted in the New Zealand Herald story:
"For places other than Wellington, the value of sunshine hours may be higher or lower depending on factors such as climate, topography, city size and incomes. Nevertheless, our approach can be replicated in studies for other cities to help price the value of sunlight in those settings," Grimes said. 
So the approach is transferable, even if the results are not. It's almost certainly extendable to considering the value of volcanic viewshafts in Auckland, and hopefully someone is already thinking about undertaking that work.

Monday, 24 July 2017

Reason to be wary if a job in Taumarunui offers an Auckland salary

The New Zealand Herald reported last week:
If you fancy getting away from the rat-race and settling in small-town New Zealand, the perfect role just came up.
Forgotten World Adventures are advertising for a general manager to be based in Taumarunui while receiving an "Auckland salary" - over $150,000 for the "right" candidate...
The advertisement says candidates don't need tourism experience but will "need to be a true leader".
"We are looking for someone who is excited about doubling our revenue over the next three years, passionate about securing our position as a 'bucket list' experience for our target market, and focused on developing our reputation as an industry leader," the advertisement reads.
If you're wondering why a business in Taumarunui is offering an 'Auckland salary', you're right to wonder. That should be a great big red flag. It screams out "compensating differentials!".

Economists recognise that wages may differ for the same job in different firms or locations. Consider the same job in two different locations. If the job in the first location has attractive non-monetary characteristics (e.g. it is in an area that has high amenity value, where people like to live) then more people will be willing to do that job. This leads the supply of labour to be higher, which leads to lower equilibrium wages. In contrast, if the job in the second area has negative non-monetary characteristics (e.g. it is in an area with lower amenity value, where fewer people like to live) then fewer people will be willing to do that job. This leads the supply of labour to be lower, which leads to higher equilibrium wages. The difference in wages between the attractive job that lots of people want to do and the unattractive job that fewer people want to do is called a compensating differential.
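A minimal sketch of the idea, with hypothetical numbers: in equilibrium workers are indifferent between locations, so the wage in the less attractive location carries a premium roughly equal to the value workers place on the amenity difference (the $40,000 figure here is purely an assumption).

```python
amenity_value = 40_000        # assumed dollar value of living in the nicer location
wage_attractive_location = 110_000
# The less attractive location must pay a compensating differential
wage_unattractive_location = wage_attractive_location + amenity_value
print(wage_unattractive_location)  # 150000
```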

So, coming back to the Taumarunui job with an Auckland salary, you really have to ask yourself what is so bad about the job that it requires a high salary to attract someone to work there? Perhaps the business is struggling (but nonetheless trying to double their revenue over the next three years)? Or maybe the owners or co-workers aren't easy to get along with? Or, maybe living in Taumarunui is truly awful? The salary is almost certainly compensating for some undesirable characteristic of the job.

Whatever it is, I'd be wary of applying. Is there a prospective employee equivalent to caveat emptor?


Sunday, 23 July 2017

Are house prices a self-fulfilling prophecy?

Possibly. But let's start from the beginning, which was neatly summarised in this New Zealand Herald article from a couple of weeks ago:
An economist from one of New Zealand's biggest banks has questioned the role of the media in reporting on Auckland's housing market, asking if significant coverage of Auckland house price declines could be "a self-fulfilling" prophecy.
BNZ senior economist Craig Ebert was writing ahead of tomorrow's release of Real Estate Institute data for June and posed a question about the effect of the media's role in the market.
He referred to other recent data that showed prices dropping in some Auckland areas.
"The recent decline in Auckland house prices is now getting significant media coverage. This can be self-fulfilling to the extent that folk fearful that a market might correct are more likely to withdraw from it - buyers that is - and sellers will either delist their properties, simply not sell or, if under pressure, accept lower prices than might otherwise be the case," Ebert wrote.
One of the factors that affects the current demand in a market is expectations about future prices, which may be affected by media coverage. If a consumer (in this case, a home buyer) believes that the price of a good (in this case, a house) will be lower in the future, then they may hold off on purchasing now and wait for the lower future price. This lowers current demand for the good (houses). As shown in the diagram below, demand falls from D0 to D1, and the effect of that is that the equilibrium price falls from P0 to P1 (and the quantity of houses traded falls from Q0 to Q1). So the price falls, which is exactly what the consumer expected. Hence, this becomes a self-fulfilling prophecy.

But wait, there's more. If potential sellers expect prices to fall in the future, they may choose to sell their houses now, which increases the current supply of houses. As shown in the diagram below, this combination of decreased demand (from D0 to D1) and increased supply (from S0 to S2) leads to an even greater drop in prices, to P2. Note that the change in quantity becomes ambiguous - quantity of houses traded could increase (if the increase in supply is greater than the decrease in demand), decrease (if the increase in supply is less than the decrease in demand), or least likely of all the quantity could stay the same (if the increase in supply exactly offsets the decrease in demand).

But maybe sellers aren't that dumb - maybe they recognise that they can hold onto their houses for now instead (and rent them out), and then sell them at some point in the future once prices have recovered. In this case, the supply of houses for sale would decrease rather than increase. As shown in the diagram below, this combination of decreased demand (from D0 to D1) and decreased supply (from S0 to S3) leads to a certain decrease in quantity (to Q3), but an ambiguous change in prices. House prices could increase (if the decrease in supply is greater than the decrease in demand), decrease (if the decrease in supply is less than the decrease in demand), or least likely of all the price could stay the same (if the decrease in supply exactly offsets the decrease in demand).
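The three scenarios can be sketched with hypothetical linear demand and supply curves (all numbers assumed) to see which price and quantity changes are determinate and which are ambiguous:

```python
def equilibrium(a, b, c, d):
    """Equilibrium of linear demand P = a - b*Q and supply P = c + d*Q."""
    q = (a - c) / (b + d)
    return a - b * q, q  # (price, quantity)

p0, q0 = equilibrium(a=100, b=1, c=20, d=1)  # baseline: P=60, Q=40
# Demand falls: price and quantity both fall
p1, q1 = equilibrium(a=80, b=1, c=20, d=1)   # P=50, Q=30
# Demand falls and supply rises: price falls further, quantity ambiguous
p2, q2 = equilibrium(a=80, b=1, c=0, d=1)    # P=40, Q=40
# Demand falls and supply falls: quantity falls, price change ambiguous
p3, q3 = equilibrium(a=80, b=1, c=30, d=1)   # P=55, Q=25 (price falls in this example)
```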

So, are house prices a self-fulfilling prophecy? It really depends on the reaction of sellers. If sellers choose to cash out before prices start to fall (which I would suggest is probably the case for short-term speculators) then yes. However, if sellers choose to hold onto houses and wait out the downturn (which is more likely the case for owner-occupiers, landlords and long-term investors), then possibly not. At that point, it becomes an empirical question - if the quantity of houses changing hands falls significantly and house prices hold up, then the latter of those two explanations is probably having the greater effect.

Saturday, 22 July 2017

Surge pricing is coming to a supermarket near you

When demand increases, the standard economic model of supply and demand tells us that the price will increase. However, most businesses don't dynamically adjust prices in this way. For instance, ice cream stores don't raise prices on hot days, and umbrellas don't go up in price when it rains.

There are a few reasons that sellers don't automatically adjust prices in response to changes in demand. The first reason is menu costs - it might be costly to change prices (they're called menu costs because if a restaurant wants to change its prices, it needs to print all new menus, and that is costly). The second reason is that changing prices creates uncertainty for consumers, and if they are uncertain what the price will be on a given day, perhaps they choose not to purchase (in other words, the cost of price discovery for consumers makes it not worth their while to find out the price). The third reason is fairness. Research by Nobel Prize winner Daniel Kahneman (and described in his book Thinking, Fast and Slow) shows that consumers are willing to pay higher prices when sellers face higher costs (consumers are willing to share the burden), but consumers are unwilling to pay higher prices when they result from higher demand - they see those price increases as unfair.

Despite this, there are examples of sellers dynamically adjusting prices. For example, Alvin Roth's book Who Gets What - And Why (which I reviewed here) relates a story about how Coke ran a short-lived experiment, where their vending machines increased prices in hot weather. And many of us will be familiar with Uber's surge pricing (which, as noted in this post, is used to manage excess demand).

It seems that soon Uber may not be the only local example that we will see of this. The New Zealand Herald reported a couple of weeks ago:
On demand surge-pricing is making its way to New Zealand.
The country could soon be in the same boat as the UK, Europe and America, with stores and supermarkets adopting digital e-pricing - prices that change hour to hour, based on demand.
Retail First managing director Chris Wilkinson said variants of surge-pricing had already hit New Zealand, particularly around the Lions tour, with accommodation and campsites prices soaring.
While on demand surge-pricing is not a new phenomenon, Wilkinson said the way it was being administered, overseas, was.
"What is new is the ability to manage on-shelf pricing dynamically and tie this to key commercial opportunities - such as busy times, events, weather or other responsive opportunities," he said.
Asked if he thought it would become standard practice in New Zealand supermarkets and on shelves anytime soon, Wilkinson said it would likely hit service stations first.
"We'll likely see this in service stations first, as they will be able to maximise potential around higher margin products such as hot drinks, bakery and other convenience items," he said.
I'd be interested to know how a supermarket would deal with a customer who picks up an item observing one price at the shelf, but then finds that the price has changed by the time they get to the checkout. Would that breach the Fair Trading Act? As Consumer notes here:
In the past, a supermarket has been convicted and fined for charging higher prices at the checkout than were on display.
Despite that particular problem, surge pricing is coming. When you see the traditional price sticker replaced by a small LCD or LED display, you'll know it has probably arrived.

Friday, 21 July 2017

Health economics and the economics of education in introductory economics

Sometimes it's good to receive some affirmation that what you're teaching is also taught in a similar way, and at a similar level, at top international universities. In the latest issue of the Journal of Economic Education, two of the articles have demonstrated to me that the material I teach in the health economics and economics of education topics in ECON110 is current best practice.

In the first paper (sorry I don't see an ungated version), David Cutler (Harvard) writes about health economics:
Health care is one of the biggest industries in the economy, so it is natural that the health care industry should play some role in the teaching of introductory economics... The class that I teach is an hour long...
In his hour-long class, Cutler covers medical care systems, the financing of medical care, and the demand and supply of medical care. In ECON110, I have a whole topic (three hours of lectures, and two hours of tutorials) devoted to health economics, and we cover the peculiarities of health care as a service (peculiar due to derived demand, positive externalities, information asymmetries, and uncertainty), cost-minimisation/cost-effectiveness/cost-utility analysis (including consideration of expected values to deal with uncertainty), the value of statistical life and cost-benefit analysis, and health systems. Obviously I can cover more ground because I have more time available, but it's good to see that the things that Cutler covers at Harvard are part of my topic.

Similarly in the second paper (also no ungated version), Cecilia Elena Rouse (Princeton) writes about the economics of education:
There are many aspects of the “economics of education” that would make excellent examples for introductory economics students... I chose two related topics that are central to the economics of education and to human capital theory: the economic benefit (or “returns”) to schooling and educational attainment as an investment.
Again, those sub-topics that Rouse identifies are part of the ECON110 topic on the economics of education. In that topic, we cover human capital theory and the private education decision (including introducing the concept of discounting future cash flows), the public education decision (including consideration of the optimal subsidy for education, and dealing with credit constraints). Recently, I've also been discussing the economics of MOOCs (Massive Open Online Courses).
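As a minimal sketch of the discounting idea that underlies the private education decision (the cash flows and discount rates below are purely illustrative):

```python
def npv(cashflows, rate):
    """Net present value of a stream of annual cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Pay $20,000 for schooling now, earn an extra $5,000 a year for 10 years
flows = [-20_000] + [5_000] * 10
print(npv(flows, 0.05) > 0)   # True: worthwhile at a 5% discount rate
print(npv(flows, 0.25) > 0)   # False: not worthwhile at a 25% discount rate
```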

Both papers gave me a few small points to follow up on, but overall it looks like the fact that Harvard and Princeton teach similar topics in a similar way is a good sign of the ongoing quality of the ECON110 paper.

Wednesday, 19 July 2017

Why fire protection is (or was) a club good

Goods and services can be categorised on two dimensions: (1) whether they are rival, or non-rival; and (2) whether they are excludable, or non-excludable. Goods and services are rival if one person's use of the good diminishes the amount of the good that is available for other people's use. Most goods and services that we purchase are rival. In contrast, non-rival goods are those where one person using them doesn't reduce the amount of the good that is available for everyone else. Radio broadcasts are non-rival, since one person listening doesn't reduce the number of other people who can also listen.

Goods and services are excludable if a person can be prevented from using or benefiting from them. In other words, there is some way to exclude people from using the good. Often (but not always), the exclusion mechanism is a price - to use the good or service you must pay the price. In contrast, non-excludable goods are available to everyone if they are available to anyone - there isn't a way of excluding people from using or benefiting from the good or service. A fireworks display is non-excludable. If you let off fireworks, you can't easily prevent other people from seeing them.

Based on those two dimensions, there are four types of goods, as laid out in the table below.

                Excludable        Non-excludable
Rival           Private goods     Common resources
Non-rival       Club goods        Public goods

I want to focus this post on club goods - goods (or services) that are non-rival and excludable. With club goods, often the exclusion mechanism is a price - you have to pay the price in order to be a part of the club (and receive the benefits of club membership).

Some goods or services that are categorised as club goods may be contentious. For instance, according to the table fire protection is a club good - it is non-rival and excludable. Provided there aren't large numbers of fires, if the fire service attends one fire, that doesn't reduce the fire protection available to everyone else [*]. So, fire protection is non-rival. Is fire protection excludable? In theory, yes. People can be prevented from benefiting from fire protection. Say there were some sort of fire service levy, and the fire service decided to respond only to fires at homes or businesses that were fully paid up. Similarly, tertiary education is in many cases also a club good. [**]
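The two-dimensional classification can be captured in a few lines (a sketch of the standard categorisation, not anything from the papers discussed):

```python
def classify_good(rival, excludable):
    """Classify a good by rivalry and excludability."""
    if rival:
        return "private good" if excludable else "common resource"
    return "club good" if excludable else "public good"

print(classify_good(rival=False, excludable=True))   # club good (fire protection with a levy)
print(classify_good(rival=False, excludable=False))  # public good (a fireworks display)
```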

I always thought that fire protection as a club good was purely a theoretical case, but this recent Mac Mckenna article notes:
Since 1906 the Fire Service has been universally available to all New Zealanders. Prior to then, the Fire Service was run by insurance companies to mitigate loss. Firefighters would only respond to save houses which had a red rock outside showing the owners had fire insurance cover.
I was a bit surprised by this piece of history, and I haven't been able to confirm it from another source. But it does demonstrate how fire protection at one point was excludable in more than just the theoretical sense, and therefore it was at that time a genuine club good.

However, in practice the government chooses not to exclude any home or business from fire protection, making it non-excludable, and therefore a public good. Of course, non-excludability comes with problems such as free riding (where a person benefits from the good or service without paying for it). As Mckenna notes, because the Fire Service is funded by a levy only on those who take out insurance, the insured end up subsidising the free-riders who choose not to insure.


[*] Of course, in a large-scale disaster, or in summer when there are large forest or bush fires burning, this may not be true.

[**] Tertiary education is a club good provided it is non-rival. For most university and polytechnic courses, this is the case. However, some courses have limited spaces and in the case of those courses tertiary education is a private good (rival and excludable).

Tuesday, 18 July 2017

Caramilk arbitrage and the endowment effect

As 1974 Nobel Prize winner Friedrich Hayek noted, markets allocate goods to the buyers who value them the most, since those are the buyers who are willing to pay the most for them. So, consumers who purchase a good at a low price may be willing to give up their purchase in exchange for money from those who value the good more.

Having said that though, the endowment effect means that Hayek's observation isn't automatic. Quasi-rational decision-makers are loss averse - we value losses much more than otherwise-equivalent gains. That makes us unwilling to give up something that we already have, or in other words we require more in compensation to give it up than what we would have been willing to pay to obtain it in the first place. So if we buy something for $10 that we were willing to pay $20 for, we may choose not to re-sell it even if someone offers us $30 for it.

We've seen a graphic example of both of these effects (goods flowing to the buyers who value them the most, and endowment effects) this week, as Newshub reports:
Ever since Cadbury relaunched its iconic Caramilk chocolate in New Zealand last month, our Aussie neighbours have been desperate to get in on the action.  
The chocolate, a solid bar which is a blend of caramelised white chocolate, was a '90s classic, and appeared back on supermarket shelves around New Zealand at the end of June. The limited edition product is a New Zealand-only release and isn't available in Australia.
Now some clever Kiwis have cottoned on to the demand across the ditch and are putting Caramilk blocks on Ebay Australia for Aussies to buy.
And it turns out they're willing to pay quite a bit.
One auction, which closed early on Monday afternoon (NZ time), saw 19 bids for one $3 block, which eventually sold for AU$40 (NZ$42.64).
Savvy New Zealand chocolate buyers have been snapping up Caramilk chocolate for a low price in New Zealand, and promptly on-selling the bars to Australians for a higher price (this practice of buying in a low-price market and re-selling in a high-price market is known as arbitrage). However, the price must be high enough to induce the buyers to sell, overcoming the endowment effect. And NZ$42 can buy a lot of alternative chocolate!

Eventually though, greater quantities of Caramilk being offered to Australians will leave only those Australians with lower willingness-to-pay for it unsatisfied, and the auction prices will fall. Once the buyers who are willing to pay $42 have their chocolate, that only leaves buyers willing to pay $40, and once they've got their chocolate that only leaves buyers willing to pay $38, and so on. So if you're thinking of trying to take advantage of this arbitrage opportunity, you'd better get in fast.
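The logic of the falling price can be sketched in a few lines of Python (the willingness-to-pay figures here are assumed for illustration, not taken from the actual eBay auctions):

```python
# Assumed willingness-to-pay (WTP) of successive Australian buyers, in NZ$.
# Arbitrageurs sell to the highest-value buyer first, so each later block
# fetches a lower price, and the arbitrage profit shrinks with every sale.
wtp = [42, 40, 38, 36, 34, 32, 30]
nz_retail_price = 3  # NZ$ price of a Caramilk block in a New Zealand supermarket

profits = [price - nz_retail_price for price in wtp]
print(profits)  # [39, 37, 35, 33, 31, 29, 27] - falling as high-WTP buyers exit
```

Each successive sale moves down the Australian demand curve, which is why getting in early matters for the arbitrageurs.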

[HT: Memphis from my ECON100 class]

Monday, 17 July 2017

The optimising behaviour of Italian bank robbers

One of 1992 Nobel Prize winner Gary Becker's many contributions to economics was the development of an economic theory of crime (see the first chapter in this pdf). Becker argued that criminals, like other rational decision-makers, weigh up the costs and benefits of their actions, and will take the action that offers the greatest net benefits. That assumes we are talking about a discrete decision (a yes or no decision) based on incremental benefits and incremental costs. The benefits of crime include the monetary gains, and any 'rush' associated with committing the crime. The costs include any punishment that might be received, conditional on the probability of being caught (and convicted).

However, not all criminal decisions are yes/no type decisions. That is, not all decisions are made on the extensive margin. Some decisions are instead made on the intensive margin, such as how long to spend inside a bank while committing a robbery. The trade-off here is that the longer a criminal spends in the bank, the greater their haul of loot, but also the greater the risk of the police arriving and the criminal being caught. When a question is about the optimal amount of something (e.g. the optimal amount of time for the bank robber to spend in the bank), a rational decision-maker will optimise at the quantity where marginal benefit is equal to marginal cost. In this case, that is the duration where the last minute spent in the bank adds just as much in loot as it adds in expected disutility (the negative utility) of being caught and punished.
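That marginal condition can be illustrated with a toy model (the functional forms and all the numbers are my own assumptions, not from any study): loot accumulates with diminishing returns, while the arrest probability rises with each minute in the bank.

```python
import math

D = 400.0  # assumed disutility of prison, measured in loot-equivalent units

def expected_payoff(minutes):
    loot = 100 * math.sqrt(minutes)  # diminishing marginal loot
    p_caught = 0.05 * minutes        # arrest risk rises with time in the bank
    return loot - p_caught * D       # expected net benefit of the robbery

# Marginal benefit = marginal cost: 50/sqrt(t) = 0.05 * D = 20, so t* = 6.25.
# A grid search over durations (0.01 to 20 minutes) recovers the same optimum.
t_star = max((t / 100 for t in range(1, 2001)), key=expected_payoff)
print(t_star)  # 6.25 minutes
```

Staying longer than the optimum adds less loot per minute than it adds in expected punishment, so a rational robber walks out at exactly that point.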

In a recent discussion paper, Giovanni Mastrobuoni (University of Essex) and David Rivers (University of Western Ontario) exploit this equality, using data on nearly 5,000 bank robberies in Italy to estimate the disutility of imprisonment. Their dataset is quite rich and, while it doesn't include data on the robbers, it includes a lot of data about the robbery, crucially including the exact duration of the robbery (which can often be confirmed from CCTV camera footage). They find that:
...the most successful robbers in terms of hauls use weapons, wear masks, and rob banks with fewer security devices and no guards. Those who work in groups, wear masks, target banks around closing time, and target banks with no security guards and few employees, achieve lower rates of apprehension. Offenders who use a mask and target banks without security guards have higher disutilities of prison. Robber ability is also found to be a strong driver of larger hauls, lower probabilities of arrest, and larger disutilities of prison. The latter finding is consistent with higher ability offenders having a larger opportunity cost of prison.
That latter finding is most interesting. Higher ability offenders tend to earn more from crime (and possibly have better earning opportunities outside of crime as well). So, the foregone earnings (from crime or otherwise) are higher for these offenders if they are imprisoned, which explains their higher opportunity cost of prison and their higher disutility of prison. The other results are mostly unsurprising. Mastrobuoni and Rivers also find that:
...heterogeneity in robber ability generates a positive correlation between criminal harmfulness and disutility. An important consequence of this is that policies designed to affect those with higher disutilities of prison (for example simply raising overall sentences) have the added benefit of disproportionately targeting the more harmful (higher ability) offenders.
What that means is that the offenders who create the most harm (by being least likely to be caught, and generating the greatest hauls of loot) are also those with the greatest disutility of punishment. So, increasing the punishment for bank robbery would disproportionately deter the highest ability criminals, and have a large effect on reducing bank robberies. Whether the benefits to the state of greater punishment (less bank robbery) exceed the costs (more prisoners who cost money to house and feed) is not considered in the paper. However, for Italy it may well be the case, since:
Each year there are more bank robberies in Italy (approximately 3,000) than in the rest of Europe combined, with a 10 percent chance of victimization on average (there are about 30,000 bank branches).
Over the period 2000-2006, on average 8.7 percent of Italian banks were robbed each year. That compares with 2.2 percent in New Zealand (and a surprising 14.1 percent in Canada!). So, even if the benefits of greater prison terms for bank robberies exceed costs in the case of Italy, they may not do so for New Zealand.

[HT: Marginal Revolution]

Sunday, 16 July 2017

Book Review: Economic Ideas You Should Forget

Imagine you gathered together a bunch of economists, and asked each of them to write two pages about their pet hates (in economics). I imagine you would be able to put together a volume that looks very similar to a new book, "Economic Ideas You Should Forget", edited by Bruno Frey and David Iselin. It would be charitable to describe this book as anything more than an excuse for complaint by several well-known (and many lesser known) economists. The editors are up-front in the introduction that "The essays do not idolize models or references..." but it is the lack of references that makes many of the essays seem at the same time both lightweight and unsupported by evidence.

To be fair, there are some excellent chapters including those by Daron Acemoglu (Capitalism), Thomas Ehrmann (Big Data Predictions Devoid of Theory), and Didier Sornette (Decisions are Deterministic). But there are some misses like Jurg Helbling (Boundedness of Rationality) and surprisingly (to me) Richard Easterlin (Economic Growth Increases People's Well-Being), which contrasts starkly with research by Betsey Stevenson and Justin Wolfers (see here). It was interesting to read Victor Ginsburgh (Contingent Valuation, Willingness to Pay, and Willingness to Accept), given that I have written on the contingent valuation debate before (see here and here), but I don't think that essay added much to the debate.

Most of the essays are unconvincing and I doubt anyone will be persuaded to change their thinking on the basis of reading two pages in this book. Overall, there were some good bits but really, this is an economics book you should forget.

Wednesday, 12 July 2017

Strip clubs, externalities, and property values in Seattle

Property values tend to reflect not only the characteristics of the property itself, but also the neighbourhood that the property is located in. This is hedonic pricing - the price of a property reflects the sum of the values of all of the characteristics of the property. If the property includes a dwelling, the price reflects the quality and size of the dwelling, number of bedrooms, bathrooms, whether it has off-street parking, and so on. But the price also reflects the access of the property to local amenities, such as good schools, public transport, and so on (for example, see this post from earlier this year).

But not all local amenities are positive. Some features of the neighbourhood might create disamenity, reducing property prices. One example may be strip clubs. If a strip club attracts unsavoury people and petty (and not-so-petty) crimes, then fewer people will want to live in that neighbourhood, reducing demand for properties in that area and consequently reducing property prices. Another way of thinking about this is that the strip club creates a negative externality on local property owners (an externality is the uncompensated impact of the actions of one party - in this case the strip club locating in a particular neighbourhood - on others, in this case the local property owners).

There is evidence to suggest that some facilities do create disamenities that negatively affect property prices, including meth labs and toxin-emitting industrial plants. But what about strip clubs? A recent working paper by Taggert Brooks (University of Wisconsin - La Crosse), Brad Humphreys and Adam Nowak (both West Virginia University) looks at relevant data for Seattle.

Specifically, Brooks et al. looked at repeated property sales (where the same property was sold multiple times) over the period 2000-2013, a period during which a moratorium on new strip clubs in King County (which includes Seattle [*]) was removed. Using repeated property sales gets around the problem of accounting for the different quality of different properties (provided you assume that property quality doesn't markedly change between sales). Their dataset included over 317,000 property sales, of which about 5,400 were within 2000 feet of a strip club.
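The identification logic of repeat sales can be shown in a tiny made-up example: because a property's fixed quality enters the (log) price at both sales, differencing the two sales cancels it out, leaving only the change in market conditions between the sale dates (all numbers below are invented for illustration).

```python
import math

base = 12.0                        # log price level common to all properties
quality = 0.40                     # this property's fixed quality premium (log points)
market = {2005: 0.00, 2010: 0.25}  # assumed citywide log price index by sale year

price_2005 = math.exp(base + quality + market[2005])
price_2010 = math.exp(base + quality + market[2010])

# Differencing the two log prices removes the quality term entirely.
growth = math.log(price_2010) - math.log(price_2005)
print(round(growth, 2))  # 0.25 - the market change, with quality cancelled out
```

This is why the approach only needs the assumption that quality doesn't change between sales, rather than a full model of every property characteristic.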

What did they find? A whole lot of nothing. In their preferred specification of the model, the results:
...indicate that the presence of an operating strip club is not associated with any differential in residential property prices over this period. These results indicate price dynamics for those properties within K of an operating strip club are no different from price dynamics for properties between K and 1 [mile] of a strip club.
There did appear to be some weaker evidence that condominium prices were lower when a strip club was nearby though:
However, the results using the condominium sub-sample, and the single family home sub-sample, provide weak evidence that strip clubs are associated with residential property price differentials in some cases... condominiums located within 1000 feet of a strip club have transactions prices about 5.5% lower than condominiums located farther from operating strip clubs. Some weak evidence also suggests that condominiums within 500 feet also sell for lower prices...
These results are interesting, but are based on only a small amount of variation in the sample. If I read the paper correctly, there were only 370 properties that were sold multiple times, where there was a nearby strip club at the time of one of the sales and no nearby strip club at the time of the other sale. So, given the small number of 'identifying observations', I'd be much more cautious than the authors about interpreting the lack of statistical significance here as suggesting that strip clubs have no effect on property values. I would be more inclined to say that they may have an effect, but this study didn't have sufficient statistical power to detect the effect. Although statistically insignificant, the point estimate of the effects from their preferred specification suggests that property prices are 6.5 percent lower when there is a strip club within 500 feet, 2.9 percent lower within 1000 feet, and 1.6 percent lower within 2000 feet. That is quite a large effect.
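The arithmetic behind that caution is straightforward (a back-of-the-envelope calculation, not figures reported in the paper): if a point estimate of -6.5 percent is statistically insignificant at the 5 percent level, its standard error must be at least the estimate divided by 1.96, and the implied confidence interval is then very wide.

```python
# If a -6.5% estimate is insignificant at the 5% level, its standard error
# must be at least |estimate| / 1.96. The implied 95% confidence interval
# then spans everything from 'no effect at all' to an effect twice the size.
estimate = -0.065          # the preferred-specification point estimate (500 feet)
se = abs(estimate) / 1.96  # the smallest SE consistent with insignificance
low = estimate - 1.96 * se   # about -0.13, i.e. -13%
high = estimate + 1.96 * se  # about 0.00, i.e. no effect
print(low, high)
```

A confidence interval that covers both zero and a 13 percent price discount is consistent with low statistical power, not with evidence of no effect.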

It would also be interesting to see if similar results obtain for other cities in the U.S. and elsewhere. It also suggests to me that we could use a similar approach to evaluate the negative effects of alcohol outlets in New Zealand. Something to follow up later.

[HT: Marginal Revolution last year]


[*] Yes, as I mentioned in an earlier post I was in Seattle a couple of weeks ago and no, it wasn't to collect observational data on strip clubs. My wife was with me and can attest to the lack of strip clubs in our itinerary.

Tuesday, 11 July 2017

Monetary incentives to quit smoking may work

Can you pay people to quit smoking? It turns out that maybe you can. Back in May, the New Zealand Herald reported:
Offering a cash scholarship as incentive for Maori nursing students to stop smoking has been trialled by an Auckland institute and an anti-smoking organisation says incentives are proving to be an effective tool.
Manukau Institute of Technology Maori faculty leader and nursing lecturer Evelyn Hikuroa conducted a pilot study to assess whether offering a monetary incentive to stop smoking would help people give up, the results of which were published in Nursing Praxis in New Zealand: Journal of Professional Nursing in March.
Co-author of the study and Massey University School of Public Health Associate Professor Marewa Glover said the study showed the incentive did help to a degree.
"We found that the student nurses were highly motivated to stop smoking for their own health, their family and their new career. Providing a cash incentive boosted that, especially because studying can be a financial strain for students," she said.
Rational (and quasi-rational) decision-makers weigh up the costs and benefits of their actions (as we discussed in my ECON110 class today). If the student nurses didn't give up smoking, they faced a financial penalty relative to if they had given up smoking (they missed out on the scholarship). Missing out on the scholarship is an additional opportunity cost of smoking, raising the total cost of smoking. When you increase the costs of an activity, people will do less of it. So, less smoking as a result of the incentive.

You can find the original research paper here (sorry I don't see an ungated version). It was based on a study of twelve student nurses, so I wouldn't hold it up as being a pillar of robust research. However, it does demonstrate that monetary incentives to quit smoking could be effective.

The common counter-argument to using economic incentives (like paying people) is that they reduce the moral or social incentives for changing behaviour. Intrinsic rewards (e.g. quitting smoking for your health or for your family) are replaced by extrinsic rewards, and extrinsic rewards are argued to be neither as effective nor as long-lasting as intrinsic rewards. It would be interesting to see whether the incentive remained effective for longer-run behaviour, especially once the payments ran out.

Sunday, 9 July 2017

Using devices as clickers in class

I've written before on the negative effects of laptops in lectures (see here and here), and the not-so-negative effects of mobile phones (see here). However, there may also be some positives to students having internet-capable devices in lectures. Several years ago, I experimented with using mobile phones as 'clickers' (classroom response systems) in class, using the now-defunct Votapedia (see here for some detail on Votapedia). For those unfamiliar with the term, a clicker is a device that allows students to answer questions in class, with responses automatically collated and able to be displayed live. If you've ever watched Who Wants to be a Millionaire, it looks very similar to the 'Ask the Audience' lifeline on that show.

My experiment with mobile phones as clickers worked reasonably well, and that was in a time when many students didn't bring a device to class. Obviously, I'm not the only one who tried this bring-your-own-device (BYOD) approach, and I recently read this article by Jennifer Imazeki (San Diego State University) on her experiences, published in the Journal of Economic Education (sorry I don't see an ungated version).

Imazeki helpfully enumerates the costs and benefits of bring-your-own-device as clickers (compared with standalone clicker devices), with the pros being: (1) convenience for students; (2) easy to ask open-ended questions; (3) relatively low commitment (since you need not commit to using the clickers a lot); and (4) potentially low cost. The cons are: (1) students are using their devices (and may be more likely to become distracted, as I have noted in earlier posts); (2) the need for consistent cell service; (3) students must have phones, tablets, or laptops (which might be a more significant constraint in some student populations than others); and (4) lack of integration with university systems.

My assessment is that it might be time for me to re-evaluate using clickers. They were helpful for getting student engagement, but after Votapedia became unavailable I reverted to shows-of-hands in class. Helpfully, Educause has a useful list of potential clicker or mobile app providers. The main problem with the options on that list is that most of them are pay-for-use, and the costs for a class of the size I work with seem prohibitive. However, there seem to be many good options that are free for small classes. I'll post more on my search for a useful classroom response system in the future, but feel free to make suggestions in the comments.

[Update]: On Facebook, my ECON110 tutor Rebecca pointed me to Kahoot. It looks good. I think I'll give it a try!

Saturday, 8 July 2017

Rockonomics: The economics of popular music

I really enjoyed my time in Seattle, and especially the Museum of Pop Culture. The plane trip from Seattle to Portland (which we also visited before returning to New Zealand) seemed an opportune time then to read this 2006 chapter (with the same title as this post) from the Handbook of the Economics of Art and Culture (ungated earlier versions here and here), by Marie Connolly and Alan Krueger (both Princeton).

That chapter had been sitting in my must-read pile for about ten years (!), but for whatever reason consistently kept getting bumped a little lower down the pile. It is important to me because the economics of popular music provides a lot of good illustrations of the things we teach in first-year microeconomics. So, given that ECON100 and ECON110 both start B Semester lectures next week, in this post I'm going to take some brief quotes from the chapter (using the NBER working paper version) to illustrate some of the things we will cover in those papers.

Besides that, the chapter includes a lot of interesting detail on the structure of the music economy. Consider these bits, which relate to the ECON110 topic on media economics:
...it is clear that concerts provide a larger source of income for performers than record sales or publishing royalties. Only four of the top 35 income-earners made more money from recordings than from live concerts, and much of the record revenue for these artists probably represented an advance on a new album, not on-going royalties from CD sales... 
If a band composed its own music, it will also contract with a publisher to copyright the music... The publisher usually takes half the royalties, and the composer receives the other half (some of which goes to the manager).
...bands receive relatively little of their income from recording companies. Indeed, only the very top bands are likely to receive any income other than the advance they receive from the company, because expenses – and there are many – are charged against the band's advance before royalties are paid out...
Record companies tend to sign long-term agreements with bands that specify an advance on royalties and a royalty rate. The typical new band has very little negotiating power with record labels, and the advance rarely covers the recording and promotion costs, which are usually charged to the band. Because fixed recording costs vary little with band quality, only the most popular artists earn substantial revenue from record sales...
[Quoting Jacob Slichter, the drummer for Semisonic]: If our CD was sold in stores for fifteen dollars, the band’s share of the revenue might be something between fifty cents and a dollar per CD.
We cover moral hazard in both ECON100 and ECON110. Moral hazard occurs when one of the parties to an agreement has an incentive, after the agreement is made, to act in a way that brings additional benefits to themselves at the expense of the other party. In relation to that Connolly and Krueger write:
Caves prosaically notes that, “From the artist’s viewpoint, a problem of moral hazard arises because the label keeps the books that determine the earnings remitted to the artist.”
So, the record label can engage in moral hazard because keeping the books provides additional benefits to the label, and because the artists cannot easily monitor what the label is doing when it calculates the earnings, costs, and royalties that should be paid to the artist. Connolly and Krueger also make a number of points in relation to pricing (which we cover in ECON100), including:
As an economic good, concerts are distinguished by five important characteristics: (1) although not as extreme as movies or records, from a production standpoint concerts have high fixed costs and low marginal costs; (2) concerts are an experience good, whose quality is only known after it is consumed; (3) the value of a concert ticket is zero after the concert is performed; (4) concert seats vary in quality; (5) bands sell complementary products, such as merchandise and records...
The price of a concert ticket is set lower than it would be in the absence of complementary goods, because a larger audience increases sales of complements and raises revenue.
Firms that sell complementary products need not necessarily profit maximise for any of those products individually, if they can profit maximise across the whole range of their products. And in terms of price elasticity (covered in ECON100):
...despite flat or declining ticket sales, total revenues (in 2003 dollars) trended upwards until 2000 because of price increases. Other things equal, these trends suggest the elasticity of demand was less than 1 before 2000. Since 2000, however, there has been a 10 percent drop in ticket revenue for these artists, suggesting that price increases have been offset by a larger than proportional demand response.
When demand is relatively inelastic, a given percentage increase in price is associated with a smaller percentage decrease in quantity demanded, so total revenue (price x quantity) increases. This is what happened prior to 2000, but after 2000 demand appears to have been elastic, so that the percentage increase in price was more than offset by a larger percentage decrease in quantity demanded, meaning that total revenue (price x quantity) decreased.
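For small changes, the percentage change in revenue is approximately the percentage change in price plus the percentage change in quantity, which makes the pre- and post-2000 stories easy to check numerically (the elasticities here are illustrative, not Connolly and Krueger's estimates):

```python
def revenue_change(elasticity, pct_price_rise):
    """Approximate %-changes in quantity and revenue for a small price rise."""
    pct_quantity = elasticity * pct_price_rise   # elasticity = %dQ / %dP
    pct_revenue = pct_price_rise + pct_quantity  # %dRev is approx. %dP + %dQ
    return pct_quantity, pct_revenue

# Inelastic demand (|e| < 1): a 10% price rise still lifts total revenue.
print(revenue_change(-0.5, 10))   # (-5.0, 5.0)
# Elastic demand (|e| > 1): the same price rise now reduces total revenue.
print(revenue_change(-1.5, 10))   # (-15.0, -5.0)
```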

Connolly and Krueger cover inequality as well (as we will in ECON110):
...concert revenues became markedly more skewed in the 1980s and 1990s. In 1982, the top 1% of artists took in 26% of concert revenue; in 2003 that figure was 56%. By contrast, the top 1% of income tax filers in the U.S. garnered “just” 14.6% of adjusted gross income in 1998 (see Piketty and Saez, 2003). The top 5% of revenue generators took in 62% of concert revenue in 1982 and 84% in 2003.
And on intellectual property rights (which we cover in ECON110):
How far does intellectual protection go? Are rights strong enough to encourage the optimal amount of innovation? The problem stems from the fact that musical compositions are nonrival goods, whose property rights, as laid out by Nordhaus (1969), generate a trade-off between under-provision of the nonrival good (with weak rights) on the one hand and monopoly distortions (when the property rights are strong) on the other. 
Nordhaus's characterisation of the trade-offs inherent in intellectual property is one of the key pillars of the ECON110 topic on intellectual property rights. There are other bits of interest, including ticket scalping (which we cover in both ECON100 and ECON110), signalling (also both ECON100 and ECON110), and superstar effects (which we discuss in ECON110). A few parts of the chapter are a little technical, but all of it is interesting, and there are lots of gems to take away. Some parts of the chapter are getting a little dated, but mostly it has aged well and if you like to see economics in action, I encourage you to read it.

Friday, 7 July 2017

Subjecting teachers to a WMD

The New Zealand Initiative has a new report, entitled "Amplifying Excellence: Promoting Transparency, Professionalism and Support in Schools" (direct link to the report here). The New Zealand Herald reported on it yesterday, and focused on a point that drew my attention:
Teachers could be rated based on how much they lift student achievement, if a big-business think tank has its way.
The New Zealand Initiative, which says the combined revenue of its 54 member companies represents a quarter of the NZ economy, calls in a new report for better data to measure how good teachers are...
But the report does not take the next step, which teacher unions have feared, of linking teachers' performance directly to their pay.
"I don't say anything about what is happening to the teacher who is not performing well," said the report's author, Martine Udahemuka.
"First and foremost, it's to provide them with the support they need to become good teachers."
The report advocates for creating models of teacher value-added. Such a model would predict the achievement of each student based on their characteristics (e.g. age, ethnicity, parents' income, etc.) and would compare their actual achievement with the achievement predicted by the model. If a teacher on average raises their students' achievement above that predicted by the model, they would be considered a 'good' teacher, and if their students' achievement on average does not reach that predicted by the model, they would be considered a 'not-so-good' teacher.
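Conceptually, a value-added model works like this minimal sketch (with entirely made-up numbers, not real student data or the report's method): predict each student's score from their characteristics, then attribute the class-average residual to the teacher.

```python
import random
import statistics

random.seed(0)  # reproducible toy data - every number here is invented

def estimated_value_added(true_teacher_effect, class_size=25):
    """Average residual (actual minus predicted score) for one teacher's class."""
    residuals = []
    for _ in range(class_size):
        predicted = random.gauss(60, 10)  # model's prediction from characteristics
        noise = random.gauss(0, 15)       # everything else that affects the score
        actual = predicted + true_teacher_effect + noise
        residuals.append(actual - predicted)
    return statistics.mean(residuals)

# With only 25 students, the estimate is noisy: the standard error is about
# 15 / sqrt(25) = 3 score points, so a genuinely good teacher can look average
# (and vice versa) in any given year.
print(round(estimated_value_added(true_teacher_effect=5.0), 1))
```

Running this repeatedly (without the fixed seed) shows how much the estimated 'value-added' for the same teacher bounces around from class to class.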

On the surface, this sounds fairly benign and uses data in a more sophisticated way. However, hiding just below the surface there is a serious problem with the advocated approach. This problem is highlighted by Cathy O'Neil in her book "Weapons of Math Destruction" (which I reviewed just last week).

Essentially, each teacher is being rated on the basis of 25 (or fewer!) data points. O'Neil argues that this isn't a robust basis for rating or ranking teachers, and she provides a couple of (albeit anecdotal) stories to illustrate, where excellent teachers have been seriously under-rated by the model and lost to the system. She labels the teacher value-added model a Weapon of Math Destruction (WMD) because of the serious damage it can do to otherwise-good teachers' careers. Although the effects weren't all bad, as O'Neil observed:
After the shock of her firing, Sarah Wysocki was out of a job for only a few days. She had plenty of people, including her principal, to vouch for her as a teacher, and she promptly landed a position at a school in an affluent district in northern Virginia. So thanks to a highly questionable model, a poor school lost a good teacher, and a rich school, which didn't fire people on the basis of their students' scores, gained one.
I know that Udahemuka says "I don't say anything about what is happening to the teacher who is not performing well", but you would have to be incredibly naive to believe that the results of a teacher value-added model would not be used for performance management, and for hiring and firing decisions. The report acknowledges this:
As part of a supportive performance development system, principals could receive detailed feedback from the Ministry about class level performance to identify and share good practice, support performance improvement, and assign students to teachers best suited to their needs. All the decisionmaking would be at the school level.
Surely decision-making would include hiring and firing decisions.

Having said that, the biggest problem I foresee with teacher value-added is if the model is used only to generate point estimates, with no consideration of the uncertainty around those point estimates. If each teacher fixed effect (assuming that is the approach adopted) is based on only 25 observations (a typical class size), the 95 percent confidence interval around these estimates is likely to be quite wide. I suspect that, apart from seriously awful or seriously superstar teachers, most teachers' effectiveness would be indistinguishable from the mean. I'm speculating without having completed or seen the analysis of course, but I think it would be fairly heroic to be making serious decisions about effectiveness at the individual-teacher level on the basis of such a model.
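To put a rough number on that (assuming a student-level outcome standard deviation of 15 score points around the model's prediction - my assumption, not a figure from the report):

```python
import math

sigma = 15.0  # assumed SD of student outcomes around the model's prediction
n = 25        # one class of students per teacher

standard_error = sigma / math.sqrt(n)  # 15 / 5 = 3 score points
ci_half_width = 1.96 * standard_error  # the 95% CI is +/- this much
print(round(ci_half_width, 1))  # 5.9
```

Under these assumptions, any teacher whose estimated effect sits within about six score points of zero is statistically indistinguishable from average - which is likely to be most teachers.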

It gets worse though. The report argues that not only could you derive teacher value-added for the class as a whole, but also for sub-groups within that class:
In its sector support role, the Ministry should provide data to schools that more accurately shows areas of strength and weakness so teachers can seek necessary support. For example, a teacher may be highly effective with native English speakers but not students with English as a second language.
Even if you believe that a robust model could be constructed on the basis of 25 data points per teacher, you must agree that going below that level (e.g. to the number of ESOL students within a class) is ridiculous. And that's without even considering that these models are based on correlations, not causal relationships.

Another problem that isn't noted in the report is measurement. Unlike the U.S., New Zealand doesn't have a standardised testing system. Students select their NCEA subjects themselves, not all subjects have prerequisites (e.g. a student can take NCEA Level 3 Economics without Level 1 or Level 2), and not all subjects exist at all levels. On top of that, NCEA is based on achieving standards (or not) rather than a grade distribution. So it isn't clear how you would measure value-added in the absence of some standardised test.

Not only is the teacher value-added approach advocated in the report problematic for the reasons noted above, but the empirical support for its effectiveness is also notably thin. Ironically, the cautionary examples that Cathy O'Neil uses in her book come from the Washington, D.C. model, which is the sole source of empirical support that Udahemuka cites in her report. The local school example that the report highlights, Massey High School in Auckland, doesn't use a teacher value-added approach at all and is (according to the report) achieving excellent results. So the key New Zealand example demonstrates that teacher value-added isn't even necessary! This contrasts with other parts of the report, where more international examples of success can be drawn on.

The New Zealand Initiative may have the best intentions. Improving education quality is a laudable goal. But one of the key recommendations in this report seriously misses the mark.

Thursday, 6 July 2017

Why we are sucked in by good deals for things we don't need

Back in April, an interesting article reported:
IF you spent the weekend spending up you’re certainly not alone...
Dr Brockis, who specialises in brain health, said retailers were cashing in on our buying habits.
The Future Brain author said most people got excited buying things with many feeling a great sense of control when they handed their wallet over.
“Our brain reacts to buying things,” she said.
“We either feel a sense of satisfaction we have something we want or reward if we’re buying for other people.”
Dr Brockis also said sales were an effective tool for retailers because shoppers were far more likely to buy something they didn’t need.
“That thinking we got such a bargain is what retailers have really honed in on,” she said.
“If something is significantly discounted shoppers are far more likely to buy it whereas if it’s small discount they’re not as drawn to it. Big discounts pique our curiosity.”
She said discounts made buying irresistible for some.
“The problem is our shopping bias to pay less for a given item can blind us to the fact we actually don’t need the item at all or it doesn’t suit us or might be the wrong size,” she said.
Of course, if a good becomes less expensive, consumers will buy more of it. That is the simple Law of Demand, which underlies the downward-sloping demand curve. However, even if a good is less expensive than before, it makes little sense for consumers to buy it if they have no use for it, i.e. if "we actually don’t need the item at all or it doesn’t suit us or might be the wrong size". So what is going on?

In behavioural economics, we recognise that consumers not only derive utility from the good or service they purchase, but also from the act of purchasing itself. We call this transaction utility. If a consumer feels that they are 'getting a good deal', this makes them happier (higher utility), and makes them more likely to purchase.

The consumer might feel like they are getting a good deal because the price is below some reference price, e.g. $10, marked down from $15. This feels like a good deal. Or perhaps the good is bundled with other things, such that the bundle feels like a good deal. This explains why we buy combo meals when we don't really want a drink or fries, or why we take up buy-two-get-one-free offers when we really only wanted one item.
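The logic of transaction utility can be sketched in a few lines. This is a minimal stylised example (the dollar values are purely illustrative, not drawn from any study): a quasi-rational shopper buys when acquisition utility plus transaction utility is positive, even if the item alone isn't worth its price.

```python
def buys(value_to_me, price, reference_price):
    """Stylised purchase decision combining Thaler-style acquisition and transaction utility."""
    acquisition_utility = value_to_me - price       # worth of the item itself, net of price
    transaction_utility = reference_price - price   # the 'good deal' feeling
    return acquisition_utility + transaction_utility > 0

# An item I value at $8, priced at $10: not worth buying on its own merits...
print(buys(value_to_me=8, price=10, reference_price=10))  # False
# ...but 'marked down' from $15, the deal feeling tips the decision.
print(buys(value_to_me=8, price=10, reference_price=15))  # True
```

Note that in both cases the shopper pays $10 for something they value at $8; only the reference price, and hence the transaction utility, differs.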

Now, marketers know about transaction utility and use this to influence our purchasing behaviour. That's why they emphasise the original price whenever a discount is offered. Or even worse, why they may initially offer goods at a crazy high price, in order to maximise the discount that is subsequently offered. Taking advantage of our quasi-rational behaviour increases their profits.

Fortunately, we can somewhat protect ourselves against falling victim to such 'false' transaction utility, but only if we're aware of it. Always ask yourself: "Am I only buying this because it seems like a good deal, or do I really need this?" It may not work all the time, but at least it might save us from our worst excesses.

Monday, 3 July 2017

The economics of Survivor

I love the reality show Survivor, and have done so since the very first season. In fact, it's the only reality show I can stomach. Over the past several years, I have even toyed with the idea of writing a book (or at least, a long monograph) on the economics of Survivor, since there are so many economic concepts that can be illustrated using examples from the show - from game theory, to choice under uncertainty, to comparative advantage.

However, it seems like I've been beaten to this idea. In the latest issue of the Journal of Economic Education, Dean Karlan (Yale) wrote an article on the economics of Survivor (ungated earlier version here). Fortunately for my embryonic book plans, Karlan's article only covers three examples: (1) individual decision-making and how pride and honour can be included in an individual's utility and affect their choices; (2) game theory and backward induction; and (3) repeated interactions (also game theory).

The second example is perhaps the most interesting. It relates to the very first season of Survivor, won by Richard Hatch. Karlan writes about the final immunity challenge in the game, which was a test of stamina where the three remaining competitors had to stand on a stump and touch a pole, with the last remaining one winning immunity and getting to choose which of the other two would join them in the final tribal council:
At two and a half hours, all three remained. Then, Richard surprised everyone when he voluntarily took his hands off the pole and disqualified himself from immunity. A real puzzle is why it took him so long to drop out (the answer may be simple: dropping out too soon would have made his strategy obvious, and thus less effective). Why was it optimal for him to lose?
Start at the end. There were four possible paths in this game:
(1) Richard wins and votes out Rudy, thus competing against Kelly in the final. Richard and Rudy had a strong alliance, but Richard did not think Rudy would forgive him for breaking it, even at the final stage. Thus, Richard believed that if he won and voted out Rudy, he would lose Rudy’s vote. He wanted to compete against Kelly, but he wanted Kelly to be the one to vote out Rudy.
(2) Richard wins and votes out Kelly, thus competing against Rudy. Richard loses. Everyone knew that if Rudy made it to the end, he would win the game.
(3) Kelly wins. If Kelly wins, she votes out Rudy (because, again, everyone knew that if Rudy made it to the end, he would win the million dollars). Then, Richard and Kelly compete for the million dollars, but here Richard wins Rudy’s vote, whereas in option #1 above Richard would lose Rudy’s vote.
(4) Rudy wins. Then he votes off either one. It does not really matter, because Rudy would win the game.
Basically, Richard had to ask himself: If I win the immunity challenge and vote off Rudy, what are the odds that the final council will make me the winner? Or, if I lose on purpose, what are the odds that Rudy beats Kelly in the immunity challenge, and then wins the game?
Given these options, Richard actually made the right choice. He released voluntarily. Kelly then won the immunity challenge (after four hours, Rudy lost his concentration and accidentally released the pole) and removed Rudy from the game. Thus, the final vote came down to Kelly vs. Richard, with Richard winning by one vote - that of Rudy.
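The backward induction above can be sketched numerically. All of the probabilities below are my own hypothetical assumptions for illustration (Karlan's article gives no specific numbers); the values are Richard's chance of winning the jury vote at the final tribal council under each path.

```python
# Path 1: Richard wins immunity, votes out Rudy, and so loses Rudy's jury vote.
P_WIN_FINAL_WITHOUT_RUDYS_VOTE = 0.3
# Path 3: Kelly wins immunity and votes out Rudy; Richard keeps Rudy's jury vote.
P_WIN_FINAL_WITH_RUDYS_VOTE = 0.7
# Paths 2 and 4: Rudy reaches the final and (as everyone agreed) wins.
P_WIN_FINAL_AGAINST_RUDY = 0.0

def value_of_winning_immunity():
    # Richard's best reply after winning immunity: vote out Rudy, not Kelly.
    return max(P_WIN_FINAL_WITHOUT_RUDYS_VOTE, P_WIN_FINAL_AGAINST_RUDY)

def value_of_dropping_out(p_kelly_beats_rudy=0.5):
    # If Richard drops out, Kelly winning immunity leads to path 3,
    # Rudy winning leads to path 4.
    return (p_kelly_beats_rudy * P_WIN_FINAL_WITH_RUDYS_VOTE
            + (1 - p_kelly_beats_rudy) * P_WIN_FINAL_AGAINST_RUDY)

print(value_of_winning_immunity())  # 0.3
print(value_of_dropping_out())      # 0.35 - dropping out is the better gamble
```

Under these assumed numbers, deliberately losing is optimal whenever the matchup it sets up (facing Kelly while keeping Rudy's vote) is sufficiently better than the matchup winning would force, even after accounting for the risk that Rudy wins immunity instead.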
There's much more to the economics of Survivor than this one example illustrates (or even all three that Karlan uses in his article). So maybe there is still space for a book?

Sunday, 2 July 2017

Land rents and urban fertility

This week I've been at the 9th International Conference on Population Geographies in Seattle. I enjoy these conferences as they are a mix of economics, demography, and of course geography, and that intersection is increasingly where my research lies. However, this post isn't about anything I heard at the conference, but is instead about this 2014 paper (ungated earlier version here) by Hiroshi Aiura (Oita University) and Yasuhiro Sato (Osaka University), which was published in the Canadian Journal of Economics.

In the paper, the authors develop a model that links land rents, land consumption (for the rearing of children) and fertility in urban areas, and they particularly distinguish between fertility in the urban core and the suburbs (or urban fringe). The main argument in the paper is that families need land to raise children. Land is more expensive in the urban core than in the suburbs, so when people want children they move out of the central city. However, land is costly to rent even in the suburbs, so rents also affect the number of children that families raise. In the theoretical model Aiura and Sato develop:
...the land rent is higher in the central part of the city, leading to lower land consumption and fewer children. Moreover, city growth results in increases in land rent, which in turn results in a decline in land consumption and fertility.
Most of the paper is quite mathematically technical. However, the authors then go on to calibrate their model with data from the Tokyo metropolitan area over the period from 1950 to 2010. For the most part, their model does replicate the observed results for Tokyo, although the observed fertility differential between urban core and urban fringe is smaller than the differential from the model. However, the key results still hold: the total fertility rate is higher in the suburbs than in the urban core, and as the city grew, total fertility fell.

What should we take away from this? I don't think it explains all, or necessarily even a large part, of the decline in total fertility that countries have been experiencing. However, it may help to explain some of the localised differences. I wouldn't be at all surprised if total fertility was lower in central Auckland than outside the urban core (though there are ethnic differences at play across those areas as well).

Also, we currently live in a period where house prices have been rising (particularly in the large urban centres like Auckland). If this research holds for New Zealand (as it appears to for Japan), we might expect further declines in urban fertility as a result, and lower than expected fertility to persist as long as house prices remain high.