Sunday 28 February 2016

Alcohol minimum pricing, elasticity and profits

One of the common refrains from public health advocates is that New Zealand should introduce unit minimum pricing for alcohol, similar to Scotland and Canada (and soon to be introduced in Ireland). A recent article in the New Zealand Medical Journal made the case (gated; I don't see an ungated version anywhere), which was picked up by the New Zealand Herald, and critiqued by Eric Crampton.

I want to take a different angle, of relevance to my ECON100 students. What would be the effects of alcohol minimum pricing, if it were introduced? For simplicity, I'm going to ignore the existing excise tax on alcohol, and any externalities, and just concentrate on the minimum price.

The effect is shown in the diagram below. Without minimum pricing, the market equilibrium price is P0, and the quantity of alcohol sold (and presumably consumed) is Q0. But with a binding minimum price (above the equilibrium price) of P1, the quantity of alcohol demanded falls to Q1. In other words, alcohol consumption falls.


Now consider the economic welfare effects. The consumer surplus is the difference between the price the consumers are willing to pay, and the price they actually pay. Without the minimum price, consumer surplus is AEP0, but with the minimum price this falls to ABP1 (which makes sense - consumers consume less alcohol with the minimum price than without it, and pay a higher price for that alcohol). The producer surplus is the difference between the price the alcohol retailers receive, and their costs (which are shown by the supply curve). Without the minimum price, producer surplus is P0EF, but with the minimum price producer surplus is P1BCF. However, we can't easily tell if producer surplus has increased or decreased - retailers are selling less alcohol, but they are receiving a higher price for it.
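The ambiguity in producer surplus is easy to see with a small numerical sketch. Assuming linear demand and supply curves (all numbers below are hypothetical, chosen so both demand curves pass through the same equilibrium of P0 = 11 and Q0 = 9):

```python
def surplus(a, b, floor=None):
    """Consumer and producer surplus with linear demand P = a - b*Q and
    linear supply P = 2 + Q, with an optional binding minimum price.
    All numbers are hypothetical."""
    c, d = 2.0, 1.0                      # supply intercept and slope
    q = (a - c) / (b + d)                # equilibrium quantity
    p = a - b * q                        # equilibrium price
    if floor is not None and floor > p:  # binding minimum price
        p = floor
        q = (a - p) / b                  # quantity demanded at the floor
    cs = 0.5 * (a - p) * q               # triangle below demand, above price
    ps = q * (p - c) - 0.5 * d * q * q   # area above supply, below price
    return cs, ps

# Steep (relatively inelastic) demand: producer surplus rises with the floor
print(surplus(20, 1), surplus(20, 1, floor=14))      # (40.5, 40.5) (18.0, 54.0)
# Flatter (relatively elastic) demand through the same equilibrium: it falls
print(surplus(15.5, 0.5), surplus(15.5, 0.5, floor=14))
```

With the steeper demand curve the minimum price raises producer surplus (40.5 to 54), while with the flatter demand curve it lowers it (40.5 to 31.5) - exactly the ambiguity described above. Consumer surplus falls in both cases.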

Whether producer surplus increases or decreases when minimum pricing is introduced depends mainly on the price elasticity of demand for alcohol. If demand for alcohol is price elastic, then alcohol consumers are relatively sensitive to price changes. That means that if price increases by x%, then the quantity of alcohol demanded will decrease by more than x%, and the firm's revenue (price multiplied by quantity sold) will decrease. Whether that results in decreased profit depends on whether the loss of revenue is larger than the costs saved by selling fewer units (which in most cases it will be, so profits will fall).

On the other hand, if demand for alcohol is price inelastic, then alcohol consumers are relatively insensitive to price changes. That means that if price increases by x%, then the quantity of alcohol demanded will decrease by less than x%, and the firm's revenue (price multiplied by quantity sold) will increase. Given that costs will likely decrease (because the firm is selling fewer units), profits will necessarily rise.
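The revenue logic in the two paragraphs above can be checked with a quick sketch, assuming (purely for illustration) a constant-elasticity demand curve:

```python
def revenue_change(elasticity, price_rise=0.10):
    """Percentage change in revenue after a price rise, assuming a
    constant-elasticity demand curve Q = k * P**(-elasticity).
    (The constant k cancels out of the ratio.)"""
    quantity_ratio = (1 + price_rise) ** (-elasticity)  # new Q / old Q
    revenue_ratio = (1 + price_rise) * quantity_ratio   # new PQ / old PQ
    return round((revenue_ratio - 1) * 100, 2)

print(revenue_change(1.5))  # elastic demand: revenue falls (about -4.65%)
print(revenue_change(0.5))  # inelastic demand: revenue rises (about +4.88%)
```

With unit-elastic demand (elasticity of exactly 1), revenue is unchanged - the boundary case between the two paragraphs above.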

So, is demand for alcohol price elastic or price inelastic? There is a fair amount of debate on this point, and Eric Crampton has some of the latest evidence here. However, we need not look far to determine whether the sellers themselves believe that demand is price elastic - if they think that is the case, they will argue against any minimum pricing (because if demand were price inelastic, minimum pricing would increase their profits!). And indeed, supermarkets (who have the largest share of the retail alcohol market) are arguing against this policy. That suggests alcohol demand is price elastic. [*]

As an addendum, I recently had a student who worked for a large brewer estimating the price elasticity of demand for individual products (note: not for alcohol as a whole). This student (who I can't name, given their position in the industry) found that demand for specific products was highly elastic. This shouldn't be too surprising - there are many substitutes for a specific brand of beer. However, the student's results also suggest that consumers are very willing to shift to cheaper options (within the beer category).

Overall then, it appears that alcohol minimum pricing would reduce retailers' profits (otherwise they would be less likely to argue strongly against it), suggesting that demand for alcohol is price elastic.

[*] The supermarkets might also argue against the policy if they believed that it would increase their costs, such as if they had to change their pricing policies. The costs of complying with minimum pricing don't seem to me to be large, so this seems unlikely as an explanation.

Saturday 27 February 2016

Why women pay more

Back in December, Danielle Paquette wrote in the Washington Post about gender differences in pricing:
Radio Flyer sells a red scooter for boys and a pink scooter for girls. Both feature plastic handlebars, three wheels and a foot brake. Both weigh about five pounds.
The only significant difference is the price, a new report reveals. Target listed one for $24.99 and the other for $49.99.
The scooters' price gap isn't an anomaly. The New York City Department of Consumer Affairs compared nearly 800 products with female and male versions — meaning they were practically identical except for the gender-specific packaging — and uncovered a persistent surcharge for one of the sexes. Controlling for quality, items marketed to girls and women cost an average 7 percent more than similar products aimed at boys and men.
When a seller offers a product (or service) for different prices to different customers (or groups of customers), and those price differences don't relate to differences in cost, we refer to that as price discrimination. In order for price discrimination to work, sellers need to meet three conditions:
  1. Different groups of customers (a group could be made up of one individual) who have different price elasticities of demand (different sensitivity to price changes);
  2. The seller must be able to deduce which customers belong to which groups (so that they get charged the correct price); and
  3. No transfers between the groups (since you don't want the low-price group re-selling to the high-price group).
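A quick sketch of how different elasticities translate into different profit-maximising prices, using the standard Lerner (inverse-elasticity) markup rule for a seller with market power. The marginal cost and elasticity figures below are purely hypothetical, chosen only to roughly reproduce the scooter price gap:

```python
def markup_price(marginal_cost, elasticity):
    """Profit-maximising price from the Lerner condition
    (P - MC)/P = 1/elasticity; requires elasticity > 1."""
    if elasticity <= 1:
        raise ValueError("a profit-maximising price needs elasticity > 1")
    return marginal_cost * elasticity / (elasticity - 1)

# Hypothetical numbers: boys' scooters face more elastic demand (more
# substitutes), girls' scooters face less elastic demand (fewer substitutes)
print(markup_price(20, 5))  # boys' scooter: 25.0
print(markup_price(20, 2))  # girls' scooter: 40.0
```

The same product at the same marginal cost ends up priced very differently, purely because one group of buyers is more price sensitive than the other.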
What about the case of boys' scooters and girls' scooters? The sellers must believe that buyers of scooters for boys have more elastic demand for scooters than buyers of scooters for girls. How could that be? In my experience (having both a son and a daughter), if your child really wants a scooter you probably want to shop around for the best option. If you do so, you quickly realise that there are lots of scooter options targeted at boys, but far fewer options targeted at girls. This is of course related to the relative sizes of the markets for boys' scooters and girls' scooters. The larger market size means that more firms want to sell scooters for boys than for girls (as well as more variety of scooters for boys than for girls), which has a flow-on impact on pricing.

Because there are more options for boys' scooters, we say that there are more substitutes. Customers have more choice, and that makes demand relatively more elastic, so firms find it harder to raise prices (because customers would simply buy a boys' scooter from a different seller instead). With girls' scooters, there are fewer options (fewer substitutes), so firms find it easier to raise prices (or rather, to not lower them to the same price as scooters for boys).

This isn't just happening in the market for scooters. Paquette goes on to note other products where women pay more, including razor cartridges, haircuts, and clothing. You can make elasticity-related arguments for those differences in pricing too, though not all are related to the number of available substitutes. As Tim Harford notes:
This female insensitivity to price — if it really exists — might be driven by all kinds of things. Perhaps women tend to be busier and have less time to shop around. Or perhaps they care more about quality when it comes to deodorant or shampoo, whereas men just want something cheap.
Uri Gneezy and John List, in their book "The Why Axis: Hidden Motives and the Undiscovered Economics of Everyday Life", argue that this type of discrimination is unfair, and for some products (like the scooter) it is hard to see the fairness in the pricing. However, as I have argued, for many products it may be more unfair not to have price discrimination. Either way, this is a more common pricing practice than many people realise. The next time you want to buy a scooter for your daughter, maybe you should buy the red one.

Friday 26 February 2016

ED data doesn't tell us much about the spatial distribution of acute alcohol-related harm

One of my main research areas in recent years has been the social effects of alcohol outlet density - essentially looking at the relationship between the number of outlets in an area and measures of alcohol-related harm. So, this recent paper in the journal Addiction (sorry, I don't see an ungated version anywhere) by Michelle Hobday, Tanya Chikritzhs, Wenbin Liang, and Lynn Meuleners (all Curtin University) interested me.

In the paper, the authors look at the effect of alcohol outlets, sales and trading hours on alcohol-related injuries in Perth, Australia. The idea of looking at outlet numbers (separately for on-licence and off-licence outlets), sales, and trading hours all within the same statistical framework is interesting and potentially important. However, there is a problem in that they use data from emergency department (ED) presentations (I'll explain why that's a problem shortly). Anyway, the authors find:
At postcode level, each additional on-premises outlet with extended trading hours was associated with a 4.6% increase in night injuries and a 4.9% increase in weekend night injuries. An additional on-premises outlet with standard trading hours was associated with a 0.6% increase in night injuries and 0.8% increase in weekend night injuries.
So that seems fine. However, when looking at off-licence outlets:
Conversely, counts of off-premises outlets were associated negatively with alcohol-related injury, indicating a 3.9-4.9% lower risk per additional outlet.
What the hell? So, more off-licence outlets are associated with less harm?

John Holmes and Petra Meier (both University of Sheffield) wrote a commentary on the article in the same issue of Addiction. In the commentary, they correctly note that these sort of inconsistent results are endemic in the literature on the relationship between alcohol outlets and harm. By inconsistent I mean both inconsistent between different studies (even within the same geographic area), and inconsistent with theoretical predictions.

In this case, the problem is the measure of alcohol-related harm. Hobday et al. use (alcohol-related) ED presentations, which on the surface seems like a good measure of alcohol-related harm. Person drinks too much, suffers an accident (or violent incident) and goes to the hospital. Simple enough, right? The problem lies in the address coding in the ED dataset. In health data (like ED data), patients are geographically coded to their home address. This may, or may not, coincide with the location of the harm.

For chronic harm (e.g. cirrhosis), coding to patients' home addresses makes a lot of sense. The geographic accessibility of alcohol over the long term can reasonably be measured by the extent of access to alcohol from each patient's home. However, for acute harm (e.g. injury presentations) this doesn't hold. The geographic accessibility of alcohol on the night of the incident relates to where the patient was on that night, which may or may not be their home address at all. I'd wager that a lot of alcohol-related injury presentations at night (the measure used by Hobday et al.) arise from encounters in the night-time economy away from the patient's home. Indeed, Hobday et al. recognise the problems with their data:
A limitation of using ED records is that location information is restricted to the patient's place of residence, and data on last place of drinking are not recorded.

So, there is little reason for us to believe that we would observe a positive relationship between alcohol outlet numbers (or hours or sales) and ED presentation data. In fact, my co-authors and I observed mostly statistically insignificant results when looking at similar data for Manukau City. Having acknowledged the problems with the ED data, we have avoided this approach in our subsequent work (e.g. see here or here).

Now, let's think through the unexpected results. Hobday et al. find that there are more ED presentations from people who live in areas that have fewer off-licence outlets (especially those that open later) compared with areas that have more off-licence outlets. One potential explanation is that people who live close to an off-licence outlet (especially off-licence outlets that open later) have ready access to alcohol and can easily drink at home, and have less reason to travel to entertainment precincts where they might be at higher risk of becoming a victim of violence. This might be reinforced by drinkers who don't want to go to entertainment precincts to drink, but still want to have ready access to alcohol, choosing to live in areas where an off-licence outlet is nearby. In contrast, people who don't live close to an off-licence outlet (or where such outlets close earlier) must travel further to drink, and may therefore be more likely to drink in entertainment precincts where they are at higher risk of alcohol-related harm such as violence. I'm not sure whether this explanation is the true one that underlies the results, but it might be one contributing factor.

Overall though, for the sake of credibility of results, it might be best not to use ED data as a measure of acute alcohol-related harm, unless the location data relates to the location of harm rather than the patient's residential address.

Tuesday 23 February 2016

The compensating differential for rural GPs must be enormous

In the news this morning was a story about a Tokoroa GP struggling to recruit a new doctor:
A Tokoroa doctor is struggling to fill a job that offers a young GP the potential to earn an eye-watering $400,000-plus a year - and he will even chuck in half his practice for free...
In the past four months, Dr Kenny has not received a single application for the permanent position, which he believes is due to the perception of a rural general practitioner being a dead-end job.
The 61-year-old said $400,000 after expenses was more than double a GP's average income. But even the prospect of no weekend or night work had failed to attract a taker.
Economists recognise that wages may differ for the same job in different firms or locations. Consider the same job in two different locations. If the job in the first location has attractive non-monetary characteristics (e.g. it is in an area that has high amenity value, where people like to live) then more people will be willing to do that job. This leads the supply of labour to be higher, which leads to lower equilibrium wages. In contrast, if the job in the second area has negative non-monetary characteristics (e.g. it is in an area with lower amenity value, where fewer people like to live) then fewer people will be willing to do that job. This leads the supply of labour to be lower, which leads to higher equilibrium wages. The difference in wages between the attractive job that lots of people want to do and the less attractive job that fewer people want to do is called a compensating differential.
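A minimal sketch of that supply-shift logic, with a hypothetical linear labour demand curve and a supply curve that shifts up by the premium each doctor would require to take the less attractive job (all the dollar figures are made up for illustration, not estimates for Tokoroa):

```python
def equilibrium_wage(required_premium=0.0):
    """Equilibrium wage with labour demand W = 500_000 - 20_000*L and
    labour supply W = 100_000 + required_premium + 20_000*L, where
    required_premium is the extra wage each doctor needs to accept the
    less attractive location (all numbers hypothetical)."""
    labour = (400_000 - required_premium) / 40_000  # where supply meets demand
    return 500_000 - 20_000 * labour

print(equilibrium_wage(0))        # attractive location: 300000.0
print(equilibrium_wage(200_000))  # less attractive location: 400000.0
```

The $100,000 gap between the two equilibrium wages is the compensating differential. Note that it is smaller than the per-doctor premium, because the downward-sloping demand curve absorbs part of the supply shift.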

Now, consider the case of this job for a doctor. Even when offering a 100% premium, Dr Kenny isn't able to fill the position. There are a number of ways the compensating differential may arise in this case.

First is the location. Living in urban areas is often more attractive to young people (including presumably young doctors) than living in rural areas. So, a premium is required to overcome that difference (which, admittedly, might not apply to all people).

Second are the job characteristics. Reputedly, working as a rural GP is very hard work, involving long hours, and where it is difficult to take breaks or holidays (or more so than for urban GPs or other doctors). Dr Kenny says as much:
"Last year, I cancelled a holiday because I couldn't get a locum ... and this year I am probably going to have to cancel a holiday ... and it's just tough for me."...
He worked between 8.30am and 6pm without a lunch break.
Again, a premium would need to be offered to make a job with long hours and difficult working conditions attractive.

Third are firm characteristics. In an occupation like medicine, it would not surprise me if firms' reputations for working conditions are reasonably well known within the domestic community. So, if a firm has a difficult manager, for instance, it would be less attractive to potential employees, and again a premium would need to be offered. I'm not saying that's true in this case, but in theory it is a further source of compensating differentials in wages.

Overall, it appears that a $200,000 premium (plus whatever half of the GP practice is worth) is not enough to compensate potential employees (and co-owners!) for the negative characteristics of being a doctor in Tokoroa.

[Update 24/02/2016]: After the media coverage, many doctors are interested in the position. However, they are international, not domestic, doctors. Perhaps the compensating differential for moving from Portugal or Brazil to Tokoroa isn't as large?

Also, the other practice in Tokoroa notes that the stress of owning a clinic is a factor that makes the job less attractive. See my second point above.

Friday 19 February 2016

Chinese growth and global inequality

Last year I wrote a series of posts on global inequality (see here, here, here, and here). I've been meaning to come back to this topic since reading this recent Branko Milanovic blog post about China.

China (along with India) has been the biggest contributor to the reduction in global "Concept 2" inequality (a term coined by Milanovic - this is inequality between countries in terms of average incomes, weighted by population size). However, China has been growing faster than western countries for a long time. Eventually it may catch up with or pass some of them, and at that point, will China start to contribute to an increase in global inequality? And what about inequality within China - how important is that?

Milanovic explains (though I encourage you to read his whole post):
...rising internal inequality in China added some 2 Gini points to global inequality [between 1988 and 2011]. Luckily, however China’s fast growth more than compensated for that.
But the question can be asked next, what happens if China continues growing fast? Will its inequality reducing effects wane, and eventually reverse? The intuition is helpful here: if China were to become the richest country in the world, surely its further faster growth than the world mean, will be inequality-augmenting. Therefore, there must be a point when China becomes so rich that its further growth adds to global inequality...
Now, with global Gini around 0.7, the percentile rank at which countries begin to add to global inequality is around 0.85 (that is, only if they are mean-richer than 85% of other countries). China’s mean income is still far from that point. In 2011, it is around the 60th percentile with urban China around the 70th percentile and rural China around the 35th percentile...
Thus, while growth in urban China’s income will, by 2020, be close to contributing to increasing global inequality, its rural mean will be far from that position.
In other words, growth in urban China will start to contribute to an increase in global inequality from about 2020 (at projected growth rates). Rural China is some way behind, along with India. However, if both rural China and India also catch up, then we might expect a rapid reversal of the recent pattern of decline in global inequality.
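Milanovic's threshold logic can be illustrated with a toy version of the "Concept 2" calculation: a population-weighted Gini across country mean incomes. All the incomes and populations below are hypothetical; the point is only that growth in a poor country lowers the between-country Gini, while growth in the richest country raises it.

```python
def weighted_gini(incomes, populations):
    """Population-weighted between-country Gini: the mean absolute
    difference between country mean incomes (weighting each pair by
    population), divided by twice the overall mean income."""
    total_pop = sum(populations)
    mean = sum(y * w for y, w in zip(incomes, populations)) / total_pop
    diff = sum(wi * wj * abs(yi - yj)
               for yi, wi in zip(incomes, populations)
               for yj, wj in zip(incomes, populations))
    return diff / (2 * mean * total_pop ** 2)

incomes = [2, 5, 10, 40, 50]      # hypothetical country mean incomes
populations = [30, 20, 10, 5, 5]  # hypothetical populations

base = weighted_gini(incomes, populations)
poor_grows = weighted_gini([4, 5, 10, 40, 50], populations)   # poorest doubles
rich_grows = weighted_gini([2, 5, 10, 40, 75], populations)   # richest grows 50%
print(poor_grows < base < rich_grows)  # True
```

The same mechanism drives Milanovic's ~85th percentile threshold: once a country's mean income is high enough, its further growth widens rather than narrows the gaps that the Gini sums over.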

Although, with Chinese growth looking much lower than previously expected, maybe we don't have to worry just yet?

Wednesday 17 February 2016

The changing business model for MOOCs

Yesterday I wrote a post about MOOCs and the changing role of teachers. Part of that post was a consideration of where MOOCs sat on the hype cycle. Then, soon after hitting the 'post' button, I read this Times Higher Education article, about Coursera changing its business model:
Coursera last week announced the release of dozens of new courses and course sequences, which it calls Specializations, in subjects ranging from career brand management to creative writing. But many of the new MOOCs came with a new barrier to enrollment. To sign up for Michigan State University’s How to Start Your Own Business, for example, budding entrepreneurs have to pay $79 up front for the first of five courses in the Specialization or prepay $474 for the entire program.
So, what was previously free is now becoming not so. Note that Udacity (another MOOC provider) moved to a pay model a few years ago as well. Of course, having a pay model will likely reduce the number of enrolments by students who fail to complete each course (but not entirely - many university students fail to complete courses despite much higher fees), but it also reduces the 'open' aspect of massive open online courses (does that make them MOCs instead?). As might be expected, Coursera has been criticised for making MOOCs less accessible, particularly to those on low incomes (including those in developing countries).

However, this change clearly demonstrates that the free online model wasn't sustainable. As noted in the article:
The education writer Audrey Watters called the shift “significant,” but also “inevitable.” In an email, she pointed out that Coursera has needed to develop a business model that satisfies its investors -- “although I’m not fully convinced that this move will be it,” she added.
Which suggests to me that MOOCs are approaching that 'trough of disillusionment' section of the hype cycle. The technology is failing on its initial promise, and producers are trying to find the right business model to fit. Up to now, MOOC providers have been essentially assuming that education was mostly about content provision (which MOOCs are great at). However, a substantial part of education is about signalling and credentials (I've written about signalling in education previously here), which relies on separating high-quality from low-quality students - since MOOCs are open-access (and free) and identity verification for assessments is difficult (but not impossible), MOOCs are less good at demonstrating a student's quality to employers. If employers don't recognise a MOOC certificate as signalling a high-quality employee, then it's hard to argue that it holds as much value to a student as a university qualification. In order to be a financially sustainable business though, this is a problem that MOOC providers are going to have to solve.

Tuesday 16 February 2016

MOOCs and the changing role of teachers in higher education

The research and advisory firm Gartner produces an annual 'Hype cycle for emerging technologies'. Here is the picture for 2015:


In the hype cycle, a new technology is expected to go through a series of five stages starting with the 'innovation trigger' where expectations grow rapidly, then a peak of inflated expectations (which are generally not met), followed by a trough of disillusionment (where interest wanes due to repeated failures of the technology), a slope of enlightenment where the true benefits of the technology start to become apparent (and are often different from those initially envisioned), and finally a plateau of productivity when the technology achieves mainstream adoption.

Notably absent from the hype cycle in the Gartner report is online education. Massive open online courses (MOOCs) are one of the most ubiquitous buzzwords in higher education at the moment. The idea that there are thousands of potential students willing to study online, at very low marginal cost, is appealing to university administrators. The reality, from what I have seen, is that only a very small proportion of MOOC students complete their courses, and the costs of developing a high-production-value MOOC are very high. Serious questions should be raised about where on the hype cycle MOOCs lie. Are they about to crash down into the trough of disillusionment, and would universities be better to wait until others have identified where the real value in online education lies, before investing heavily? And how should faculty react - should we be upskilling for the new online regime, or waiting until things are more settled?

On the latter question, late last year the Journal of Economic Perspectives had two interesting papers that present contrasting (and sometimes complementary) views on the state of online education (with a particular focus on economics, as you might expect). The first paper, by Michael McPherson (Spencer Foundation) and Lawrence Bacow (Harvard Kennedy School), paints what I consider to be a realistic picture of the pros and cons of online education. They actually present a variation on the discussion I have with my ECON110 class every year - that the incentives for universities to be involved in MOOCs are different for top-quality and low-quality universities. They write:
We noted earlier that more-selective and prestigious colleges and universities make less use of fully online courses than other institutions do. What explains this pattern of adoption? A natural explanation is that more-selective institutions compete on the basis of personal service, prestige, and brand while less-selective places are offering something closer to a commodity product...
One natural conclusion here is for the education market to rapidly devolve into two tiers: (1) a 'top tier' that uses high-quality (and expensive) online (or hybrid/flipped classroom) instruction to differentiate themselves, and as a quality signal and marketing tool to students (and their parents); and (2) a 'bottom tier' that offers a commoditised education based on modules drawn primarily from the online offerings of the top tier, with online tutorial support that is automated and involves minimal human input, at the lowest cost possible.

What happens to the mid-range universities in this system, which can't compete on quality and can't compete on low cost either? Will we see a hollowing out of educational institutions? These are important questions for universities in New Zealand, for instance.

McPherson and Bacow provide some hope. They note:
...for those who believe that brilliantly produced online courses taught by a handful of the very best faculty in the world will eliminate the demand for live versions of the same courses, we note the continuing vibrant and growing market for live concerts, theatrical productions, and sporting events. Cheap digital downloads of music have not eliminated the demand for live concerts, nor has the availability of live sports on TV (often with better viewing angles, instant replay, and simplified access to bathroom facilities) eliminated the demand for tickets to live sporting events.
Moreover, it is difficult (read: expensive) to integrate current events, locally-specific content, and interactive teaching into online lectures, so students who want this type of learning (which, from my experience, is vastly superior to alternatives) will seek out institutions that offer it. My feeling then is that economics faculty should be focusing on the value-add they provide in-class. If you are teaching straight from a textbook, using the pre-packaged textbook powerpoint presentations and the instructors manual questions, then you are first in line to be replaced by a MOOC. Maybe that's what you intend (it would allow more time to focus on research, after all), but it doesn't bode well for long-term job security. However, how far away is that future? As McPherson and Bacow note, there are a lot of thorny issues that remain unresolved, foremost of which are intellectual property issues.

In the second paper, Peter Navarro (UC Irvine) presents a much rosier picture of online higher education (from experience - Navarro has been teaching using a flipped classroom model for many years, and in MOOCs more recently). He also presents some good arguments for (particularly new) faculty to up-skill on online (or hybrid) delivery modes. He notes that:
...online education technologies will both substitute labor and complement labor. For example, while MOOCs may spell doom for some type of teaching like traditional lectures that cover the basics of a discipline, a shift to more hybrid courses might increase the demand for other types of teaching, like personalized in-class discussions of examples and applications. While the overall effect on labor demand is unclear, there certainly will be distributional consequences, with winners and losers among educators depending on their skills, willingness to adapt, and ability to innovate.
Again, a good argument for preparing an offering that is different from the standard textbook treatment of a topic. Of course, the relationship between teaching and job tenure assumes that high-quality teaching is valued by universities alongside high-quality research, which is by no means a given in the current funding environment. But that is an argument for another time.

Monday 15 February 2016

Even candy can't make young kids Republican

There is a famous saying that: "A man who is not a Liberal at sixteen has no heart; a man who is not a Conservative at sixty has no head", attributed to Benjamin Disraeli (but, interestingly, also to many others). I don't think this statement has been rigorously empirically tested, but a recent paper in the journal Economic Inquiry (ungated earlier version here) by Julian Jamison (Consumer Financial Protection Bureau) and Dean Karlan (Yale University) looks at the youngest end of the age distribution - children (aged 4 to 15).

The paper involves probably the cutest field experiments possible - conducted on children at Halloween. The authors explain:
We set up two tables on the porch of a home for Halloween, one festooned with McCain campaign props in 2008 (Romney in 2012) and the other with Obama props. Children, at the stairs leading up to the porch, were told they could choose which side to go. Half of the children were randomly assigned to be offered twice as much candy for the McCain table (Romney in 2012), and half were offered an equal amount...
The experimental setup allows us to measure not just what proportion of children who trick-or-treat in this neighborhood support each candidate (as indicated by their choice of table), but also how elastic their support is, or, more precisely, how elastic their desire is to make a public statement of their support.
What did they find? In the 2008 experiment:
In the “equal candy” treatment, 79% of children chose the Obama table, reflecting the high level of support for the Democratic Party in New Haven, Connecticut. When offered twice the amount of candy to go to the McCain table, 71% of the children still chose the Obama table, though the difference is not statistically significant (Table 1)...
Children ages eight and under did not respond to the additional candy incentives — approximately 30% of children chose the McCain table in both treatment groups. Children ages nine and older however, were much more responsive to the candy incentive. The percentage of older children that visited the McCain table increased from 10% without the incentives to 30% with the incentives.
So, younger children were more firm in their preferences for Obama (they had less elastic preferences than older children). And in the 2012 experiment:
Our results are largely consistent with the results from 2008, suggesting that support for Obama in this context has not declined since 2008. Eighty-two percent of children chose Obama in the “equal candy” treatment, whereas 78% of children chose Obama when twice as much candy was offered at the Romney table.
As in 2008, for children ages nine and older, the double candy incentive appeared to encourage some Obama supporters to choose Romney. While 17% of older children chose Romney when offered equal candy, 31% of older children chose Romney when offered double candy. For children ages eight or under, the double candy incentive had the opposite effect: 18% chose Romney when offered equal candy, whereas 14% chose Romney when offered more candy.
So even bonus candy isn't enough to get young children to abandon their political allegiances. Older children are more easily swayed by a little extra sugar. If we extrapolate to adults then, does that explain pork barrel politics?
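The 'elasticity of support' framing can be made concrete with a back-of-the-envelope calculation. The sketch below uses the 2008 percentages quoted above, but the midpoint (arc) elasticity framing is my own, not the authors':

```python
def arc_elasticity(q0, q1, p0, p1):
    """Midpoint (arc) elasticity: %change in quantity / %change in price."""
    dq = (q1 - q0) / ((q0 + q1) / 2)
    dp = (p1 - p0) / ((p0 + p1) / 2)
    return dq / dp

# Share of children choosing the McCain table (2008),
# when the candy payoff at that table goes from 1x to 2x
older = arc_elasticity(0.10, 0.30, 1, 2)    # ages nine and older
younger = arc_elasticity(0.30, 0.30, 1, 2)  # ages eight and under

print(older, younger)
```

For the older children the arc elasticity works out to about 1.5 (their expressed support is quite responsive to candy), while for the younger children it is exactly zero - which is just the 'less elastic preferences' point in numbers.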

Wednesday 10 February 2016

Why the destruction of two Death Stars might not have bankrupted the galactic economy

I deliberately avoided any news about Star Wars until I saw the new movie, with the consequence that there was a lot of Star Wars related stuff in my to-be-read pile when I got back from holiday. One of those was this paper by Zachary Feinstein (Washington University in St Louis). Bloomberg covered the paper, as did Mark Johnston in his blog.

In the paper, Feinstein estimates the cost of constructing two Death Stars (the one destroyed in the Battle of Yavin in Episode IV: A New Hope, and the incomplete (but operational) one destroyed in the Battle of Endor in Episode VI: Return of the Jedi). Feinstein estimates a cost of $193 quintillion ($193,000,000,000,000,000,000) for the first Death Star, and $419 quintillion for the second Death Star. He then estimates the size of the galactic economy (Gross Galactic Product, or GGP) at $4.6 sextillion per year, and then estimates the cost to the galactic financial system of the destruction of the two Death Stars. He concludes:
In this case study we found that the Rebel Alliance would need to prepare a bailout of at least 15%, and likely at least 20%, of GGP in order to mitigate the systemic risks and the sudden and catastrophic economic collapse. Without such funds at the ready, it is likely the Galactic economy would enter an economic depression of astronomical proportions.
In other words, the Rebel Alliance may have won, but they also lost because the galactic economy would implode. Nice.
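As a quick sanity check on the magnitudes (my own arithmetic, using Feinstein's figures):

```python
DS1 = 193e18   # first Death Star: $193 quintillion
DS2 = 419e18   # second Death Star: $419 quintillion
GGP = 4.6e21   # Gross Galactic Product: $4.6 sextillion per year

# Construction cost of both Death Stars as a share of one year's GGP
cost_share = (DS1 + DS2) / GGP

# Feinstein's bailout range: 15% to 20% of GGP
bailout_low = 0.15 * GGP
bailout_high = 0.20 * GGP

print(f"{cost_share:.1%}")  # roughly 13% of annual GGP
```

So the construction cost alone is around 13% of a year's GGP, and the estimated bailout ($690 to $920 quintillion) exceeds the combined construction cost - the systemic financial damage is larger than the physical loss itself.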

However, there is one (unstated) assumption on which the analysis is based, and that is that the galactic economy is based on a capitalist model, with infrastructure spending funded through bonds sold on relatively open financial markets. Now, perhaps somewhere the economic system of the Galactic Empire is explained in detail, but it certainly isn't in the movies (and Wookiepedia doesn't have a lot of relevant detail). And when I think about the Empire, I don't expect capitalism to be the economic system of the day (this in spite of the Trade Federation and the InterGalactic Banking Clan being represented on the Separatist Council in Episode II: Attack of the Clones (see here)).

The Empire is clearly autocratic, and while that doesn't necessarily imply a low degree of market orientation, it seems to me that Imperial control of the means of production is rather more likely than not, particularly for building large planet-destroying infrastructure and conducting a campaign to eliminate the Rebel Alliance. Moreover, the rise of the Empire involved an immense increase in governmental control, and planned (command) economies are a powerful tool for enacting and enforcing economic change. So it should not be a surprise if the galactic economy (at least on those planets closely controlled by the Empire) was a command economy.

Now, because command economies have greater control over the means of production, they can apply resources to 'grand projects', and are often more successful than market economies at doing so. So the Death Stars could have been constructed by decree, with resources (labour, capital and materials) re-deployed from other uses (but at significant opportunity cost in terms of the reduced availability and quality of consumer goods). Raising significant funding through financial markets might not have even been necessary, in which case the Death Stars could have been constructed without a large systemic risk to the galactic financial system.

As a side note on the galactic financial system, according to Wookiepedia, at the end of the Clone Wars, "control of the Banking Clan was ceded to the office of the Supreme Chancellor". So the Empire had direct control over the largest bank in the galaxy - it was effectively a state-owned bank.

Finally, a couple of other quick Star Wars related notes. The Free Exchange blog has a great post on the economics of the Star Wars universe, especially trade. Well worth a read. As for The Force Awakens, I had low expectations for it, and I guess my expectations were met. Like many, I was disappointed that the plot was essentially wholesale recycled from the original trilogy. However, there was something else that bugged me that I couldn't put my finger on. Until I read this critique, which pretty much nailed it.

Tuesday 9 February 2016

A modest proposal for dealing with red light runners

Running red lights is one of the banes of modern driving (and of cycling, and of crossing roads as a pedestrian). And periodically people vent to the media about it (like this recent opinion piece by Barney Irvine of the AA):
An epidemic - that's how Auckland AA members describe red light running in their city.
It's a road safety issue that has people scared, frustrated, and crying out for action. Every year, two to three people are killed and over 300 injured in accidents caused by red light running, imposing a social cost of close to $50 million on the country.
Irvine's proposed solution is to increase the number of red light cameras. I want to suggest an alternative proposal. But first some background.

While some drivers are careless or inattentive, I'd argue that most drivers are totally conscious of running a red light. A rational (or quasi-rational) driver weighs up the costs and benefits of running the red light, and if the benefits outweigh the costs they run the light. The benefits are the (often small) time saving from not waiting for one phase of traffic lights.

The costs? There's a probability of having an accident and the associated costs (of car repair, but also potential accident-related health costs). The probability of an accident (and facing the associated costs) increases the longer after the light has turned red, which explains why more drivers will run through a red light 0.1 seconds after it changes than 3 or 4 seconds after.

There is also the probability of being caught and fined (currently the fine is $150 for failing to stop at a traffic signal). The probability of being caught is probably fairly low (essentially a police officer would have to be at the same intersection to see you do it), so the probability-adjusted cost of fines is pretty small. Irvine's solution would increase the probability of being caught (on camera), and increase this cost.

However, there's another problem with the probability-adjusted cost of fines. The fines occur at some time in the future, and we know that people value costs (and benefits) that occur in the future much less than immediate costs and benefits. So, the cost gets discounted and compared with an immediate benefit. And for some drivers, the cost (small probability-adjusted cost of accident plus smaller probability-adjusted cost of fine) is smaller than the benefit (time savings), and they run the red light.

My modest proposal is to make the cost more immediate, and costly in time as well as monetary terms: Let's have road spikes that deploy on the white lines at traffic lights, 0.1 seconds after the light turns red. Any red light runners then face an immediate and severe cost of four new tires (blown out by the road spikes). On top of that, it pretty much ensures that there would be no drivers for whom the benefits of red light running outweigh the cost - because the time cost alone (of being forced to stop because all your tires are flat) is sure to exceed the time benefit of running a red light. And that's without considering the monetary cost.
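The driver's calculus described above can be sketched as a simple expected-value calculation. Apart from the $150 fine, every number below is invented purely for illustration:

```python
def expected_net_benefit(time_saving, p_accident, accident_cost,
                         p_caught, penalty, delay_discount):
    """Net benefit to a quasi-rational driver of running a red light.

    Costs are probability-weighted; the fine is also discounted
    because it arrives some time in the future."""
    immediate_cost = p_accident * accident_cost
    delayed_cost = delay_discount * p_caught * penalty
    return time_saving - immediate_cost - delayed_cost

# Status quo: $150 fine, low probability of being caught,
# and the future fine is heavily discounted
status_quo = expected_net_benefit(5.0, 0.001, 3000.0, 0.02, 150.0, 0.5)

# Road spikes: a certain, immediate cost of four new tires
spikes = expected_net_benefit(5.0, 0.001, 3000.0, 1.0, 800.0, 1.0)

print(status_quo, spikes)
```

With these (hypothetical) numbers the status quo leaves a positive net benefit, so the driver runs the light; the spikes make the net benefit heavily negative. The key design point is that the spikes convert a small, delayed, probability-adjusted cost into a large, certain, immediate one.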

And before you think this idea is crazy and wouldn't be implemented, consider this video of cars crashing into automatic bollards designed to reserve bus lanes for buses alone, which follows a similar principle:


Maybe forcing drivers to face a higher cost of their actions isn't so crazy?

[2018 Update]: The Chinese city of Daye (in Hubei province) implements something similar, to deter jaywalking pedestrians.

Monday 8 February 2016

The welfare impacts of the 2013 prescription fee increases

Regular reader Mark asked me via email about the effects of increasing standard fees for prescription medications from $3 to $5. Since it was a good question, I thought I would cover it here.

To recap, the government made this change effective from 1 January 2013. There was a fair amount of media coverage at the time (see here and here), and a 2014 editorial in the Journal of Primary Health Care (PDF).

What effects can we expect from this change? First, we need to consider the market for prescription medications, which has two parts: (1) the wholesale market, where the government (through Pharmac) negotiates prices directly with drug suppliers, then supplies the drugs to pharmacies at set wholesale prices; and (2) the retail market, where the government keeps the retail price of prescription medicines low (this is the price that increased from $3 to $5) by paying pharmacists a subsidy (via the District Health Boards) to cover the difference between the wholesale price and the retail price. We know there must be a subsidy in the retail market, because it is unlikely that the equilibrium retail price of prescription medicines would be as low as $3, and it is also unlikely that the low price is achieved through a price control, otherwise we would observe excess demand.

So, considering the retail market only, an increase in the prescription price from $3 to $5 is essentially achieved through a decrease in the subsidy paid to the pharmacists. As shown in the diagram below, the supply curve without a subsidy is S, and the initial subsidy is shown by the curve S+subs. If there was no subsidy, the equilibrium price would be P0 (and the quantity of medicines sold and consumed would be Q0), but with the subsidy the consumer pays the price P1 ($3), and the pharmacist receives the effective price P2 (which is the $3 the consumer pays, plus the subsidy paid by the government), and the quantity of medicines sold and consumed increases to Q1.


Now consider the economic welfare effects. The consumer surplus is the difference between the price the consumers are willing to pay, and the price they actually pay. With no subsidy, consumer surplus is ABP0, but with the subsidy this increases to ACP1 (which makes sense - consumers consume more prescription medicines with the subsidy than without it, and pay a lower price for those medicines). The producer surplus is the difference between the effective price the pharmacists receive, and their costs (which are shown by the supply curve). With no subsidy, producer surplus is DBP0, but with the subsidy this increases to DEP2 (which again makes sense - producers sell more prescription medicines with the subsidy than without it, and receive a higher effective price for those medicines).

The value of the subsidy paid by the government per unit of medication is the difference between the effective price the pharmacists receive (P2) and the price the consumers pay (P1, or $3). The total cost to the government of the subsidy is this amount multiplied by the quantity of subsidised medicines sold (Q1), or the area P2ECP1.

Total economic welfare is made up of the consumer and producer surplus, minus the area of the subsidy. This is actually ABD-BEC. The area BEC is the deadweight loss of the subsidy, which arises because total economic welfare is actually maximised at the quantity where marginal social cost (MSC) is equal to marginal social benefit (MSB), which is at the quantity Q0, and the market is therefore trading too many medicines relative to this quantity. [*]
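The welfare areas can be illustrated with a minimal numerical sketch, using made-up linear demand and supply curves (the intercepts, slopes, and $4 per-unit subsidy below are invented for illustration, not estimates for this market):

```python
# Illustrative linear curves: demand P = 20 - Q, supply P = 2 + Q
a, b = 20.0, 1.0   # demand intercept and slope
c, d = 2.0, 1.0    # supply intercept and slope
s = 4.0            # per-unit subsidy paid to sellers

def equilibrium(subsidy):
    """Quantity traded and consumer price with a per-unit seller subsidy."""
    q = (a - c + subsidy) / (b + d)
    p_consumer = a - b * q
    return q, p_consumer

q0, p0 = equilibrium(0.0)  # no subsidy
q1, p1 = equilibrium(s)    # with subsidy: lower consumer price, higher quantity

cs = 0.5 * (a - p1) * q1          # consumer surplus (area ACP1)
ps = 0.5 * (p1 + s - c) * q1      # producer surplus (area DEP2); effective price = p1 + s
govt = s * q1                     # cost of the subsidy (area P2ECP1)
dwl = 0.5 * s * (q1 - q0)         # deadweight loss (triangle BEC)

print(q0, q1, cs + ps - govt, dwl)
```

With these numbers, quantity rises from 9 to 11, and total welfare (CS + PS minus the subsidy cost) comes out exactly equal to the no-subsidy welfare minus the deadweight-loss triangle, matching the area accounting in the text.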

Now think about the effect of the decrease in subsidy, shown on the diagram below by the new curve S+subs2. Compared with the previous subsidy, the consumers pay a higher price (P3, or $5), the pharmacists receive a lower effective price P4, and the quantity of medicines traded decreases to Q2.


Now consider economic welfare. The consumer surplus with the new subsidy has decreased to AFP3, and the producer surplus has decreased to DGP4. The cost to the government has decreased to P4GFP3. Total economic welfare, on the other hand, has increased to ABD-BGF, with the smaller deadweight loss BGF.

So, considering this change we might expect to hear complaints from consumers, who were made worse off by the change. And indeed:
"A lot of people are saying it had to go up at some point, [but] $5 was too, too much for several people, especially with rent skyrocketing. For some people $3 is the limit - $5 is actually becoming quite considerable."
We might expect to hear complaints from pharmacists too, who were also made worse off. And indeed, read the editorial from JPHC (the lead author is based at the School of Pharmacy at the University of Otago).

The cost to taxpayers decreases, and the government at the time argued that the savings would be reinvested into the health sector. Presumably the government felt that there were better value health care alternatives that the subsidy savings could be applied to. Without a careful analysis though, there is no way to evaluate this claim.

*****

[*] From a strict utilitarian perspective the deadweight loss means that this subsidy is not a good idea. However, those who benefit most from the subsidy are likely to be people on low incomes (especially the chronically sick, the elderly, children, etc.), so the deadweight loss might alternatively be considered a reasonable cost of a transfer of wellbeing from taxpayers as a whole to these vulnerable groups.

Saturday 6 February 2016

Unemployment benefits and work disincentives

A couple of weeks ago the NZ Herald had a couple of (short) opinion pieces on the choice for beneficiaries between working and remaining on the benefit - one by Karen Pattie, and one by Lindsay Mitchell. Karen writes:
There is a large percentage of my clients who approach our service and ask us to look at the viability of returning to work -- single parents who are committed to getting off benefit and excited about the prospect of returning to work.
When we break down the in-work tax credit, the childcare subsidy, accommodation supplement and temporary additional support, it is not uncommon that the working single parent ends up with under $50 a week more in their hand.
We then look at transport, parking, appropriate clothing etc. for work. Work and Income will assist with a percentage of this cost, however, not the total cost, which then gets taken off the $50.
Then, school holiday programmes need to be paid for along with childcare, which is subsidised, and the $50 in hand is reduced further.
Given this, most of our clients still opt to return to paid work because we can see the benefits of work experience which may lead to better work opportunities. 
Lindsay writes:
A Blenheim single mother of three finds she is only $34 better off working. She says, "When you weigh it up, is it worth going to work? The Government is trying to get everyone off the benefit but there is no incentive to work."
Of course, the 'choice' between working and remaining on the benefit is only relevant when there are jobs available. However, I'm not going to talk about that aspect. Instead, I want to discuss incentives (or rather, the work disincentives that benefits create).

One of the topics we cover in ECON110 is the economics of social security. Part of that topic involves considering the incentive effects of having a social safety net. If there is no safety net (for the unemployed, for example), then there are high incentives to take any employment that is available. The alternative is trying to live on zero income, relying on assistance from friends and family or non-government organisations, begging, etc. When there is a social safety net (for the unemployed), then the incentives for work are reduced, because the income difference between working and not working is smaller.

A rational (or quasi-rational) beneficiary who is offered the opportunity to work will weigh up the costs and benefits of working rather than remaining on the benefit. The costs of working (compared with being unemployed) are mostly the foregone leisure time (less time with the kids, gardening, or playing XBox). The benefits include higher income. If the difference between working and not working is only $34 (as per the example above), then it wouldn't be surprising for that to be insufficient incentive to encourage people to work.

A couple of additional points are important. First, there are non-monetary benefits to working that must also be factored in. Working provides a sense of purpose and identity. It can increase life satisfaction. So, it might not be surprising that some beneficiaries would return to work even if the monetary benefits were lower than the costs. Second, there may be long-run impacts. For example, the initial job taken may lead to improved future job prospects. As Lindsay notes:
Moving into work may provide little financial gain initially. But the individual's sense of well-being and future prospects are improved.
How do we reduce the disincentive for beneficiaries to return to work? We first need to recognise that the disincentives arise in two ways: (1) the relative generosity of the unemployment benefit; and (2) the rate at which the benefit is reduced as the beneficiary earns other income.

So, the disincentive to work can be reduced by making the unemployment benefit less generous. Clearly there is a trade-off here - you probably want the benefit to be high enough to provide for a minimum standard of living; however, making it too generous (compared with, say, the minimum wage) reduces the incentives to work.

The disincentive to work can also be reduced by allowing the beneficiary to continue to receive a (reduced) benefit if they go back to work. So, rather than taking away the entire benefit if a person returns to work, you simply reduce their benefit by an amount that depends on how much other income they earn. This way, you ensure that beneficiaries who take on part-time work can still attain a minimum standard of living, and you ensure that beneficiaries who work are financially better off than those who don't.

The abatement rate (the rate at which benefits reduce due to other income) matters because unsurprisingly it also affects incentives by contributing to the effective marginal tax rate (the proportion of the next dollar earned that is lost to taxation, decreases in rebates, and decreases in government transfers, e.g. benefits). If the marginal tax rate for low earners is 20%, and the benefit abatement rate is 50 cents for every additional dollar the beneficiary earns, then the effective marginal tax rate is at least 70% - quite a high disincentive to work. And then you have to factor in that the beneficiary might also have to pay student loans or child support from that additional dollar, and they might face a reduction in accommodation supplement and family tax credits, etc. So, there's not likely to be much left over. However, if the abatement rate is too low, then a large proportion of low (and medium) income earners will be eligible for income support, and you start to affect the incentives for people who would otherwise be working full-time, etc. So, again there is a tradeoff.
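The arithmetic in that paragraph can be sketched as follows; the 12% and 25% 'other clawback' figures below are illustrative stand-ins for student loan repayments and accommodation supplement abatement, not official rates:

```python
def emtr(income_tax_rate, abatement_rate, other_clawbacks=0.0):
    """Effective marginal tax rate: the share of the next dollar earned
    that is lost to income tax, benefit abatement, and other clawbacks."""
    return income_tax_rate + abatement_rate + other_clawbacks

# The example in the text: 20% tax plus 50c-in-the-dollar abatement = 70%
base = emtr(0.20, 0.50)

# Adding (illustrative) student loan repayments and accommodation
# supplement abatement can push the EMTR past 100%
with_extras = emtr(0.20, 0.50, other_clawbacks=0.12 + 0.25)

print(base, with_extras)
```

When the effective marginal tax rate exceeds 100%, the beneficiary is actually worse off in dollar terms for every extra dollar earned - the strongest possible disincentive to work.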

Social security is fraught with incentive issues and tradeoffs. Striking the right balance is always going to be a challenge.

Tuesday 2 February 2016

Why study economics? Tech firm jobs edition...

I've been pretty quiet over the last month, due to a family holiday in the U.S., then being quite sick on my return. Back into it now though.

On Saturday, Michelle Dickinson (aka Nanogirl) had an interesting piece in the NZ Herald about big data. She concluded:
With so many big issues being helped using complex analysing software, it's no surprise that one of the top future jobs is predicted to be in IT as a big data analyst developer.
Now, big data analysis might sound like a statistics job. However, I would argue that these big data analyst jobs are really jobs for economics graduates (at least, those with some quantitative skills). This is because economics graduates understand and can work with observational data, and hopefully understand some of the limitations of these data (including understanding the difference between causation and correlation).

Susan Athey (Stanford) has also recently written about jobs for economics graduates in tech firms, in answer to the question "Why do technology companies hire economists, and what is their contribution? What kinds of problems do they work on?". I recommend reading the whole thing, but of relevance to new graduates, she writes:
More junior economists have a wide variety of roles in tech firms. They can take traditional data science roles, be product managers, work in corporate strategy, or on policy teams.  They would typically do a lot of empirical work.
In terms of complementing existing non-economist workers, I have found that economists bring some unique skills to the table.  First of all, machine learning or traditional data scientists often don’t have a lot of expertise in using observational data or designing experiments to answer business questions.  Did an advertising campaign work?  What would have happened if we hadn’t released the low end version of a product?  Should we change the auction design?  Machine learning is better at prediction, but less at analyzing “counter-factuals,” or what-if questions.
These tech-firm jobs for big data analytics, designing and evaluating experiments, and other similar roles are tailor-made for economics graduates. Just one more reason why studying economics is a good idea.

Read more: