Saturday, 13 October 2018

Book Review: The Company of Strangers

If you've been reading about economics for long enough, sooner or later you will come across a reference to the miracle that is the production of a lead pencil. Perhaps you'll read about it in the context of a Milton Friedman interview (like here), or perhaps a reference to the original, Leonard Read's 1958 essay I, Pencil (see here). Either way, the story is about spontaneous order and/or interdependence - how the actions of hundreds or more people interact to produce a single pencil, without any one of them necessarily caring about the end product, or even knowing how it is produced.

Over the years, I've seen several references to Paul Seabright's book, The Company of Strangers, in a similar context to I, Pencil. So, I finally took the time to read it (or rather, the revised edition from 2010 - the original was published in 2004). The essence of the book can be summarised by these four points from its introduction:

  • First, the unplanned but sophisticated coordination of modern industrial societies is a remarkable fact that needs an explanation. Nothing in our species' biological evolution has shown us to have any talent or taste for dealing with strangers.
  • Second, this explanation is to be found in the presence of institutions that make human beings willing to treat strangers as honorary friends.
  • Third, when human beings come together in the mass, the unintended consequences are sometimes startlingly impressive, sometimes very troubling.
  • Fourth, the very talents for cooperation and rational reflection that could provide solutions to our most urgent problems are also the source of our species' terrifying capacity for organized violence between groups. Trust between groups needs as much human ingenuity as trust between individuals.

Trust is central to the modern economy, and has been central to social interactions for as long as Homo sapiens has gathered into groups. In my ECONS101 class, I always wish we had more time to consider repeated games in our game theory topic, where trust and reputation become key features of the resulting outcomes.

The book is thoroughly researched and very deep. Seabright draws from a range of sources from biology to anthropology to economics. Every paragraph made me think, which made for a long read (in case you were wondering why I haven't posted a book review since early September). Seabright also provided me with a number of novel ways of explaining some key concepts in my classes.

Some readers may find the book quite dry, but there are also some lighter highlights, such as this:
There is a lost look sometimes that flits across the brow of those senior politicians who have not managed to attain perfect facial self-control. It is the look of a small boy who has dreamed all his life of being allowed to take the controls of an airplane, but who discovers when at last he does that none of the controls he operates seems to be connected to anything, or that they work in such an unpredictable way that it is safer to leave them alone altogether. Politicians have very little power, if by power we mean the capacity to achieve the goals they had hoped and promised to achieve.
If only more politicians, and voters, understood this point! And this as well:
As the twenty-first century develops, "globalization" has become a convenient catch-all term to sum up the multitude of different, often contradictory reasons people have to feel uneasy about the way in which world events are developing.
Since the book is really about trust between people, and globalization is ultimately about connections between people, there are some good underlying discussions to be found in its pages. Seabright also provides an interesting perspective on the Global Financial Crisis (which was contemporary at the time of writing the revised edition). However, at its heart this is a book about people, and our interactions in a world where we don't know the majority of people with whom we interact. Like spontaneous order, such interactions are at the heart of economics. If you want to develop a deeper understanding of these interactions, this book would be a good one to read.

Thursday, 11 October 2018

Petrol prices and drive-offs

The New Zealand Herald reported yesterday:
Due to increasing costs of crude oil, the New Zealand dollar falling and increasing fuel taxes, petrol pumps are forcing a strain on Kiwis.
Some motorists are taking drastic actions to avoid the economic sting, putting in fuel and driving away from stations before paying.
Over the last few weeks, Z Energy has witnessed a small increase of motorists getting behind the wheel instead of the cash register.
"At a high level we have seen a slight increase in the number of drive-offs," a spokeswoman said.
Why the increase in drive-offs? Gary Becker (1992 Nobel Prize winner) argued that criminals are rational, and that they weigh up the benefits and costs of their actions (see the first chapter in this pdf). It is rational to execute a drive-off if the benefits of the drive-off (the savings in fuel costs, because the fuel wasn't paid for) exceed the costs of the drive-off (the penalty for being caught, multiplied by the probability of being caught).

How often will people engage in drive-offs? We can think about that question in terms of the marginal benefits and marginal costs of drive-offs, as shown in the diagram below. Marginal benefit (MB) is the additional benefit of engaging in one more drive-off. In the diagram, the marginal benefit of drive-offs is downward sloping - the first drive-off provides a relatively high benefit (filling an empty tank), but subsequent drive-offs will likely provide less additional benefit, because the car's tank is already close to full. Marginal cost (MC) is the additional cost of engaging in one more drive-off. The marginal cost of drive-offs is upward sloping - the more drive-offs a person engages in, the more likely they are to get caught. The 'optimal quantity' of drive-offs (from the perspective of the person engaging in the drive-offs!) occurs where MB meets MC, at Q* drive-offs. If the person engages in more than Q* drive-offs (e.g. at Q2), then the extra benefit (MB) is less than the extra cost (MC), making them worse off. If the person engages in fewer than Q* drive-offs (e.g. at Q1), then the extra benefit (MB) is more than the extra cost (MC), so conducting one more drive-off would make them better off.

Now consider what happens in this model when the price of petrol increases. The benefits of drive-offs increase, because the value of fuel cost savings from driving off increases. As shown in the diagram below, this shifts the MB curve to the right (from MB0 to MB1), and the optimal quantity of drive-offs increases from Q0 to Q1. Drive-offs increase.
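The marginal reasoning above can be made concrete with a minimal sketch. The linear MB and MC curves below (their intercepts and slopes) are purely illustrative assumptions, not estimates:

```python
# Sketch of the marginal benefit / marginal cost model of drive-offs.
# Illustrative linear curves: MB slopes down, MC slopes up, and the
# 'optimal' quantity (from the offender's perspective!) is where MB = MC.

def mb(q, intercept=100.0, slope=10.0):
    """Marginal benefit of the q-th drive-off (falls as the tank fills)."""
    return intercept - slope * q

def mc(q, intercept=20.0, slope=10.0):
    """Marginal cost of the q-th drive-off (rises with detection risk)."""
    return intercept + slope * q

def optimal_q(mb_intercept=100.0):
    """Solve mb(q) = mc(q) for the linear curves above."""
    # mb_intercept - 10q = 20 + 10q  =>  q = (mb_intercept - 20) / 20
    return (mb_intercept - 20.0) / 20.0

q_star = optimal_q()                       # baseline petrol price
q_higher = optimal_q(mb_intercept=120.0)   # petrol price rises: MB shifts up
print(q_star, q_higher)                    # the optimal quantity of drive-offs rises
```

A higher petrol price raises the MB intercept, and the crossing point with MC moves to the right, exactly as in the diagram.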

Regular readers of this blog will recognise that this situation is quite similar to the increase in honey thefts reported earlier this year. Petrol prices are expected to increase further through the rest of this year. Petrol stations should be preparing themselves for increases in drive-offs. Consumers can probably expect more petrol stations to move to having their pumps on pre-pay.

Wednesday, 10 October 2018

Nobel Prizes for Paul Romer and William Nordhaus

The 2018 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (aka Nobel Prize in Economics) has been awarded to William Nordhaus of Yale "for integrating climate change into long-run macroeconomic analysis" and Paul Romer of NYU "for integrating technological innovations into long-run macroeconomic analysis."

Marginal Revolution has excellent coverage as always, on Nordhaus and on Romer. Romer's work on endogenous growth theory hasn't had much influence on my teaching as I don't teach macroeconomics or growth, but here is an excellent video from MRU that summarises many of the key contributions:

As you might expect, Nordhaus' work on the economics of climate change is picked up in my ECONS102 class in the topic on externalities and common resources. Here is my review of his book The Climate Casino - it's good that I finally read a laureate's most recent book before they received the award for once! Nordhaus has also contributed to our understanding of the economics of intellectual property rights, which Marginal Revolution didn't mention, but which I have talked about briefly here and here. His approach, in terms of the trade-off between weaker (or shorter) intellectual property rights, which would lead to under-investment in intellectual property development, and stronger (or longer) rights, which would lead to under-consumption of intellectual property, is the one I follow in teaching that topic in ECONS102.

This was a very well deserved (and overdue) prize for both men. However, I was a little surprised that they shared the prize (in the Economics Discussion Group poll, both this year and last year, I picked Romer and Robert Barro to win). Nonetheless, an excellent choice.

Tuesday, 9 October 2018

The effect of cutting subsidies for after-hours doctors

The New Zealand Herald reported last week:
Parents who received free after-hours medical care for their children are now having to pay up to $61 at two Auckland clinics following funding cuts from district health boards...
The changes meant White Cross Glenfield's casual fee for under 13s after hours skyrocketed from free to $61.
At Three Kings Medical Centre, prices for care after 5pm had gone up to $50 for children aged between 6 to 12 - and $35 for under-6-year-olds.
This is what happens when you remove a subsidy - the price that consumers pay goes up. To see why, consider the market in the diagram below. The subsidy is paid to the supplier (the after-hours medical clinic), so we show it using the S-subsidy curve. The consumers (patients) pay the price where that curve meets the demand curve (PC), which from the article above could be as low as zero. The clinic receives that price (PC) from the patient, but is then topped up by the government subsidy, and receives an effective price of PP. The number of patients going to the clinic is Q1. If the subsidy is removed, the market shifts to the equilibrium where demand meets supply. The price that patients pay increases from PC to P0, the effective price that clinics receive decreases from PP to P0, and the number of patients going to the clinic decreases from Q1 to Q0.

The article notes that the subsidy hasn't been removed from all clinics. So, patients may simply go to some other clinic instead of the nearest one, if the nearest one is no longer subsidised. This was effectively what the DHB was trying to achieve:
Waitemata and Auckland City DHB announced a rejig to after-hours clinic funding in July in a bid to "reduce inequalities".
Presumably, that means that the DHB removed the subsidies from clinics in areas that are relatively more affluent (so that a higher proportion of the total subsidy goes to areas that are less affluent)? A more cynical view is that the DHB will benefit from some cost savings (which they may need!). The cost savings arise because fewer patients in total will go to after-hours clinics that are subsidised (if your illness isn't urgent or critical, maybe you choose not to go to the doctor, because the subsidised clinic is far away, and the unsubsidised clinic is now more expensive). The DHB also benefits from administration cost savings, because the DHB now has to deal with fewer clinics. The costs of the removed subsidy are borne by patients (their medical care is now more expensive, because it is unsubsidised, or because they have to travel further to get to a subsidised clinic) and the now-unsubsidised clinics (who receive a lower effective price from patients, and see fewer of them).

Another way of looking at who is made worse off by removing this subsidy is to consider economic welfare. Consumer (patient) surplus is the difference between what consumers are willing to pay for the service (shown by the demand curve) and the price they actually pay. In the diagram above, the consumer surplus is the triangle AEPC when there is a subsidy, but decreases to ABP0 when the subsidy is removed. Consumers (patients) are worse off without the subsidy.

Producer (clinic) surplus is the difference between the price that the producers receive and the producers' costs (shown by the supply curve). In the diagram above, the producer surplus is the triangle PPFG when there is a subsidy, but decreases to P0BG when the subsidy is removed. Producers (clinics) are worse off without the subsidy.
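The welfare comparison can also be checked numerically. The linear demand and supply curves (and the size of the subsidy) below are illustrative assumptions, but the result - removing the subsidy reduces both consumer and producer surplus - holds for any curves of this shape:

```python
# Consumer and producer surplus with and without a supplier subsidy,
# using illustrative linear curves (not real data):
#   demand:  P = 100 - Q      supply:  P = 20 + Q
# A subsidy s paid to clinics shifts the effective supply curve down by s.

def equilibrium(subsidy=0.0, a=100.0, b=1.0, c=20.0, d=1.0):
    """Return (quantity, consumer price, producer price)."""
    q = (a - c + subsidy) / (b + d)      # demand meets (supply - subsidy)
    p_consumer = a - b * q               # price patients pay (PC)
    p_producer = p_consumer + subsidy    # effective price clinics receive (PP)
    return q, p_consumer, p_producer

def surpluses(subsidy=0.0):
    q, pc, pp = equilibrium(subsidy)
    cs = 0.5 * (100.0 - pc) * q          # triangle under demand, above PC
    ps = 0.5 * (pp - 20.0) * q           # triangle above supply, below PP
    return cs, ps

cs_with, ps_with = surpluses(subsidy=30.0)
cs_without, ps_without = surpluses(subsidy=0.0)
print(cs_with > cs_without, ps_with > ps_without)  # both True
```

Both surplus triangles shrink when the subsidy is removed, which is exactly the AEPC-to-ABP0 and PPFG-to-P0BG comparison from the diagram.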

The taxpayer (the DHB) is the only party made better off without the subsidy. [*]

Finally, the loss of economic welfare is not the only cost of the removal of the subsidy. If patients are dissuaded from attending a clinic at all because of the higher cost, there could be real health losses that arise from the change in policy. It would be interesting to know how big an effect this has.


[*] I have ignored what happens to total economic welfare in this diagram and this analysis. Typically, if we draw a subsidy on a market and the subsidy moves the market away from the quantity where marginal social benefit is equal to marginal social cost (as in the diagram I have shown), total economic welfare decreases (the subsidy makes society worse off, on aggregate). However, health care has positive externalities that are also not represented in the diagram, and in the presence of positive externalities a subsidy can actually increase (rather than decrease) total economic welfare. I've opted to keep the diagram simple by ignoring positive externalities and the effect on total welfare.

Saturday, 6 October 2018

Why study economics? Economists in tech companies edition...

In my ongoing series of posts entitled "Why study economics?" (see the end of this post for links), several times I have highlighted the increasing role of economists in technology companies. In a new NBER Working Paper (ungated version here), Susan Athey (previously chief economist at Microsoft, and now at Stanford and on the board of a number of technology companies) and Mike Luca (Harvard) provide a great overview of the intersection of economics (and economists) and technology companies:
PhD economists have started to play an increasingly central role in tech companies, tackling problems such as platform design, pricing, and policy. Major companies, including Amazon, eBay, Google, Microsoft, Facebook, Airbnb, and Uber, have large teams of PhD economists working to engineer better design choices. For example, led by Pat Bajari, Amazon has hired more than 150 Ph.D. economists in the past five years, making them the largest employer of tech economists. In fact, Amazon now has several times more full time economists than the largest academic economics department, and continues to grow at a rapid pace.
Importantly, it isn't just PhD economists:
Tech companies have also created strong demand for undergraduate economics majors, who take roles ranging from product management to policy.
What is it about economics that creates value for tech firms? Athey and Luca identify:
...three broad skillsets that are part of the economics curriculum that allow economists to thrive in tech companies: the ability to assess and interpret empirical relationships and work with data; the ability to understand and design markets and incentives, taking into account the information environment and strategic interactions; and the ability to understand industry structure and equilibrium behavior by firms.
One further interesting point is that:
...Amazon was the largest employer of Harvard Business School’s most recent graduating class of MBA students.
The job market for economics graduates (or, more broadly, graduates with skills in economics) is looking stronger.

[HT: Marginal Revolution]

Wednesday, 3 October 2018

Your Fitbit will betray you

Yesterday I wrote a post about how home insurers are starting to more accurately price house insurance based on natural hazard risk. Home insurance isn't the only area where insurers are looking at adopting more sophisticated screening methods to deal with adverse selection. Take this story from the New Zealand Herald in June:
Fitbits are already used to track your heart rate, the amount of exercise you do and how much you sleep - essential data that could potentially be used by insurance providers to determine your premiums.
The boom in wearable health tracking technology means we now have more information than ever before on health and well being of people at any given moment.
The Telegraph reports that information collected from these devices is already being used by insurers to calculate insurance premiums and there are concerns that this might lead to only the healthiest customers enjoying lower premiums.
This is serious business. Insurance companies have it in their interests not only to ensure the lowest-risk customers but also to detect potential health conditions before they become severe (and expensive). A study of the insurance market by the Swiss Re Institute, a research organisation, last year found that insurers had filed hundreds of patent applications relating to "predictive insurance modelling".
The issue that an uninformed health insurer or life insurer faces is essentially the same as the home insurer from yesterday's post. They can't tell the low-risk applicants from high-risk applicants. A pooling equilibrium develops, where everyone pays the same premiums (coarsely differentiated based on age, gender, and smoking status). A savvy and entrepreneurial insurer that was better able to tell who the low-risk insured people are could attract them away with lower premiums (knowing that they would cost less to insure because they are low risk).

So, that is effectively what insurers are starting to do. As the Herald article notes:
In making these moves, insurance companies aim to collect data that could serve to help them make better policy decisions or even tweak existing policies over time.
The Telegraph reported that policy agreements increasingly feature clauses that allow insurers to collect data on their customers.
This is a point that I first made in a post back in 2015 (and an earlier post on technology in car insurance in 2014). We can all look forward to insurers asking for our Fitbit data when we apply for health or life insurance. And if we're fit and healthy, we'll give it to them. The people most likely to withhold that information are the unfit and unhealthy (and those who are most privacy-conscious). Denying access to your Fitbit data would probably be enough to signal to the insurer that you are high risk, and result in a declined application or a higher premium. So, even if you want to opt out of sharing your data, your Fitbit will still betray you.

Monday, 1 October 2018

The most surprising thing I learned about home insurance this year

Home insurance markets are subject to adverse selection problems. When a homeowner approaches an insurer about getting home insurance, the insurer doesn't know whether the house is low-risk or high-risk. [*] The riskiness of the house is private information. In fact, the riskiness of the house is probably not known even to the homeowner, but let's assume for the moment that they have at least some idea. To minimise the risk to themselves of engaging in an unfavourable market transaction, it makes sense for the insurer to assume that every house is high-risk. This leads to a pooling equilibrium - low-risk houses are grouped together with the high-risk houses and owners of both types of house pay the same premium, because they can't easily differentiate themselves. This creates a problem if it causes the market to fail.

The market failure may arise as follows (this explanation follows Stephen Landsburg's excellent book The Armchair Economist). Let's say you could rank every house from 1 to 10 in terms of risk (the least risky are 1's, and the most risky are 10's). The insurance company doesn't know who is high-risk or low-risk. Say that they price the premiums based on the 'average' risk ('5' perhaps). The low risk homeowners (1's and 2's) would be paying too much for insurance relative to their risk, so they choose not to buy insurance. This raises the average risk of the homes of those who do buy insurance (to '6' perhaps). So, the insurance company has to raise premiums to compensate. This causes some of the medium risk homeowners (3's and 4's) to drop out of the market. The average risk has gone up again, and so do the premiums. Eventually, either only highest risk homeowners (10's) buy insurance, or no one buys it at all. This is why we call the problem adverse selection - the insurance company would prefer to insure low-risk homes, but it's the homeowners with high-risk homes who are most likely to buy.
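This unraveling story can be captured in a few lines of code. The assumptions here are deliberately stylised: one homeowner at each risk level from 1 to 10, a premium set to the average risk of those still in the market, and a homeowner buying only if the premium doesn't exceed their own risk:

```python
# A minimal simulation of Landsburg-style adverse selection unraveling.

def unravel(risks):
    """Repeatedly reprice at the average risk of remaining buyers,
    letting anyone priced above their own risk drop out."""
    market = sorted(risks)
    while market:
        premium = sum(market) / len(market)
        # homeowners whose risk is below the premium are overpaying, so they leave
        stayers = [r for r in market if r >= premium]
        if stayers == market:      # no one left who wants to drop out
            return market, premium
        market = stayers
    return market, None

remaining, premium = unravel(range(1, 11))
print(remaining, premium)  # only the riskiest homeowner is left, paying a premium of 10
```

Each round of repricing drives out the lowest-risk buyers who remain, until only the 10's (here, a single homeowner) are insured - the market has effectively failed.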

Of course, insurers are not stupid. They've found ways to deal with this adverse selection problem. When the uninformed party (the insurer in this case) tries to reveal the private information (about the riskiness of the house), we refer to this as screening. Screening involves the insurer collecting information about the house and the homeowner in order to work out how risky the house is. With the private information revealed, the insurer can then price accordingly - higher-risk houses attract higher premiums, while lower-risk houses attract lower premiums. We have a separating equilibrium (the high-risk and low-risk houses are separated from each other in the market).

With all this in mind, this story from April surprised me greatly:
Other insurers are likely to follow NZX-listed Tower's lead and increase their focus on risk-based pricing for natural hazards, says an insurance expert...
Thousands of home-owners who live in high-risk earthquake-prone areas and insure via Tower are set to face hikes in their premiums while those in low-risk areas like Auckland will get a cut.
The insurance company, which is New Zealand's third largest general insurer, said it would stop cross-subsidising its policy-holders from April 1 in a bid to send a clearer message to home-owners about the risks of where they lived.
Tower chief executive Richard Harding said at the moment six Auckland households were paying more to subsidise insurance premiums for every one high-risk house in Wellington, Canterbury or Gisborne.
In other words, insurers previously weren't screening for all available private information before pricing their insurance for a given house. Essentially, owners of low-risk houses have been paying premiums that are too high, and owners of high-risk houses have been paying premiums that are too low. It took a little while, but eventually other insurers have also started to use risk assessments in determining insurance premiums, so this discrepancy is disappearing.

Why didn't the market break down due to adverse selection? The issue here is something I noted earlier in the post - the riskiness of a house is hidden from both the insurer and the homeowner. If the homeowner doesn't know how risky their house is, owners of low-risk houses can't tell that the insurer is pricing their insurance too high relative to the risk of natural hazard damage. So, the owners of low-risk houses have no reason to drop out of the market. And, if the owners of low-risk houses don't drop out of the market, insurers have no reason to raise premiums.

However, that leaves the market open to disruption. As noted in the April article:
Jeremy Holmes, a principal at actuarial consulting firm Melville Jessup Weaver, said it was hard to say how long this would take. Insurers needed to be as good as their competitors at distinguishing risk.
"Otherwise they risk having their competitors target the lower-risk policyholders whilst they are left with only the higher risks ..."
An entrepreneurial insurer that was able to distinguish the low-risk houses from high-risk houses could start approaching owners of low-risk houses and offering them lower premiums. The remaining insurers would be left with higher-risk houses on average, and would have to raise premiums. This would increase the number of homeowners dropping out of the market (or rather, going to the insurer that was pricing according to risk). Tower was the first insurer to shift to risk-based premiums, so presumably they recognised this issue before any of the other insurers and acted exactly as you would expect - by moving to risk-based premiums before any potential disruptor could enter the market.

Still, it's a little surprising (to me, at least) that pricing based on natural hazard risk wasn't already happening.


[*] For simplicity, I'm going to refer to low-risk houses and high-risk houses, when risk is probably as much a function of location as of the house itself. So, if you must, read 'low-risk house' as 'house with a low risk of damage in an earthquake', and 'high-risk house' as 'house with a high risk of damage in an earthquake'.

Friday, 28 September 2018

Return migrants are willing to accept lower wages in exchange for better institutional quality

Last year, I wrote a post about how return migrants to Vietnam prefer areas with higher-quality institutions. The post was based on the research of one of my PhD students, Ngoc Tran, myself, and Jacques Poot. Ngoc recently submitted her PhD thesis for examination, which is a great achievement. Along the way, she completed four research papers (see here, here, here, and here), and also has a forthcoming book chapter. In this post though, I want to focus on the fourth research paper, as it is the most novel.

In the paper, co-authored by Ngoc Tran, Jacques Poot, and me, we looked at the intensity of migrants' preferences for high-quality institutions back in their home country. In other words, we asked how much things like the absence of corruption, political stability, and the rule of law mattered for migrants' decision-making about potentially returning home. To measure the intensity of preferences, we used a novel application of the contingent valuation method.

To do this, we first recognised that there are compensating differentials for working in different locations - in areas with higher-quality amenities (such as higher-quality institutions), wages will be lower than in areas with lower-quality amenities (such as lower-quality institutions). We can exploit this to work out what higher-quality institutions are worth, by looking at how much a migrant's income would have to change to make them indifferent between the area with lower-quality institutions and the area with higher-quality institutions.

We did this by asking Vietnamese migrants in New Zealand two questions:
  1. Given your perceptions of the difference in institutional quality between New Zealand and Viet Nam, what would be the smallest level of weekly income before tax in Viet Nam where you would be happy moving back to Viet Nam permanently?; and
  2. Now imagine that the institutional quality in Viet Nam changed so that it was equal to New Zealand in all ways (and everything else remained the same). If this happened, what would be the smallest level of weekly income before tax in Viet Nam where you would be happy moving back to Viet Nam permanently?
The first question allowed us to estimate the compensating differential based on the current differences in institutional quality and other amenities between the two countries, as well as migration costs. The second question modifies the institutional quality in Vietnam so that it is equal to that in New Zealand, holding everything else (including migration costs) constant. The difference between the answer to Question 1 and the answer to Question 2 provides an estimate of the migrant's willingness to accept lower wages in exchange for improved institutional quality in Vietnam.
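The arithmetic behind the two-question design is straightforward. The weekly income figures below are hypothetical illustrations, not actual survey responses:

```python
# Illustrative arithmetic for the two-question contingent valuation design.

# Q1: smallest acceptable weekly income in Viet Nam as it is now
reservation_now = 800.0
# Q2: smallest acceptable weekly income if Viet Nam's institutions
#     matched New Zealand's (everything else held constant)
reservation_if_equal = 600.0

# The difference is this migrant's willingness to accept lower wages in
# exchange for NZ-quality institutions - their valuation of the
# institutional quality gap between the two countries.
wtp_for_institutions = reservation_now - reservation_if_equal
print(wtp_for_institutions)  # 200.0 per week for this hypothetical migrant
```

Because everything except institutional quality is held constant between the two questions, migration costs and other amenity differences cancel out of the difference.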

Our estimates show that willingness to pay for an incremental unit improvement in institutional quality in Viet Nam is, on average, NZD 79.80 per week (approximately 33 percent of the average weekly wage in Viet Nam for the same period). Moreover, older migrants are willing to pay more, as are migrants who perceive institutional quality to be more important for their repatriation intentions.

As far as we know, this is the first paper ever to use contingent valuation to measure the intensity of preference for institutional quality, certainly among migrants if not among any population group. Notwithstanding the continuing debate on the use of the contingent valuation method (which I've written about here, here, and here), this was a really innovative piece of work.

Congratulations again to Ngoc on submitting her PhD thesis!

Tuesday, 25 September 2018

Alcohol minimum pricing vs. taxes

Let's say that there was some good which the government thought the market provided too much of. Consumers consume too much of this product, compared with some socially efficient level. Economists call this a demerit good. The government might want to find some way of reducing consumption of the good. A tax seems like an obvious solution, and has the bonus effect of increasing government revenue - a double win!

But now let's consider a specific demerit good - alcohol. If the government taxes alcohol, the effects on the market are shown in the diagram below. The price that the consumers pay increases from P0 to PC, and the effective price that the producers receive (after paying the tax to the government) falls to PP. The quantity traded (and consumed) decreases from Q0 to Q1. Notice that the demand curve is quite steep (inelastic), so the tax doesn't reduce consumption by much. Notice also that, because the demand curve is steeper (more inelastic) than the supply curve, the price consumers pay goes up by a lot, while the effective price that producers receive falls by only a little. That means that consumers end up facing the burden of the tax. But, at least, the tax has reduced alcohol consumption, which was the aim.

But will the tax really reduce alcohol consumption? Maybe consumers notice the higher prices and simply switch from higher quality (and more expensive) alcohol to lower quality (and less expensive) alcohol. Then, they could continue to drink the same amount as before (but they would just be drinking lower quality beverages). This argument has been made by Eric Crampton (see here, for example).

If the government is concerned about consumers switching to lower quality alcohol, an alternative is to introduce minimum pricing (which I have discussed before, here). Indeed, that is what the Northern Territory is about to do, as John Boffa noted in The Conversation this week:
From October 1, 2018, one standard drink in the Northern Territory will cost a minimum of A$1.30. This is known as floor price, which is used to calculate the minimum cost at which a product can be sold, depending on how many standard drinks the product contains...
 The implementation of the minimum floor price is the result of legislation, recently passed to minimise alcohol-related harms in the NT. From October, the NT will become one of the first places in the world to introduce a minimum price for alcohol.
What effect would minimum pricing have on the market? Here's what I wrote back in 2016 on the same topic (but I've updated the diagram to match the diagram above):
The effect is shown in the diagram below. Without minimum pricing, the market equilibrium price is P0, and the quantity of alcohol sold (and presumably consumed) is Q0. But with a binding minimum price (above the equilibrium price) of PC, the quantity of alcohol demanded falls to Q1. In other words, alcohol consumption falls.
Notice that the effect is to reduce alcohol consumption, which is what the government wants. Eyeballing the data in Boffa's article, there is definitely an increase in price, and there appears to be a decrease in the quantity sold, though the decrease might not be statistically significant. That said, Boffa notes:
As expected, the ban on cheap cask and fortified wine led some drinkers to turn to other types of alcohol. But while there was a 70% increase in the consumption of more expensive full-strength beer, the decline in the consumption of cheap alcohol more than offset this. This led to the overall 20% decline in consumption.
An added benefit of minimum pricing is that consumers can't switch between categories to essentially minimise the effect of the policy on their drinking, since a minimum price has a greater effect on low-quality (and therefore cheaper) drinks.
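The mechanics of a binding minimum price can be sketched in the same hypothetical linear market as above (again, invented numbers): the quantity traded is determined by the demand side at the floor price.

```python
# Binding minimum (floor) price in a linear market. Illustrative numbers only.

def floor_outcome(a, b, c, d, floor):
    """Demand Qd = a - b*P, supply Qs = c + d*P; floor is the legal minimum price.
    Returns (price, quantity traded)."""
    p_eq = (a - c) / (b + d)
    if floor <= p_eq:
        return p_eq, a - b * p_eq   # floor not binding: market equilibrium
    return floor, a - b * floor     # binding: quantity demanded at the floor

print(floor_outcome(a=100, b=1, c=10, d=4, floor=10))  # not binding
print(floor_outcome(a=100, b=1, c=10, d=4, floor=22))  # binding: quantity falls
```

Consumption falls just as it would under a tax that pushed the consumer price to the same level, but the higher price accrues to sellers rather than (partly) to the government, and the cheapest drinks are affected proportionally the most.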

Now let's consider another good whose consumption some people would like the government to reduce (although it is arguable whether it is a demerit good) - sugar. A tax on sugar-sweetened beverages (which I've previously discussed here) would have similar effects to the tax on alcohol described above (although whether demand for sugar-sweetened beverages is as inelastic as demand for alcohol is a separate issue). It would even induce consumers to switch to lower quality drinks, as Eric Crampton has argued (see here and here, for example). So, if the anti-sugar brigade want to reduce sugar consumption, wouldn't it be better for them to argue for a minimum sugar price instead?

Sunday, 23 September 2018

Is Trump vs. Xi a game of chicken, or a prisoners' dilemma?

There are several famous games that we use to teach game theory, one of which is the prisoners' dilemma (which I blogged on earlier in the week). Another is the game of chicken.

In the classic chicken game, two rivals line their cars up at opposite ends of the street. They then race directly towards each other. Each rival then has two options: (1) to swerve out of the way; or (2) to speed on. If one rival swerves and the other speeds on, the rival that sped on wins and the other driver looks foolish and loses some street cred. If both swerve, they both look a bit foolish. If both speed on, then there is a horrific accident and both may be severely injured or die. The game is presented in the payoff table below, for two drivers (Driver A and Driver B).

To find the Nash equilibrium in this game, we use the 'best response method'. To do this, we track: for each player, for each strategy, what is the best response of the other player. Where both players are selecting a best response, they are doing the best they can, given the choice of the other player (this is the definition of Nash equilibrium). In this game, the best responses are:
  1. If Driver A speeds ahead, Driver B's best response is to swerve (since a loss of face is better than dying in a fiery crash - maybe not immediately, but certainly in the long term) [we track the best responses with ticks, and not-best-responses with crosses; Note: I'm also tracking which payoffs I am comparing with numbers corresponding to the numbers in this list];
  2. If Driver A swerves, Driver B's best response is to speed ahead (since winning is better than looking a little foolish);
  3. If Driver B speeds ahead, Driver A's best response is to swerve (since, again, a loss of face is better than dying in a fiery crash);
  4. If Driver B swerves, Driver A's best response is to speed ahead (since winning is better than looking a little foolish).
Notice that there are two Nash equilibriums in this game - where one driver swerves and the other speeds ahead. Both drivers prefer the outcome where they are the one speeding ahead though, so if both try to get that outcome, we end up with both drivers dying in a fiery crash. The chicken game suggests that when both players act in their own selfish best interest (or fail to consider the other driver's response), the result can be the worst possible outcome.
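The best response method is mechanical enough to automate. Here is a minimal sketch, using invented ordinal payoffs (3 = win, 2 = both swerve, 1 = lose face, 0 = fiery crash) rather than anything from the payoff table above:

```python
# Find pure-strategy Nash equilibriums by the best response method.
from itertools import product

def nash_equilibria(payoffs, strategies):
    """payoffs[(sA, sB)] = (payoff to A, payoff to B)."""
    eq = []
    for sA, sB in product(strategies, strategies):
        # A cell is an equilibrium if each player's strategy is a best
        # response, holding the other player's strategy fixed.
        best_A = all(payoffs[(sA, sB)][0] >= payoffs[(alt, sB)][0] for alt in strategies)
        best_B = all(payoffs[(sA, sB)][1] >= payoffs[(sA, alt)][1] for alt in strategies)
        if best_A and best_B:
            eq.append((sA, sB))
    return eq

chicken = {('speed', 'speed'): (0, 0), ('speed', 'swerve'): (3, 1),
           ('swerve', 'speed'): (1, 3), ('swerve', 'swerve'): (2, 2)}
print(nash_equilibria(chicken, ['speed', 'swerve']))
# -> [('speed', 'swerve'), ('swerve', 'speed')]
```

The two equilibriums it finds are exactly the asymmetric swerve/speed outcomes; the crash cell is not an equilibrium, but nothing in the game stops both players from trying for the same 'win' cell.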

Which brings me to this article by Ambrose Evans-Pritchard in the Telegraph UK (gated, but there is an ungated version here):
The US and China are on a combustible escalation path that can end only when there is economic blood on the floor and the political pain threshold of one side or the other has been hit.
Both think they can withstand the longer siege. Neither can retreat easily...
China has in any case stated already that it will match each round of US tariffs with a riposte in kind.
Beijing must carry out this threat or lose face, and Trump has already vowed to escalate further when it does.
It is very hard to see how asset markets priced for perfection can ignore this deranged game of chicken for much longer. The mystery is that they have not crumbled yet.
Is this really a game of chicken? It turns out that it depends on how you define the payoffs. There are two players in the game (the U.S. and China), and two strategies (enact tariffs or hold off - the equivalents of speeding ahead or swerving). So, we can represent the game easily in a payoff table.

First, let's consider the payoffs to each country. If one country enacts tariffs and the other holds off, both countries are worse off than the status quo (they both lose some gains from trade), but the country that holds off probably loses less (their exporters are a bit worse off) [*]. We'll say that the payoff to the country enacting the tariffs is "bad", but for the other country the payoff is just "not so bad". If both countries enact tariffs then all of the bad stuff happens (exporters are a bit worse off, and there are lost gains from trade). We'll say that payoff is "very bad" for both countries. If both countries hold off, then the status quo prevails - the payoff is "OK" for both countries. The game is presented in the payoff table below.

Again solving for Nash equilibrium, the best responses are:
  1. If China enacts tariffs, the U.S.'s best response is to hold off (since "not so bad" is better than "very bad");
  2. If China holds off, the U.S.'s best response is to hold off (since "OK" is better than "bad");
  3. If the U.S. enacts tariffs, China's best response is to hold off (since "not so bad" is better than "very bad");
  4. If the U.S. holds off, China's best response is to hold off (since "OK" is better than "bad").
Notice that the best response for both countries is to hold off, regardless of what the other country does. Holding off is a dominant strategy. There is one Nash equilibrium here, which is for both countries to hold off (the status quo). It is also a dominant strategy equilibrium (because both countries have a dominant strategy). The equilibrium outcome of this game is the best outcome overall.

Clearly, that isn't the game that is playing out at the moment though, so how are things different? The current game is not a game about trade, it is a game about political posturing. The players are not the U.S. and China, but Donald Trump and Xi Jinping. They want to look strong, and not appear weak (to each other, or to their respective peoples). So, the payoffs and the players are different. The game that is actually being played looks more like the payoff table below. If both hold off, the status quo prevails (the payoff is "OK" for both). If one of them enacts tariffs and the other holds off, whichever of them enacted tariffs appears "strong", and the other appears "weak". If both enact tariffs, the payoff is "costly" for both.

Again solving for Nash equilibrium, the best responses are:
  1. If Xi enacts tariffs, Trump's best response is to enact tariffs (since "costly" is better than "weak");
  2. If Xi holds off, Trump's best response is to enact tariffs (since "strong" is better than "OK");
  3. If Trump enacts tariffs, Xi's best response is to enact tariffs (since "costly" is better than "weak");
  4. If Trump holds off, Xi's best response is to enact tariffs (since "strong" is better than "OK").
Notice that the best response for both leaders is to enact tariffs, regardless of what the other leader does! Enacting tariffs is a dominant strategy, and both leaders enacting tariffs is both the only Nash equilibrium and a dominant strategy equilibrium. The single equilibrium is also unambiguously worse than one of the other outcomes - this is an example of the prisoners' dilemma. Notice that it is not a chicken game.

How could this become a chicken game? If costing the economy was worse than appearing weak, then that would change things around. In that case, the best response to the other leader enacting tariffs would be to hold off. There would be two Nash equilibriums - where one leader enacts tariffs and the other holds off. However, both would prefer to be the leader enacting the tariffs rather than the one holding off.
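The whole chicken-versus-prisoners'-dilemma question turns on a single payoff comparison: whether "costly" ranks above or below "weak". A small sketch makes the point, using invented ordinal payoffs ("strong" = 3, "OK" = 2, and then either "costly" = 1 > "weak" = 0, or "weak" = 1 > "costly" = 0):

```python
# One payoff swap turns the prisoners' dilemma into chicken.
from itertools import product

def nash(payoffs, strategies):
    """Pure-strategy Nash equilibriums by the best response method."""
    return [(sA, sB) for sA, sB in product(strategies, strategies)
            if all(payoffs[(sA, sB)][0] >= payoffs[(alt, sB)][0] for alt in strategies)
            and all(payoffs[(sA, sB)][1] >= payoffs[(sA, alt)][1] for alt in strategies)]

S = ['tariffs', 'hold off']
# Prisoners' dilemma ordering: costly (1) is better than weak (0).
pd_game = {('tariffs', 'tariffs'): (1, 1), ('tariffs', 'hold off'): (3, 0),
           ('hold off', 'tariffs'): (0, 3), ('hold off', 'hold off'): (2, 2)}
# Chicken ordering: weak (1) is better than costly (0).
chicken = {('tariffs', 'tariffs'): (0, 0), ('tariffs', 'hold off'): (3, 1),
           ('hold off', 'tariffs'): (1, 3), ('hold off', 'hold off'): (2, 2)}
print(nash(pd_game, S))  # -> [('tariffs', 'tariffs')]
print(nash(chicken, S))  # -> [('tariffs', 'hold off'), ('hold off', 'tariffs')]
```

With "costly" above "weak" there is a unique all-tariffs equilibrium (the prisoners' dilemma); with "weak" above "costly" the two asymmetric outcomes are the equilibriums (chicken).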

So, whether this game is a chicken game or a prisoners' dilemma depends on how you think each leader feels about appearing weak. It seems to me that both want to avoid that at all costs. In my mind, this is a prisoners' dilemma, not a chicken game.

The repeated prisoners' dilemma can be solved for the optimal outcome (both holding off), but this requires cooperation between the two leaders. In order for this cooperation to arise, each leader must trust the other (because enacting tariffs is still a dominant strategy). If we want global trade to survive this showdown, somehow we need these leaders to develop a trusting relationship. It's a pity that Trump will not be at the APEC leaders meeting - it seems like a group hug is in order!
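There is a standard way to make 'trust' in the repeated game concrete: cooperation on holding off can be self-sustaining if each leader values the future enough. A sketch with invented ordinal payoffs (temptation T = 3 for enacting tariffs while the other holds off, reward R = 2 for mutual holding off, punishment P = 1 for mutual tariffs) and a grim-trigger strategy (cooperate until the other defects, then enact tariffs forever):

```python
# Grim trigger in an infinitely repeated prisoners' dilemma. Illustrative payoffs.

def cooperation_sustainable(T, R, P, delta):
    """Cooperating forever pays R every period: R / (1 - delta).
    Defecting pays T once, then P forever after: T + delta * P / (1 - delta).
    Cooperation is sustainable when the first sum is at least the second."""
    return R / (1 - delta) >= T + delta * P / (1 - delta)

for delta in (0.3, 0.8):
    print(f"delta = {delta}: sustainable? {cooperation_sustainable(3, 2, 1, delta)}")
# Rearranging, the threshold is delta >= (T - R) / (T - P) = 0.5 for these payoffs.
```

Impatient leaders (a low discount factor delta) cannot sustain the cooperative outcome even in the repeated game, which is one way to read the current escalation.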


[*] This might sound surprising. For the country imposing tariffs, tariffs lead to a deadweight loss (lost wellbeing). They make domestic sellers better off, but make domestic consumers worse off by more than the gain to domestic sellers. In contrast, the market in the country that holds off has no deadweight loss. The exporting firms in that country will be able to export a bit less, but that probably doesn't have as big of a negative impact as the tariffs do on the country that imposed them.

Saturday, 22 September 2018

Safety concerns and strawberry markets

The economic model of demand and supply is remarkably robust in terms of explaining changes in prices, and as I show in my ECONS101 class, it even works (qualitatively) when the market is not perfectly competitive. Given that my ECONS101 class has a test coming up in a week and a half, I thought it might be timely to look at an example. Let's take the recent safety scares in Australia, as reported by the New Zealand Herald:
Fruit growers across Australia are reeling from 20 reports of needles found in punnets of berries, with isolated cases of banana and apple sabotage...
Mass harvests of fruit have been dumped as prices plunge, consumer demand evaporates and products are ripped from shelves.
A police operation involving 100 officers across multiple states is now under way to hunt down those responsible...
Up to 120 growers in Queensland alone — where the scare originated — have been hit by a slump in demand and a wholesale price collapse of more than 50 per cent.
Consider the market for strawberries, as shown in the diagram below. Before the sabotage, the market was operating in equilibrium with price P0, and Q0 strawberries were being traded. Following reports of needles in strawberries, consumers have product safety concerns (who wants to buy strawberries when there's a chance of a needle strike?), so demand decreases from D0 to D1. The equilibrium price falls from P0 to P1 (a "price collapse of more than 50 per cent"), and the quantity of strawberries traded falls from Q0 to Q1.

So far, so bad. But, if you're a strawberry grower, what do you do with all those strawberries that aren't being demanded by consumers? You could dump them (and some have), or maybe you find someone else willing to take them. Consider the market for strawberry jam, as shown in the second diagram below. The market was initially operating in equilibrium with price PA, and QA units of strawberry jam were being traded. Then, sabotage hits the strawberry market and the price of strawberries falls. Strawberries are now much cheaper to buy (not just for consumers, but for strawberry jam producers as well). The supply curve for strawberry jam shifts down and to the right (an increase in supply), from SA to SB. The equilibrium price of strawberry jam falls from PA to PB, and the quantity of strawberry jam traded increases from QA to QB.

That last implication is testable. Check the supermarket shelves in coming months, and expect to see cheaper strawberry jam.
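Both steps of the story can be traced through a pair of linear markets. All of the parameters below are invented for illustration: a leftward demand shift in strawberries, then a rightward supply shift in jam driven by the cheaper input.

```python
# Linear demand and supply equilibrium. Illustrative parameters only.

def eq(a, b, c, d):
    """Demand Qd = a - b*P, supply Qs = c + d*P -> (price, quantity)."""
    p = (a - c) / (b + d)
    return p, a - b * p

# Strawberries: the safety scare shifts demand left (a falls).
p0, q0 = eq(a=120, b=2, c=0, d=2)
p1, q1 = eq(a=60, b=2, c=0, d=2)
print(f"strawberries: price {p0} -> {p1}, quantity {q0} -> {q1}")

# Jam: cheaper strawberries shift jam supply right (c rises).
pa, qa = eq(a=100, b=1, c=20, d=1)
pb, qb = eq(a=100, b=1, c=40, d=1)
print(f"jam: price {pa} -> {pb}, quantity {qa} -> {qb}")
```

The strawberry market shows price and quantity both falling; the jam market shows price falling while quantity rises - the testable implication in the paragraph above.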

Friday, 21 September 2018

The prisoners' dilemma and construction tenders

After my post on the construction industry yesterday, where I suggested that clients should adopt a second-price auction to reduce the risk of construction firm failures, a student noted on Facebook:
Stop undercutting the competition so that companies can actually DO the jobs they claim they can do.
In yesterday's post, I neglected to elaborate on why the construction industry can't solve the problem of firms under-pricing bids by themselves. So, in this follow-up post, let's see why, using a little bit of game theory.

Consider an industry with just two construction firms (Firm A and Firm B). [*] Both firms are bidding for a construction contract, which they know will go to the lowest bidder. The firms can choose to bid high, or bid low. Both firms are choosing their bid strategy at the same time - this is what economists refer to as a simultaneous game. The game itself is laid out in the payoff table below. If both firms price low, they each have a 50% chance of winning the contract and earning a low profit (and a 50% chance of not getting the contract and facing the loss of the resources they spent preparing their bid). If both firms price high, they each have a 50% chance of winning the contract and earning a high profit (and a 50% chance of not getting the contract and facing the loss of the resources they spent preparing their bid). If one firm prices high and the other prices low, the low-price firm wins the contract for sure and makes a low profit, and the high-price firm misses out on the contract for sure and loses the resources they spent preparing their bid. Let's further assume that winning the contract for sure at a low price is preferred over a half chance of winning the contract at a high price. [**]

To find the Nash equilibrium in this game, we use the 'best response method'. To do this, we track: for each player, for each strategy, what is the best response of the other player. Where both players are selecting a best response, they are doing the best they can, given the choice of the other player (this is the definition of Nash equilibrium). In this game, the best responses are:
  1. If Firm A bids high, Firm B's best response is to bid low (since winning the contract for sure at a low price is better than a 50% chance of winning the contract at a high price) [we track the best responses with ticks, and not-best-responses with crosses; Note: I'm also tracking which payoffs I am comparing with numbers corresponding to the numbers in this list];
  2. If Firm A bids low, Firm B's best response is to bid low (since a 50% chance of winning the contract at a low price is better than certainly losing the resources spent preparing the bid);
  3. If Firm B bids high, Firm A's best response is to bid low (since winning the contract for sure at a low price is better than a 50% chance of winning the contract at a high price); and
  4. If Firm B bids low, Firm A's best response is to bid low (since a 50% chance of winning the contract at a low price is better than certainly losing the resources spent preparing the bid).
Note that Firm A's best response is always to bid low. This is their dominant strategy. Likewise, Firm B's best response is always to bid low, which makes it their dominant strategy as well. The single Nash equilibrium occurs where both players are playing a best response (where there are two ticks), which is where both construction firms choose to bid low.

Notice that both firms would be unambiguously better off if they both bid high. However, both will choose to bid low, which makes them both worse off. This is a prisoners' dilemma game (it's a dilemma because, when both players act in their own best interests, both are made worse off). Both firms will choose to bid low, and whichever firm wins the contract will be at risk of having bid too low and suffering the winner's curse, as I noted yesterday. This is why the construction industry cannot solve this problem on its own.

Of course, the simple example above assumes this is a non-repeated game. A non-repeated game is played once only, after which the two players go their separate ways, never to interact again. Most games in the real world are not like that - they are repeated games. In a repeated game, the outcome may differ from the equilibrium of the non-repeated game, because the players can learn to work together to obtain the best outcome.

However, cooperative strategies will not work in the construction firms' dilemma game, because such cooperation is illegal collusion. The firms would be subject to prosecution by the Commerce Commission for cartel behaviour.

So, to reiterate yesterday's conclusion, it is up to the clients of construction firms to solve this issue:
We need to ensure that sustainable contract prices are being paid, and the current system is clearly failing.

[*] Limiting ourselves to two firms makes this example easy to follow, but it would work much the same if we had 20 or 200 firms (albeit being much harder to create a payoff table for!).
[**] This seems unlikely in the case of two firms, where there is a 50% chance of winning the contract, and a 50% chance of wasting time preparing the bid. However, if there are ten firms bidding, then there is a 10% chance of winning the contract, and a 90% chance of wasting time, which makes this preference seem more likely.
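The arithmetic behind footnote [**] is easy to sketch. With invented numbers (a high profit of 10, a sure low profit of 3, and a cost of 1 for a failed bid), the expected payoff of everyone bidding high collapses as the number of bidders grows:

```python
# Expected payoff when all n firms bid high and one is picked at random.
# Illustrative numbers: high profit = 10, cost of a losing bid = 1.

def expected_if_all_bid_high(n, high=10, bid_cost=1):
    # Win the contract with probability 1/n; otherwise lose the bid cost.
    return (1 / n) * high - ((n - 1) / n) * bid_cost

sure_low_profit = 3
for n in (2, 10):
    print(f"{n} bidders: {expected_if_all_bid_high(n):.2f} vs sure low profit {sure_low_profit}")
# With 2 bidders the gamble on a high bid (4.50) beats the sure low profit;
# with 10 bidders it (0.10) does not - matching the footnote's intuition.
```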

Thursday, 20 September 2018

The winner's curse and construction tenders

Finally, some sanity in terms of writing about the problems the construction industry is facing. John Walton wrote in the New Zealand Herald earlier this week:
Right now, of course, the construction industry is indeed booming - not just in housing, but also in commercial and infrastructure development. But companies are continuing to fail. Why?
The answer lies in the construction process and a misunderstanding of the roles of the participants. At its simplest, the owner provides the site, resource consents, designs and pays for the work. The contractor organises the work to the design and to the required legal standards, for the agreed price within the allocated time. Price and time are adjusted for unforeseen events and for changes instructed by the owner. To this extent, construction contracts legislate for uncertainty.
That uncertainty is exacerbated by an incomplete understanding of other project risks (ground conditions and supply chain issues like subcontractor and supplier pricing and availability) and unrealistic expectations on the part of owners, particularly that they can fill in the gaps in the design and instruct changes at their whim without cost consequences. The tender process encourages this opportunistic behaviour by forcing contractors to compete on incomplete, or simply unrealistic or unfair contract terms.
Contractors try to introduce some balance by excluding risks from their bids. They must then rely on the claims process to protect their margins.
This can turn the pricing process into something of a lottery. Typically, the cheapest price wins, which all too often is submitted by the contractor with the greatest appetite for risk, coupled with the most optimistic expectations for making claims under the contract.
Following contract award, managing design development, construction and capricious owner changes to the design becomes a considerable headache for contractors who need to be able to meet construction costs, pay subcontractors and protect their already slim margins.
Every time the issue of financially troubled construction companies comes up (and it seems to be coming up a lot lately), it makes me think of the winner's curse.

Consider a group of construction firms tendering for a construction contract. Rational construction firms with complete information about the project and associated risks (and with the same tolerance for risk) would all form the same expectations about the costs of completing the contract, so all would bid the same. However, not all construction firms have complete information (as Walton noted above), and not all firms have the same tolerance for risk. Firms make random errors in estimating the costs, margins, and risks associated with the contract, so each firm will expect different completion costs. For firms with similar tolerance for risk, these differences arise randomly - some will overestimate the completion costs (or underestimate the risk), and some will underestimate the completion costs (or overestimate the risk). Those who expect low completion costs will bid low for the contract, and those who expect high completion costs will bid high.

The real problem arises when the contract goes to whichever construction firm bids the lowest. The winning firm will be the one that most underestimated the completion costs, since the lowest estimate produces the lowest bid. If there are enough construction firms entering bids, the chances are high that the eventual winner has underestimated the completion costs compared with the true costs, and hence will not receive enough to cover their true costs. This is what we refer to as the winner's curse.
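A quick simulation makes the winner's curse concrete. In this sketch (all numbers invented), every firm's cost estimate is the true cost plus unbiased noise, each firm bids its estimate, and the lowest bid wins:

```python
# Monte Carlo sketch of the winner's curse. Illustrative numbers only.
import random

random.seed(1)
TRUE_COST = 100.0
N_FIRMS = 10

shortfalls = []
for _ in range(10_000):
    # Each firm's estimate is unbiased, with random error of sd = 10.
    estimates = [TRUE_COST + random.gauss(0, 10) for _ in range(N_FIRMS)]
    winning_bid = min(estimates)  # bids track estimated cost (margins ignored here)
    shortfalls.append(TRUE_COST - winning_bid)

avg = sum(shortfalls) / len(shortfalls)
print(f"average shortfall of the winning bid below true cost: {avg:.1f}")
```

Each individual estimate is correct on average, but selecting on the minimum guarantees the bias: with ten bidders the winning bid sits roughly one and a half standard deviations below the true cost on average.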

The problem is that consumers of construction firms' services are too focused on looking for the lowest bid. This virtually guarantees the problems we are facing in the construction industry. Walton's piece concludes:
The industry has a choice. Either it accepts that designs and prices will change and pay contractors accordingly, or take the time to remove contract uncertainties before fixing the price and instructing work to commence. Experience here and overseas would suggest that a combination of the two works best.
Walton is not very clear on his proposed solution. So, let me offer two options.

First, when a construction contract is put up for bids, the decision could be made independent of price. That removes the incentive to bid too low. Construction firms can then be realistic about the costs and risks they face, when preparing their bids, without worrying about a high bid ruling them out. Of course, it probably creates an incentive to bid too high, precisely because a high price won't rule the firm out of the process.

Second, adopt a variant of a second-price auction. Give the contract to the lowest bidder, but pay them the amount that was asked by the second-lowest bidder. Or, if the contract is not given to the lowest bidder, still pay an amount for the contract equal to the next highest bidder's offer. This ensures that the client isn't paying well over a 'fair' price for the contract (which would likely be the case for the first option), while providing some additional space for the successful firm, which has likely underestimated the completion costs. If second-price isn't enough, a third-price auction would further limit the chance of construction firms being underpaid because of their inability to accurately estimate costs.
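The second-price variant is simple to state precisely: award the contract to the lowest bidder, pay the second-lowest bid. A minimal sketch (firm names and bids invented):

```python
# Second-price tender: lowest bid wins, price paid is the second-lowest bid.

def award_second_price(bids):
    """bids: dict of firm -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids, key=bids.get)  # firms from lowest to highest bid
    return ranked[0], bids[ranked[1]]

bids = {'Firm A': 90, 'Firm B': 105, 'Firm C': 112}
winner, price = award_second_price(bids)
print(winner, price)  # Firm A wins, but is paid 105, cushioning an over-optimistic 90
```

The standard argument for this design (borrowed from Vickrey auctions) is that it weakens the incentive to shade bids below expected cost: lowering your bid changes whether you win, but not the price you are paid.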

Something clearly needs to be done. We don't want construction firms falling over mid-contract, and in a small market like New Zealand we can't afford to have the market dominated by only a couple of players. We need to ensure that sustainable contract prices are being paid, and the current system is clearly failing.

Wednesday, 19 September 2018

The fable of the bees for rent

Back in 2016, I wrote a post about one of the key examples many economists (including myself) use in describing the problems associated with positive externalities, being an example involving bees and apple orchards. I won't repeat all of that post here, but instead take this bit:
In 1973, Steven Cheung wrote a follow-up to the Meade paper in the Journal of Law and Economics (ungated here), where he pointed out that contracting solutions to the bees-and-trees problem were not observed in the real world, because the transaction costs of these agreements are too high (transaction costs in this case are the costs of negotiating a suitable agreement between the apple-farmer and the bee-keeper - if the costs are high, it will be more difficult for the parties to justify the expense of coming to an agreement). Instead, a social norm developed between apple-farmers and bee-keepers in terms of the number of bees per orchard, etc. Of course, a social norm is just an informal contract by another name.
Forget social norms though. An article in The Conversation by Manu Saunders (University of New England) earlier this year suggests that there actually is a thriving contractual market for bees as pollinators:
To optimise yields, most growers rent European honeybee hives during crop flowering season. Honeybees were first introduced to Australia from Europe in the early 1800s. Today, the beekeeping industry includes around 600,000 managed hives and is worth around A$100 million to Australia’s economy. But it’s not just about honey and beeswax products.
Managed crop pollination services have become big business in many parts of the world, including Australia. Although most beekeepers do still keep bee hives to produce honey or wax products, paid pollination services are becoming increasingly important to the industry...
Costs per hive vary depending on the crop, covering costs to the beekeeper such as how far they have to travel, the time of year (early season pollination can be more stressful for honey bees and require more feeding costs for beekeepers to maintain hive health), and the risks (e.g. chemicals) bees might face in the crop. For almond pollination, one hive can cost around $70-100 to rent. 
To recap from my earlier post, beekeepers create a positive externality for orchardists, when their bees incidentally distribute pollen while collecting it to make honey. Orchardists create a positive externality for beekeepers, because their trees provide the pollen that the bees use to make honey. Left to its own devices, the market produces too little of goods that have positive externalities - there would be too few bees to satisfy the needs of the orchardists, and too few trees to satisfy the needs of the beekeepers. As we discuss in my ECONS102 class, one way of solving this problem is for the two parties to negotiate a contract, specifying the number of bees, the number of trees, and some payment (in this case, from the orchardists to the beekeepers). And it appears to be working, almost exactly as described in class.

Monday, 17 September 2018

Industrial dust and bargaining over externalities

One of the most famous results in welfare economics is the Coase Theorem - the idea that, if private parties can bargain without cost over the allocation of resources, they can solve the problem of externalities on their own (i.e. without the need for government intervention). An externality is the uncompensated impact of the actions of one party on the wellbeing of a bystander. For instance, a factory that emits air pollution creates a negative externality for people who live nearby - the reduction in air quality makes the neighbours worse off.

Ronald Coase (1991 Nobel Prize winner) argued that externality problems are jointly produced - even though one party creates the externality (e.g. the factory), the problem is also created by the neighbours - if they didn't live next to the factory, there would be no externality problem (or at least, there would be no problem for the neighbours, as they wouldn't be living next to the factory!).

One of the implications of the idea that externalities are jointly produced, and the idea that parties to the externality might be able to arrive at some agreement to deal with the externality problem, is that the same solution to the externality may arise regardless of our starting point. To see why, let's consider a specific example, from this New Zealand Herald article from earlier in the year:
For more than four years, residents and workers in De Havilland Way, Mount Maunganui have complained organic dust from a nearby industrial building was making them sick.
Business owner Colin Alexander had a severe allergic reaction that laid him up for months. Resident Skye Sloan has to take a tablet every day to keep flu-like symptoms at bay. Dozens of other complaints have been recorded.
With health officials and an air quality investigation now backing their claims, they want authorities to do something about 101 Aerodrome Rd immediately.
They argue the operations - bulk storage and handling of stock feeds including palm kernel expeller, a controversial palm oil industry byproduct - must stop until the fine, inhalable dust particles that regularly blew into the hangars can be prevented or contained. 
There is some disagreement in the article about whether industrial dust is creating health issues for nearby residents, but let's take it as a given. However, as Coase noted, the externality problem is jointly created by the bulk storage firm and the nearby residents (who, it should be noted, are living in an industrial-zoned area). But if we want to follow through on the Coase Theorem, how can this externality problem be solved without government intervention?

The solution to the externality problem depends on the distribution of entitlements - primarily the property rights, but also liability rules. There are two competing sets of property rights here. The bulk storage firm has the right to operate - it is located in an industrial zone. If the firm has to restrict its operations, that takes away some of its rights. The residents have the right to clean air. The industrial dust is taking away some of their rights.

To determine the potential bargaining solution to the externality problem, we need to start by considering which party has the overriding rights. That is, whose rights (the bulk storage firm's, or the residents') are more important to uphold? That isn't a question that economics can answer, but obviously there are two options (the bulk storage firm, or the residents). Let's consider both in turn.

If the residents have the overriding rights (their right to clean air is seen as more important to uphold than the firm's right to operate), then the default solution to the externality problem is that the firm shuts down (or it installs some type of filter to prevent the escape of industrial dust, or finds some other way not to reduce the air quality). That isn't the only solution under this set of entitlements though. The alternative solution to the externality problem is that the firm continues to operate as before, but compensates the residents for any reduction in air quality. For this alternative solution to work though, the firm would need to pay the residents more than the value of their lost air quality (however they value it), but less than the cost to the firm of shutting down (or the cost of installing a filter, or the cost of whatever other option they can find for avoiding the reduction in air quality).

If the bulk storage firm has the overriding rights (their right to operate is seen as more important to uphold than the residents' right to clean air), then the default solution to the externality problem is that the residents have to put up with the dust (or they keep their hangar homes shut up to prevent dust getting in, or they wear dust masks, or something else). Again, there is an alternative solution to the externality problem, which in this case is that the residents compensate the firm for the cost of shutting down (or the cost of installing a filter, or the cost of whatever other option they can find for avoiding the reduction in air quality). For this alternative solution to work though, the residents would need to pay the firm more than the cost to the firm of shutting down (or the cost of installing a filter, or the cost of whatever other option they can find for avoiding the reduction in air quality), but less than the value of the improved air quality the residents gain (however they value it).

Notice that which set of default and alternative solutions is available depends crucially on which party has the overriding rights, which is determined by the legal environment (as I said, economics can't answer that question). Notice also that the default solution simply upholds the existing overriding rights, while the alternative solution always involves compensation from one party to the party whose overriding rights are being foregone.

Will a bargaining solution always work? No, because as noted above it depends on the relative costs and benefits. That issue aside, many economists argue that, because of the Coase Theorem, government involvement in dealing with externalities is almost never necessary. However, the Coase Theorem depends on the parties being able to bargain without cost, and that seems unlikely. In the case of industrial dust above, even if all parties sat around a big table to talk over the issues, agreement takes time and effort (and hence, transaction costs), and is made more difficult by there being many parties involved (a firm, and many residents). Having many parties creates a coordination problem, since it may be difficult even to get all parties on one side of the problem (e.g. the residents) to agree. And even if the majority agree, a small minority might then try to hold out for a better deal. And even if an agreement is struck between the parties, there needs to be monitoring of the agreement to ensure the parties follow through, and some enforcement if they don't do so, both of which entail costs.

Finally, behavioural economics suggests that even if we manage to get through all of the above, arriving at a bargaining solution that suits both parties will be made more difficult because of the endowment effect. Whichever party has the overriding rights will be most unwilling to give up those rights, and will demand extra compensation (more compensation than what they would have been willing to pay to obtain the rights in the first place!) - a point that I made in this post last year.

All of this suggests that, while the Coase Theorem is good in theory, in practice it is very difficult to execute. Most of the time, if there is an externality problem, some government intervention (even if it is just covering the transaction costs and the costs of monitoring and enforcement) will be required.

Saturday, 15 September 2018

Market power and milk prices

Domestic milk prices have been in the news again recently. For instance, Federated Farmers recently called for consumers to revolt against the market power of supermarkets and buy their milk from dairies:
Federated Farmers is calling on Kiwis to stage a revolt against supermarkets and buy their milk from dairies instead.
The call comes after Bodo Lang, a University of Auckland marketing professor, slammed milk pricing in New Zealand as "astoundingly high".
Federated Farmers' dairy chairman Chris Lewis told the Herald he avoids the supermarket altogether when buying milk for his family.
Instead he buys two 2 litre bottles of milk for $6 from his local Waikato dairy.
Lewis questions what goes on between the product leaving the farm and entering the supermarket for the prices to be quite as high as they are currently.
He said farmers got about 60 cents per litre of milk sold.
A Foodstuffs spokesperson said retail prices took into account the wholesale cost from suppliers, which could change according to the price they could achieve on the global market...
Mark Johnston also recently asked, Global Dairy Prices down but why does NZ have such high milk prices? Johnston concluded that:
Milk in Germany is much lower in price because of the high levels of competition with multiple [supermarket] chains operating there. In New Zealand however the price consumers pay reflects the concentrated nature of the market.
The argument is often made that New Zealand domestic milk consumers pay high prices because of two things. First, the supermarkets and other retailers must match the world price for milk. Second, because there are essentially two main players in the supermarket sector, the duopoly gives them market power and allows them to raise the price of milk (and other products as well). However, this argument ignores the fact that the supermarkets are not the only players in the market here with market power. Fonterra dominates the market for milk supply, giving it some market power as well.

Consider a market with international trade, as shown in the diagram below. If the domestic market was competitive and not open to international trade, the market would operate at equilibrium, at a price of PD, with a quantity of QD milk traded. The world price is PW - it is higher than the domestic price, which illustrates that New Zealand has a comparative advantage in milk production, because milk can be produced and sold at a lower price domestically than in the rest of the world. If the market was open to international trade, the price would increase to PW, because the domestic sellers would prefer to sell at that price rather than the lower price PD. Domestic buyers have to match the higher price, so the quantity demanded domestically decreases to Qd0, while the quantity supplied increases to Qs. The difference between Qs and Qd0 (which would normally be excess supply) is exports.

Now consider what would happen if the domestic sellers in this market had some market power. Instead of operating at the world price PW in the domestic market, they can operate at the price and quantity that maximise their profits from the domestic market. This occurs where marginal revenue meets marginal cost, so they would restrict the quantity of milk sold in the domestic market to Qd1, and sell it at a higher price of PM. The quantity of milk supplied remains Qs, as Fonterra can simply supply more to the world market when it supplies less to the domestic market.
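To see how PM ends up above PW, here is a minimal numerical sketch, assuming a linear domestic demand curve (the functional form and all of the numbers are assumptions for illustration, not estimates of the actual milk market). The key step is that the opportunity cost of a litre sold domestically is the world price it could have earned as an export, so the profit-maximising domestic quantity is where marginal revenue equals PW:

```python
# Hypothetical linear domestic demand curve: P = a - b*Q
a, b = 10.0, 1.0  # assumed demand intercept and slope
PW = 4.0          # assumed world price (the opportunity cost of a
                  # litre diverted from the export market)

# Competitive open-economy outcome: domestic price equals PW
Qd0 = (a - PW) / b

# With market power: set marginal revenue (a - 2bQ) equal to PW,
# then read the price off the demand curve
Qd1 = (a - PW) / (2 * b)
PM = a - b * Qd1

print(Qd0, Qd1, PM)  # 6.0 3.0 7.0 - less milk sold at home, at a higher price
```

With these numbers, the seller with market power restricts domestic sales from 6 units to 3, and the domestic price rises from the world price of 4 to 7 - exactly the move from (Qd0, PW) to (Qd1, PM) in the diagram.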

You might rightly ask, why wouldn't domestic milk consumers buy from the world market at the lower price PW, rather than from the domestic sellers at the higher price PM? Buying from the world market entails higher transport costs than buying from the domestic market, so there's at least some reason to believe consumers would accept a premium on locally produced milk. Fonterra's market power is limited by the availability of milk from the world market. However, that doesn't mean that Fonterra doesn't have some market power, and so the actions of the supermarkets (who clearly do have market power as well) are not the only source of high milk prices in New Zealand.

Thursday, 13 September 2018

John T. Ward, 1927-2018

The University of Waikato flew its flag at half-mast yesterday, to mark the passing of Emeritus Professor John Trevor Ward on Tuesday. John was Inaugural Professor of Economics (and one of the first full professors at the University), appointed in 1965. He was Head of the Department of Economics for 25 years, and was the founding Dean of the School of Social Sciences. When the Department of Economics de-camped to join the School of Management Studies (as it was then), John was briefly the Dean, and he served terms on the University of Waikato Council and as president of the New Zealand Association of Economists. John retired from the University in 1990, and was awarded the title of Emeritus Professor.

All of this was long before my time at Waikato of course (and even before I had finished high school!), but John maintained a long association with the Department of Economics. I met him at a number of functions, including at prizegivings for our students. John clearly wanted the best for our economics students, and he endowed two prizes for top students - the Foundation Professor's Prize, which goes to the top student in ECONS101 (previously ECON100); and the J.T. Ward Prize, which goes to the top student in ECONS302 (or previously ECON202).

John's research interests ranged widely, including land use, health economics, and cost-benefit evaluation. A quick search of Google Scholar demonstrates this, with his publications including: Economic principles of land use: A comparison of agriculture and forestry (in the New Zealand Journal of Forestry, 1963), Health expenditure in New Zealand (in Economic Record, 1972), and Compensation for Maori land rights. A case study of the Otago tenths (in New Zealand Economic Papers, 1986). He also published the very first paper for the Agricultural Economics Research Unit (AERU) at Lincoln College (it wasn't a university then) in 1964, entitled The systematic evaluation of development projects.

John is survived by his wife Anne, children Tony, David and Michael, and five grandchildren. His funeral is to be held tomorrow morning, at the Lady Goodfellow Chapel on the University of Waikato campus. He will be missed.