Wednesday, 30 November 2016

Jetstar regional services cause loss of $40 million to main centres' economies

Last week, Jetstar announced the results of a report by Infometrics suggesting that its introduction of regional flights to Nelson, Palmerston North, Napier, and New Plymouth boosted the economies of those regions by around $40 million. Here's what the New Zealand Herald reported:
Jetstar's regional operations could boost the economy of four centres it serves by about $40 million a year, according to Infometrics research.
The regional GDP growth could support up to 600 new jobs according to the research which notes domestic air travel prices have fallen by close to 10 per cent during the past 12 months.
Jetstar Group CEO, Jayne Hrdlicka, said the report highlighted how important cheap fares were to growing local economies.
That sounds like a good news story, but as with most (if not all) economic impact studies, it only provides half the picture. That's because flying to the regions doesn't suddenly create new money. So, every dollar that is spent by travellers to the regions is one less dollar that would have been spent somewhere else. In the case of domestic travellers who would not have otherwise travelled to those regions if Jetstar hadn't been flying there (which is the assumption made in the report), every dollar they spend on their trip to Napier is one less dollar they would have spent at home in Auckland. One could make a similar case for international travellers, although perhaps cheaper flights encourage them to spend more on other things than they otherwise would (although this is drawing a pretty long bow).

So, if it's reasonable to believe that Jetstar flights add $40 million to the economies of those regions, it is also reasonable to believe that Jetstar flights cost around $40 million in lost economic activity elsewhere in the country (depending on differences in multiplier effects between different regions), and much of this will likely be from the main centres.

To be fair, the Infometrics report (which I obtained a copy of, thanks to the Jetstar media team) does make a similar point:
...the economic effects of this visitor spending should only be interpreted on a region-by-region basis, rather than as an aggregate figure for New Zealand as a whole. It is likely that some of the increase in visitor spending in regions with additional flights represented spending that was diverted from other parts of New Zealand.
The Infometrics report has some other issues, such as assuming a fixed proportion of business travellers across all four airports, which seems fairly implausible but probably doesn't have a huge impact on the estimates. A bigger issue might be the underlying model for calculating the multiplier effects, since multi-region input-output models (I'm assuming this is what they use) are known to suffer from aggregation bias that overstates the size of multiplier effects. I had a Masters student working on multi-region input-output models some years ago, and that was one of the main things I took away from that work. However, that's a topic that really deserves its own post sometime in the future.
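For readers who haven't met these models before, here's a minimal sketch of how an output multiplier is typically computed in a (single-region) input-output framework. The coefficients below are made up purely for illustration, and this is certainly not the Infometrics model:

```python
import numpy as np

# Toy technical coefficients matrix A: A[i, j] is the dollar value of
# sector i's output used to produce one dollar of sector j's output.
# Two sectors, with made-up numbers purely for illustration.
A = np.array([[0.20, 0.15],
              [0.10, 0.25]])

# Total output x needed to satisfy final demand f solves x = Ax + f,
# so x = (I - A)^(-1) f, where (I - A)^(-1) is the Leontief inverse.
leontief = np.linalg.inv(np.eye(2) - A)

# The output multiplier for sector j is the column sum of the Leontief
# inverse: the total output (across all sectors) generated per dollar
# of final demand for sector j's output.
print(leontief.sum(axis=0))  # [1.45, 1.62] (to 2 d.p.)
```

A multi-region model stacks region-by-sector blocks into one much larger A matrix, and the aggregation bias arises because lumping heterogeneous sectors (or regions) into single rows and columns changes those column sums.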

Of course, these problems aren't important to Jetstar, which only wants to show its regional economic impact in the best light possible. The next step for them might be to say: "Ooh, look. We've done such a great job enhancing the economy of these regions. The government should subsidise us to fly to other regions as well so we can boost their economies too". Thankfully, they haven't taken it that far. Yet.

You might argue that boosting the economies of the regions, even if it is at the expense of the main centres, is a good thing. That might be true (it is arguable), but it isn't clear to me that increased air services are the most cost-effective mechanism for developing the regional economies. I'd be more convinced by an argument that improved air services are a consequence of economic development, not a source of it.

For now, just take away from this that we should be sceptical whenever firms trumpet their regional economic impact based on these sorts of studies.

Tuesday, 29 November 2016

Quitting Facebook might make you happier

A new paper by Morten Tromholt (University of Copenhagen), published in the journal Cyberpsychology, Behavior, and Social Networking (sorry, I don't see an ungated version anywhere), reports on an experiment in which Facebook users were randomly allocated to a treatment group that gave up Facebook for a week, and a control group that did not. Discover Magazine reports:
Tromholt recruited (via Facebook, of course) 1,095 Danish participants, who were randomly assigned to one of two conditions. The ‘treatment group’ were instructed not to use Facebook for one week, and were recommended to uninstall the Facebook app from their phones if they had it. At the end of the study, 87% of the treatment group reported having successfully avoided Facebook the whole week. Meanwhile, the ‘control group’ were told to continue using the site normally.
The results showed that the treatment group reported significantly higher ‘life satisfaction’ and more positive emotions vs. the control group (p < 0.001 in both cases). These effects were relatively small, however, for instance the group difference in life satisfaction was 0.37 on a scale that ranged from 1-10...
This is a nice little study, but in my mind it doesn’t prove all that much. The trial wasn’t blinded – i.e. the participants of necessity knew which group they were in – and the outcome measures were all purely subjective, self-report questionnaires.
It's this last point that I want to pick up as well. I worry that much of what is observed in this study is a Hawthorne effect - that the participants who were asked to give up Facebook for a week anticipated that the study was evaluating whether doing so increased their happiness, and reported what the researcher expected to see. The outcome measures were all self-reported measures of life satisfaction and emotions, which are easy for the research participants to manipulate (whether consciously or not). The author tries to allay this concern in the paper:
The critical point here is whether the participants have formulated their own hypotheses about the effects of quitting Facebook and that these hypotheses, on average, are pointing in the same direction. On the one hand, the individual formulation of hypotheses may have been facilitated by the pretest and the selection bias of the sample. On the other hand, the participants’ hypotheses may not be pointing in the same direction due to the fact that the direction of the effects found in the present study is not self-evident. Hence, there may be limited experiment effects at stake. If experiment effects did affect the findings of the present study, they might even turn in the opposite direction because people, in general, perceive Facebook as a source to positive feelings.
I'm not convinced. Especially since, as this Guardian article on the research notes, the study was performed by the Happiness Research Institute. I'm not sure how any of the research participants could miss the significance of that and not realise that the study expected an increase in happiness to result.
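To illustrate how little it would take, consider a toy simulation (the numbers here are all assumptions of mine, not from the paper): if even a third of the treatment group nudged their self-reported life satisfaction up by a single point because they sensed what the study was after, you would see a group difference of about the reported size, with a very small p-value, despite there being no real effect at all.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 548  # roughly half of the 1,095 participants in each group

# Assume 'true' life satisfaction is identical in both groups (1-10 scale)
control = np.clip(rng.normal(7.5, 1.8, n), 1, 10)
treatment = np.clip(rng.normal(7.5, 1.8, n), 1, 10)

# Hawthorne-style reporting bias: suppose a third of the treatment group
# bump their self-reported rating up by one point because they sense what
# the study is looking for (the one-third and the +1 are pure assumptions)
biased = rng.random(n) < 1 / 3
treatment[biased] = np.clip(treatment[biased] + 1, 1, 10)

t, p = stats.ttest_ind(treatment, control)
print(f"difference = {treatment.mean() - control.mean():.2f}, p = {p:.4f}")
# prints a difference of roughly 0.3 with a small p-value - no real effect needed
```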

Having said that, there are supplementary results reported in the paper that might partially allay those concerns. The effects on happiness were greater for those who were heavier users of Facebook, which is what you would expect to see if the effect is real, and this is not something that could be easily spoofed by the actions of participants.

One final word: it would be possible to improve this study by moving away from, or supplementing, the subjective wellbeing (life satisfaction) questions, such as by measuring levels of the stress hormone cortisol in the research participants before and after giving up Facebook. Of course, this would increase the cost of the study, because it could no longer be run solely online. Something for future research perhaps?


Monday, 28 November 2016

Newsflash! Population growth will be highest on the fringes of fast-growing urban areas

I'm not sure how this is news:
Infometrics has this morning released its Regional Hotspots 2016 report, showing the country's top future population growth areas between 2013 and 2023, revealing some obvious and less obvious areas...
The hotspots were concentrated around the country's main metropolitan centres, "reflecting the highly urbanised nature of New Zealand's population and the greater density of potential new markets offered by these growth areas".
Well, duh. The Infometrics report is here, but it doesn't really say much that isn't obvious to anyone with local knowledge who hasn't been living under a rock. For instance, North Hamilton is one of the 'hotspots' and this is part of what they have to say about it:
The choice of this hotspot reflects the ongoing trend of the growth in Hamilton’s metropolitan area towards the north. Although there are also longer-term plans for expansion of the city southwards towards the airport, growth in the shorter-term will be focused on the fringes around Flagstaff, Rototuna North, and Huntington.
The whole report is full of re-packaged Statistics NZ data on area unit population estimates (to 2016) and projections (to 2043), which anyone can view here, so it doesn't even include anything new. Last Thursday must have been a slow news day.

For more on small-area population projections though, you can read my report with Bill Cochrane for the Waikato Region here (there is a more recent update to that report, but it isn't available online - if you would like a copy, drop me an email). We use a model that statistically downscales higher-level population projections using a land use projection model (a more detailed paper is currently in peer review for journal publication). This is a significant advance over the method employed by Statistics New Zealand, because it takes into account the planning decisions of councils at the local level. The results (in some cases) are strikingly different from Statistics New Zealand's projections, and suggest that more can be done to improve the quality of 'official' small-area population projections.
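To give a flavour of what statistical downscaling involves (this is a deliberately stripped-down sketch with invented area names and numbers, not the model from our report), the core idea is to share a higher-level projection out across small areas using weights derived from projected land use, rather than from historical population shares alone:

```python
# Minimal sketch of downscaling a regional projection to area units.
# The area names, dwelling counts, and regional total are all invented.
regional_projection_2043 = 550_000  # projected regional population

# Weights derived from a land use model, e.g. projected residential
# capacity (dwellings) in each area unit under council plans
projected_dwellings = {
    "Area unit A": 4_200,
    "Area unit B": 1_100,
    "Area unit C": 9_800,
}

total = sum(projected_dwellings.values())
small_area_projections = {
    area: regional_projection_2043 * d / total
    for area, d in projected_dwellings.items()
}

for area, pop in small_area_projections.items():
    print(f"{area}: {pop:,.0f}")
```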

Sunday, 27 November 2016

Book review: The Drunkard's Walk

I just finished reading Leonard Mlodinow's 2008 book, "The Drunkard's Walk - How Randomness Rules Our Lives". I was kind of expecting something like the Nassim Nicholas Taleb book, "Fooled by Randomness" (which I reviewed earlier this year), but this book was much better.

While it purports to be a book about the role of randomness in our lives, much of the content is a wide-ranging and well-written series of stories in the history of probability theory and statistics. To pick out the highlights (to me), Mlodinow covers the Monty Hall problem, Pascal's triangle, Bernoulli's 'golden theorem' (better known as the law of large numbers), Bayes' theorem, the normal distribution and the central limit theorem, regression to the mean and the coefficient of correlation (both developments by Francis Galton, known as the father of eugenics), Chi-squared tests (from Karl Pearson), Brownian motion, and the butterfly effect. And all of that in a way that is much more accessible than the Wikipedia links I've provided.
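If you've never quite believed the Monty Hall result, it's easy to check by simulation. A quick sketch:

```python
import random

def monty_hall(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)     # prize placed behind a random door
        choice = random.randrange(3)  # contestant picks a random door
        # Monty opens a door that hides a goat and isn't the contestant's pick
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")  # ~0.333
print(f"switch: {monty_hall(switch=True):.3f}")   # ~0.667
```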

Throughout the book, Mlodinow illustrates his points with interesting anecdotes and links to relevant research. One section, on measurement issues, struck me in particular because it made me recall one of the funniest papers I have ever read, entitled "Can People Distinguish Pâté from Dog Food?" (the answer was no). Mlodinow focuses on the subjectivity (and associated randomness) of wine ratings:
Given all these reasons for skepticism, scientists designed ways to measure wine experts' taste discrimination directly. One method is to use a wine triangle. It is not a physical triangle but a metaphor: each expert is given three wines, two of which are identical. The mission: to choose the odd sample. In a 1990 study, the experts identified the odd sample only two-thirds of the time, which means that in 1 out of 3 taste challenges these wine gurus couldn't distinguish a pinot noir with, say, "an exuberant nose of wild strawberry, luscious blackberry, and raspberry," from one with "the scent of distinctive dried plums, yellow cherries, and silky cassis." In the same study an ensemble of experts was asked to rank a series of wines based on 12 components, such as alcohol content, the presence of tannins, sweetness, and fruitiness. The experts disagreed significantly on 9 of the 12 components. Finally, when asked to match wines with the descriptions provided by other experts, the subjects were correct only 70 percent of the time.
Clearly, randomness is at play more often than we probably care to admit. So, apparently at random, I highly recommend this book (and I've made sure to add some of Mlodinow's other books to my Amazon wish list to pick up later!).

Sunday, 20 November 2016

The inefficiency of New Zealand's emissions trading scheme

A couple of days ago I wrote a post about the game theory of climate change negotiations. One of the conclusions of that post was that there was a dominant strategy for countries not to reduce their greenhouse gas emissions. Another problem might be that countries reduce emissions, but not by as much as they should (in order to achieve the Paris Agreement goal of no more than two degrees of temperature increase over pre-industrial levels).

Potentially even worse, countries may find inefficient ways of meeting their emissions reduction goals, and I believe there is a strong case that New Zealand is in the inefficient camp. New Zealand introduced its emissions trading scheme (ETS) in 2008, and it was later amended in 2009 (and has been reviewed twice since). Under the scheme (described here), "certain sectors are required to acquire and surrender emission units to account for their direct greenhouse gas emissions or the emissions associated with their products".

As Megan Woods notes, one of the main problems with the ETS is that agriculture is not included in the scheme, and farmers have been told that there are no plans to change that in the near future. Agriculture is responsible for about half of New Zealand's greenhouse gas emissions (see page 4 of this fact sheet from NZAGRC).

This creates a problem because, in order to meet the overall goal of emissions reduction, other sectors must reduce emissions by more to compensate. To see why this is inefficient, consider the diagrams below. Say there are just two markets: (1) agriculture (on the left); and (2) all other sectors (on the right). Both markets produce a negative externality, represented by the difference between the supply curve (the marginal private cost or MPC curve, since it includes only the private costs that producers face) and the marginal social cost (MSC) curve (made up of MPC plus the marginal external cost (MEC), which is the cost of the externality to society). In both cases the market, left to its own devices, will produce at the quantity where supply is equal to demand - at Q0 in the agriculture market, and at Qa in the other market. Society prefers each market to operate where economic welfare is maximised. This occurs where marginal social benefit (MSB, which is the demand curve, assuming there are no externalities in consumption) is equal to MSC - at Q1 in the agriculture market, and at Qb in the other market.


In the agriculture market, total economic welfare is equal to the area ABD-BFE, and there is a deadweight loss of BFE [*]. In the other market, total economic welfare is GHL-HMJ, and the deadweight loss is HMJ [**]. The value of the externality is represented by the area DFEC in the agriculture market, and by the area LMJK in the other market.

Now consider the implementation of two different emissions trading schemes, as shown in the diagrams below. In the first scheme, both markets are included. Firms must either reduce emissions directly or buy credits to cover their emissions. Either of these is costly, and forces the producers to internalise the externality. The markets both move to operating at the point where MSB is equal to MSC, maximising economic welfare at ABD in the agriculture market and GHL in the other market (there is no longer a deadweight loss in either market).


In the second scheme, agriculture is excluded but the same total emissions reduction is desired. This means that the other market must reduce emissions by more to compensate. The other market reduces quantity to Qc (note that the reduction of the value of the externality in this case is double what it was in the first scheme). Total economic welfare in this market reduces to GNSL, with a deadweight loss of NHS. This market over-corrects and produces too little relative to the welfare maximising quantity (Qb). Meanwhile, the agriculture market continues to produce a deadweight loss of BFE.

Notice that the size of the combined deadweight losses across the two markets is pretty much the same under the second scheme (BFE + NHS) as it was without any emissions trading scheme at all (BFE + HMJ). So compared with the first scheme, the second scheme leads to a loss of economic welfare - it is inefficient.
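For those who prefer numbers to areas on a diagram, here's a small worked example with made-up linear curves: two identical markets, a constant marginal external cost, and emissions proportional to output. With those (admittedly special) assumptions, the combined deadweight loss under the agriculture-exempt scheme comes out exactly the same as under no scheme at all:

```python
# Made-up linear curves, identical in both markets:
# MSB (demand): P = 100 - Q;  MPC (supply): P = 20 + Q;  MEC = 20 (constant)
a, c, slopes, mec = 100, 20, 2, 20  # slopes = demand slope + supply slope

def welfare(q):
    """Area between MSB and MSC (= MPC + MEC) from 0 to q."""
    return (a - c - mec) * q - 0.5 * slopes * q ** 2

q_market = (a - c) / slopes         # unregulated equilibrium (Q0, Qa): 40
q_optimal = (a - c - mec) / slopes  # welfare-maximising (Q1, Qb): 30

# No ETS: both markets stay at the unregulated quantity
dwl_none = 2 * (welfare(q_optimal) - welfare(q_market))

# Scheme 1: both markets covered, both move to the optimum
dwl_all_sectors = 0.0

# Scheme 2: agriculture exempt (stays at q_market), so the other market
# must cut twice as far (to Qc) to deliver the same total reduction
q_c = q_optimal - (q_market - q_optimal)
dwl_exempt = (welfare(q_optimal) - welfare(q_market)) \
           + (welfare(q_optimal) - welfare(q_c))

print(dwl_none, dwl_all_sectors, dwl_exempt)  # 200.0 0.0 200.0
```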

Emissions trading schemes create a property right - the right to pollute (if you have purchased ETS units, you are allowed to emit greenhouse gases). In order for property rights to be efficient, they need to be universal, exclusive, transferable, and enforceable. In this case, universality means that all emissions need to be covered under the scheme, and all emitters must have enough rights to cover their emissions. Exclusivity means that only those who have rights can emit greenhouse gases, and that all the costs and benefits of obtaining those rights should accrue to them. Transferability means that the right to emit must be able to be transferred (sold, or leased) in a voluntary exchange. Enforceability means that the limits on emissions must be enforceable by the government, with high penalties for those who emit more than they are permitted to.

Clearly, the current New Zealand ETS fails under universality, as agriculture is not included. And the analysis above shows why this leads to inefficiency (loss of total economic welfare). The government is simply passing the buck by avoiding the inclusion of agriculture in the ETS (e.g. see Paula Bennett here). If we want to efficiently reduce our greenhouse gas emissions, agriculture must be included in the scheme.

*****

[*] The total economic welfare in the agriculture market is made up of consumer surplus of AEP0, producer surplus of P0EC, and the subtraction of the value of the negative externality DFEC.

[**] The total economic welfare in the other market is made up of consumer surplus of GJPa, producer surplus of PaJK, and the subtraction of the value of the negative externality LMJK.

Friday, 18 November 2016

Reason to be surprised, or not, about the Paris Agreement on climate change coming into force

The latest United Nations Climate Change Conference (COP22 in Marrakech) finishes today. The Paris Agreement, negotiated at the previous conference in Paris last year, officially came into force on 4 November. Under the agreement, countries commit to reduce their greenhouse gas emissions, in order to limit the increase in the global temperature to well below two degrees above pre-industrial levels.

Some basic game theory gives us reason to be surprised that this agreement has been successfully negotiated. Consider this: If a country reduces its emissions, that imposes a large cost on that country and provides a small benefit to that country, which is also shared by other countries. If a country doesn’t reduce its emissions, that imposes a small cost on that country and on other countries, while providing a small benefit only to the country that didn’t reduce emissions.

Now consider the negotiations as involving only two countries (Country A and Country B), as laid out in the payoff table below [*]. The countries have two choices: (1) to reduce emissions; or (2) to not reduce emissions. A small benefit is recorded as "+", a small cost is recorded as "-", and a large cost is recorded as "--" [**]. The costs and benefits (noted in the previous paragraph) imposed by the choice of Country A are recorded in red, and the costs and benefits imposed by the choice of Country B are recorded in blue. So, if both countries choose to reduce emissions the outcome will be in the top left of the payoff table. Country A receives a small benefit from their own action to reduce emissions (red +), a small benefit from Country B's action to reduce emissions (blue +), and a large cost of reducing emissions (red --). Other payoffs in the table can be interpreted similarly.
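Since the payoffs are easier to follow laid out in a grid, here is the table in text form (Country A's payoff listed first in each cell):

```
                            Country B
                     Reduce          Don't reduce
Country A
  Reduce             ++-- , ++--     +--- , ++-
  Don't reduce       ++- , +---      +-- , +--
```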


The problem lies in where the Nash equilibrium is in this game. Consider Country A first. They have a dominant strategy to not reduce emissions. A dominant strategy is a strategy that is always better for a player, no matter what the other players do. Not reducing emissions is a dominant strategy because the payoff is always better than reducing emissions. If Country B reduces emissions, Country A is better off not reducing emissions (because ++- is better than ++--). If Country B does not reduce emissions, Country A is better off not reducing emissions too (because +-- is better than +---). So Country A would always choose not to reduce emissions, because not reducing emissions is a dominant strategy.

Country B faces the same decisions (and same payoffs) as Country A. They also have a dominant strategy to not reduce emissions. If Country A reduces emissions, Country B is better off not reducing emissions (because ++- is better than ++--). If Country A does not reduce emissions, Country B is better off not reducing emissions too (because +-- is better than +---). So Country B would always choose not to reduce emissions, because not reducing emissions is a dominant strategy.

Both countries will choose their dominant strategy (to not reduce emissions), and both will receive a worse payoff (+--) than if they had both chosen to reduce emissions (++--). This game is an example of the prisoners' dilemma. There is a single Nash equilibrium, which occurs where both players are playing their dominant strategy (to not reduce emissions). So, based on this we might be surprised that the Paris Agreement has come into force, since all countries are better off if they choose not to reduce emissions, and instead just free ride on the emission reductions of other countries.
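If you'd like to see that conclusion checked mechanically, here's a quick sketch that attaches made-up numbers to the symbols (+1 for a small benefit, -1 for a small cost, -2 for a large cost) and confirms the dominant strategies:

```python
# Payoff to a country given its own choice and the other country's choice,
# using +1 = small benefit, -1 = small cost, -2 = large cost.
# Own action: reduce -> +1 - 2; don't reduce -> +1 - 1
# Other's action: their reducing adds +1 to you; not reducing adds -1
def payoff(own_reduces, other_reduces):
    own = (1 - 2) if own_reduces else (1 - 1)
    spillover = 1 if other_reduces else -1
    return own + spillover

for b in (True, False):
    print(f"B reduces={b}: A reduces -> {payoff(True, b)}, "
          f"A doesn't -> {payoff(False, b)}")
# B reduces=True:  reduce = 0,  don't = 1   -> don't reduce is better
# B reduces=False: reduce = -2, don't = -1  -> don't reduce is better
# 'Don't reduce' is dominant, and (don't, don't) yields -1 each, which is
# worse than the 0 each from (reduce, reduce): a prisoners' dilemma.
```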

However, that's not the end of this story, since there are two alternative ways to look at the outcome. First, the payoff table above makes the assumption that this is a simultaneous, non-repeated game. In other words, the countries make their decisions at the same time (simultaneous), and they make their decisions only once (non-repeated). Of course, in reality this is a repeated game, since countries are able to continually re-assess whether to reduce emissions, or not.

In a repeated prisoners' dilemma game, we may be able to move away from the unsatisfactory Nash equilibrium, towards the preferable outcome, through cooperation. Both countries might come to an agreement that they will both reduce emissions. However, both countries have an incentive to cheat on this agreement (since, if you knew that the other country was going to reduce emissions, you are better off to not reduce emissions). So the countries need some way of enforcing this agreement. Unfortunately, the Paris Agreement has no binding enforcement mechanism. So there is no cost to a country reneging on their promise to reduce emissions.

In the absence of some penalty to non-cooperation in a repeated prisoners' dilemma, a tit-for-tat strategy is the most effective means of ensuring cooperation in the two-person game. In a tit-for-tat strategy, the player starts out by cooperating. Then they choose the same strategy that the other player chose in the previous play of the game. However, it isn't clear how a tit-for-tat strategy works when you have more than two players (which we do). So, we probably can't rely on this being a repeated game to ensure cooperation between countries.
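For the curious, tit-for-tat is easy to simulate in the two-country case. A sketch, reusing the made-up payoff numbers from above:

```python
# Repeated game with the same numbers as before: reducing nets -1 for
# yourself (+1 benefit, -2 cost); not reducing nets 0 (+1 benefit, -1 cost);
# the other country's choice adds +1 (reduce) or -1 (don't) to your payoff.
def payoff(own_reduces, other_reduces):
    own = -1 if own_reduces else 0
    return own + (1 if other_reduces else -1)

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b, total_a = [], [], 0
    for _ in range(rounds):
        a = strategy_a(history_b)  # each strategy sees the opponent's history
        b = strategy_b(history_a)
        total_a += payoff(a, b)
        history_a.append(a)
        history_b.append(b)
    return total_a

# Tit-for-tat: cooperate (reduce) first, then copy the opponent's last move
tit_for_tat = lambda opp_history: opp_history[-1] if opp_history else True
always_defect = lambda opp_history: False

print(play(tit_for_tat, tit_for_tat))    # 0:   cooperation sustained every round
print(play(tit_for_tat, always_defect))  # -11: exploited once, then mutual defection
```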

Second, the prisoners' dilemma looks quite different if the players have social preferences - for example, if players care not only about their own payoff, but also about the payoff of the other player. Consider the revised game below, where each of the countries receives a payoff that is made up of their own payoff in the base case (coloured red or blue as before) plus the other player's payoff in the base case (coloured black). This is the case of each country having highly altruistic preferences.
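In text form, the revised payoff table is (Country A's payoff listed first in each cell; note that with fully altruistic preferences both countries receive the same payoff in every cell, since each payoff is the sum of the two base payoffs):

```
                            Country B
                     Reduce                Don't reduce
Country A
  Reduce             ++++---- , ++++----   +++---- , +++----
  Don't reduce       +++---- , +++----     ++---- , ++----
```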


The game now changes substantially, and reducing emissions becomes a dominant strategy for both players! Consider Country A first. If Country B reduces emissions, Country A is better off reducing emissions (because ++++---- is better than +++----). If Country B does not reduce emissions, Country A is better off reducing emissions too (because +++---- is better than ++----). So Country A would always choose to reduce emissions, because reducing emissions is now their dominant strategy.

Country B faces the same decisions (and same payoffs) as Country A. If Country A reduces emissions, Country B is better off reducing emissions (because ++++---- is better than +++----). If Country A does not reduce emissions, Country B is better off reducing emissions too (because +++---- is better than ++----). So Country B would always choose to reduce emissions, because reducing emissions is now their dominant strategy.

Of course, this is the extreme case, but you get similar results from a variety of intermediate assumptions about how much each country cares about the payoff of the other country (for brevity I'm not going to go through them). We get similar (and stronger) results if we consider our social preferences towards the wellbeing of future generations. The takeaway message is that if we care about people living in other countries (or future generations), we might not be surprised that the Paris Agreement has come into force.
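To see the intermediate cases, one approach is to give each country's payoff a weight (call it theta) on the other country's payoff, where theta = 0 is pure self-interest and theta = 1 is the fully altruistic case above. A sketch, again using my made-up numbers, shows where the dominant strategy flips:

```python
# Base payoffs as before: reducing nets -1 for yourself, not reducing 0;
# the other's choice adds +1 (reduce) or -1 (don't) to your payoff.
def base(own_reduces, other_reduces):
    own = -1 if own_reduces else 0
    return own + (1 if other_reduces else -1)

# Weighted altruism: own base payoff plus theta times the other's base payoff
def altruistic(own_reduces, other_reduces, theta):
    return base(own_reduces, other_reduces) \
         + theta * base(other_reduces, own_reduces)

for theta in (0.0, 0.25, 0.5, 0.75, 1.0):
    # Reducing is dominant if it is at least as good whatever B does
    dominant = all(
        altruistic(True, b, theta) >= altruistic(False, b, theta)
        for b in (True, False)
    )
    print(f"theta = {theta:.2f}: reducing dominant? {dominant}")
# With these numbers, reducing becomes dominant once theta reaches 0.5
```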

So, game theory can show us both why we might be surprised about the success of the Paris Agreement (because countries have incentives not to reduce emissions), or not surprised about its success (because we care about other people - in other countries, or in future generations).

*****

[*] If we assume more than two countries, we would get pretty much the same key results, but it is much harder to draw a table when there are more than two players, so I'm keeping things simple throughout this post by considering only the two-country case.

[**] Of course, the size of the +'s and -'s will matter, and they probably differ substantially between countries, but for simplicity let's assume they are equal and opposite.

Wednesday, 16 November 2016

Scale vs. scope economies and the AT&T-Time Warner merger deal

I've been watching with interest the unfolding news on the proposed merger between AT&T and Time Warner. The Washington Post has a useful primer Q&A here:
AT&T, the nation's second-largest wireless carrier, is buying Time Warner, the storied media titan that owns HBO, CNN and TBS. In an unprecedented step, the deal is going to combine a gigantic telecom operator — which also happens to be the largest pay-TV company — and a massive producer of entertainment content.
In ECON110, one of the topics we cover is media economics. The economics of media companies is of interest because it illustrates a bunch of economic concepts, and because their interaction leads to some seemingly counterintuitive real-world outcomes.

For instance, media content (e.g. movies, television shows, music albums) is subject to substantial economies of scale - the average production cost per consumer of media content falls dramatically as you provide the content to more consumers. This is because the cost of producing content is relatively high, while the cost of distributing that content (especially in the digital age) is extremely low. Large economies of scale (in distribution) tend to favour large media companies over small media companies, since the large media company can distribute to a larger audience at lower cost per-audience-member. In other words, each item of media content gives rise to a natural monopoly.

However, consumers demand a variety of media content, and variety is difficult (and costly) to produce. Every different item of content requires additional scarce inputs (e.g. quality actors, sets, scripts, etc.), and the more novel the item of content the more expensive it will generally be to create. This leads to what is termed diseconomies of scope - the more different items of content that a media company produces, the higher their average costs. Diseconomies of scope (in production) tend to favour smaller media companies that focus on specific content niches.
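A stylised numerical example of that tension (all figures invented): say an item of content costs $10 million to make and one cent per consumer to distribute, while the fixed cost per item climbs as a company stretches across more distinct niches.

```python
# Economies of scale: average cost per consumer falls with audience size
def avg_cost_per_consumer(fixed_cost, audience, dist_cost=0.01):
    return fixed_cost / audience + dist_cost

print(avg_cost_per_consumer(10_000_000, 100_000))     # $100.01 per consumer
print(avg_cost_per_consumer(10_000_000, 10_000_000))  # $1.01 per consumer

# Diseconomies of scope: producing k distinct items costs more than k times
# one item (the exponent 1.2 is an invented way of making cost superadditive)
def total_content_cost(k, cost_per_item=10_000_000, scope_penalty=1.2):
    return cost_per_item * k ** scope_penalty

one_item = total_content_cost(1)
five_items = total_content_cost(5)
print(five_items / 5 > one_item)  # True: average cost per item rises with variety
```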

So, the combination of these (economies of scale and diseconomies of scope) leads to an industry that is characterised by several large media companies producing 'mainstream' content (and usually formed by mergers of previously smaller media companies), as well as many smaller niche providers. If we focused only on the economies of scale, we would be left wondering why the smaller providers continue to survive alongside their larger rivals. It also explains why the offerings of the large media companies are often pretty bland when compared to the offerings of the smaller players - producing something that isn't bland is too costly.

Which brings me to the AT&T-Time Warner merger. This introduces a new element into play. Time Warner is a content provider, and a distributor through traditional media channels (television, etc.). AT&T is not a media company (at least, not yet), but would greatly enhance the potential distribution of Time Warner content. That in itself isn't problematic, and would simply follow previous media mergers focusing on increasing the gains from economies of scale.

The main problem is that AT&T is a gateway to consumers (and a gateway from consumers to content), and that means that the merged entity could significantly reduce competition in the media market. Media consumers value variety (as I noted above), but this merger might make it more difficult (or costly) for AT&T consumers to access that variety. The WaPo article linked above notes:
Here's where it starts to get really interesting. AT&T could charge other companies for the rights to air, say, "Inception" on their networks, or for the use of the Superman brand. Left unchecked, AT&T could abuse this power and force other Web companies, other cable companies, other content companies or even consumers to accept terms they otherwise would never agree to.
One of the things that AT&T might also do is make it costlier for their subscribers to access other content. Of course, they wouldn't frame it that way - they would instead offer a discount on accessing Time Warner content (which effectively makes other content more expensive and shifts consumers towards the Time Warner content).

What happens next? Federal regulators are looking into the deal, and depending on who you read, President-Elect Trump will either block the deal (which markets are anticipating and which he noted during the campaign) or not (based on the makeup of staff within the administration). From the consumers' perspective, let's hope that Trump follows through (this may be the only Trump campaign promise I would make that statement for!).

Tuesday, 8 November 2016

Voting paradoxes and Arrow's impossibility theorem

In ECON110, we don't have nearly enough time to cover the theory of public choice as well as I would like. We do manage to cover some of the basics of the Condorcet Paradox and Arrow's impossibility theorem. Alex Tabarrok at Marginal Revolution recently pointed to this video by Paul Stepahin, which looks at several voting paradoxes. Given the events about to unfold in the U.S., it seems timely. Enjoy!
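For anyone who hasn't met the Condorcet Paradox before, the smallest possible example involves three voters whose individual rankings are each perfectly rational, but whose majority preferences cycle. A quick sketch:

```python
from itertools import combinations

# Three voters' preference rankings (most preferred first)
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

# Pairwise majority contests: x beats y if most voters rank x above y
for x, y in combinations("ABC", 2):
    x_wins = sum(v.index(x) < v.index(y) for v in voters)
    winner, loser = (x, y) if x_wins >= 2 else (y, x)
    print(f"{winner} beats {loser} "
          f"({max(x_wins, 3 - x_wins)}-{min(x_wins, 3 - x_wins)})")
# A beats B (2-1), B beats C (2-1), and C beats A (2-1): a cycle, so
# majority preferences are intransitive and there is no clear winner.
```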



[HT: Marginal Revolution]

Monday, 7 November 2016

Book review: Who Gets What - and Why

Alvin Roth won the Nobel Prize in Economics in 2012 (along with Lloyd Shapley) "for the theory of stable allocations and the practice of market design". So, I was glad to finally have time to read Roth's 2015 book, "Who Gets What - and Why". I wasn't sure what to expect from this book, having previously only read some of Roth's work on kidney exchange, and was pleasantly surprised that the book covers a lot of fundamental work on the design of markets.

In introductory economics, we usually start by focusing on commodity markets - markets where the goods that are being offered for sale are homogeneous (all the same). In commodity markets, the decision about 'who gets what' is entirely determined by price - the consumers that value the good the most will be buyers, and the producers with the lowest costs will be sellers. This book (and indeed, much of Roth's most high-profile research) focuses instead on matching markets - markets where price cannot perform the role of matchmaker by itself. As Roth notes early on in the book:
Matching is economist-speak for how we get the many things we choose in life that also must choose us. You can't just inform Yale University that you're enrolling or Google that you're showing up for work. You also have to be admitted or hired. Neither can Yale or Google dictate who will come to them, any more than one spouse can simply choose another: each also has to be chosen.
The book has many examples, but I especially liked this example of the market for wheat, and how it became commodified:
Every field of wheat can be a little different. For that reason, wheat used to be sold "by sample" - that is, buyers would take a sample of the wheat and evaluate it before making an offer to buy. It was a cumbersome process, and it often involved buyers and sellers who had successfully transacted in the past maintaining a relationship with one another. Price alone didn't clear the market, and participants cared whom they were dealing with; it was at least in part a matching market.
Enter the Chicago Board of Trade, founded in 1848 and sitting at the terminus of all those boxcars full of grain arriving in Chicago from the farms of the Great Plains.
The Chicago Board of Trade made wheat into a commodity by classifying it on the basis of its quality (number 1 being the best) and type (winter or spring, hard or soft, red or white). This meant that the railroads could mix wheat of the same grade and type instead of keeping each farmer's crop segregated during shipping. It also meant that over time, buyers would learn to rely on the grading system and buy their wheat without having to inspect it first and to know whom they were buying it from.
So where once there was a matching market in which each buyer had to know the farmer and sample his crop, today there are commodity markets in wheat...
Of course, not all markets can easily be commodified, which means that many markets remain matching markets. And in a matching market, the design of the market is critical. The book presents many examples of matching markets, from financial markets, to matching graduate doctors to hospitals, to kidney exchange. Roth also discusses signalling, a favourite topic of mine to teach, which is of course important in ensuring that there are high-quality matches in markets.
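The workhorse behind several of those examples (including the match of graduate doctors to hospitals) is the Gale-Shapley deferred acceptance algorithm, part of the stable allocation theory for which Shapley shared the Nobel Prize. Here is a minimal one-to-one version with invented names and preferences (the real residency match uses a more elaborate variant); the resulting match is stable, in the sense that no doctor and hospital would both prefer each other over their assigned partners:

```python
# Gale-Shapley deferred acceptance (one-to-one, doctors proposing).
# Names and preferences are invented purely for illustration.
doctor_prefs = {
    "Dr Smith": ["City Hospital", "Lakeside", "Northcare"],
    "Dr Jones": ["Lakeside", "City Hospital", "Northcare"],
    "Dr Lee":   ["City Hospital", "Northcare", "Lakeside"],
}
hospital_prefs = {
    "City Hospital": ["Dr Jones", "Dr Lee", "Dr Smith"],
    "Lakeside":      ["Dr Smith", "Dr Jones", "Dr Lee"],
    "Northcare":     ["Dr Lee", "Dr Smith", "Dr Jones"],
}

def deferred_acceptance(doctor_prefs, hospital_prefs):
    rank = {h: {d: i for i, d in enumerate(prefs)}
            for h, prefs in hospital_prefs.items()}
    matches = {}                        # hospital -> doctor (provisional)
    next_proposal = {d: 0 for d in doctor_prefs}
    unmatched = list(doctor_prefs)
    while unmatched:
        doctor = unmatched.pop()
        hospital = doctor_prefs[doctor][next_proposal[doctor]]
        next_proposal[doctor] += 1
        current = matches.get(hospital)
        if current is None:
            matches[hospital] = doctor  # hospital holds the offer
        elif rank[hospital][doctor] < rank[hospital][current]:
            matches[hospital] = doctor  # hospital trades up
            unmatched.append(current)   # displaced doctor proposes again
        else:
            unmatched.append(doctor)    # rejected; tries next choice
    return matches

print(deferred_acceptance(doctor_prefs, hospital_prefs))
# {'City Hospital': 'Dr Jones', 'Lakeside': 'Dr Smith', 'Northcare': 'Dr Lee'}
```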

On kidney exchange, it is interesting to read an economist who isn't reflexively pro-market (i.e. arguing that kidneys for transplant should be traded in markets, at market prices). Roth recognises that there may be valid objections against fully monetising some markets (such as kidney exchange). He writes (the emphases are his):
Such concerns about the monetization of transactions seem to fall into three principal classes.
One concern is objectification, the fear that the act of putting a price on certain things - and then buying or selling them - might move them into a class of impersonal objects to which they should not belong. That is, they risk losing their moral value.
Another fear is coercion, that substantial monetary payments might prove coercive - "an offer you can't refuse" - and leave poor people open to exploitation, from which they deserve protection.
A more complex concern is that allowing things such as kidneys to be bought and paid for might start us on a slippery slope toward a less sympathetic society than we would like to live in. The concern, often not clearly articulated, is that monetizing certain transactions might not itself be objectionable but could ultimately cause other changes that we would regret.
Overall, the book is a really good complement to learning about the commodity markets that we look most closely at in introductory economics. Many economists (myself included) probably don't spend nearly enough time considering the design of markets and largely take the role of prices in allocating resources as a given. This is definitely a highly recommended read.

Sunday, 6 November 2016

The Great Walks may be free, but they are not free

My blog's been a bit quiet the last couple of weeks while I've been buried in exam marking. Now that marking is done, I can start to post on some things I had put aside over that time. Starting with the controversy over comments made by Department of Conservation director-general Lou Sanson, reported here:
It may be time to start charging for the use of the country's Great Walks, Department of Conservation director-general Lou Sanson says.
Foreign tourists could pay $100 and New Zealanders $40 to cope with a huge increase in trampers — especially overseas travellers — and their effect on the environment, he suggested.
Sanson said the country's Great Walks brand had "exploded" but this popularity had created some problems...
In March, he took the United States ambassador to the Tongariro Alpine Crossing — a 19.4km one-day trek between the Mangatepopo Valley and Ketetahi Rd in the North Island.
"Every time we stopped we were surrounded by 40 people. That is not my New Zealand. We have got to work this stuff out — these are the real challenges," Sanson told the Queenstown Chamber of Commerce yesterday...
Introducing differential charges on the Great Walks was one potential mechanism to alleviate pressure, Mr Sanson said.
"We have got to think [about that]. I think New Zealand has to have this debate about how we're going to do bed taxes, departure charges — we have got to work our way around this.
"I think a differential charge [is an option] — internationals [pay] $100, we get a 60 per cent discount."
The New Zealand Herald then ran an editorial the next day, entitled "Turnstiles on wilderness is not the answer". The editorial raised some good practical issues with charging a fee for trampers on the Great Walks:
Would rangers be posted to collect cash, or check tickets that would have to be bought in advance? How would they be enforced?
It also raised an important issue about the perception of the service provided by DoC:
A charge changes the way users regard it. The track and its surrounds would cease to be a privilege for which they are grateful, and become something they feel they have paid for.
They will have an idea of the value they expect and rights they believe due for their expense. They may be more likely to leave their rubbish in the park. The costs of removing litter and cleaning camping areas may quickly exceed the revenue collected.
However, the editorial ignored the fundamental issue of providing goods and services for 'free'. If something comes with no explicit monetary cost associated with it, that does not mean that it is free. Economists recognise that there are opportunity costs (because in choosing to do a Great Walk, we are foregoing something else of value we could have done in that time), but this is about more than just opportunity costs.

When a good or service has no monetary cost, there will almost always be excess demand for it - more consumers wanting to take advantage of the service than there is capacity to provide the service. Excess demand can be managed in various ways - one way is to raise the price (as suggested by Sanson). Another is to limit the quantity and use some form of waiting list (as is practised in the health sector). A third alternative is to degrade the quality of the service until demand matches supply (because as the quality of the service degrades, fewer people will want to avail themselves of it).
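To put some (invented) numbers on the excess demand point: suppose a hut on one of the Great Walks has 40 bunks a night, and demand for bunks falls with price. At a price of zero there is a long queue; at the market-clearing price, the price itself does the rationing:

```python
# Invented linear demand for bunks per night: quantity = 200 - 2 * price
capacity = 40

def quantity_demanded(price):
    return max(0, 200 - 2 * price)

print(quantity_demanded(0) - capacity)  # 160: excess demand at a price of zero

# Market-clearing price: 200 - 2p = 40  ->  p = 80
clearing_price = (200 - capacity) / 2
print(clearing_price)                   # $80 a night rations demand to capacity
```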

That last option doesn't sound particularly appealing, but it's the option that Sanson is most against, and it would be the necessary consequence of the laissez-faire approach the Herald editorial advocates. Sanson already notes one way that the quality of the Great Walks is affected, when he says "Every time we stopped we were surrounded by 40 people". If you want to take a Great Walk in order to experience the serene beauty and tranquillity of our natural landscape, the last thing you want is to be constantly mobbed by selfie-taking dickheads. The quality of the experience degrades the more people are on the Great Walks.

Pricing might not be appetising to some, but at least it would manage the demand for the Great Walks. Providing lower prices to locals is, as I have noted previously, an appropriate form of price discrimination that I remain surprised that we don't see more of in New Zealand. Of course, that doesn't negate the practical concerns raised in the Herald editorial. But if we want to maintain the quality of the experience on the Great Walks, this is a conversation that we should be having.