Friday, 30 December 2016

The battle for shelf space: Lewis Road Creamery vs. Fonterra

One of my favourite discussion points in the pricing strategy topic of ECON100 has little to do with pricing at all. It is actually about making the business environment difficult for your competitors, by competing with yourself. The principle is very simple. If you produce a wide variety of similar substitute products and stock them on supermarket shelves, then there is less space left available for competing products (meaning those made by your competitors). By competing with yourself, you ensure that it is more difficult for any other firm to do so, while giving the consumer the impression that there are lots of competing brands. If you doubt this happens, I suggest going to the laundry powder aisle of the supermarket and comparing the backs of the boxes there. I can almost guarantee that most of the apparent variety of brands will be produced by just two (or maybe three) suppliers.

Which brings me to the current bust-up between Lewis Road Creamery and Fonterra, first reported by Liam Dann in the New Zealand Herald on 12 December (see also here and here, where the latest news is that LRC will commence legal action against Fonterra, but presumably that has more to do with packaging and less to do with shelf space). The first of those articles has the detail:
Lewis Road Creamery founder Peter Cullinane is accusing Fonterra of negotiating a "greedy" deal with supermarkets which would limit the ability of smaller dairy brands to get space in the chiller.
In an open letter to Fonterra chief executive Theo Spierings, Cullinane questions whether Fonterra is playing fair.
He said he understood that "Fonterra is looking to use its market power to introduce an exclusionary deal with supermarkets in the North Island that would all but remove non-Fonterra brands".
The deal would give Fonterra's white milk brands 95 per cent of the chiller space, he says.
This is something we should not be surprised about at all, knowing that reducing the shelf space for competing products is a viable strategy. The supermarkets gave the following responses:
Yesterday a representative for the Countdown supermarket chain said it was not involved in any deal with Fonterra.
"Countdown does not have any deal with Fonterra of this nature, and would not enter into any agreement like this," said Countdown general manager merchandise, Chris Fisher. "We treat all of our suppliers fairly and shelf space is determined based on the merit and popularity of each product," Fisher said.
A spokesperson for Foodstuffs North Island said the company had "an agreement in place with Fonterra, as we do with other suppliers including Lewis Road Creamery, to supply a range of dairy products, the terms of our supplier agreements are confidential".
I'd be more surprised if the supermarkets weren't negotiating deals with suppliers where a single supplier provides multiple competing products, especially if they involve side-payments from the suppliers. The fewer suppliers the supermarket chains have to deal with, the simpler (and therefore less costly) their logistics will be. Everybody wins. Well, everybody but the consumer who wants fair competition that leads to a variety of products and low prices.

Ultimately though, there may be a question for the Commerce Commission as to whether these agreements over-step the mark in terms of reducing competition. It will be interesting to see how this story progresses in the new year.

Thursday, 29 December 2016

The most forgotten Nobel Prize winners in economics are...

A new paper (open access) in the latest issue of the journal Applied Economics Letters by Aloys Prinz (University of Munster) caught my eye. In the paper, Prinz develops a measure of the memorability of Nobel Prize winners in economics:
In this article, fame, as measured by the number of Google hits at a certain time t, is supposed to be fading with time. The importance of a Nobel Prize laureate seems to be the higher, the more she or he is remembered years after receiving the Prize.
The paper focuses on the most memorable Nobel Prize winners (in 2012) who, unsurprisingly, are Milton Friedman, Paul Krugman, and Joseph Stiglitz. However, taking Prinz's Table 1 and re-ordering it in reverse order of memorability gives us the most forgotten Nobel Prize winners, being (in order of forgottenness) Thomas Sargent, Dale Mortensen, and Christopher Pissarides:


Sargent is surprising given that, in terms of achievement (as measured in this 2013 paper by Claes and De Ceuster; ungated version here), he is tenth overall. It is sad to see the low memorability (or high forgottenness) ranking of Elinor Ostrom, the only woman ever to win the award (again, in spite of a high achievement ranking). I imagine there have been a few more searches for Schelling and Selten in recent months, which would push them up the memorability (and down the forgottenness) rankings. Finally, it would be interesting (to me at least) to know how some of those on the short list for next year's award would measure up.

Wednesday, 28 December 2016

The behavioural economics of Lotto

Lotto (and similar lotteries in other countries) presents a problem for economists' assumption of rational behaviour. A rational, risk-averse person should never choose to play Lotto - it has a negative expected value (playing Lotto hundreds of times will lose you money, on average). This point has been made many times (see here for one example from Stats Chat - a simple search on Stats Chat will find you dozens more posts relating to Lotto).
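To make the expected value point concrete, here is a quick back-of-the-envelope calculation in Python. The prize structure below is made up for illustration (though the top-prize odds are roughly the one-in-3.8-million chance of matching six numbers from forty), but the conclusion doesn't depend on the details:

```python
# Illustrative only: the prize amounts and odds below are invented for the
# sake of the example, not the actual NZ Lotto prize structure.

ticket_price = 1.50  # assumed price per line

# (probability of winning, prize) pairs for a hypothetical lottery
prizes = [
    (1 / 3_838_380, 1_000_000),  # 'first division' jackpot share
    (1 / 145_000,   20_000),
    (1 / 3_000,     50),
    (1 / 40,        5),
]

expected_winnings = sum(p * prize for p, prize in prizes)
expected_value = expected_winnings - ticket_price

print(f"Expected winnings per line: ${expected_winnings:.2f}")
print(f"Expected value of a line:   ${expected_value:.2f}")
# The expected value comes out negative: on average, each line loses money,
# which is why a rational, risk-averse player would not buy one.
```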

So, if a rational person would never play Lotto, there must be some other explanation for that behaviour. I was therefore happy to read this article in The Conversation by Ryan Anderson and David Mitchell (both James Cook University), which uses some behavioural economics (among other explanations) to explain why people play Lotto. One reason relates to availability bias:
The availability bias/heuristic relates to the idea that people judge the likelihood of something based roughly on how readily examples of it come to mind.
For example, you can probably think of news stories about when a shark has bitten a swimmer. One reason is this kind of a story is sensational, and will likely be highly reported. How often have you seen the headline: “No sharks at the beach today”?
Because you can easily bring to mind examples of shark attacks, you might be tempted to conclude shark attacks are far more common than they actually are. In fact, the chances of being attacked by a shark are somewhere in the neighbourhood of one in 12 million.
You hear and read stories about lottery winners all the time. Jackpot winners always make the news, but the battlers who have been playing for 20 years without winning are relegated to obscurity.
Based on this, it’s at least reasonable to think “jackpotting” can’t be that rare. The net effect is that winning seems possible.
Another related to the sunk cost fallacy:
In economics, a sunk cost is any previous expense that can’t be recovered – like a previous business expenditure on software, education, or advertising. Because this cost has already occurred and can’t be recovered, it should no longer be factored into future decisions. But this is seldom the case.
The sunk-cost fallacy occurs when you make a decision based on the time and resources you have already committed. Research suggests adults are more likely to fall victim to the sunk-cost fallacy than either children or lower-order animals.
In lotto, people will often persevere with what they sometimes know is economically irrational – like buying more lotto tickets – simply because they have already invested so much.
People are susceptible to the sunk cost fallacy because of loss aversion and mental accounting. Loss aversion simply means that we value losses more than we value equivalent gains - we prefer to avoid losses more than we seek to capture gains. This would seem to suggest we should avoid the losses that are inherent in Lotto. However, mental accounting (as the name implies) suggests that we keep 'mental accounts', such as an account for Lotto, and we like to keep those accounts in positive balance. Having played Lotto once, we will continue to play, because if we stopped while we were behind we would have to accept the loss from that mental account. As long as we keep playing, there is the chance that we win and the mental account turns positive. Note that this is an explanation not only for much gambling behaviour, but also for why we stay in jobs we hate, relationships we don't enjoy, investments that are not paying off, and so on. We don't want to acknowledge the loss.

Another behavioural economics explanation that Anderson and Mitchell referenced was our misunderstanding of small probabilities:
Gambling studies professor Robert Williams suggests that although humans have evolved some appreciation for numbers, we don’t really understand big numbers.
We deal with amounts like six, 24 and 120 all the time, but throughout history it’s never really been important to measure out 18 million of something, or count 50 million of something else.
Odds of one in 200 million don’t seem that different to odds of, say, one in 3 million. In both cases success is really unlikely.
Give someone a choice between odds of one in three and one in 200, however, and the difference is really obvious. It’s certainly not that people can’t grasp really big numbers, but that they don’t have much meaning until we stop and think about them.
It's actually worse than Anderson and Mitchell suggest. Not only do people not understand probabilities, we also have a tendency to overweight very unlikely events, treating them as more likely than they really are. This is one of the cornerstones of prospect theory, the theory developed by Nobel Prize-winner Daniel Kahneman, along with Amos Tversky. So, if we overweight small probabilities, we overestimate our chances of winning Lotto, and this makes us more likely to play (compared with what we would do if we understood the real probability of winning).
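Prospect theory formalises this with a probability weighting function. Here is a small sketch using the one-parameter weighting function from Tversky and Kahneman (1992), with their estimated parameter for gains (around 0.61); the lottery odds used are simply the chance of picking six numbers from forty:

```python
# A minimal sketch of probability weighting from cumulative prospect theory.
# The functional form and gamma = 0.61 follow Tversky & Kahneman (1992) for
# gains; treat the exact numbers as illustrative rather than definitive.

def weight(p, gamma=0.61):
    """Decision weight attached to an objective probability p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

p_win = 1 / 3_838_380  # odds of picking 6 numbers from 40
w = weight(p_win)
print(f"Objective probability: {p_win:.2e}")
print(f"Decision weight:       {w:.2e}")
print(f"Overweighting factor:  {w / p_win:.0f}x")
# A tiny objective probability gets a decision weight hundreds of times
# larger, which is consistent with people over-valuing a Lotto ticket.
```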

Finally, Anderson and Mitchell also made a point that I have made before, that people facing hard times have less to lose and are more willing to take a punt on Lotto. This isn't behavioural economics at work though - it is people becoming less risk averse. Either way, Lotto is more proof (if any were needed) that our behavioural biases are actively working against us making rational decisions, and the assumption of rationality is dead.

Tuesday, 27 December 2016

The supply-and-demand puzzle that is the Boxing Day sale

In basic supply-and-demand models, if there is an increase in demand the equilibrium price rises (ceteris paribus, which means 'all else being equal'). However, Boxing Day is the biggest shopping day of the year - so we know demand is high. Why are prices so low, then? Surely they should be higher?

Helpfully, the New York Times had a good article last month about the price of turkeys in the lead-up to Thanksgiving in the U.S., which illustrates a similar point:
According to government data, frozen whole-turkey prices drop significantly every November; over the last decade, retail prices have fallen an average of 9 percent between October and November.
That trend seems to defy Econ 101. Think back to those simple supply-and-demand curves from introductory micro, and you’ll probably remember that when the demand curve shifts outward, prices should rise. That’s why Major League Baseball tickets get most expensive during the World Series — games that (theoretically, anyway) many more people want to see. Similarly, airline tickets spike around Christmas...
The most intuitive and popular explanation for a high-demand price dip is that retailers are selling “loss leaders.” Stores advertise very low prices — sometimes even lower than they paid their wholesalers — for big-ticket, attention-grabbing products in order to get people in the door, in the hope that they buy lots of other stuff. You might get your turkey for a song, but then you also buy potatoes, cranberries and pies at the same supermarket — all at regular (or higher) markups. Likewise, Target offers a big discount on select TVs on Friday, which will ideally entice shoppers to come in and buy clothes, gifts and other Christmas knickknacks on that frenzy-fueled trip.
That is the supply-side explanation of what’s going on. But plenty of economists disagree, and argue that it’s actually demand-side forces — changing consumer preferences — that drive these price drops...
Consumers might get more price-sensitive during periods of peak demand and do more comparison-shopping, so stores have to drop their prices if they want to capture sales. Perhaps, during the holidays, the composition of consumers changes; maybe only rich people or people who really love turkey buy it in July, but just about everybody — including lower-income, price sensitive shoppers — buys it in November. Or maybe everyone becomes more price-sensitive in November because they’re cooking for a lot of other people, not just their nuclear families.
Count me among the economists with a preference for demand-side explanations, especially for Boxing Day sales. The discounts are so widespread that loss-leading seems extraordinarily unlikely as an explanation. To me, the most likely explanation for the low prices on Boxing Day is that consumers have more price elastic demand then.

To see why that leads to lower prices, consider the diagram below (which is the profit-maximising diagram for a firm with market power - note that this is a constant-cost firm to make the diagram a bit easier). The black demand curve (D0) is associated with the black marginal revenue curve (MR0). With this demand curve the firm profit-maximises where marginal revenue (MR0) meets marginal cost (MC), at a quantity of Q*. The profit-maximising price (the price that leads to exactly Q* being demanded) is P0. Compare that with the red demand curve (D1), which is associated with the red marginal revenue curve (MR1). The red demand curve is more price elastic (flatter). With this demand curve the firm profit-maximises where marginal revenue (MR1) meets marginal cost (MC), which is still at a quantity of Q*. The profit-maximising price (the price that leads to exactly Q* being demanded) is P1. Note that when demand is more elastic (for a product with the same cost), the profit-maximising price will be lower (the mark-up over marginal cost is also lower).
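For a rough numerical version of the same argument, we can use the standard mark-up rule (the Lerner condition) for a profit-maximising firm with market power, where (P - MC)/P = -1/elasticity. The marginal cost and elasticity values below are made up, but the pattern is general:

```python
# A rough numerical version of the diagram's argument, using the Lerner
# condition for a profit-maximising firm with market power:
#     (P - MC) / P = -1 / elasticity
# The marginal cost and elasticities below are invented for illustration.

def profit_max_price(mc, elasticity):
    """Price implied by the Lerner condition, for elasticity below -1."""
    return mc / (1 + 1 / elasticity)

mc = 50.0                      # constant marginal cost, as in the diagram
normal_day_elasticity = -1.5   # relatively inelastic demand
boxing_day_elasticity = -4.0   # more price-elastic demand

print(profit_max_price(mc, normal_day_elasticity))  # 150.0: a large markup
print(profit_max_price(mc, boxing_day_elasticity))  # ~66.7: a much smaller markup
# Same marginal cost, more elastic demand on Boxing Day, lower
# profit-maximising price -- matching the P1 < P0 result in the diagram.
```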


Why would demand be more elastic on Boxing Day? Consider the factors associated with more elastic demand. Are there more, or closer, substitute goods available? Arguable. Do consumers have longer time horizons? Perhaps (especially if you compare with shopping the day before Christmas). Is there a higher significance of price in the total cost to the consumer? Doesn't seem to fit. That leaves the possibility that a higher proportion of income is spent on the good. [*] If consumers have spent a large proportion of their income on gifts and food for Christmas, that leaves a much smaller budget for shopping on Boxing Day, which should lead to more elastic demand. That is, consumers with less money left over for shopping on Boxing Day will be more price sensitive, and retailers respond by lowering their prices to attract those price-sensitive shoppers. [**]

Finally, the NYT article had an interesting comparison between frozen turkeys on Thanksgiving (high demand and low price) and roses on Valentine's Day (high demand and high price), which is worth sharing:
There are a few possible reasons why market forces are different for roses and frozen turkeys on their respective holidays. For one, the loss-leader strategy really only works if you’re a multiproduct retailer, says Chevalier. Florists sell only flowers; they’re not willing to take a loss on the one thing they sell in the hope that you’ll buy a bunch of other stuff, since you’re not likely to buy anything else.
More important, roses — like airline seats or World Series tickets — are what economists refer to as “supply inelastic.” It’s costly to ramp up rose production in time for peak demand, since the roses must all be picked (and for the most part, flown in from Colombia and Ecuador) in the single week preceding Valentine’s Day.
Which is also why the biggest discounts on Boxing Day are on items that retailers can easily hold back for the day, like electronics. Which I admit I took full advantage of!

*****

[*] Alert readers and economics students will notice that I have ignored two other factors that are associated with more elastic demand: (1) narrower definition of the market; and (2) normal goods (compared with inferior goods). Both of these factors compare different goods, so don't make sense when comparing elasticity for the same good on Boxing Day and not-on-Boxing-Day.

[**] Another alternative is that the composition of shoppers is different on Boxing Day from other days, with more low-income (and hence, price-sensitive) shoppers shopping on that day. Unless I see some evidence of this, I'm going to discount it as an explanation.

Wednesday, 21 December 2016

Don't free that lobster!

Last month, the BBC reported that a well-meaning vegan bought a 10.4 kg giant lobster in order to set it free. While on the surface this act of inter-species kindness seems like a good idea, if it catches on in a big way and many people do likewise, it is easy to envisage it leading to unintended consequences. Here's how:

Say that the market for lobster (shown in the diagram below) is initially in equilibrium with the price P0 and the quantity of lobster taken from the ocean is Q0. Now, many well-meaning vegans decide to buy lobsters in order to free them. Because there are now more buyers in the market for lobster, this simply increases the demand for lobsters. Increased demand for lobsters (from D0 to D1) increases the price of lobsters (from P0 to P1), and importantly increases the quantity of lobsters caught (from Q0 to Q1). So setting lobsters free actually creates incentives that lead to an increase in the quantity of lobsters caught - the opposite of what was intended!


Of course, this is very similar to the problems of slave redemption in the Sudan that I have blogged about before. There is one important difference though, not accounted for in the above diagram. In many countries, the lobster catch is governed by a transferable quota system (which I've written on here and here). Even if demand for lobster increases, the number of lobsters caught cannot rise because fishermen are limited in the quantity that they are allowed to catch. In this case (shown in the diagram below), the quantity of lobster caught does not increase (it stays at Q0), but the increase in demand instead leads to a substantial increase in price (to P2 instead of P1). So, the actions of the vegans cause everyone's lobster meals to be significantly more expensive. Nice.
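To put some rough numbers on both diagrams, here is a toy linear supply-and-demand sketch (all parameter values invented). Without a quota, the demand shift raises both price and quantity; with a binding quota, only the price moves:

```python
# Toy linear supply and demand for lobster; all parameter values are invented.
# Demand:  Qd = a - b*P     Supply:  Qs = c + d*P

def equilibrium(a, b, c, d):
    """Price and quantity where Qd = Qs."""
    p = (a - c) / (b + d)
    return p, a - b * p

a0, b, c, d = 100, 2.0, 10, 1.0       # initial demand and supply
p0, q0 = equilibrium(a0, b, c, d)      # baseline: P0, Q0

a1 = 130                               # vegans enter the market: demand shifts out
p1, q1 = equilibrium(a1, b, c, d)      # no quota: both price and quantity rise

quota = q0                             # binding transferable quota at the old catch
p2 = (a1 - quota) / b                  # price read off the new demand curve at Q0

print(f"No quota:   P {p0:.1f} -> {p1:.1f}, Q {q0:.1f} -> {q1:.1f}")
print(f"With quota: P {p0:.1f} -> {p2:.1f}, Q stays at {quota:.1f}")
# Without a quota, freeing lobsters raises the catch (Q0 -> Q1).
# With a binding quota, the catch is unchanged but the price jumps further (P2 > P1).
```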



Tuesday, 20 December 2016

Thomas Schelling, 1921-2016

Among the several things I missed during my busy last week was the sad passing of Thomas Schelling, a Nobel Prize winner and one of the most important theorists and early writers in game theory. My ECON100 class (and past ECON308/408 Managerial Economics students) will recognise Schelling's name from the Schelling Point, the most likely solution to a coordination game (where there is more than one Nash equilibrium) in game theory. I also very recently (within the last couple of weeks) discussed Schelling's work on polarisation, in particular his model of segregation, with one of my new PhD students who will be working on a microsimulation model of ethnic communities in Auckland (more on that in a far future post - she is only just getting started on that work).

The Washington Post and the New York Times have excellent summaries of Schelling's life and work. I particularly like these bits from the New York Times (which illustrate both the breadth of his work and its real-world implications):
Among other counterintuitive propositions he put forth, Professor Schelling suggested that one side in a negotiation can strengthen its position by narrowing its options, using as an example a driver in a game of chicken who rips the steering wheel from the steering column and brandishes it so his opponent can see that he no longer controls the car. He also argued that uncertain retaliation is more credible and more efficient than certain retaliation...
In “Meteors, Mischief and Wars,” published in Bulletin of the Atomic Scientists in 1960, Professor Schelling looked at the possibility of an accidental nuclear exchange between the United States and the Soviet Union and reviewed three novels that imagined such an event. The director Stanley Kubrick read his comments on the novel “Red Alert” and adapted the book for “Dr. Strangelove,” on which Professor Schelling was a consultant.
And on the model of segregation I noted above:
Expanding on the work of Morton Grodzins, a political scientist at the University of Chicago who used the term “tip point” to describe the crucial moment when white fears become white flight, Mr. Schelling offered a simple diagram, almost like a game board, to show how mixed urban neighborhoods could quickly become entirely black, even when white residents expressed only a slight preference for living among members of their own race.
His papers on the subject, and his book “Micromotives and Macrobehavior” (1978), achieved wider currency when his ideas were popularized by Malcolm Gladwell in his best-selling book “The Tipping Point” (2000).
You can expect a book review of Micromotives and Macrobehavior sometime in the new year - it has been on my 'to-be-read' list for some time.
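In the meantime, for anyone curious about how little individual preference it takes to generate a lot of segregation in a Schelling-style model, here is a minimal one-dimensional sketch in Python. It is a stylised illustration of the mechanism, not Schelling's exact specification:

```python
# A minimal one-dimensional sketch in the spirit of Schelling's segregation
# model (not his exact specification): agents of two types live on a line,
# an agent is 'unhappy' if fewer than 30% of its nearest neighbours share its
# type, and unhappy agents move to a random empty cell.

import random

random.seed(1)
N, EMPTY_SHARE, TOLERANCE, RADIUS = 200, 0.1, 0.3, 4

cells = ['A', 'B'] * int(N * (1 - EMPTY_SHARE) / 2)
cells += [None] * (N - len(cells))
random.shuffle(cells)

def neighbours(i):
    return [cells[(i + k) % N] for k in range(-RADIUS, RADIUS + 1) if k != 0]

def unhappy(i):
    me = cells[i]
    if me is None:
        return False
    occupied = [n for n in neighbours(i) if n is not None]
    return len(occupied) > 0 and sum(n == me for n in occupied) / len(occupied) < TOLERANCE

def similarity():
    """Average share of same-type occupied neighbours (a crude segregation index)."""
    scores = []
    for i, me in enumerate(cells):
        if me is None:
            continue
        occupied = [n for n in neighbours(i) if n is not None]
        if occupied:
            scores.append(sum(n == me for n in occupied) / len(occupied))
    return sum(scores) / len(scores)

print(f"Initial similarity: {similarity():.2f}")
for _ in range(50):                      # rounds of moving
    for i in range(N):
        if unhappy(i):
            j = random.choice([k for k, cell in enumerate(cells) if cell is None])
            cells[j], cells[i] = cells[i], None
print(f"Final similarity:   {similarity():.2f}")
# Even with a mild preference (30%), the final similarity typically rises well
# above the initial ~50% mix, illustrating how small individual preferences
# can produce substantial segregation in aggregate.
```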

Sunday, 18 December 2016

Book review: Circus Maximus

I've been a bit quiet while trying to finish a bunch of things before year's end, but I did manage to finish reading Andrew Zimbalist's book "Circus Maximus: The Economic Gamble behind Hosting the Olympics and the World Cup". I mentioned this book in a post about the lack of economic impact of stadiums and arenas earlier this year.

When I think about the Olympic Games or the FIFA World Cup, and see the crazy investments that cities (or countries in the case of the World Cup) are prepared to make not only for the event once it has been awarded to them, but also in simply bidding for the event, I immediately think about the winner's curse. Say that the potential hosts don't know for sure what the benefit of hosting the event is, but must outbid all other potential hosts in order to be awarded the event. The 'winning' host will be the city (or country) which had the most wildly optimistic view of the benefits of hosting, since they will be the city (or country) willing to bid the most. And it is very likely that that city (or country) will have bid more than the real benefits of hosting are worth.
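If you want to see the winner's curse in action, here is a toy simulation with invented numbers: every potential host faces the same true benefit of hosting, each only observes a noisy estimate of it, and the event goes to the most optimistic bidder:

```python
# A toy simulation of the winner's curse in a common-value bidding contest.
# All numbers are invented: the 'true' benefit of hosting is the same for
# every bidder, but each city only observes a noisy estimate of it.

import random

random.seed(42)
TRUE_BENEFIT = 5.0          # billions, common to all potential hosts
N_BIDDERS = 8
NOISE_SD = 2.0              # how wildly the estimates can vary
N_EVENTS = 10_000

winners_estimates = []
for _ in range(N_EVENTS):
    estimates = [random.gauss(TRUE_BENEFIT, NOISE_SD) for _ in range(N_BIDDERS)]
    # The event goes to whoever values it most -- the most optimistic estimate
    winners_estimates.append(max(estimates))

avg_winner_estimate = sum(winners_estimates) / N_EVENTS
print(f"True benefit of hosting:          {TRUE_BENEFIT:.2f}")
print(f"Average estimate of the 'winner': {avg_winner_estimate:.2f}")
# The winning bidder's estimate is systematically above the true benefit --
# if cities bid up to their estimates, the winner tends to overpay.
```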

Zimbalist's book makes that point much more forcefully than I have above, and much more besides. Barcelona is the poster-child of successful Olympic Games. In one chapter, Zimbalist contrasts Barcelona's experience of leveraging its hosting to develop the city into a future powerhouse of tourism, with Sochi's experience of hosting the Winter Olympics and the largely wasteful expenditures and investment in that city. This wasteful investment arises because one of the key arguments made in favour of hosting these events is the expected boost to tourism, both short-term and long-term. The opportunity cost of this spending on tourism promotion is potentially very great, and Zimbalist astutely remarks at one point:
Each prospective host would do well to ask the question, if we have $10 billion or $20 billion or more to spend on promoting tourism, what is the most effective use of those resources?
How much more good could be done with that money, in terms of tourism promotion, through more conventional means?

Throughout the book, Zimbalist does a good job of outlining the evidence from the academic literature on the cost-benefit calculus of hosting these large events. As one would expect, the winner's curse is broadly evident. Zimbalist also does an excellent job of looking beyond the academic evidence, and bringing together stories that illustrate some of the very real negative impacts of hosting. I hadn't followed the case of Rio de Janeiro and the impacts on the favelas of hosting both the Olympics and the World Cup, but some of the stories are quite horrific. London also comes in for some criticism for its inability to leverage the Olympics for the betterment of people living in East London, the area immediately surrounding the main Olympic venues.

The first paragraph of the concluding chapter is a useful summary to finish this review on. Zimbalist writes:
The perennial claims that hosting the Olympics or the World Cup is an engine of economic development find little corroboration in independent studies. In the short run, the increasingly massive costs of hosting cannot come close to being matched by the modest revenues that are brought in by the games. The payoff, if there is one, must be realized in the long run. But even the legacy return is at best dubious. Much of the alleged legacy comes in the form of the qualitative gains, and the rest comes over very long periods of time, difficult to trace back to the several-week period of the games or the prior construction. But more often than not, the main legacy consists of white elephants that cost billions to build and millions annually to maintain, along with mountains of debt that must be paid back over ten to thirty years.
If there is one thing missing from this book, it is the consideration of smaller events, such as the Commonwealth Games (the Delhi games are mentioned a couple of times only) and the Rugby World Cup. These smaller events require smaller outlays from the hosts, but also have much smaller potential gains. Nevertheless, this is an excellent book both for those who are familiar with the literature on economic impact studies, and those who are not, along with anyone with a specific interest in the Olympics or World Cup.

Monday, 12 December 2016

Video: Economist's Christmas

Tyler Cowen and Alex Tabarrok of Marginal Revolution have been providing us with video debates on various issues this year (I particularly liked the debate on "Is education signalling or skill-building?"). The latest (though I notice it isn't on the same Econ Duel playlist) is on the economics of Christmas:


Enjoy!

Sunday, 11 December 2016

More contingent valuation debates

Back in March I highlighted a series of papers debating the validity (or not) of contingent valuation studies. To recap:
One non-market valuation technique that we discuss in ECON110 is the contingent valuation method (CVM) - a survey-based stated preference approach. We call it stated preference because we essentially ask people the maximum amount they would be willing to pay for the good or service (so they state their preference), or the minimum amount they would be willing to accept in exchange for foregoing the good or service. This differs from a revealed preference approach, where you look at the actual behaviour of people to derive their implied willingness-to-pay or willingness-to-accept.
As I've noted before, I've used CVM in a number of my past research projects, including this one on landmine clearance in Thailand (ungated earlier version here), this one on landmines in Cambodia (ungated earlier version here), and a still incomplete paper on estimating demand for a hypothetical HIV vaccine in China (which I presented as a poster at this conference).
One of the issues highlighted in that earlier debate had to do with scope problems:
Scope problems arise when you think about a good that is made up of component parts. If you ask people how much they are willing to pay for Good A and how much they are willing to pay for Good B, the sum of those two WTP values often turns out to be much more than what people would tell you they are willing to pay for Good A and Good B together. This issue is one I encountered early in my research career, in joint work with Ian Bateman and Andreas Tsoumas (ungated earlier version here).
Which brings me to two new papers published in the journal Ecological Economics. But first, let's back up a little bit. Back in 2009, David Chapman (Stratus Consulting, and lately the US Forest Service) and co-authors wrote this report estimating people's willingness-to-pay to clean up Oklahoma’s Illinois River System and Tenkiller Lake. In 2012, William Desvousges and Kristy Mathews (both consultants), and Kenneth Train (University of California, Berkeley) wrote a pretty scathing review of scope tests in contingent valuation studies (published in Ecological Economics, ungated here), and the Chapman et al. report was one that was singled out. You may remember Desvousges, Mathews, and Train from the CVM debate I discussed in my earlier post.

Four years later, Chapman et al. have responded to the Desvousges et al. paper (sorry, I don't see an ungated version online). In their reply, Chapman et al. offer a quite different interpretation of their methods, one that on the surface appears to validate their results. Here's their conclusion:
In summary, DMT argue that Chapman et al.'s scope difference is inadequate because it fails to satisfy theoretical tests related to discounting and to diminishing marginal utility and substitution. Also, according to them, our scope difference is too small. Once the fundamental flaws in their interpretation of the scenarios are corrected, none of these arguments hold. The upshot is that Chapman et al. must be assigned to the long list of studies cited by DMT where their tests of adequacy cannot be applied.
However, Desvousges et al. respond (also no ungated version available). The response is only two pages, but it leaves the matter pretty much resolved (I think). The new interpretation of the methods employed by Chapman et al. has raised some serious questions about the overall validity of the study. Here's what Desvousges et al. say:
...this statement indicates that respondents were given insufficient information to evaluate the benefits of the program relative to the bid amount (the cost.) The authors argue that the value of the program depends on how the environmental services changed over time, and yet the survey did not provide this information to the respondent. So the authors violated a fundamental requirement of CV studies, namely, that the program must be described in sufficient detail to allow the respondent to evaluate its benefits relative to costs. The authors have jumped – to put it colloquially – from the frying pan into the fire: the argument that they use to deflect our criticism about inadequate response to scope creates an even larger problem for their study, that respondents were not given the information needed to evaluate the program.
All of which suggests that, when you are drawn into defending the quality of your work, you should be very careful that you don't end up simply digging a bigger hole for your research to be buried in.

Friday, 9 December 2016

The debate over poached elephants

No, this isn't a post about the best method for cooking endangered species. Back in June, I wrote a post about this NBER Working Paper by Solomon Hsiang (UC Berkeley) and Nitin Sekar (Princeton). In the paper, Hsiang and Sekar demonstrated that a one-time legal sale of stockpiled ivory that happened in 2008 led to a significant increase in elephant poaching.

Since that working paper was released, there has been quite a debate over the validity of the results. First, Quy-Toan Do (World Bank), Andrei Levchenko (University of Michigan) and Lin Ma (National University of Singapore) wrote this blog post on the World Bank website, where they showed that Hsiang and Sekar's results were present only in a subsample of small sites (i.e. sites where there were few elephant carcasses). Do et al. concluded:
In this short discussion, we argued that the results reported by Hsiang and Sekar are confined to only the smallest sites surveyed by the MIKE programme. Aggregate poaching therefore did not experience a step increase in 2008 as argued by the authors. Our data on raw ivory prices in both Africa and China further support the conclusion that the 2008 one-off sale actually had no discernible effects on ivory markets. Rather, we postulate that small changes in the classification of carcasses could account for the results documented by Hsiang and Sekar.
Hsiang and Sekar then responded to the criticisms here, where they argue that the Do et al. results were the "result of a sequence of coding, inferential, and logical errors".

In the latest on this debate, Do et al. have responded to the response. In this latest response, they include all of the Stata code and links to the dataset, so that you can replicate their results and test alternative specifications yourself.
Hsiang and Sekar's results are now looking increasingly shaky. We'll see if they have any further response (to the response to their response)...
[HT: David McKenzie at Development Impact]

Sunday, 4 December 2016

Big data and loyalty to your insurer could raise your insurance premiums

Back in September, I wrote a post about how the most loyal customers are the ones that firms should charge higher prices to, based on this Wall Street Journal article. Last week, the Telegraph had a similar article:
The financial regulator has warned that insurance companies could start charging higher premiums to customers who are less likely to switch by using “big data”.
In a speech to the Association of British Insurers, Andrew Bailey, chief executive of the Financial Conduct Authority, suggested that big data could be used to “identify customers more likely to be inert” and insurers could use the information to “differentiate pricing between those who shop around and those who do not.”...
James Daley, founder of Fairer Finance, the consumer group, said that to some degree big data was already being used to punish inert customers.
He said: “Insurers already know how their own customers are behaving. Those who don’t switch are penalised for their loyalty with higher premiums. Inert customers will be priced partly on risk and partly on what the insurer can get away with.”
To recap, these insurers are engaging in price discrimination - where firms charge different prices to different customers for the same product or service, and where the price differences don't reflect differences in cost. There are three necessary conditions for effective price discrimination (a stylised pricing example follows the list):
  1. Different groups of customers (a group could be made up of one individual) who have different price elasticities of demand (different sensitivity to price changes);
  2. The ability to deduce which customers belong to which groups (so that they get charged the correct price); and
  3. No transfers between the groups (since you don't want the low-price group re-selling to the high-price group).
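Here is the stylised pricing example promised above. It uses simple linear demand curves for two groups of customers (all numbers invented) and compares a single premium for everyone with separate premiums for loyal customers and switchers:

```python
# A stylised sketch of third-degree price discrimination with two groups of
# insurance customers: 'loyal' customers with less price-sensitive demand and
# 'switchers' with more price-sensitive demand. All numbers are invented.

def linear_demand(a, b):
    """Return Q(P) = max(a - b*P, 0) for a linear demand curve."""
    return lambda p: max(a - b * p, 0.0)

mc = 200.0                                   # cost of covering one policy
loyal = linear_demand(a=500, b=0.5)          # inert customers
switchers = linear_demand(a=3000, b=5.0)     # comparison shoppers

def profit(price, demand):
    return (price - mc) * demand(price)

def best_price(demand):
    # crude grid search over candidate prices
    return max(range(200, 2001), key=lambda p: profit(p, demand))

p_loyal, p_switch = best_price(loyal), best_price(switchers)
p_uniform = max(range(200, 2001), key=lambda p: profit(p, loyal) + profit(p, switchers))

gain = (profit(p_loyal, loyal) + profit(p_switch, switchers)
        - profit(p_uniform, loyal) - profit(p_uniform, switchers))
print(f"Discriminating: loyal pay {p_loyal}, switchers pay {p_switch}")
print(f"Single premium for everyone: {p_uniform}")
print(f"Extra profit from discriminating: {gain:.0f}")
# Loyal (less price-sensitive) customers end up paying more than switchers,
# and the insurer earns more than it could with a single premium for everyone.
```

The exact numbers don't matter; what matters is the ordering, with the less price-sensitive group quoted the higher premium.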
 As I noted in my previous post in September:
If you are an insurance company, you want to charge the customers who are most price sensitive a lower price. If customer loyalty is associated with customers who don't shop around, then customer loyalty is also associated with customers who are less price sensitive. Meaning that you want to charge those loyal customers a higher price.
What about big data? The Telegraph article notes:
Earlier this month Admiral, the insurer, announced that it planned to use Facebook status updates and “likes” to help establish which customers were safe drivers and therefore entitled to a discount.
Campaigners called the proposal intrusive and the social media giant then blocked Admiral’s technology just hours before it was due to launch.
Just last week a telematics provider, Octo, launched an app that that shares customers' driving data with insurers so that they could bid for custom. It claimed that the safest drivers would get the lowest premiums.
The problem here is that, while opting out of having your social media profiles available for your insurer to peruse may be possible, opting out would itself provide a signal to the insurer. Who is most likely to refuse? The high-risk insured, of course. So, anyone who refuses will likely face higher premiums because of the signal they are providing to their insurer. Again, this is a point I made a couple of months ago.

It seems that we will have to accept the reality that big data, and especially our 'private' social media and activity data, is simply going to determine our insurance premiums in future.

Friday, 2 December 2016

Riccardo Trezzi is immortal

I very much enjoyed reading this new paper by Riccardo Trezzi (US Federal Reserve Board of Governors), forthcoming in Economic Inquiry (sorry I don't see an ungated version anywhere). In the paper, Trezzi creates a time series model of his own ECG, and uses it to estimate his life expectancy:
In this paper, I go well beyond the frontier. I employ time series econometrics techniques to suggest a decomposition of the heart electrical activity using an unobserved components state-space model. My approach is innovative because the model allows not only to study electrical activity at different frequencies with a very limited number of assumptions about the underlying data generating process but also to forecast future cardiac behavior (therefore estimating the date of death), overcoming the “sudden death forecast” issue which typically arises when using standard time-series models.
My results are duo-fold. First, I show how the heart electrical activity can be modeled using a simple state-space approach and that the suggested model has superior out-of-sample properties compared to a set of alternatives. Second, I show that when the Kalman filter is run to forecast future cardiac activity using data of my own ECG I obtain a striking result: the n-step ahead forecast remains positive and well bounded even after one googol period, implying that my life expectancy tends to infinite. Therefore, I am immortal.
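You can get the flavour of the result with a toy sketch that is much simpler than Trezzi's unobserved-components model: fit an ordinary autoregressive model to a noisy periodic 'heartbeat' series and push the forecast a long way ahead. The long-run forecast settles at the series mean and never reaches zero:

```python
# A toy illustration of why the long-run forecast from a stationary
# time-series model never 'flatlines': fit an AR(2) by least squares to a
# noisy periodic 'heartbeat' signal, then iterate the forecast far ahead.
# This is a simplified stand-in for the paper's model, not a reproduction.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1000)
heart = 60 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)  # beats per minute

# Least-squares AR(2) with intercept: y_t = c + a1*y_{t-1} + a2*y_{t-2}
Y = heart[2:]
X = np.column_stack([np.ones(Y.size), heart[1:-1], heart[:-2]])
c, a1, a2 = np.linalg.lstsq(X, Y, rcond=None)[0]

# Iterate the forecast a long way into the future
y1, y2 = heart[-1], heart[-2]
for _ in range(100_000):
    y1, y2 = c + a1 * y1 + a2 * y2, y1

print(f"Forecast heart rate 100,000 steps ahead: {y1:.1f} bpm")
# The n-step-ahead forecast converges to the unconditional mean of the series
# (around 60 bpm here) and stays positive forever -- the model never forecasts
# a heart rate of zero, hence 'immortality'.
```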
And people wonder about the validity of economic forecasts...

[HT: Marginal Revolution]

Wednesday, 30 November 2016

Jetstar regional services cause loss of $40 million to main centres' economies

Last week, Jetstar announced a report by Infometrics that suggested their introduction of regional flights to Nelson, Palmerston North, Napier, and New Plymouth boosted the economy of those regions by around $40 million. Here's what the New Zealand Herald reported:
Jetstar's regional operations could boost the economy of four centres it serves by about $40 million a year, according to Infometrics research.
The regional GDP growth could support up to 600 new jobs according to the research which notes domestic air travel prices have fallen by close to 10 per cent during the past 12 months.
Jetstar Group CEO, Jayne Hrdlicka, said the report highlighted how important cheap fares were to growing local economies.
That sounds like a good news story, but as with most (if not all) economic impact studies, it only provides half the picture. That's because flying to the regions doesn't suddenly create new money. So, every dollar that is spent by travellers to the regions is one less dollar that would have been spent somewhere else. In the case of domestic travellers who would not have otherwise travelled to those regions if Jetstar hadn't been flying there (which is the assumption made in the report), every dollar they spend on their trip to Napier is one less dollar they would have spent at home in Auckland. One could make a similar case for international travellers, although perhaps cheaper flights encourage them to spend more on other things than they otherwise would (although this is drawing a pretty long bow).

So, if it's reasonable to believe that Jetstar flights add $40 million to the economies of those regions, it is also reasonable to believe that Jetstar flights cost around $40 million in lost economic activity elsewhere in the country (depending on differences in multiplier effects between different regions), and much of this will likely be from the main centres.

To be fair, the Infometrics report (which I obtained a copy of, thanks to the Jetstar media team) does make a similar point that:
...the economic effects of this visitor spending should only be interpreted on a region-by-region basis, rather than as an aggregate figure for New Zealand as a whole. It is likely that some of the increase in visitor spending in regions with additional flights represented spending that was diverted from other parts of New Zealand.
The Infometrics report has some other issues, such as assuming a fixed proportion of business travellers to all four airports, which seems fairly implausible but probably doesn't have a huge impact on the estimates. A bigger issue might be the underlying model for calculating the multiplier effects, since multi-region input-output models (I'm assuming this is what they use) are known to suffer from aggregation bias that overstates the size of multiplier effects. I had a Masters student working on multi-region input-output models some years ago, and that was one of the main things I took away from that work. However, that's a topic that really deserves its own post sometime in the future.

Of course, these problems aren't important to Jetstar, which only wants to show its regional economic impact in the best light possible. The next step for them might be to say: "Ooh, look. We've done such a great job enhancing the economy of these regions. The government should subsidise us to fly to other regions as well so we can boost their economies too". Thankfully, they haven't taken it that far. Yet.

You might argue that boosting the economies of the regions, even if it is at the expense of the main centres, is a good thing. That might be true (it is arguable), but it isn't clear to me that increased air services is the most cost effective mechanism for developing the regional economies. I'd be more convinced by an argument that improved air services are a consequence of economic development, not a source of it.

For now, just take away from this that we should be sceptical whenever firms trumpet their regional economic impact based on these sorts of studies.

Tuesday, 29 November 2016

Quitting Facebook might make you happier

A new paper by Morten Tromholt (University of Copenhagen) published in the journal Cyberpsychology, Behavior, and Social Networking (sorry I don't see an ungated version anywhere) reports on an experiment, where Facebook users were randomly allocated to a treatment group that gave up Facebook for a week, and a control group that did not. Discover Magazine reports:
Tromholt recruited (via Facebook, of course) 1,095 Danish participants, who were randomly assigned to one of two conditions. The ‘treatment group’ were instructed not to use Facebook for one week, and were recommended to uninstall the Facebook app from their phones if they had it. At the end of the study, 87% of the treatment group reported having succesfully avoided Facebook the whole week. Meanwhile, the ‘control group’ were told to continue using the site normally.
The results showed that the treatment group reported significantly higher ‘life satisfaction’ and more positive emotions vs. the control group (p < 0.001 in both cases). These effects were relatively small, however, for instance the group difference in life satisfaction was 0.37 on a scale that ranged from 1-10...
This is a nice little study, but in my mind it doesn’t prove all that much. The trial wasn’t blinded – i.e. the participants of necessity knew which group they were in – and the outcome measures were all purely subjective, self-report questionnaires.
It's this last point that I want to pick up as well. I worry that much of what is observed in this study is a Hawthorne effect - that the participants who were asked to give up Facebook for a week anticipated that the study was evaluating whether it increased their happiness, and reported what the researcher expected to see. The outcome measures were all self-reported measures of life satisfaction and emotions, which are easy for the research participants to manipulate (whether consciously or not). The author tries to allay this concern in the paper:
The critical point here is whether the participants have formulated their own hypotheses about the effects of quitting Facebook and that these hypotheses, on average, are pointing in the same direction. On the one hand, the individual formulation of hypotheses may have been facilitated by the pretest and the selection bias of the sample. On the other hand, the participants’ hypotheses may not be pointing in the same direction due to the fact that the direction of the effects found in the present study is not self-evident. Hence, there may be limited experiment effects at stake. If experiment effects did affect the findings of the present study, they might even turn in the opposite direction because people, in general, perceive Facebook as a source to positive feelings.
I'm not convinced. Especially since, as this Guardian article on the research notes, the study was performed by the Happiness Research Institute. I'm not sure how any of the research participants could miss the significance of that and not realise that the study expected an increase in happiness to result.

Having said that, there are supplementary results reported in the paper that might partially allay those concerns. The effects on happiness were greater for those who were heavier users of Facebook, which is what you would expect to see if the effect is real, and this is not something that could be easily spoofed by the actions of participants.

One final word: it could be possible to improve this study by moving away from, or supplementing, the subjective wellbeing (life satisfaction) questions, such as by measuring levels of the stress hormone cortisol in the research participants before and after giving up Facebook. Of course, this would increase the cost of the study, because it could no longer be run solely online. Something for future research perhaps?


Monday, 28 November 2016

Newsflash! Population growth will be highest on the fringes of fast-growing urban areas

I'm not sure how this is news:
Infometrics has this morning released its Regional Hotspots 2016 report, showing the country's top future population growth areas between 2013 and 2023, revealing some obvious and less obvious areas...
The hotspots were concentrated around the country's main metropolitan centres, "reflecting the highly urbanised nature of New Zealand's population and the greater density of potential new markets offered by these growth areas".
Well, duh. The Infometrics report is here, but it doesn't really say much that isn't obvious to anyone with local knowledge who hasn't been living under a rock. For instance, North Hamilton is one of the 'hotspots' and this is part of what they have to say about it:
The choice of this hotspot reflects the ongoing trend of the growth in Hamilton’s metropolitan area towards the north. Although there are also longer-term plans for expansion of the city southwards towards the airport, growth in the shorter-term will be focused on the fringes around Flagstaff, Rototuna North, and Huntington.
The whole report is full of re-packaged Statistics NZ data on area unit population estimates (to 2016) and projections (to 2043), which anyone can view here, so it doesn't even include anything new. Last Thursday must have been a slow news day.

For more on small-area population projections though, you can read my report with Bill Cochrane for the Waikato Region here (there is a more recent update to that report, but it isn't available online - if you would like a copy, drop me an email). We use a model that statistically downscales higher-level population projections using a land-use projection model (a more detailed paper is currently in peer review for journal publication). This is a significant advance over the method employed by Statistics New Zealand, because it takes into account the planning decisions of councils at the local level. The results (in some cases) are strikingly different from Statistics New Zealand's projections, and suggest that more can be done to improve the quality of 'official' small-area population projections.

Sunday, 27 November 2016

Book review: The Drunkard's Walk

I just finished reading the 2008 book by Leonard Mlodinow "The Drunkard's Walk - How Randomness Rules Our Lives". I was kind of expecting something like the Nassim Nicholas Taleb book, "Fooled by Randomness" (which I reviewed earlier this year), but this book was much better.

While it purports to be a book about the role of randomness in our lives, much of the content is a wide-ranging and well-written series of stories in the history of probability theory and statistics. To pick out the highlights (to me), Mlodinow covers the Monty Hall problem, Pascal's triangle, Bernoulli's 'golden theorem' (better known as the law of large numbers), Bayes' theorem, the normal distribution and the central limit theorem, regression to the mean and the coefficient of correlation (both developments by Francis Galton, known as the father of eugenics), Chi-squared tests (from Karl Pearson), Brownian motion, and the butterfly effect. And all of that in a way that is much more accessible than the Wikipedia links I've provided.
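Of those, the Monty Hall problem is probably the best test of intuition. A quick simulation (my own sketch, not from the book) shows why switching doors wins about two-thirds of the time:

```python
# A quick Monty Hall simulation: the host always opens a non-chosen door with
# a goat behind it, and we compare the win rates of sticking with the first
# choice versus switching.

import random

random.seed(0)
trials = 100_000
stick_wins = switch_wins = 0

for _ in range(trials):
    doors = [0, 1, 2]
    car = random.choice(doors)
    first_pick = random.choice(doors)
    # Host opens a door that is neither the contestant's pick nor the car
    opened = random.choice([d for d in doors if d != first_pick and d != car])
    switched_pick = next(d for d in doors if d != first_pick and d != opened)
    stick_wins += (first_pick == car)
    switch_wins += (switched_pick == car)

print(f"Win rate if you stick:  {stick_wins / trials:.3f}")   # about 1/3
print(f"Win rate if you switch: {switch_wins / trials:.3f}")  # about 2/3
```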

Throughout the book, Mlodinow illustrates his points with interesting anecdotes and links to relevant research. One section, on measurement issues, struck me in particular because it made me recall one of the funniest papers I have ever read, entitled "Can People Distinguish Pâté from Dog Food?" (the answer was no). Mlodinow focuses on the subjectivity (and associated randomness) of wine ratings:
Given all these reasons for skepticism, scientists designed ways to measure wine experts' taste discrimination directly. One method is to use a wine triangle. It is not a physical triangle but a metaphor: each expert is given three wines, two of which are identical. The mission: to choose the odd sample. In a 1990 study, the experts identified the odd sample only two-thirds of the time, which means that in 1 out of 3 taste challenges these wine gurus couldn't distinguish a pinot noir with, say, "an exuberant nose of wild strawberry, luscious blackberry, and raspberry," from one with "the scent of distinctive dried plums, yellow cherries, and silky cassis." In the same study an ensemble of experts was asked to rank a series of wines based on 12 components, such as alcohol content, the presence of tannins, sweetness, and fruitiness. The experts disagreed significantly on 9 of the 12 components. Finally, when asked to match wines with the descriptions provided by other experts, the subjects were correct only 70 percent of the time.
Clearly, randomness is at play more often than we probably care to admit. So, apparently at random, I highly recommend this book (and I've made sure to add some of Mlodinow's other books to my Amazon wish list to pick up later!).

Sunday, 20 November 2016

The inefficiency of New Zealand's emissions trading scheme

A couple of days ago I wrote a post about the game theory of climate change negotiations. One of the conclusions of that post was that there was a dominant strategy for countries not to reduce their greenhouse gas emissions. Another problem might be that countries reduce emissions, but not by as much as they should (in order to achieve the Paris Agreement goal of no more than two degrees of temperature increase over pre-industrial levels).

Potentially even worse, countries might find inefficient ways of meeting their emissions reduction goals, and I believe there is a strong case that New Zealand is in the inefficient camp. New Zealand introduced its emissions trading scheme (ETS) in 2008, and it was later amended in 2009 (and has been reviewed twice since). Under the scheme (described here), "certain sectors are required to acquire and surrender emission units to account for their direct greenhouse gas emissions or the emissions associated with their products".

As Megan Woods notes, one of the main problems with the ETS is that agriculture is not included in the scheme, and farmers have been told that there are no plans to change that in the near future. Agriculture is responsible for about half of New Zealand's greenhouse gas emissions (see page 4 of this fact sheet from NZAGRC).

This creates a problem because, in order to meet the overall goal of emissions reduction, other sectors must reduce emissions by more to compensate. To see why this is inefficient, consider the diagrams below. Say there are just two markets: (1) agriculture (on the left); and (2) all other sectors (on the right). Both markets produce a negative externality, represented by the difference between the supply curve (the marginal private cost or MPC curve, since it includes only the private costs that producers face) and the marginal social cost (MSC) curve (made up of MPC plus the marginal external cost (MEC), which is the cost of the externality to society). In both cases the market, left to its own devices, will produce at the quantity where supply is equal to demand - at Q0 in the agriculture market, and at Qa in the other market. Society prefers each market to operate where economic welfare is maximised. This occurs where marginal social benefit (MSB, represented by the demand curve) is equal to MSC - at Q1 in the agriculture market, and at Qb in the other market.


In the agriculture market, total economic welfare is equal to the area ABD-BFE, and there is a deadweight loss of BFE [*]. In the other market, total economic welfare is GHL-HMJ, and the deadweight loss is HMJ [**]. The value of the externality is represented by the area DFEC in the agriculture market, and by the area LMJK in the other market.

Now consider the implementation of two different emissions trading schemes, as shown in the diagrams below. In the first scheme, both markets are included. Firms must either reduce emissions directly, or buy credits to cover their emissions.  Either of these is costly, and forces the producers to internalise the externality. The markets both move to operating at the point where MSB is equal to MSC, maximising economic welfare at ABD in the agriculture market and GHL in the other market (there is no longer a deadweight loss in either market).


In the second scheme, agriculture is excluded but the same total emissions reduction is desired. This means that the other market must reduce emissions by more to compensate. The other market reduces quantity to Qc (note that the reduction of the value of the externality in this case is double what it was in the first scheme). Total economic welfare in this market reduces to GNSL, with a deadweight loss of NHS. This market over-corrects and produces too little relative to the welfare maximising quantity (Qb). Meanwhile, the agriculture market continues to produce a deadweight loss of BFE.

Notice that the size of the combined deadweight losses across the two markets is pretty much the same under the second scheme (BFE + NHS) as it was without any emissions trading scheme at all (BFE + HMJ). So compared with the first scheme, the second scheme leads to a loss of economic welfare - it is inefficient.
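For those who prefer numbers to areas on a diagram, here is a toy version of the two-market comparison. All parameter values are invented, and each unit of output is assumed to create one unit of emissions with a constant marginal external cost:

```python
# A toy numerical version of the two-market argument. Each unit of output is
# assumed to create one unit of emissions; all parameter values are invented.

MEC = 20.0  # constant marginal external cost per unit of emissions

def welfare(q, A, b, c, d):
    """Net social welfare at quantity q: area between MSB and MSC up to q.
    MSB: P = A - b*q    MPC: P = c + d*q    MSC = MPC + MEC"""
    return (A - c - MEC) * q - 0.5 * (b + d) * q ** 2

def q_unregulated(A, b, c, d):
    return (A - c) / (b + d)           # MSB = MPC

def q_optimal(A, b, c, d):
    return (A - c - MEC) / (b + d)     # MSB = MSC

ag = dict(A=100, b=1.0, c=10, d=1.0)      # agriculture
oth = dict(A=120, b=1.0, c=10, d=1.0)     # all other sectors

# Scheme 1: both sectors face the emissions price and move to the optimum
w1 = welfare(q_optimal(**ag), **ag) + welfare(q_optimal(**oth), **oth)

# Scheme 2: agriculture excluded (stays unregulated); the other sector must
# cut output further so that total emissions match Scheme 1
target = q_optimal(**ag) + q_optimal(**oth)
q_other = target - q_unregulated(**ag)
w2 = welfare(q_unregulated(**ag), **ag) + welfare(q_other, **oth)

print(f"Total welfare, both sectors in the ETS: {w1:.0f}")
print(f"Total welfare, agriculture excluded:    {w2:.0f}")
# Hitting the same emissions target while leaving agriculture out costs
# welfare in both markets: agriculture still over-produces, and the other
# sector is forced to over-correct.
```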

Emissions trading schemes create a property right - the right to pollute (if you have purchased ETS units, you are allowed to emit greenhouse gases). In order for property rights to be efficient, they need to be universal, exclusive, transferable, and enforceable. In this case, universality means that all emissions need to be covered under the scheme, and all emitters must have enough rights to cover their emissions. Exclusivity means that only those who have rights can emit greenhouse gases, and that all the costs and benefits of obtaining those rights should accrue to them. Transferability means that the right to emit must be able to be transferred (sold, or leased) in a voluntary exchange. Enforceability means that the emissions limits must be able to be enforced by the government, with high penalties for those who emit more than they are permitted to.

Clearly, the current New Zealand ETS fails under universality, as agriculture is not included. The analysis above shows why this leads to inefficiency (a loss of total economic welfare). The government is simply passing the buck by avoiding the inclusion of agriculture in the ETS (e.g. see Paula Bennett here). If we want to efficiently reduce our greenhouse gas emissions, agriculture must be included in the scheme.

*****

[*] The total economic welfare in the agriculture market is made up of consumer surplus of AEP0, producer surplus of P0EC, and the subtraction of the value of the negative externality DFEC.

[**] The total economic welfare in the other market is made up of consumer surplus of GJPa, producer surplus of PaJK, and the subtraction of the value of the negative externality LMJK.

Friday, 18 November 2016

Reason to be surprised, or not, about the Paris Agreement on climate change coming into force

The latest United Nations Climate Change Conference (COP22 in Marrakech) finishes today. The Paris Agreement, negotiated at the previous conference in Paris last year, officially came into force on 4 November. Under the agreement, countries commit to reduce their greenhouse gas emissions, in order to limit the increase in the global temperature to well below two degrees above pre-industrial levels.

Some basic game theory gives us reason to be surprised that this agreement has been successfully negotiated. Consider this: If a country reduces its emissions, that imposes a large cost on that country and provides a small benefit to that country, which is also shared by other countries. If a country doesn’t reduce its emissions, that imposes a small cost on that country and on other countries, while providing a small benefit only to the country that didn’t reduce emissions.

Now consider the negotiations as involving only two countries (Country A and Country B), as laid out in the payoff table below [*]. The countries have two choices: (1) to reduce emissions; or (2) to not reduce emissions. A small benefit is recorded as "+", a small cost is recorded as "-", and a large cost is recorded as "--" [**]. The costs and benefits (noted in the previous paragraph) imposed by the choice of Country A are recorded in red, and the costs and benefits imposed by the choice of Country B are recorded in blue. So, if both countries choose to reduce emissions the outcome will be in the top left of the payoff table. Country A receives a small benefit from their own action to reduce emissions (red +), a small benefit from Country B's action to reduce emissions (blue +), and a large cost of reducing emissions (red --). Other payoffs in the table can be interpreted similarly.


The problem lies in where the Nash equilibrium is in this game. Consider Country A first. They have a dominant strategy to not reduce emissions. A dominant strategy is a strategy that is always better for a player, no matter what the other players do. Not reducing emissions is a dominant strategy because the payoff is always better than reducing emissions. If Country B reduces emissions, Country A is better off not reducing emissions (because ++- is better than ++--). If Country B does not reduce emissions, Country A is better off not reducing emissions too (because +-- is better than +---). So Country A would always choose not to reduce emissions, because not reducing emissions is a dominant strategy.

Country B faces the same decisions (and same payoffs) as Country A. They also have a dominant strategy to not reduce emissions. If Country A reduces emissions, Country B is better off not reducing emissions (because ++- is better than ++--). If Country A does not reduce emissions, Country B is better off not reducing emissions too (because +-- is better than +---). So Country B would always choose not to reduce emissions, because not reducing emissions is a dominant strategy.

Both countries will choose their dominant strategy (to not reduce emissions), and both will receive a worse payoff (+--) than if they had both chosen to reduce emissions (++--). This game is an example of the prisoners' dilemma. There is a single Nash equilibrium, which occurs where both players are playing their dominant strategy (to not reduce emissions). So, based on this we might be surprised that the Paris Agreement has come into force, since each country is individually better off choosing not to reduce emissions, and instead free riding on the emission reductions of other countries.
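The same conclusion can be checked mechanically. The Python sketch below attaches illustrative numbers to the +'s and -'s - say a small benefit or cost is worth 1, and the large cost of reducing emissions is worth 2 (any numbers that preserve the ordering of the payoffs above would do) - and then searches for dominant strategies and the Nash equilibrium. Because the game is symmetric, one calculation covers both countries.

# Illustrative numbers (an assumption): small benefit or cost = 1, large cost of reducing = 2
SMALL, LARGE = 1, 2
ACTIONS = ["reduce", "not reduce"]

def payoff(own, other):
    # Payoff to one country, given its own action and the other country's action
    p = SMALL - (LARGE if own == "reduce" else SMALL)   # own action: small benefit minus cost
    p += SMALL if other == "reduce" else -SMALL         # spillover from the other country
    return p

def best_response(other_action):
    return max(ACTIONS, key=lambda a: payoff(a, other_action))

# A dominant strategy is a best response to everything the other country could do
dominant = {best_response(b) for b in ACTIONS}
if len(dominant) == 1:
    print("Dominant strategy for each country:", dominant.pop())

# Nash equilibrium: each country's action is a best response to the other's action
for a in ACTIONS:
    for b in ACTIONS:
        if a == best_response(b) and b == best_response(a):
            print("Nash equilibrium:", (a, b), "with payoffs", (payoff(a, b), payoff(b, a)))

print("Payoffs if both reduce instead:", (payoff("reduce", "reduce"),) * 2)

It reports that not reducing emissions is the dominant strategy, that the only Nash equilibrium is for neither country to reduce (a payoff of -1 each), and that both countries reducing would have given each of them a payoff of 0 - the prisoners' dilemma in numbers.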

However, that's not the end of this story, since there are two alternative ways to look at the outcome. First, the payoff table above makes the assumption that this is a simultaneous, non-repeated game. In other words, the countries make their decisions at the same time (simultaneous), and they make their decisions only once (non-repeated). Of course, in reality this is a repeated game, since countries are able to continually re-assess whether to reduce emissions, or not.

In a repeated prisoners' dilemma game, we may be able to move away from the unsatisfactory Nash equilibrium, towards the preferable outcome, through cooperation. Both countries might come to an agreement that they will both reduce emissions. However, both countries have an incentive to cheat on this agreement (since, if you knew that the other country was going to reduce emissions, you would be better off not reducing your own). So the countries need some way of enforcing the agreement. Unfortunately, the Paris Agreement has no binding enforcement mechanism, so there is no cost to a country reneging on its promise to reduce emissions.

In the absence of some penalty for non-cooperation in a repeated prisoners' dilemma, a tit-for-tat strategy is the most effective means of ensuring cooperation in the two-player game. In a tit-for-tat strategy, the player starts out by cooperating, and then chooses the same strategy that the other player chose in the previous play of the game. However, it isn't clear how a tit-for-tat strategy works when there are more than two players (which there are, in this case). So, we probably can't rely on repetition alone to ensure cooperation between countries.
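To see how this plays out, here is a small simulation sketch of the repeated two-country game, using the same illustrative numbers as before (small benefits and costs worth 1, the large cost of reducing emissions worth 2).

def payoff(own, other, small=1, large=2):
    # Same assumed per-round payoffs as in the one-shot sketch above
    p = small - (large if own == "reduce" else small)
    return p + (small if other == "reduce" else -small)

def tit_for_tat(other_history):
    # Cooperate (reduce) first, then copy the other country's previous move
    return "reduce" if not other_history else other_history[-1]

def always_defect(other_history):
    return "not reduce"

def play(strategy_a, strategy_b, rounds=20):
    hist_a, hist_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_b)
        b = strategy_b(hist_a)
        total_a += payoff(a, b)
        total_b += payoff(b, a)
        hist_a.append(a)
        hist_b.append(b)
    return total_a, total_b

print("tit-for-tat vs tit-for-tat:  ", play(tit_for_tat, tit_for_tat))
print("tit-for-tat vs always defect:", play(tit_for_tat, always_defect))
print("both always defect:          ", play(always_defect, always_defect))

Two tit-for-tat players sustain cooperation in every round. Against an always-defector, the defector gains only in the very first round and both countries then grind along at the bad outcome - far worse for the defector than the payoff it would have earned by cooperating.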

Second, the prisoners' dilemma looks quite different if the players have social preferences - for example, if players care not only about their own payoff, but also about the payoff of the other player. Consider the revised game below, where each country receives a payoff that is made up of its own payoff in the base case (coloured red or blue as before) plus the other player's payoff in the base case (coloured black). This is the case of each country having highly altruistic preferences.


The game now changes substantially, and reducing emissions becomes a dominant strategy for both players! Consider Country A first. If Country B reduces emissions, Country A is better off reducing emissions (because ++++---- is better than +++----). If Country B does not reduce emissions, Country A is better off reducing emissions too (because +++---- is better than ++----). So Country A would always choose to reduce emissions, because reducing emissions is now their dominant strategy.

Country B faces the same decisions (and same payoffs) as Country A. If Country A reduces emissions, Country B is better off reducing emissions (because ++++---- is better than +++----). If Country A does not reduce emissions, Country B is better off reducing emissions too (because +++---- is better than ++----). So Country B would always choose to reduce emissions, because reducing emissions is now their dominant strategy.

Of course, this is the extreme case, but you get similar results from a variety of intermediate assumptions about how much each country cares about the payoff of the other country (for brevity I'm not going to go through them all, but the sketch below illustrates the general pattern). We get similar (and stronger) results if we consider our social preferences towards the wellbeing of future generations. The takeaway message is that if we care about people living in other countries (or future generations), we might not be surprised that the Paris Agreement has come into force.
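As a rough illustration, the sketch below gives each country a utility equal to its own payoff plus a weight w on the other country's payoff (w = 1 is the fully altruistic case above, w = 0 is the original game), using the same illustrative numbers as before, and checks for which values of w reducing emissions becomes a dominant strategy.

ACTIONS = ["reduce", "not reduce"]

def base(own, other, small=1, large=2):
    # Same assumed base payoffs as before (small = 1, large = 2)
    p = small - (large if own == "reduce" else small)
    return p + (small if other == "reduce" else -small)

def utility(own, other, w):
    # Own payoff plus a weight w on the other country's payoff (w = 1 is full altruism)
    return base(own, other) + w * base(other, own)

def reducing_is_dominant(w):
    # Reducing is dominant if it is strictly better whatever the other country does
    return all(utility("reduce", b, w) > utility("not reduce", b, w) for b in ACTIONS)

for w in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"w = {w:.2f}: reducing emissions is a dominant strategy? {reducing_is_dominant(w)}")

With these particular numbers the switch happens once w rises above 0.5, but the exact threshold is an artefact of the assumed payoffs - the general point is simply that enough weight on the other country's wellbeing flips the dominant strategy.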

So, game theory can show us both why we might be surprised about the success of the Paris Agreement (because each country has an incentive not to reduce emissions), and why we might not be surprised about its success (because we care about other people - in other countries, or in future generations).

*****

[*] If we assume more than two countries, we would get pretty much the same key results, but it is much harder to draw a table when there are more than two players, so I'm keeping things simple throughout this post by considering only the two-country case.

[**] Of course, the size of the +'s and -'s will matter, and they probably differ substantially between countries, but for simplicity let's assume they are equal and opposite.

Wednesday, 16 November 2016

Scale vs. scope economies and the AT&T-Time Warner merger deal

I've been watching with interest the unfolding news on the proposed merger between AT&T and Time Warner. The Washington Post has a useful primer Q&A here:
AT&T, the nation's second-largest wireless carrier, is buying Time Warner, the storied media titan that owns HBO, CNN and TBS. In an unprecedented step, the deal is going to combine a gigantic telecom operator — which also happens to be the largest pay-TV company — and a massive producer of entertainment content.
In ECON110, one of the topics we cover is media economics. The economics of media companies is of interest because it illustrates a bunch of economic concepts, and because the interaction of those concepts leads to some seemingly counterintuitive real-world outcomes.

For instance, media content (e.g. movies, television shows, music albums) is subject to substantial economies of scale - the average production cost per consumer of media content falls dramatically as you provide the content to more consumers. This is because the cost of producing content is relatively high, while the cost of distributing that content (especially in the digital age) is extremely low. Large economies of scale (in distribution) tend to favour large media companies over small media companies, since the large media company can distribute to a larger audience at lower cost per-audience-member. In other words, each item of media content gives rise to a natural monopoly.

However, consumers demand a variety of media content, and variety is difficult (and costly) to produce. Every different item of content requires additional scarce inputs (e.g. quality actors, sets, scripts, etc.), and the more novel the item of content the more expensive it will generally be to create. This leads to what is termed diseconomies of scope - the more different items of content that a media company produces, the higher their average costs. Diseconomies of scope (in production) tend to favour smaller media companies that focus on specific content niches.

So, the combination of the two (economies of scale and diseconomies of scope) leads to an industry characterised by several large media companies producing 'mainstream' content (usually formed by mergers of previously smaller media companies), alongside many smaller niche providers. If we focused only on the economies of scale, we would be left wondering why the smaller providers continue to survive alongside their larger rivals. The combination also explains why the offerings of the large media companies are often pretty bland compared to those of the smaller players - producing something that isn't bland is too costly.
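A stylised cost function makes the combined effect easy to see. In the sketch below, the functional form and all of the numbers are invented purely for illustration: each item of content has a large fixed production cost that rises more than proportionately with the number of distinct items, while distribution costs almost nothing per view.

def avg_cost_per_view(n_items, audience_per_item,
                      base_item_cost=1_000_000,  # assumed cost of producing one item
                      scope_penalty=1.3,          # >1 means diseconomies of scope in production
                      dist_cost_per_view=0.01):   # near-zero digital distribution cost
    production = base_item_cost * n_items ** scope_penalty
    views = n_items * audience_per_item
    return production / views + dist_cost_per_view

for n_items in [1, 10, 50]:
    for audience in [10_000, 1_000_000]:
        cost = avg_cost_per_view(n_items, audience)
        print(f"{n_items:>2} items, audience {audience:>9,} each: average cost per view = ${cost:,.2f}")

The average cost per view falls sharply as the audience grows (economies of scale), but rises as the catalogue of distinct items grows (diseconomies of scope) - which is why large audiences favour big companies with relatively bland catalogues, while variety is left to the niche players.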

Which brings me to the AT&T-Time Warner merger. This deal introduces a new element. Time Warner is a content provider, and a distributor through traditional media channels (television, etc.). AT&T is not a media company (at least, not yet), but it would greatly enhance the potential distribution of Time Warner content. That in itself isn't problematic, and would simply follow previous media mergers that focused on increasing the gains from economies of scale.

The main problem is that AT&T is a gateway to consumers (and a gateway from consumers to content), and that means that the merged entity could significantly reduce competition in the media market. Media consumers value variety (as I noted above), but this merger might make it more difficult (or costly) for AT&T consumers to access that variety. The WaPo article linked above notes:
Here's where it starts to get really interesting. AT&T could charge other companies for the rights to air, say, "Inception" on their networks, or for the use of the Superman brand. Left unchecked, AT&T could abuse this power and force other Web companies, other cable companies, other content companies or even consumers to accept terms they otherwise would never agree to.
One of the things that AT&T might also do is make it costlier for their subscribers to access other content. Of course, they wouldn't frame it that way - they would instead offer a discount on accessing Time Warner content (which effectively makes other content more expensive and shifts consumers towards the Time Warner content).

What happens next? Federal regulators are looking into the deal, and depending on who you read, President-Elect Trump will either block the deal (which markets are anticipating and which he noted during the campaign) or not (based on the makeup of staff within the administration). From the consumers' perspective, let's hope that Trump follows through (this may be the only Trump campaign promise I would make that statement for!).

Tuesday, 8 November 2016

Voting paradoxes and Arrow's impossibility theorem

In ECON110, we don't have nearly enough time to cover the theory of public choice as well as I would like. We do manage to cover some of the basics of the Condorcet Paradox and Arrow's impossibility theorem. Alex Tabarrok at Marginal Revolution recently pointed to this video by Paul Stepahin, which looks at several voting paradoxes. Given the events about to unfold in the U.S., it seems timely. Enjoy!
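For those who want to see the Condorcet Paradox in miniature before (or instead of) watching the video, here is a small sketch with three invented voter rankings. Pairwise majority voting produces a cycle, so there is no candidate who beats every other candidate.

from itertools import combinations

# Three voters with (assumed) rankings that produce a majority cycle
rankings = [
    ["A", "B", "C"],   # voter 1: A over B over C
    ["B", "C", "A"],   # voter 2: B over C over A
    ["C", "A", "B"],   # voter 3: C over A over B
]

def majority_winner(x, y):
    votes_for_x = sum(r.index(x) < r.index(y) for r in rankings)
    return x if votes_for_x > len(rankings) / 2 else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: the majority prefers {majority_winner(x, y)}")
# A beats B, and B beats C, but C beats A - a cycle, so majority rule picks no clear winner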



[HT: Marginal Revolution]

Monday, 7 November 2016

Book review: Who Gets What - and Why

Alvin Roth won the Nobel Prize in Economics in 2012 (along with Lloyd Shapley) "for the theory of stable allocations and the practice of market design". So, I was glad to finally have time to read Roth's 2015 book, "Who Gets What - and Why". I wasn't sure what to expect from this book, having previously only read some of Roth's work on kidney exchange, and was pleasantly surprised that the book covers a lot of fundamental work on the design of markets.

In introductory economics, we usually start by focusing on commodity markets - markets where the goods that are being offered for sale are homogeneous (all the same). In commodity markets, the decision about 'who gets what' is entirely determined by price - the consumers that value the good the most will be buyers, and the producers with the lowest costs will be sellers. This book (and indeed, much of Roth's most high-profile research) focuses instead on matching markets - markets where price cannot perform the role of matchmaker by itself. As Roth notes early on in the book:
Matching is economist-speak for how we get the many things we choose in life that also must choose us. You can't just inform Yale University that you're enrolling or Google that you're showing up for work. You also have to be admitted or hired. Neither can Yale or Google dictate who will come to them, any more than one spouse can simply choose another: each also has to be chosen.
The book has many examples, but I especially liked this example of the market for wheat, and how it became commodified:
Every field of wheat can be a little different. For that reason, wheat used to be sold "by sample" - that is, buyers would take a sample of the wheat and evaluate it before making an offer to buy. It was a cumbersome process, and it often involved buyers and sellers who had successfully transacted in the past maintaining a relationship with one another. Price alone didn't clear the market, and participants cared whom they were dealing with; it was at least in part a matching market.
Enter the Chicago Board of Trade, founded in 1848 and sitting at the terminus of all those boxcars full of grain arriving in Chicago from the farms of the Great Plains.
The Chicago Board of Trade made wheat into a commodity by classifying it on the basis of its quality (number 1 being the best) and type (winter or spring, hard or soft, red or white). This meant that the railroads could mix wheat of the same grade and type instead of keeping each farmer's crop segregated during shipping. It also meant that over time, buyers would learn to rely on the grading system and buy their wheat without having to inspect it first or to know whom they were buying it from.
So where once there was a matching market in which each buyer had to know the farmer and sample his crop, today there are commodity markets in wheat...
Of course, not all markets can easily be commodified, which means that many markets remain matching markets. And in a matching market, the design of the market is critical. The book presents many examples of matching markets, from financial markets, to matching graduate doctors to hospitals, to kidney exchange. Roth also discusses signalling, a favourite topic of mine to teach, which is of course important in ensuring that there are high-quality matches in markets.
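The doctor-hospital match that features in the book is built on the Gale-Shapley 'deferred acceptance' algorithm (which Roth and Peranson adapted for the modern medical match). As a rough illustration only, here is a minimal sketch with made-up doctors, hospitals, preference lists, and one position per hospital.

# Made-up doctors, hospitals, and preference lists; one position per hospital
doctor_prefs = {
    "dana": ["city", "rural", "coast"],
    "jo":   ["city", "coast", "rural"],
    "sam":  ["coast", "city", "rural"],
}
hospital_prefs = {
    "city":  ["jo", "sam", "dana"],
    "rural": ["dana", "jo", "sam"],
    "coast": ["sam", "jo", "dana"],
}

def deferred_acceptance(doctor_prefs, hospital_prefs):
    rank = {h: {d: i for i, d in enumerate(prefs)} for h, prefs in hospital_prefs.items()}
    next_choice = {d: 0 for d in doctor_prefs}  # index of the next hospital each doctor will try
    held = {}                                   # hospital -> doctor whose offer it is holding
    unmatched = list(doctor_prefs)
    while unmatched:
        d = unmatched.pop()
        h = doctor_prefs[d][next_choice[d]]     # propose to the best hospital not yet tried
        next_choice[d] += 1
        if h not in held:
            held[h] = d                         # hospital holds the first offer it receives
        elif rank[h][d] < rank[h][held[h]]:
            unmatched.append(held[h])           # hospital trades up, releasing its held doctor
            held[h] = d
        else:
            unmatched.append(d)                 # proposal rejected; d will try the next hospital
    return {doctor: hospital for hospital, doctor in held.items()}

print(deferred_acceptance(doctor_prefs, hospital_prefs))
# {'sam': 'coast', 'jo': 'city', 'dana': 'rural'} - a stable match under these preferences

Doctors propose down their preference lists, each hospital holds the best offer it has received so far and rejects the rest, and the process stops when no doctor is left unmatched. The resulting match is stable: no doctor and hospital would both prefer each other to the partners they ended up with.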

On kidney exchange, it is interesting to read an economist who isn't reflexively pro-market (that is, who doesn't simply argue that kidneys for transplant should be traded in markets, at market prices). Roth recognises that there may be valid objections against fully monetising some markets (such as kidney exchange). He writes (the emphases are his):
Such concerns about the monetization of transactions seem to fall into three principal classes.
One concern is objectification, the fear that the act of putting a price on certain things - and then buying or selling them - might move them into a class of impersonal objects to which they should not belong. That is, they risk losing their moral value.
Another fear is coercion, that substantial monetary payments might prove coercive - "an offer you can't refuse" - and leave poor people open to exploitation, from which they deserve protection.
A more complex concern is that allowing things such as kidneys to be bought and paid for might start us on a slippery slope toward a less sympathetic society than we would like to live in. The concern, often not clearly articulated, is that monetizing certain transactions might not itself be objectionable but could ultimately cause other changes that we would regret.
Overall, the book is a really good complement to learning about the commodity markets that we look most closely at in introductory economics. Many economists (myself included) probably don't spend nearly enough time considering the design of markets and largely take the role of prices in allocating resources as a given. This is definitely a highly recommended read.

Sunday, 6 November 2016

The Great Walks may be free, but they are not free

My blog's been a bit quiet the last couple of weeks while I've been buried in exam marking. Now that marking is done, I can start to post on some things I had put aside over that time. Starting with the controversy over comments made by Department of Conservation director-general Lou Sanson, reported here:
It may be time to start charging for the use of the country's Great Walks, Department of Conservation director-general Lou Sanson says.
Foreign tourists could pay $100 and New Zealanders $40 to cope with a huge increase in trampers — especially overseas travellers — and their effect on the environment, he suggested.
Sanson said the country's Great Walks brand had "exploded" but this popularity had created some problems...
In March, he took the United States ambassador to the Tongariro Alpine Crossing — a 19.4km one-day trek between the Mangatepopo Valley and Ketatahi Rd in the North Island.
"Every time we stopped we were surrounded by 40 people. That is not my New Zealand. We have got to work this stuff out — these are the real challenges," Sanson told the Queenstown Chamber of Commerce yesterday...
Introducing differential charges on the Great Walks was one potential mechanism to alleviate pressure, Mr Sanson said.
"We have got to think [about that]. I think New Zealand has to have this debate about how we're going to do bed taxes, departure charges — we have got to work our way around this.
"I think a differential charge [is an option] — internationals [pay] $100, we get a 60 per cent discount."
The New Zealand Herald then ran an editorial the next day, entitled "Turnstiles on wilderness is not the answer". The editorial raised some good practical issues with charging a fee for trampers on the Great Walks:
Would rangers be posted to collect cash, or check tickets that would have to be bought in advance? How would they be enforced?
It also raised an important issue about the perception of the service provided by DoC:
A charge changes the way users regard it. The track and its surrounds would cease to be a privilege for which they are grateful, and become something they feel they have paid for.
They will have an idea of the value they expect and rights they believe due for their expense. They may be more likely to leave their rubbish in the park. The costs of removing litter and cleaning camping areas may quickly exceed the revenue collected.
However, the editorial ignored the fundamental issue of providing goods and services for 'free'. If something comes with no explicit monetary cost associated with it, that does not mean that it is free. Economists recognise that there are opportunity costs (because in choosing to do a Great Walk, we are foregoing something else of value we could have done in that time), but this is about more than just opportunity costs.

When a good or service has no monetary cost, there will almost always be excess demand for it - more consumers wanting to use the service than there is capacity to provide it. Excess demand can be managed in various ways. One is to raise the price (as suggested by Sanson). Another is to limit the quantity and use some form of waiting list (as is practised in the health sector). A third is to let the quality of the service degrade until demand matches supply (because as quality degrades, fewer people will want to avail themselves of the service).
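A back-of-the-envelope sketch shows how large the gap can be when the price is zero. The demand curve and hut capacity below are invented numbers, purely to illustrate the excess demand point.

capacity = 30_000                    # assumed bed-nights available on a Great Walk per season

def demand(price):                   # assumed linear demand for bed-nights
    return max(0, 100_000 - 500 * price)

print("Demand at a price of zero:  ", demand(0))
print("Excess demand at zero price:", demand(0) - capacity)

clearing_price = (100_000 - capacity) / 500   # price that rations demand down to capacity
print(f"Market-clearing price:       ${clearing_price:.2f} per bed-night")

At a price of zero, demand far exceeds the available capacity, and something has to ration the difference - a price is one rationing device; waiting lists and degraded quality are the others.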

The latter option doesn't sound particularly appealing - it's the outcome Sanson is most against - but it would be the necessary consequence of the laissez-faire approach the Herald editorial advocates. Sanson has already pointed to one way the quality of the Great Walks is being affected: "Every time we stopped we were surrounded by 40 people". If you want to take a Great Walk in order to experience the serene beauty and tranquillity of our natural landscape, the last thing you want is to be constantly mobbed by selfie-taking dickheads. The quality of the experience degrades the more people there are on the Great Walks.

Pricing might not be palatable to some, but at least it would manage the demand for the Great Walks. Providing lower prices to locals is, as I have noted previously, an appropriate form of price discrimination that I remain surprised we don't see more of in New Zealand. Of course, that doesn't negate the practical concerns raised in the Herald editorial. But if we want to maintain the quality of the experience on the Great Walks, this is a conversation that we should be having.