Wednesday 31 May 2017

The economics of reclining airline seats

The problem of reclining airline seats and the related fights between passengers was a big thing in the media in 2014 (see here and here), but has been back in the media recently. The Economist's Gulliver blog had an excellent piece earlier this month entitled "Who owns the space between reclining airline seats?". It's an interesting read, and highlights several things we discuss in ECON100 and/or ECON110, including: (1) externalities and the Coase theorem; and (2) quasi-rationality and endowment effects.

If Person A (who is sitting in front of Person B) reclines their seat, they reduce the amount of space available to Person B. This is a negative externality (an adverse impact of one person's actions on the wellbeing of a bystander). There are a few things we can take away from this example. First, as Coase originally noted, externality problems are jointly produced by the person who creates the externality and the person who is affected by it. If no one was sitting in Person B's seat, then there would be no externality problem. The externality problem only exists because of both passengers' actions (Person A reclining their seat, and Person B sitting in the seat behind). [*]

Second, the polluter pays principle is not always the best solution to an externality problem. The polluter pays principle essentially says that the polluter (in this case, Person A) is always at fault and must avoid the actions that affect the other party (by not reclining their seat), or pay them compensation. If we believed the polluter pays principle was the best solution in this case, no one would be allowed to recline their seat.

In contrast, the Coase theorem suggests that if private parties can bargain without cost over the allocation of resources, they can solve the problem of externalities on their own (i.e. without any government or other intervention). As Gulliver notes:
According to the theories of Ronald Coase, who won the Nobel Prize in Economics in 1991, the space between airline seats is a scarce resource. Therefore it should not matter who has the initial ownership (assuming there are no barriers to a deal being made). The market will out: whoever values the space more will buy it from the other. (In this case it would normally revert to the recliner.)
What happens if we allow passengers to make these bargaining solutions? We really don't know, as no airline has ever tried it (as far as I know). However, Gulliver writes:
Would such fights be prevented if ownership of those four inches were up for auction? This was the starting point of an experiment by Christopher Buccafusco and Christopher Jon Sprigman, two law professors, which they have written up on the Evonomics website. 
Their aim was to discover whether recliners’ pleasure at being more horizontal is greater than the amount of suffering this inflicts on the person behind. One obvious way to do this is to put a monetary value on it: find out how much the flyer in front would be willing to pay for the right to recline his seat, and compare that with the amount the person behind would be prepared to shell out to stop this from happening. 
In an online survey the researchers asked people to imagine that they were about to take a six-hour flight from New York to Los Angeles. Respondents were told that the airline had created a new policy that would allow flyers to pay those seated in front of them not to recline their seats. Some were then asked how much the passenger behind would have to pay them not to recline during the flight. Others had to specify how much they would be prepared to pay to prevent the person in front of them from reclining.
I suggest reading the Evonomics article by Buccafusco (Cardozo School of Law) and Sprigman (NYU School of Law) in full, as there is a lot of interest in it. Note that it is a stated preference study - we don't know for sure what people would actually do when faced with these choices, but this is what they said they would do:
Recliners wanted on average $41 to refrain from reclining, while reclinees were willing to pay only $18 on average. Only about 21 percent of the time would ownership of the 4 inches change hands...
That sounds fine, and was based on the current default set of property rights - that people have the right to recline their seat. But then things got interesting:
When we flipped the default—that is, when we made the rule that people did not have an automatic right to recline, but would have to negotiate to get it—then people’s values suddenly reversed. Now, recliners were only willing to pay about $12 to recline while reclinees were unwilling to sell their knee room for less than $39. Recliners would have ended up purchasing the right to recline only about 28 percent of the time—the same right that they valued so highly in the other condition.
So, when people had the right to recline their seat, they wanted $41 to give it up. But, if they didn't have the right, they were only willing to pay $12 for that right. If that seems odd to you, then welcome to the world of behavioural economics. The Coase theorem suggests that the initial allocation of rights should not matter, because if the person who values the right the most doesn't start out with it, they will simply purchase it from the other. But what Buccafusco and Sprigman found suggests that this simple solution might not work. What they found was an endowment effect.

Because people are loss averse, a loss makes us worse off by more than an equivalent gain makes us better off. For example, losing $10 feels worse than finding $10 feels good. One of the consequences of this is that we are unwilling to give up something that we already have - we require more in compensation to give it up than we would have been willing to pay to obtain it in the first place (this is what we call an endowment effect). Note that endowment effects are working for the 'reclinees' as well - they require $39 to give up their extra knee room when they start out with the right to it, but would only be willing to pay $18 to obtain that knee room if they didn't start out with the right.
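To see how the endowment effect gets in the way of the Coasean bargain in this example, here's a minimal sketch in Python (using the average survey valuations quoted above as stand-ins for individual passengers' valuations, so it's illustrative only):

```python
# Average stated valuations from the Buccafusco and Sprigman survey (USD).
# These are averages only; in the actual survey, whether a trade happens
# depends on each matched pair of passengers, which is why some trades
# (21% and 28%) still occur.
defaults = {
    "recliner owns the right": {"seller_wta": 41, "buyer_wtp": 18},  # reclinee would buy
    "reclinee owns the right": {"seller_wta": 39, "buyer_wtp": 12},  # recliner would buy
}

for default, v in defaults.items():
    trade_occurs = v["buyer_wtp"] >= v["seller_wta"]
    print(f"{default}: buyer offers ${v['buyer_wtp']}, "
          f"seller wants ${v['seller_wta']} -> trade occurs: {trade_occurs}")
```

At the average valuations, the space doesn't change hands under either default - the party that starts with the right values it too highly to sell, whichever side that happens to be.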

The endowment effect means that this problem isn't really amenable to a simple solution, because recliners already have the default rights, and are understandably unwilling to give those rights up. And any change in policy is going to provoke passenger protest - because even though passengers may gain knee room, they would be giving up their right to recline, and loss aversion almost ensures that would feel like a painful and unwelcome trade-off for most of them.

*****

[*] Of course, Person B probably has little choice about where they are seated. But, there are plenty of other examples of externalities where there would be no problem if the affected person was simply somewhere else. One example I've blogged about before is people who choose to live next to mushroom farms.

Monday 29 May 2017

Return migrants to Vietnam prefer areas with higher-quality institutions

One of the enduring theories of migration is the push-pull theory of Everett Lee (see here for the original research paper from 1966). In this theory, there are factors in the origin (where the migrants are coming from) that push them away, and factors in the destination (where the migrants go to) that pull them there. Many factors might be push or pull factors, including high (or low) wages, or good (or bad) amenities. As I discussed in a previous post, climate is one factor that appears to affect migration, but only in a very limited way.

In a new working paper, Ngoc Tran, Jacques Poot and I look at return migration to Vietnam (that is, where Vietnamese migrants have first migrated overseas and then returned home to Vietnam), and specifically whether the location that return migrants return to is influenced by the quality of political and economic institutions (in economics, the term 'institutions' is used to refer to social and legal norms and rules). This question is important because there are few factors that policy makers can use to influence people's migration decisions, but the quality of institutions is generally something within their control. So, if they want to attract return migrants (or potentially other migrants), then having high-quality institutions is important.

We used a database of the return migration choices of 654 Vietnamese return migrants to the south of Vietnam in 2014, including the province that they eventually settled in. We found that, holding other variables constant, older return migrants and male migrants were less likely to settle in Ho Chi Minh City (and more likely to return to other regions). Once we introduce the 'provincial competitiveness index' (a measure of local institutional quality in Vietnam) into the model, we find that return migrants are more likely to return to a region with higher-quality institutions.

Digging a bit deeper into the results, we find that this preference for higher-quality institutions depends on the age of the return migrant, with younger return migrants displaying a greater preference for higher-quality institutions than older return migrants. Also, migrants who returned from a country that itself has higher-quality institutions revealed a greater preference for higher-quality institutions when they returned to Vietnam (though this result was statistically weaker). These results are interesting, especially the latter result, which suggests that there are spillover effects of developing country migrants adopting expectations of high-quality institutions back home, similar to those they experienced in the host (usually developed) country. The results also suggest that better institutional quality may attract return migrants, especially those who are younger (and have greater remaining productivity and reproductive potential). Perhaps there might be some lessons to be learned in this for declining regions in other countries?

Finally, this paper is also the first research paper from Ngoc's PhD thesis, so congratulations to her on that achievement, and I look forward to reporting on her future work in later posts.

Saturday 27 May 2017

Free trade agreements, or international commerce agreements

Last week in ECON100, we covered the gains from trade. One of the points I made was the misnaming of free trade agreements, which these days are mostly not about free trade. This was a point made recently by Bill Rosenberg (economist for the Council of Trade Unions) in the New Zealand Herald:
But these agreements are no longer mainly about trade. It is misleading to talk about them as Free Trade Agreements. I'll call them international commerce agreements, and it is misleading to label public concerns as protectionism.
These agreements are now mainly about services, regulation (including so-called non-tariff measures), foreign investment, intellectual property, government procurement, commercialisation of public agencies, and other matters that are "behind the border" and cut deeply into people's daily lives. That is why people protest at restrictions on the ability of future Governments to make and change rules in the public interest, to adapt to new circumstances and repair poor policy of the past.
I like Rosenberg's characterisation of these agreements as 'international commerce agreements', and might start using that terminology interchangeably with 'free trade agreements' in my classes. Not everyone gets this, as this response to Rosenberg from Mike Hosking demonstrated.

It's hard to argue against free trade in itself (though such arguments continue to be made - see my earlier post on this), especially if genuine attempts are made to compensate the losers from free trade (such as those who lose jobs in industries in which we have a comparative disadvantage). There are certainly enough gains for the winners from free trade to compensate the losers, and have some extra left over. However, whether an international commerce agreement (or free trade agreement, if you prefer) has a net positive effect depends on how you evaluate the costs (or benefits) to the economy from all of the other non-free-trade-related clauses in the agreement. And that cost-benefit evaluation is enormously tricky - the more elements you include, the harder the evaluation is going to be.

The economic evaluation of the Trans-Pacific Partnership agreement was conducted by my colleague Anna Strutt and others (you can read the full report here). That economic evaluation estimated gains for New Zealand of $624 million by 2030 from tariff liberalisation alone, and $4.16 billion if liberalisation of non-tariff trade barriers and customs delays were included. But note that this is trade-related gains only, and doesn't consider all of the other parts of the agreement, such as intellectual property, changes to Pharmac, etc. And now that the US is not included, you can expect the trade-related gains to be somewhat less.

Overall, I'll remain pro-free-trade, but agnostic on free trade agreements.

Wednesday 24 May 2017

Three reasons why tipping is a bad idea

Tipping has been in the news this week. Matt Heath started it with this article on Sunday, but then Deputy Prime Minister (and former waitress) Paula Bennett chimed in, saying "Overall I think the service in New Zealand is good, I always tip for excellent service and encourage others to too if we want standards to continue to improve" (at least, according to this article - I didn't read her letter to the Herald myself). Bennett's comments have stirred a lot of media interest (see here and here and here, for example). Now, as the voice of reason, I give you three reasons why tipping is a bad idea.

First, it's not rational if it's not already a social convention. To see why, we need to go through a little bit of game theory (which is good revision for my ECON100 students, since we did game theory in class last week). Consider a sequential game with two players: (1) the server, who can choose to give average service, or good service; and (2) the customer, who can choose to tip, or not, and makes their choice after the service decision of the server has already been revealed. Let's say that the basic outcome (average service and no tip) leads to a zero payoff for both players. Let's also assume that if the server gives good service, that increases the payoff to the customer by +6 (units of utility, or satisfaction), but comes at a cost to the server of -2 (units of utility). Finally, let's assume that if the customer chooses to tip, that reduces their payoff by 5, and increases the server's payoff by 5. The game is laid out in tree form (extensive form) below.


To find the subgame perfect Nash equilibrium here, we can use backward induction (similar to the best response method we use in a simultaneous game). Essentially, we work out what the second player (the customer) will do first, and then use that to work out what the first player (the server) will do. In this case, if the server gives good service, then we are moving down the left branch of the tree. The best option for the customer in that case is not to tip (since a payoff of +6 is better than a payoff of +1). So, the server knows that if they give good service, the customer is better off not tipping. Now, if the server gives average service, then we are moving down the right branch of the tree. The best option for the customer in that case is not to tip (since a payoff of 0 is better than a payoff of -5). So, the server knows that if they give average service, the customer is better off not tipping. Notice that the customer is better off not tipping no matter what the server does - not tipping is a dominant strategy for the customer. So, the choice for the server is to give good service (and receive a payoff of -2) or to give average service (and receive a payoff of 0). Of course, they will give average service. The subgame perfect Nash equilibrium here is that the server gives average service, and the customer doesn't leave a tip.
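If you want to check the backward induction for yourself, here's a minimal sketch in Python that mirrors the reasoning above, using the payoffs I assumed (good service: +6 to the customer and -2 to the server; a tip: a transfer of 5 from the customer to the server):

```python
# Payoffs are built from the assumptions in the text: the baseline outcome
# (average service, no tip) gives both players 0; good service costs the
# server 2 and benefits the customer 6; a tip transfers 5 from customer to server.
def payoffs(service, tip):
    server = (-2 if service == "good" else 0) + (5 if tip else 0)
    customer = (6 if service == "good" else 0) - (5 if tip else 0)
    return server, customer

# Backward induction: the customer moves last, so find their best response
# to each service level, then let the server choose given those responses.
def solve():
    best_tip = {}
    for service in ("good", "average"):
        best_tip[service] = max((False, True),
                                key=lambda tip: payoffs(service, tip)[1])
    chosen_service = max(("good", "average"),
                         key=lambda s: payoffs(s, best_tip[s])[0])
    return chosen_service, best_tip[chosen_service]

print(solve())  # ('average', False): average service, and no tip
```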

However, that analysis assumes that this is a non-repeated game. We know that if games are repeated, the outcome may be able to move away from the Nash equilibrium to an outcome that is better for all players (notice that the combination of good service and tipping is better for both players). How do we get to this alternative outcome? It relies on cooperation between the two players, and cooperation requires trust. The server has to trust that the customer will tip them, before they will agree to give good service. Can they trust the customer? Only if they have developed a relationship with that customer, and in most hospitality situations it is unlikely that a customer will encounter the same server again in the future (unless they are a regular). So, no trust. No cooperation. No tipping, and no good service.

Which brings me to social convention. One way to ensure cooperation from the customer is to make tipping a social convention, with a social penalty attached to not tipping. If it is frowned upon not to tip the server, to the extent that it becomes costly (in terms of moral costs or social costs, not financial costs) not to tip, then that changes the game. Say that the moral cost of not tipping is -6 units to the customer (since everyone who sees them not tipping the server then thinks the customer is a douchebag). This changes the game to this:


Now, where is the subgame perfect Nash equilibrium? If the server gives good service, the customer will tip (because +1 is better than 0). If the server gives average service, the customer will tip (because -5 is better than -6). Notice that tipping is now a dominant strategy for the customer. Knowing what the customer will do, the server will choose to give good service (since +3 is better than 0). The subgame perfect Nash equilibrium is now that the server gives good service, and the customer leaves a tip.

But it relies on a social convention, which is not the current convention in New Zealand. And developing new social conventions is not easy (although perhaps Paula Bennett is willing to give it a try in this case?).

The second reason why tipping is a bad idea is because of second-order effects. If customers have to tip the servers, this increases the cost of their meal. Since we know that demand curves are downward sloping, an increase in price will lead to lower quantity demanded - customers will demand fewer restaurant meals. If you doubt this point, then consider how many people you know (I'm sure there are at least some) who object to paying a surcharge for a meal on a public holiday, and so choose to either eat somewhere else (where there is no surcharge) or not to go out at all. Now, note that tipping is essentially the same as applying a surcharge to every restaurant meal.

Since the quantity of restaurant meals demanded will decrease, the number of servers required by restaurants also decreases. Tipping will make some servers better off (higher take-home pay), but will make others worse off (they no longer have a job). This has the same effect as raising the minimum wage, except the customers are paying the extra, rather than the employers. I'm not sure that's a trade-off that customers should be willing to accept.

The third reason why tipping is a bad idea is because it could be considered a form of corruption. If you doubt that, consider this example. Remember that the purpose of tipping is to reward the recipient for giving good service. Now, say that I'm pulled over by a police officer for driving through a stop sign, but the officer decides to let me off with a warning (seems unlikely, but let's run with it). The officer gave me good service - should I tip them?

The World Bank defines corruption as:
...the offering, giving, receiving or soliciting, directly or indirectly, anything of value to influence improperly the actions of another party.
Isn't tipping to reward good service providing something of value (money) to influence the actions of another party (to give you good service)? We could quibble over whether the influence is improper or not, I guess. But the general point is valid.

Anyway, now you have three reasons to use to explain why you shouldn't be tipping: (1) it's not rational (when there is no social convention for tipping); (2) it may make some servers worse off; and (3) it may be corrupt. You're welcome.

Tuesday 23 May 2017

Update: Uber is starting to price discriminate

Last year I wrote a post about Uber and price discrimination. At the time, Uber was arguing that they don't adjust their surge pricing to take advantage of people whose phone battery is low. Here's what I wrote then:
So, should we believe that Uber is not price discriminating? Price discrimination increases profits when firms can do it effectively. This only requires three conditions to be met:
1. Different groups of customers (a group could be made up of one individual) who have different price elasticities of demand (different sensitivity to price changes);
2. You need to be able to deduce which customers belong to which groups (so that they get charged the correct price); and
3. No transfers between the groups (since you don't want the low-price group re-selling to the high-price group).
The first condition is clearly met, and presumably Uber's app knows when the phone's battery is low (it's probably buried in the terms and conditions for the app, which almost no one reads). Since customers don't know the battery status of other Uber customers, the third condition is likely to be met too. So, if Uber isn't price discriminating on the basis of battery level, they are leaving potential profits on the table. Uber shareholders probably wouldn't be too happy to learn this. So, I think it's hard to believe that Uber don't take a lot of information about their passengers (including potentially the remaining battery life of their phone) into account at least at some level - perhaps they are not price discriminating via the surge price (i.e. the multiple by which they increase prices), but via the underlying base price?
Now, it seems that Uber is moving to use price discrimination more broadly. This New Zealand Herald story today notes:
Imagine you live in Sydney's lavish suburb of Bondi, while your friend lives in one the city's lower socio-economic regions.
You both order an Uber home at the same time, with each ride having identical demand, traffic and distance travelled.
Yet, you are charged significantly more for the service because of where you are travelling.
This could soon be a reality with Uber introducing "route-based pricing" — a new fixed rate fare system for its UberX service that charges customers based on what it predicts they would be willing to pay.
There is more on this story here, and here. In this case, those travelling to a richer area of the city may have less elastic demand for the ride (because the fare will take up a lower proportion of their income) compared with those travelling to a poorer area of the city. So, the optimal mark-up (of price over cost) is greater for fares to richer areas than to poorer areas, and the price discriminating firm will charge a higher price for those travelling to the richer areas.
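The underlying logic is the standard inverse-elasticity (Lerner) rule for a profit-maximising mark-up, under which P = MC × ε/(ε − 1) for a price elasticity of demand ε (in absolute value, greater than one). The sketch below uses made-up numbers (these are not Uber's costs or elasticities), just to show why the less elastic route ends up with the higher price:

```python
# Illustrative only: the marginal cost and elasticities are made-up numbers,
# chosen to show why a less elastic (richer-area) route gets a higher price.
def optimal_price(marginal_cost, elasticity):
    # Lerner rule: (P - MC) / P = 1 / elasticity  =>  P = MC * e / (e - 1)
    return marginal_cost * elasticity / (elasticity - 1)

marginal_cost = 10.0                      # hypothetical cost of supplying the ride
print(optimal_price(marginal_cost, 4.0))  # more elastic (poorer-area) route: ~13.33
print(optimal_price(marginal_cost, 2.0))  # less elastic (richer-area) route: 20.0
```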

Is it possible that this bit from the story might be both true and false at the same time?:
While this might sound like the service is separating its customers based on income, Uber's head of product Daniel Graf said this wasn't the case.
"This is not personalised. This has nothing to do with the individual," he told Business Insider.
Technically, Uber aren't separating customers based on income (because they don't know the customers' incomes). But they are separating based on route, and some routes are clearly more popular with higher-income customers (and that is why it is effective to price discriminate). The prices may not be personalised, in the sense that every person pays a different price for the same route in the same market conditions, but it isn't a big step from third-degree price discrimination (group pricing, which is what they are currently doing) to first-degree price discrimination (personalised pricing, where every customer pays a different price, based on their own willingness-to-pay for the service). Uber can gather lots of information on their customers' past behaviour, including which fare prices they were willing to pay (or not) in the past, and use that for pricing in the future.

Price discrimination is not illegal or even unfair in many cases (this is a point I have made before). However, this is definitely an unfolding story that it would pay to keep an eye on.

[HT: Marginal Revolution, for the additional sources on this story]


Monday 22 May 2017

Fresh water is not a public good

The title to this post is deliberately provocative, but also entirely accurate. Fresh water has been in the news quite a bit recently, including this article by Kirsty Johnston in the New Zealand Herald today. Johnston writes:
Currently, common law dictates that naturally-flowing freshwater is treated as a public good, or that "no one owns the water".
By definition, a public good is a good that is non-rival (one person using it doesn't reduce the amount of the good available for everyone else) and non-excludable (if the good is available to anyone, it is available to everyone). It is the first of these that is clearly not true for fresh water, and this should be clear from the first three paragraphs of Johnston's article:
It was the summer of 1983 when Poroti Springs first ran dry. The watercress stopped growing, the eels disappeared and the koura died, unable to survive as their habitat turned to dust.
Local hapu, the kaitiaki of the sacred Northland springs, were dismayed at the near-extinction of its mauri, or life-force, and the loss of their traditional food source.
The culprit? The Whangarei City Council, who, unable to get to the springhead because it was on Maori land, had drilled directly into the aquifer upstream and sucked up so much water for the town supply, the seemingly endless flow ran out.
Whangarei City Council drew water from the aquifer, and that left less water available further downstream - fresh water is a rival good, not a non-rival good. Goods that are rival and non-excludable are common resources. They are vulnerable to the Tragedy of the Commons, a problem that was first described by William Forster Lloyd in 1833, but was brought to modern attention by Garrett Hardin's 1968 article of that title published in the journal Science.

The problem with fresh water is that all users together (as a group) have an incentive to reduce the amount of water drawn from an aquifer (so that it doesn't run dry). However, no individual user has an incentive to reduce the amount of water they draw by themselves, because the cost of their action is spread over all the water users.
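Here's a minimal numerical sketch of that incentive problem (the numbers are made up for illustration): each user captures the full benefit of an extra unit of water they draw, but bears only their share of the cost that the extra draw imposes on all users of the aquifer.

```python
# Made-up numbers: each extra unit drawn is worth 10 to the user who draws it,
# but imposes a total cost of 30 on the aquifer, shared equally across all users.
n_users = 100
private_benefit = 10
shared_cost_total = 30

cost_borne_by_individual = shared_cost_total / n_users  # 0.3
print("Worth it for the individual to draw the extra unit:",
      private_benefit - cost_borne_by_individual > 0)    # True
print("Worth it for the group as a whole:",
      private_benefit - shared_cost_total > 0)           # False
```

Every individual user faces the same calculation, so the aquifer gets over-drawn even though the group as a whole would be better off with restraint.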

The first problem with the current regime is that many water catchments are clearly over-allocated, or else they wouldn't run dry. Over-allocation of water also has negative consequences for water quality.

One solution for common resources is to make them excludable, i.e. making them not available to everyone. That's what the water usage permits that regional councils issue under the Resource Management Act are designed to achieve. However, giving the permits away for nothing (or next to nothing) is clearly crazy. Johnston writes:
Figures obtained by the Herald found there are now 73 companies with consent to take up to 23 billion litres a year, for an average annual fee of just $200 each.
On a volume basis, that works out at one third of a cent per cubic metre of water (1000 litres). In comparison, an Auckland ratepayer is charged $1.40 per cubic litre [sic] by council, with the rest of the country paying anywhere from 70 cents to $3 to tap into their local supply.
That is ridiculous. Water in all uses should be priced the same. Otherwise, the allocation of water is bound to be inefficient, which is the second problem with the current regime. Although, I will point out that the cost of water drawn at the source (such as by a bottling company or an irrigation scheme) should be less than the cost of water at an urban home or business, because of the cost of the infrastructure (and other costs) associated with getting the water from the source to the home or business. But I very much doubt that the difference in cost justifies a factor of 200 or more, as in the paragraph quoted above.

The lack of a consistent price is not the only reason that the current regime is inefficient. The regional councils' permits create a property right over fresh water, which I wrote about in a post last June. To be efficient though, a property rights scheme has to have four key properties. The rights must be: (1) universal; (2) exclusive; (3) transferable; and (4) enforceable. Here's what I wrote in that earlier post:
Universality means that all fresh water use would need to be included in the system (so municipal water supply, irrigation schemes, industrial use, etc. would all have to have permits to extract and use water). There can be few exceptions to this - although hydro power (where the water is not used up or degraded - that is, its use is not rival, as it doesn't deprive others of also using the same water) may be one.
Exclusivity means that all of the benefits and costs associated with extracting and using the water must accrue to the permit-holder. This essentially means that there can be no free riders - no one benefiting from water who does not have a permit to extract and use that water.
Transferability means that the permits can be freely traded voluntarily. So, if you have a permit to extract and use water from a given river, and you find someone else who is willing to pay more for that permit than whatever you value it at (presumably, whatever value it provides to you), then you should be able to sell (or lease out) your permit. This ensures that water will be used in the highest value activities, and means that water has a price (represented by the price of the permits). Failing to sell (or lease out) a permit entails an opportunity cost (foregone income for the permit holder), so selling (or leasing out) a permit to someone else might actually be the best use of the permit.
The problem with the system that regional councils run is that the permits are not transferable - they can't be sold to those who are willing to pay the most for them. Notice that we've gone full circle now - if the permits were freely transferable, then the price of permits would be set in the market for permits, and all users would face the same price for permitted water allocation.

Fresh water may not be a public good, but it is in the public interest to get this right.


Sunday 21 May 2017

Book Review: The Climate Casino

My last book review (Merchants of Doubt) noted my surprise at those authors' comments about William Nordhaus. As I said then, I've gone on to now read Nordhaus's book "The Climate Casino", and as I suspected, Nordhaus is far from one of the 'bad guys' when it comes to evaluating the impacts of climate change, and what should be done to reduce the impacts.

Nordhaus explicitly notes that his work has been misinterpreted by some of the critics of climate change policy (notably this 2012 piece by "sixteen scientists" in the Wall Street Journal - ungated version here). That article says:
A recent study of a wide variety of policy options by Yale economist William Nordhaus showed that nearly the highest benefit-to-cost ratio is achieved for a policy that allows 50 more years of economic growth unimpeded by green house gas controls... And it is likely that more CO2 and the modest warming that may come with it will be an overall benefit to the planet.
In response, Nordhaus notes in his book:
The major point, however, is that the sixteen scientists' summary of the economic analysis is incorrect. My research, along with that of virtually all other economic modelers, shows that acting now rather than waiting 50 years has substantial net benefits... Waiting is not only economically costly but will also make the transition much more costly when it eventually takes place.
Interestingly, other than that point Nordhaus ignores the misinterpretation of his work by Oreskes and Conway in Merchants of Doubt, although he does cite their book. Where does Nordhaus stand overall on climate change? In the concluding chapter he notes:
A fair verdict would find that there is clear and convincing evidence that the planet is warming; that unless strong steps are taken, the earth will experience a warming greater than it has seen for more than a half million years; that the consequences of the changes will be costly for human societies and grave for many unmanaged earth systems; and that the balance of risks indicates that immediate action should be taken to slow and eventually halt emissions of CO2 and other greenhouse gases...
There are no grounds for objective parties simply to ignore the basic results, to call them a hoax, or to argue that we need another half century before we act. 
The book is divided into five main sections, essentially covering: (1) climate data and the evidence for climate change; (2) the impacts of climate change on human and other living systems; (3) strategies for slowing climate change; (4) climate policy; and (5) the politics of climate change. The book is well written, although I found that some parts might be a little too technical for the general reader. Even so, Nordhaus does an excellent job of citing the important literature, presenting the data, and developing a robust and believable argument, so most general readers should be able to get through it.

Nordhaus's favoured policy is clearly a carbon tax, although I get the feeling that he would be happy with any policy that results in pricing carbon and thereby leading to incentives to reduce carbon emissions. In ECON110, we look at both carbon taxes and emissions trading as potential solutions to the climate change externality, and the book gives some good discussion of those options, as well as command-and-control-type regulations.

Finally, having worked for a number of years with physical scientists on climate change projects (see this recent post on one climate change project), I think this book should be required reading for climate scientists. In particular, this bit:
Climate-change policy is a tale of two sciences. The natural sciences have done an admirable job of describing the geophysical aspects of climate change. The science behind global warming is well established...
But understanding the natural science of climate change is only the first step. Designing an effective strategy to control climate change will require the social sciences - the disciplines that study how nations can harness their economic and political systems to achieve their climate goals effectively. These questions are distinct from those addressed by the natural sciences.
To be fair though, the physical scientists that I have been working with are well aware of this point (which is why economists and other social scientists have been part of the research teams!). Climate change is a global problem, and is going to require global solutions. And those solutions involve people and politics, which is why the social sciences (not just economics, but political science, psychology, and other disciplines) need to be part of the conversation.

Friday 19 May 2017

North Korea, nuclear missiles, and credible threats of extortion

Eric Rasmussen wrote an excellent post recently about North Korea's nuclear threat. I thought this would be topical to cover here, given that we discussed game theory in ECON100 this week and Rasmussen's post makes use of sequential games. Rasmussen writes:
Besides defense, though, the North Korean military does have another purpose: to make money...
The army can also make money by extortion. North Korea’s army is too weak to engage in plunder by conquest, but it is strong enough to engage in demanding nuisance fees. Would it be worth $20 billion per year to South Korea to avoid Seoul being bombarded? North Korea could be like the Barbary Pirates of 1800, who were enough of a nuisance to extract a goodly amount of their revenue as protection money, but so poor that that same amount was trivial to the European countries that paid it. The United States, having more principle and less monetary calculation, ironically, than the aristocratic Europeans, proved problematic on the shores of Tripoli and eventually France ended the game by conquering Algeria. However, the Barbary pirates did have a good run for their money.
The problem is making the threat to bombard Seoul credible. The threat is credible if South Korea invades the North. If a South Korean invasion begins, and Kaesong is about to be occupied, North Korea has nothing to lose by destroying Seoul. If South Korea purposely bypasses Kaesong to avoid triggering that response and heads straight for Pyongyang, the Kim regime would see its demise and, again, might as well destroy Seoul and get a bit of revenge. Either way, the threat of bombardment is credible.
On the other hand, if North Korea simply says it will shell Seoul unless $20 billion is deposited in a certain Swiss bank account, that threat is not credible. If South Korea refuses, and North Korea shells Seoul, South Korea will respond by destroying the guns. Once the guns are gone, South can conquer North without fear of retaliation. North Korea will have almost literally “shot its wad.” South Korea may have lost 100,000 people, but that is small comfort for the Kim regime if it loses power. Thus, looking ahead, South can see that North will not retaliate and its $20 billion demand can be safely refused.
The game that Rasmussen describes is laid out in the figure below. The payoffs in the figure are (North Korea, South Korea). We can solve sequential games like this using backward induction - that is, we start with the last decision and work our way backwards. So, in this case the final decision is South Korea's - whether to Bomb Pyongyang, or not. If South Korea bombs Pyongyang, their payoff is -95, compared with -100 for not bombing Pyongyang. So, South Korea will bomb Pyongyang (because -95 is better than -100). Now, working backwards one step, we can work out whether North Korea will bombard Seoul. North Korea knows that South Korea will bomb Pyongyang if they bombard Seoul, so North Korea's payoffs are -200 if they bombard Seoul, or -1 if they don't. So, North Korea will choose not to bombard Seoul (because -1 is better than -200). Their threat to bombard Seoul if South Korea doesn't pay them $20 billion is not credible - North Korea wouldn't follow through on the threat.

Next, we can work out whether South Korea will choose to pay the $20 billion demand. If South Korea pays the $20 billion their payoff is -20, but if they don't pay their payoff is 1 (since North Korea will choose not to bombard Seoul). South Korea will choose not to pay the $20 billion (because 1 is better than -20). Finally, we can work out whether North Korea will threaten Seoul. If they issue the threat, their payoff is -1 (since South Korea will choose not to pay, and then North Korea will choose not to bombard Seoul), but if they don't issue the threat their payoff is 0. So, North Korea will not threaten Seoul (because 0 is better than -1). The subgame perfect Nash equilibrium is that North Korea doesn't threaten Seoul (and South Korea doesn't need to do anything in response, because the game ends right there). Note that Rasmussen has tracked all of the best responses as arrows in the figure.

Rasmussen then goes on to describe how the game would change if North Korea develops a nuclear arsenal:
...nukes are good for extortion in themselves and a good backup for artillery. Imagine now that Kim has nuclear missiles pointed at Seoul. This changes the game... There is now a new move at the end, Nuke Seoul or Not. Many of the arrows change, though, because that last move changes everything.
The game with nuclear weapons is in the figure below. Again, we can solve this game with backward induction. Now the final decision in the game is North Korea's - whether to Nuke Seoul, or not. If North Korea nukes Seoul, their payoff is -180, compared with -200 for not nuking Seoul. So, North Korea will nuke Seoul (because -180 is better than -200). Now, working backwards one step, we can work out whether South Korea will bomb Pyongyang. This time, if South Korea bombs Pyongyang their payoff is -195 (since North Korea will nuke Seoul), but their payoff is -100 if they don't bomb Pyongyang. So, South Korea will not bomb Pyongyang (because -100 is better than -195). Next, we can work out whether North Korea will bombard Seoul. North Korea knows that South Korea will not bomb Pyongyang if they bombard Seoul, so North Korea's payoffs are 2 if they bombard Seoul, or -1 if they don't. So, North Korea will now choose to bombard Seoul (because 2 is better than -1). If North Korea has nuclear weapons, notice that their threat to bombard Seoul is now credible - they will follow through on it.


Next, we can work out whether South Korea will choose to pay the $20 billion demand. If South Korea pays the $20 billion their payoff is -20, but if they don't pay their payoff is now -100 (since North Korea will choose to bombard Seoul, and then South Korea will choose not to bomb Pyongyang because Seoul would then get nuked). South Korea will choose to pay the $20 billion (because -20 is better than -100). Finally, we can work out whether North Korea will threaten Seoul. If they issue the threat, their payoff is 20 (since South Korea will choose to pay them), but if they don't issue the threat their payoff is 0. So, North Korea will now threaten Seoul (because 20 is better than 0). The subgame perfect Nash equilibrium is that North Korea threatens Seoul, and South Korea pays the $20 billion.
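If you want to verify the equilibrium of this second game for yourself, here's a minimal backward induction sketch in Python. A few payoffs that aren't stated explicitly above (noted in the comments) are assumptions carried over from the game without nuclear weapons:

```python
# Backward induction over the nuclear version of Rasmussen's game, as
# described above. Payoffs are (North Korea, South Korea). Payoffs not stated
# explicitly in the post (South Korea's payoff when no threat is made, when it
# refuses and is not bombarded, and when it bombs Pyongyang without being
# nuked) are assumptions carried over from the non-nuclear version.
NK, SK = 0, 1

# A decision node is (player, {action: subtree}); a leaf is a payoff tuple.
game = (NK, {
    "no threat": (0, 0),                          # SK payoff of 0 is assumed
    "threaten": (SK, {
        "pay $20bn": (20, -20),
        "refuse": (NK, {
            "don't bombard": (-1, 1),             # SK payoff of 1 carried over
            "bombard Seoul": (SK, {
                "don't bomb Pyongyang": (2, -100),
                "bomb Pyongyang": (NK, {
                    "don't nuke": (-200, -95),    # SK payoff of -95 carried over
                    "nuke Seoul": (-180, -195),
                }),
            }),
        }),
    }),
})

def solve(node):
    """Return (payoffs, path of actions) under backward induction."""
    if isinstance(node[1], dict):
        player, actions = node
        results = {a: solve(sub) for a, sub in actions.items()}
        best = max(results, key=lambda a: results[a][0][player])
        payoffs, path = results[best]
        return payoffs, [best] + path
    return node, []

print(solve(game))
# ((20, -20), ['threaten', 'pay $20bn']): North Korea threatens, South Korea pays.
```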

So, that provides one more reason (if any were needed) why South Korea (and its allies) should be working hard to prevent North Korea from developing nuclear weapons. Because North Korea could then use them for extortion.


Wednesday 17 May 2017

A/B testing vs. long shots, and funding of research

One of the things we discuss in the ECON100 topic on pricing strategy is A/B testing (yes, it isn't strictly limited to pricing, but we cover it in that topic nonetheless). A/B testing occurs where you provide different versions (of a website, an advertisement, a letter, etc.) to different people, then evaluate how those different versions affect people's interactions with you. In a recent blog post (about slowing productivity growth), Tim Harford provides a couple of examples:
But the same basic approach — using quick-and-dirty experiments, or “A/B testing” — has paid dividends elsewhere. David Cameron’s Behavioural Insight Team, known unofficially as the “nudge unit”, has used simple randomised trials to improve the wording of tax demands and the advice given to job seekers. Google tested 41 shades of blue for its advertising hyperlinks. Designers rolled their eyes — then Google claimed that the experiment had netted an extra $200m in annual revenue. As Mr Haldane says, marginal improvements can add up.
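As a concrete illustration of the mechanics of A/B testing (with entirely made-up click-through numbers, nothing to do with Google's actual experiment), here's a minimal sketch of how two versions of a page might be compared with a simple two-proportion z-test:

```python
import math

# Entirely made-up numbers: visitors shown version A vs version B of a page,
# and how many of each group clicked through.
clicks_a, visitors_a = 430, 10_000
clicks_b, visitors_b = 495, 10_000

p_a, p_b = clicks_a / visitors_a, clicks_b / visitors_b
p_pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se

print(f"CTR A = {p_a:.2%}, CTR B = {p_b:.2%}, z = {z:.2f}")
# |z| > 1.96 would suggest the difference is unlikely to be chance alone,
# so version B would be rolled out to everyone.
```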
Of course, notwithstanding the eye-popping gains claimed about Google's blue links experiment, the gains from A/B testing are marginal. Offering different versions of a website can add to a firm's profits, but it isn't really a feasible way to test out a radical new idea. Harford writes:
An alternative view is that what’s really lacking is a different kind of innovation: the long shot. Unlike marginal gains, long shots usually fail, but can pay off spectacularly enough to overlook 100 failures. The marginal gain is a heated pair of overshorts, the long shot is the Fosbury Flop. If the marginal gain is a text message nudging you to finish a course of antibiotics, the long shot is the development of penicillin. Marginal gains give us zippier web pages; long shots gave us the internet.
These two types of innovation complement each other. Long shot innovations open up new territories; marginal improvements colonise them. The 1870s saw revolutionary breakthroughs in electricity generation and distribution but the dynamo didn’t make much impact on productivity until the 1920s. To take advantage of electric motors, manufacturers needed to rework production lines, redesign factories and retrain workers. Without these marginal improvements the technological breakthrough was of little use.
For productivity gains, we need a mix of both the marginal gains (such as from A/B testing) and the long-shot gains from radical new ideas. Harford only notes in passing that a lot of the long shots of the past arose from research that was either funded by, or at least well supported by, government. Consider as one example all of the spinoff technologies from NASA. Or the Internet, without which Google couldn't have run its blue links experiment at all.

This hasn't always been the case. As this excellent history of research funding notes, prior to World War II a lot of research was funded by private philanthropy. But since WWII, governments have provided the bulk of research funding, particularly for basic research (i.e. research that may not have an immediate application). Many argue that firms aren't interested in basic research, because it doesn't add to their bottom line. Even a lot of applied research is risky for firms to engage in, since it involves often large up-front costs with no certainty of a payoff at the end (see for example my earlier post on the economics of drug development). Which would explain why firms are happier to engage in A/B testing to capture marginal gains, than to chase long shots.

Although that might all be changing, as this article from Science earlier this year notes:
For the first time in the post–World War II era, the federal government no longer funds a majority of the basic research carried out in the United States. Data from ongoing surveys by the National Science Foundation (NSF) show that federal agencies provided only 44% of the $86 billion spent on basic research in 2015. The federal share, which topped 70% throughout the 1960s and ’70s, stood at 61% as recently as 2004 before falling below 50% in 2013.
The sharp drop in recent years is the result of two contrasting trends—a flattening of federal spending on basic research over the past decade and a significant rise in corporate funding of fundamental science since 2012. The first is a familiar story to most academic scientists, who face stiffening competition for federal grants.
Much of the privately-funded basic research is in pharmaceuticals and biotechnology. So, maybe that's where we should expect the next generation of long-shot gains to arise? And if we want long-shot gains in other areas, perhaps it is incumbent on the government to steer some funding in those other directions.

Tuesday 16 May 2017

Negative gearing on rental properties, and the effect on rents

Negative gearing occurs when an investor borrows money to purchase a rental property, and the rental income from that property is less than the costs of operating the rental property (including costs like property rates, insurance, maintenance, and interest on the loan). In many countries, these losses cannot be offset against other income to reduce the investor's taxable income, but in New Zealand they can. This provides a greater incentive to own rental property in New Zealand relative to other countries, especially given that capital gains on the value of the property are not taxable (unless you sell within two years - the so-called 'bright line' test that was recently introduced).
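Here's a stylised, entirely hypothetical example of how the offset works (the dollar amounts and the 33% marginal tax rate are illustrative only):

```python
# Hypothetical numbers for a negatively geared rental property (NZ$ per year).
rental_income = 26_000
expenses = 9_000          # rates, insurance, maintenance
interest = 24_000         # interest on the mortgage
marginal_tax_rate = 0.33  # illustrative marginal tax rate on other income

rental_loss = rental_income - expenses - interest            # -7,000
tax_saved_by_offsetting = -rental_loss * marginal_tax_rate   # 2,310

print(f"Rental loss: ${-rental_loss:,.0f}")
print(f"Tax saved by offsetting against other income: ${tax_saved_by_offsetting:,.0f}")
print(f"After-tax cash loss: ${-rental_loss - tax_saved_by_offsetting:,.0f}")
```

That tax offset (the $2,310 in this example) is what is at stake in the policy debate described below.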

Negative gearing has been in the news this week because of Labour's proposal to ring-fence losses on rental property, so that they would not be able to be used to offset other income and reduce the investor's overall tax liability. The Property Investors Federation immediately squealed (as you would expect - their members will be made worse off if this proposal goes ahead), and claimed that this would raise rents. In contrast, we have this today:
Tenants Protection Association Christchurch manager Di Harwood said tenants shouldn't be worried about the policy backfiring on them...
Harwood said it was difficult to see how cutting the negative gearing tax break would make things harder for the average landlord, as it was a perk at the end of the tax year, rather than a week-by-week cost.
Renters United spokeswoman Kayla Healey said the idea was "fantastic", and getting rid of negative gearing would be good for renters...
She said it was a common argument that such changes would push up rents.
"But at the moment, I think rents now are as high as they possibly could be.
"Demand is so high landlords are charging as much as they possibly can already.
"I don't think this would have any impact overall on rents."
So, who's right in this debate? It actually depends. But before I get to that, let's think carefully about the statement by Kayla Healey: "Demand is so high landlords are charging as much as they possibly can already." I can assure you, rents could go higher. If she thinks rents are already "as high as they possibly could be", she clearly hasn't looked at the cost of renting in San Francisco or New York (yes, that's US$3500 per month for a one-bedroom apartment in San Francisco. And that's the average).

Removing the tax deduction for negative gearing won't affect all landlords, but it will affect at least some of them (91,000 according to this Herald editorial, which does a good job of laying out the debate). That equates to a higher cost for those landlords of owning a property. If we think of a really simple supply-and-demand analysis, higher costs shift the supply curve up and to the left, and raise prices. And that's exactly what the Property Investors Federation is claiming will happen - fewer rental properties available, and higher rents.

But that's not the end of the story. Removing the tax deduction for negative gearing makes owning rental properties less attractive for investors (relative to other investments they could put their money into). Some of them will sell their properties, and on top of that fewer investors will want to buy properties. That leads to both an increase in supply of homes for sale, and a decrease in demand for those homes. This will put some downward pressure on house prices. But lower house prices make it less costly for new landlords to own property (or less costly for existing landlords to add to their property portfolio) - for example, they'll now need a smaller mortgage for any new properties they buy. Over time, this will reduce landlords' costs (shifting the supply curve down and to the right), decreasing rents. On top of that, some previous renters might be able to buy the now-cheaper homes instead, reducing demand for rental properties (which again, lowers rents).

So, who is right? Both sides probably are. Rents will likely rise in the short term if this policy is implemented, but as house prices decrease, rents will fall (or rather, they will be lower relative to what they would have been without the policy - it's unlikely that rents will actually decrease).


Monday 15 May 2017

Does the Internet make people happier?

Following on from the paper I discussed yesterday about Facebook use being associated with lower measures of wellbeing, I thought this 2013 paper by Thierry Penard (University of Rennes), Nicolas Poussing (INSEAD), and Raphael Suire (University of Rennes), published in the Journal of Socio-Economics, was a good one to follow up with (it's open access, but just in case, there's an ungated earlier version here).

The paper is titled "Does the Internet make people happier?", and the authors used data from the 2008 European Social Survey (but only for 1332 respondents from Luxembourg). Intensity of internet use is their main variable of interest, which:
...is measured by four dummies: Onlineday+ if the Internet is used several times per day (38%), OnlineDay if it is used only once per day (22.3%), OnlineMonth if it is seldom connected (17.1%), and NoInternet if the individual never uses the Internet (22.6%).
They essentially look at how life satisfaction (measured on a ten-point scale) changes with intensity of internet use. So far, so good. Except for the fact that the data are cross-sectional (so the analysis can only show correlations), the approach seems reasonable. The main problem arises later in the analysis. They find:
...a significant negative relation between the non-use of the Internet and life satisfaction. However, among the Internet users, there is no significant difference between the heavy and light users. This suggests that being deprived of Internet access (i.e. being on the wrong side of the digital divide) has a detrimental effect on the well-being.
These results hold up as they add more explanatory variables to their model, but as soon as they add health and income, their main result becomes only weakly statistically significant. Here's where the analysis becomes problematic. Penard et al. introduce interactions between internet intensity and other variables (age, marital status, gender, sociability, and income), but in those interactions they treat the ordinal variable of online intensity (described above as four categories) as a continuous variable (0, 1, 2, 3). Treating an ordinal variable as continuous is unjustifiable in this case - there isn't any reason to believe that the difference between no internet use (0) and online once a month (1) is the same as the difference between online once per day (2) and online several times per day (3), but that is how it is treated in this case.
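To make the coding issue concrete, here's a minimal sketch (with a toy variable, not the authors' data) of the difference between treating the intensity measure as a set of dummies and treating it as a 0-3 'continuous' code:

```python
import pandas as pd

# Toy data only (not the ESS data used by Penard et al.).
use = pd.Series(["NoInternet", "OnlineMonth", "OnlineDay", "OnlineDay+"],
                name="internet_use")

# As dummies: each category gets its own coefficient in a regression,
# so the gaps between categories are estimated freely.
dummies = pd.get_dummies(use, prefix="use")

# As a 0-3 'continuous' code: the regression is forced to assume the step
# from NoInternet to OnlineMonth has the same effect as the step from
# OnlineDay to OnlineDay+, which is the assumption criticised above.
codes = use.map({"NoInternet": 0, "OnlineMonth": 1, "OnlineDay": 2, "OnlineDay+": 3})

print(dummies)
print(codes)
```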

Once they add these dodgy variables into their analysis, it makes all of the internet intensity variables statistically significant, and of the expected sign. However, the results can't be believed because the additional variables introduce bias into the analysis.

As an aside, I'm always skeptical when a paper suddenly does one of three things after finding weak or statistically insignificant results in their main analysis: (1) looking at sub-groups or subsets of the data; (2) introducing interaction variables; or (3) using quantile regression techniques. I may talk more about those in a later post, but if they weren't part of the original plan, they really cry out that the researchers were clutching at straws, looking for something to report.

There is further evidence that the initial results by Penard et al. lack robustness, and that comes from their own robustness checks reported in the paper. The authors rightly point out that:
It is possible that omitted variables in the estimated models influence both the intensity of Internet use and well-being, or that people who are more satisfied with their life are more likely to use the Internet (inverse causality).
So, they apply an instrumental variables (IV) analysis (which I've described earlier here). This involves finding a variable that you know affects internet use, but which won't have a direct effect on life satisfaction. Penard et al. use internet use by other family members. [*] Once they run the IV analysis, none of their internet intensity variables are statistically significant (even when they include the dodgy interaction variables).
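For readers unfamiliar with how IV works in practice, here's a minimal two-stage least squares sketch on simulated data (not the authors' data or their exact estimator), using 'family internet use' as the instrument in the way described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Simulated data only: an unobserved confounder drives both internet use and
# life satisfaction, and family internet use serves as the instrument (Z).
confounder = rng.normal(size=n)
family_use = rng.normal(size=n)
internet_use = 0.5 * family_use + 0.8 * confounder + rng.normal(size=n)
satisfaction = 0.0 * internet_use + 1.0 * confounder + rng.normal(size=n)  # true effect is zero

def ols(y, x):
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]  # (intercept, slope)

# Naive OLS is biased upwards by the confounder.
print("OLS slope: ", ols(satisfaction, internet_use)[1])

# Stage 1: predict internet use from the instrument.
stage1 = ols(internet_use, family_use)
fitted_use = stage1[0] + stage1[1] * family_use
# Stage 2: regress satisfaction on the fitted values.
print("2SLS slope:", ols(satisfaction, fitted_use)[1])  # close to the true zero effect
```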

Overall, despite the title this paper doesn't really contribute much to our understanding of whether internet use makes people happier or not. I'd be interested to know what happens to their analysis if you replaced the dodgy interaction variables with interactions based on the proper categorical variables, but I wouldn't expect it to change much (else they would probably have reported those results instead!).

*****

[*] If I wasn't feeling generous I would point out that this variable fails the exclusion restriction. If the rest of your family uses the internet, perhaps they don't spend so much time on interacting with you, which could directly affect your life satisfaction (positively or negatively, depending on your family!).


Sunday 14 May 2017

The more you use Facebook, the worse you feel

That's the conclusion of a new study by Holly Shakya (UC San Diego) and Nicholas Christakis (Yale), which appears to be more robust than earlier studies (such as this one I discussed last November). As I noted about that earlier study, it was likely subject to a Hawthorne effect - that the participants who were asked to give up Facebook for a week anticipated that the study was evaluating whether it increased their happiness, and reported what the researcher expected to see.

In this new study, which Shakya and Christakis described in a recent Harvard Business Review article, they used data from about 8,000 respondents to the Gallup Panel of American households - at least, that's the number of respondents across three waves who agreed to share their Facebook data. No Hawthorne effects here - the respondents would have had no idea that they were being studied at the time they were engaging with Facebook. The variables of interest were:
the number of Facebook friends they had (“friend count”), the number of times in their history of Facebook use that they had clicked “like” on someone else’s content (“lifetime like count”), the number of links they had clicked in the past 30 days (“30-day link count”), and the number of times they had updated their status in the past 30 days (“status count”).
Shakya and Christakis then looked at how self-reported physical and mental health, self-reported life satisfaction, and body mass index (BMI) varied by the intensity of Facebook use. Importantly, because they have multiple observations of data from the same people, they can look at how previous Facebook use is related to current wellbeing.
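
The lagged structure is the key to that. As a rough illustration (with hypothetical variable names and data, not the authors' actual models), the specification amounts to regressing wellbeing in the current wave on Facebook use in the previous wave, while controlling for wellbeing in the previous wave and real-world friendship counts:

```python
# Illustrative lagged panel regression with hypothetical data - not the
# authors' actual models or variable names
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("gallup_waves.csv")  # hypothetical: one row per person per wave
panel = panel.sort_values(["person_id", "wave"])

# Lag the Facebook measures and the wellbeing outcome by one wave within person
for col in ["like_count", "link_count", "status_count", "life_sat"]:
    panel[col + "_lag"] = panel.groupby("person_id")[col].shift(1)

sub = panel.dropna(subset=["like_count_lag", "link_count_lag",
                           "status_count_lag", "life_sat_lag"])

# Current wellbeing on lagged Facebook use, lagged wellbeing, and real-world ties
model = smf.ols(
    "life_sat ~ like_count_lag + link_count_lag + status_count_lag"
    " + life_sat_lag + real_world_friends",
    data=sub,
).fit(cov_type="cluster", cov_kwds={"groups": sub["person_id"]})
print(model.summary())
```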

In the simple cross-sectional analyses, they find that Facebook use is associated with worse physical and mental health, lower life satisfaction, and higher BMI. However, those results are all correlations. It may be that people with worse physical health and higher BMI spend more time on sedentary activities, which include Facebook, and that people who are unhappier or who have mental health problems spend more time on Facebook in a vain attempt to make themselves happier or to feel more connected with people.

The more robust results are those that look at how previous Facebook use is associated with current measures of wellbeing, while also controlling for the number of real-world social connections. In that case, there is no longer any association between Facebook use and BMI, but Facebook 'lifetime like count' and '30-day link count' are both associated with worse mental and physical health, and lower life satisfaction. In addition, 'status count' was associated with worse mental health. All of which suggests that those who have engaged more intensively with Facebook in the past have worse current wellbeing. It still doesn't quite prove causality though, since people who were previously using Facebook a lot are clearly different from those using it less. However, here's what the authors concluded:
The associations between Facebook use and compromised well-being may stem from the simple fact that those with compromised well-being may be more likely to seek solace or attempt to alleviate loneliness by excessively using Facebook in the first place. However, the longitudinal models accounted for well-being measures in wave t when including Facebook use to predict the well-being outcomes in wave t + 1. Also, in our final models, we included degree (or real-world friendship counts) to adjust for this possibility, and the results remained intact. This provides some evidence that the association between Facebook use and compromised well-being is a dynamic process. Although those with compromised wellbeing may be more likely to use Facebook, even after accounting for a person’s initial well-being, we found that using Facebook was associated with a likelihood of diminished future well-being. The exception to this is the case of BMI.
And in their HBR article, the authors write:
Although we can show that Facebook use seems to lead to diminished well-being, we cannot definitively say how that occurs. We did not see much difference between the three types of activity we measured — liking, posting, and clicking links, (although liking and clicking were more consistently significant) — and the impact on the user. This was interesting, because while we expected that “liking” other people’s content would be more likely to lead to negative self-comparisons and thus decreases in well-being, updating one’s own status and clicking links seemed to have a similar effect (although the nature of status updates can ostensibly be the result of social comparison-tailoring your own Facebook image based on how others will perceive it). Overall our results suggests that well-being declines are also matter of quantity of use rather than only quality of use. If this is the case, our results contrast with previous research arguing that the quantity of social media interaction is irrelevant, and that only the quality of those interactions matter.
Certainly, this study is a large step up from the previous study by Morten Tromholt I discussed last year, and provides stronger evidence that we should limit our time spent on social media. Here's to more real world interaction!

Thursday 11 May 2017

The beauty premium in undergraduate study is small, and more attractive women major in economics

Those are two of the conclusions from this 2015 paper by Tatyana Deryugina (University of Illinois at Urbana-Champaign) and Olga Shurchkov (Wellesley College), published in the journal Economic Inquiry (ungated earlier version here). The authors used data from "794 alumnae who graduated from an anonymous women’s college between the years 2002 and 2011", and had the women's pictures rated (for beauty) by 25 male and 25 female students. They then looked at whether more attractive women were more likely to be given higher admission scores, get better grades, choose different majors, and work in different occupations after graduation. They found that:
...once we control for standardized test scores, more attractive women do not receive different admissions ratings, showing that more attractive individuals do not appear to be more capable at the beginning of college, conditional on being admitted...
When we look at college grades, we find that, conditional on their SAT scores and admission rating, more attractive women have a marginally higher GPA... Our conclusion is that if there is a beauty advantage in college courses, it is very small and not driven by bias...
...more attractive women are considerably less likely to major in the sciences and much more likely to major in economics. We find no corresponding selection into humanities, other social sciences, or another group of majors that we label “area studies.”
There is a fairly robust literature demonstrating that there is a beauty premium in the labour market - more attractive people earn more (Daniel Hamermesh has been one of the key authors in this area, and his 2013 book "Beauty Pays" summarises the literature up to that point). Some people assert that the differences demonstrate discrimination against less attractive people. However, other explanations include that more attractive people are more confident and self-assured, and it is those qualities that are being rewarded in the labour market.

This paper offers a different explanation - that more attractive people choose different university majors than less attractive people, and the difference in majors results in different levels of pay (in part because different occupations have different beauty premiums). If attractive women are more likely to major in economics, and economics majors earn more on average than other majors (or that major leads to management jobs where attractiveness is more highly rewarded), then attractiveness would be correlated with wages after graduation. Deryugina and Shurchkov don't have data on the wages of the women in their sample, but they do know their occupation. They write:
Consistent with our results on academic major selection, we find that more attractive women are much more likely to become consultants and managers and much less likely to become scientists and technical workers (including paralegals, technical writers, technicians, and computer programmers). Previous work has shown that earnings vary substantially by major and occupation...
 ...a back-of-the-envelope exercise suggests that at least half of the beauty premium in the labor market is explained by major/occupational choice and that managerial professions exhibit a larger return to beauty than scientific professions.
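You can get a feel for that back-of-the-envelope logic with a simple mediation-style comparison: estimate the beauty premium with and without controls for occupation (or major), and see how much the beauty coefficient shrinks. The sketch below uses entirely hypothetical data and variable names - Deryugina and Shurchkov didn't observe wages, so this is only to illustrate the idea, not their actual exercise:

```python
# Hypothetical illustration of the decomposition logic - not the authors' data
import pandas as pd
import statsmodels.formula.api as smf

grads = pd.read_csv("graduates.csv")  # hypothetical: log_wage, beauty, occupation

total = smf.ols("log_wage ~ beauty", data=grads).fit()
within = smf.ols("log_wage ~ beauty + C(occupation)", data=grads).fit()

# The share of the raw beauty premium that disappears once occupation is held
# constant is (loosely) the part 'explained' by occupational choice
share = 1 - within.params["beauty"] / total.params["beauty"]
print(f"Beauty premium, overall:            {total.params['beauty']:.3f}")
print(f"Beauty premium, within occupations: {within.params['beauty']:.3f}")
print(f"Share explained by occupation:      {share:.0%}")
```
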
This study was based on women at a single US university, so it would be interesting to see whether these results extend to men, to other universities, and especially to co-ed universities. There's some future work to come in this space.

Read more:


Wednesday 10 May 2017

It turns out that getting crayfish drunk counts as research

Quoting this article from The Economist last month:
In a paper just published in Experimental Biology, Matthew Swierzbinski, Andrew Lazarchik and Jens Herberholz of the University of Maryland have shown that a sociable upbringing does indeed increase sensitivity to alcohol. At least, it does if you are a crayfish.
The three researchers’ purpose in studying drunken crayfish is to understand better how alcohol induces behavioural changes. Most recreational drugs, from cocaine and heroin to nicotine and caffeine, have well-understood effects on known receptor molecules in brain cells. That is not, though, true of ethanol, as the type of alcohol which gets people drunk is known to chemists. Ethanol’s underlying molecular mechanisms are poorly understood. But one thing which is known is that crayfish are affected by the same concentrations of the stuff as those that affect humans. Since crayfish also have large, easy-to-study nerve cells that can be examined for clues as to ethanol’s molecular mechanisms, Mr Swierzbinski, Mr Lazarchik and Dr Herberholz are using them to try to track those mechanisms down.
Yes, you read that right. These researchers got crayfish drunk as part of their research (if interested, you can read the paper here - it's open access). Is anyone else imagining that the research team meeting where they came up with this idea was some variant on this:


"Man, we always think of so many brilliant things down here". Like getting crayfish drunk.

There is a serious side to the research, though I'm not sure what it's telling us at this early stage.

Tuesday 9 May 2017

Doctors engage in price discrimination, especially when facing less competition

When it comes to price discrimination (charging different prices to different consumers, based on their willingness-to-pay or price elasticity of demand for the good or service), you might expect doctors to be immune to temptation. After all, it seems reasonable to expect that they aren't profit maximisers, right? Maybe they are.

This 2012 paper by Meliyanni Johar (University of Technology Sydney), published in the journal Economics Letters (ungated version here), investigates the question of whether doctors charge higher income patients more (for a standard general practice (GP) consultation). Higher income patients might be expected to have more inelastic demand for healthcare (because the cost of a visit to the doctor would take up a lower proportion of their income). So, given that the cost of providing a consultation is likely to be the same for high-income and low-income patients, if there is an observed difference in price between these two groups, it is an example of price discrimination.
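
To see why more inelastic demand translates into a higher price, consider the standard pricing rule for a seller with some market power (the Lerner condition), with purely made-up numbers rather than anything estimated from Johar's data:

```latex
% Illustrative only - these numbers are not from Johar's paper.
% Lerner condition for a price-setter facing demand elasticity |e|:
\frac{p - c}{p} = \frac{1}{|\varepsilon|}
\quad\Longrightarrow\quad
p = \frac{c}{1 - 1/|\varepsilon|}
% With a marginal cost of c = $30 per consultation:
%   low-income patient,  |e| = 6 (more elastic demand):  p = 30 / (1 - 1/6) = $36
%   high-income patient, |e| = 3 (less elastic demand):  p = 30 / (1 - 1/3) = $45
% giving a $9 fee gap purely from the difference in elasticities.
```

The same rule also hints at the result further down: the more local competition a GP faces, the more elastic each patient's demand for that particular GP, and the less scope there is to mark up the price to anyone.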

In the paper, Johar uses a large dataset (2.3 million consultation visits, for 267,000 patients in New South Wales). She finds:
As expected, doctors charge higher fees to high income patients, but contrary to the hypothesis that fee is an indicator of quality, high quality doctors charge lower average fees than low quality doctors. The differences in average fee gaps however are small in size and not statistically significant. On average, there is a fee gap of about $6 for all GPs and $9–$10 excluding the 100% bulk-billing GPs. The latter is about 25% of the floor price, which may be regarded as large, for a basic consultation by the same GP.
The measure of 'quality' was "doctor’s participation in chronic disease management programs", which is pretty coarse, so it's not surprising that nothing much shows up using that. The 'bulk-billing' GPs essentially charge no out-of-pocket fee to their patients and only claim the subsidy from the government, so it makes sense to check how excluding them affects the analysis. The extent of price discrimination is pretty striking - higher income patients may be paying 25% more for a GP consultation than lower income patients (a $9-10 gap that is about 25% of the floor price implies a floor price of roughly $36-40).

However, the most interesting results come a bit later in the paper, when Johar disaggregates her results based on the extent of local competition in the GP market (based on deciles of the GP-to-population ratio). She then finds:
The average fee gap in high competition areas is only $3.50, but it is more than double that in low competition areas. There is no difference in fee gap by level of local competition among GPs who selectively bulk-bill their patients, but 70% of GPs in high competition areas adopted 100% bulk-billing.
Competition between doctors matters. Doctors who face more local competition react by reducing the premium they charge higher income patients, compared with doctors who face less competition. Which is exactly what you would expect, since more local competition means that patients have more substitutes and relatively more elastic demand. And on top of that, GPs in high competition areas are more likely to adopt bulk-billing, and charge no additional fee to the patient.

So doctors are more likely to engage in price discrimination if they face less competition, and the difference in prices (between high-income and low-income patients) is larger when there is less competition.

Monday 8 May 2017

Preferences over statistical and economic significance

It's been a couple of years since I read Ziliak and McCloskey's "The Cult of Statistical Significance", but I must have put aside this paper by Erik Thorbecke (Cornell) at the time to read later (I don't see an ungated version, but it appears it might be open access), and I just ran across it again last week. In the paper, Thorbecke looks at Ziliak and McCloskey's argument that economics researchers should focus more on economic significance (what Z&M term "policy oomph"), and less on statistical significance (I reviewed the Ziliak and McCloskey book here, a couple of years ago).

What struck me about Thorbecke's paper, though, was that he used indifference curves to demonstrate the difference between economists with a greater preference for economic significance and economists with a greater preference for statistical significance. Which is timely, given that we only recently covered the consumer choice model in ECON100. Here's the key figure from his paper:


The two 'goods' over which economists' preferences are defined in this model are economic significance (on the x-axis) and statistical significance (on the y-axis) (for more on the distinction, see my earlier post on the Ziliak and McCloskey book). The upper panel (A) shows economists with a greater preference for statistical significance. Notice that the indifference curves are relatively flat. We know from ECON100 that the slope of the indifference curve is equal to the ratio of marginal utilities (-MUx / MUy). For economists with a greater preference for statistical significance, the marginal utility of x (economic significance) is relatively low and the marginal utility of y (statistical significance) is relatively high (since these economists would prefer more statistical significance, rather than economic significance), so -MUx / MUy is small in magnitude (i.e. a flat curve).

The lower panel (B) shows economists with a greater preference for economic significance. Notice that the indifference curves are relatively steep. For economists with a greater preference for economic significance, the marginal utility of x (economic significance) is relatively high and the marginal utility of y (statistical significance) is relatively low (since these economists would prefer more economic significance, rather than statistical significance), so -MUx / MUy is large in magnitude (i.e. a steep curve).
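
To make the flat-versus-steep distinction concrete, here is one family of preferences that generates exactly this pattern. This is my own illustration, not anything from Thorbecke's paper:

```latex
% Illustrative utility function (not from Thorbecke's paper), with
% x = economic significance and y = statistical significance:
U(x, y) = x^{\alpha} y^{1-\alpha}, \qquad 0 < \alpha < 1
% Marginal utilities:
MU_x = \alpha x^{\alpha - 1} y^{1-\alpha}, \qquad
MU_y = (1 - \alpha) x^{\alpha} y^{-\alpha}
% Slope of an indifference curve:
-\frac{MU_x}{MU_y} = -\frac{\alpha}{1-\alpha} \cdot \frac{y}{x}
% A small alpha (little weight on economic significance) makes the curves
% flat, as in Panel (A); an alpha close to one makes them steep, as in Panel (B).
```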

Notice that, despite the different shapes of the indifference curves, both groups of economists would prefer more economic significance and more statistical significance. That is, both groups prefer indifference curves further up and to the right. They just differ in terms of which of those two 'goods' is more important.

Where's the budget constraint? I don't think there is one, at least not in the sense of a continuous line like we see in the basic consumer choice model from ECON100. However, there may still be a trade-off between economic significance and statistical significance in the 'preferred' model that we report in any given research paper. And the economists with the preferences in Panel (A) would be more likely to prefer the model with more statistical significance, while the economists with the preferences in Panel (B) would be more likely to prefer the model with more economic significance.

Which of the two groups of economists is right? They both are. In the model, the two groups of economists make decisions based on their own preferences, maximising their utility (by attaining the highest possible indifference curve - the highest level of utility). In his paper, Thorbecke concludes:
Ultimately, the determination of economic importance is an issue that can only be approached within a specific context and should be left to the (subjective) judgments of individual researchers.
Which would be based on the shape of their individual indifference curves.

Sunday 7 May 2017

William Baumol, 1922-2017

On May 4, William Baumol became the latest eminent economist to pass away. The Washington Post has an excellent obituary. To my mind, he was best known for his work on entrepreneurship, and for the eponymous cost disease: a deceptively simple explanation for why labour-intensive industries such as health care, education, and the arts face increasing costs relative to other industries. His book (which I reviewed here last year) makes the important point that we shouldn't fear the cost disease, since productivity gains in other industries would more than offset the increasing costs in the industries subject to the cost disease.

I recall at a lunch with my colleagues last year, at around the time of the announcement of Hart and Holmstrom as the 2016 winners of the Nobel Prize in Economics, making the point that the Nobel committee needed to hurry up and give the award to Baumol, or it might be a significant missed opportunity, given his advanced years (he was 94 at the time). This is one of those times that I wish I had been wrong.

A Fine Theorem provides an excellent summary of Baumol's wider work, including this:
I’ve always thought of Baumol as being the lineal descendant of Schumpeter, the original great thinker on entrepreneurship...
Indeed. I hadn't realised that, alongside his other contributions, he was responsible for the idea of contestable markets. He certainly had a diverse portfolio of interests, including a paper I discussed here about the psychic payoffs to workers in sports and the arts. There are several bits from his work that I reference in teaching ECON110.

Despite his contributions, my discussions with colleagues suggest he was underappreciated, although he did get more than a few mentions in the 2016 Nobel predictions thread on Econ Job Market Rumors. He will be missed.

[HT: Marginal Revolution]

Wednesday 3 May 2017

Some papers just shouldn't be published

I don't post about all of the papers I read. Some of them are just less interesting than I thought when I first added them to the must-get-around-to-reading-that-sometime pile. Rarely, though, there is a paper that is just pretty awful. One example is this paper by Vsevolod Andreev (Chuvash State University in Russia), entitled "Will there be a revolution in Russia in 2017?" and published in the Journal of Policy Modeling (ungated here) in 2015 (it's been in the pile for a while, but now seemed like a good time to crack it open).

The paper appears mostly to be an egregious attempt to inflate the author's own citation counts - 11 of the 29 papers cited in this article are the author's own work. Take this paragraph, for example:
Also in my previous studies (Andreev & Jarmulina, 2009; Andreev & Karpova, 2007; Andreev & Karpova, 2008; Andreev & Semenov, 2010a; Andreev & Semenov, 2010b; Andreev & Semenov, 2010c; Andreev & Semenov, 2012; Andreev & Semenov, 2013; Andreev & Vasileva, 2009; Karpova & Andreev, 2007; Karpova & Andreev, 2008) the mathematical models of dynamics of socio-economic systems, created on the principles of predator-prey models, are proposed and investigated. These models were applied for analysis at different time stages of dynamics of socio-economic systems of Russia (Andreev & Jarmulina, 2009; Andreev & Karpova, 2007; Andreev & Karpova, 2008; Andreev & Semenov, 2013; Andreev & Vasileva, 2009; Karpova & Andreev, 2007; Karpova & Andreev, 2008) and of the USA (Andreev & Semenov, 2010a; Andreev & Semenov, 2010b; Andreev & Semenov, 2010c; Andreev & Semenov, 2012). These studies results quite adequately describe the observed real situation.
That's right. That paragraph includes 22 citations of the author's own work, which might be very impressive if it were a solid body of research in a neglected subfield. But in this case, none of those other papers has been published in a reputable journal, and I wouldn't be at all surprised if they were just very similar versions of this paper (given the titles in the reference list). Now, I do self-cite in my own papers where appropriate, as does every other researcher, but that paragraph is really taking the piss.

Overall, the paper would be hilarious, if it hadn't actually been accepted for publication. Of course, if it had been published in Economic Inquiry, I might have put it down to being a bit of a joke (Economic Inquiry is the journal responsible for classics like "Riccardo Trezzi is immortal" (see my post on that one), and "On the Efficiency of AC/DC: Bon Scott versus Brian Johnson"). However, in this case Andreev's paper is mostly gibberish. He uses a predator-prey model (which sounds impressive, but it's really not) involving relationships between incomes, GDP, government spending on basic and applied research, population, and the outflow of capital. The model appears to perform reasonably well in-sample, but extrapolated forward in time it all goes a bit haywire, leading Andreev to note that:
The blow-up regime is understood to occur when the behavior of one or several functions xi(t) of the system state begin to grow uncontrollably during the small time interval...
And then conclude that:
...it is possible to conclude that in Russia there are symptoms of a revolutionary situation at the end of 2016 and at the beginning of 2017.
Because the dodgy model extrapolated out-of-sample exhibits weird dynamic behaviour (which isn't at all surprising for a nonlinear model), Russia will have a revolution in 2017. I guess we will see. Peer review is supposed to weed out this sort of rubbish. As a reviewer, I would have rejected it in a heartbeat.
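
If you want to see how easily that sort of 'blow-up regime' can be manufactured, here's a toy simulation - emphatically not Andreev's actual model, just a generic predator-prey system with a small self-reinforcing term added, and made-up parameters. The point is only that a nonlinear model can look well-behaved over the window it was tuned on and then explode when extrapolated.

```python
# Toy illustration only - not Andreev's model or data. A predator-prey system
# with a small self-reinforcing term (eps * x**2) has an unstable equilibrium,
# so the cycles grow in amplitude the further the model is pushed beyond the
# window it was calibrated on.
import numpy as np
from scipy.integrate import solve_ivp

def system(t, z, a=1.0, b=0.8, c=0.6, d=0.9, eps=0.15):
    x, y = z
    dx = a * x - b * x * y + eps * x**2   # eps > 0 destabilises the usual cycles
    dy = -c * y + d * x * y
    return [dx, dy]

t_grid = np.linspace(0.0, 80.0, 4000)
sol = solve_ivp(system, (0.0, 80.0), [1.0, 1.0], t_eval=t_grid, rtol=1e-6)

# Compare the size of the swings inside and beyond a notional 'sample' window
inside = sol.y[0][sol.t <= 20].max()
beyond = sol.y[0][sol.t > 20].max()
print(f"largest x for t <= 20 (the 'sample'):  {inside:.2f}")
print(f"largest x for t  > 20 (extrapolated):  {beyond:.2f}")
```

Run over the fitted window alone, nothing looks alarming; run it out far enough and the variable 'grows uncontrollably', which tells you about the model, not about the world.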

Tuesday 2 May 2017

Singing to children as a signal of attention

The other day I wrote a post about signalling by university students. Signalling is actually very common. Consider this example from a recent research paper published in the journal Evolution & Human Behavior (open access) by Samuel Mehr and Max Krasnow (both Harvard), as described on Science Daily earlier in the year:
A new theory paper, co-authored by Graduate School of Education doctoral student Samuel Mehr and Assistant Professor of Psychology Max Krasnow, proposes that infant-directed song evolved as a way for parents to signal to children that their needs are being met, while still freeing up parents to perform other tasks, like foraging for food, or caring for other offspring. Infant-directed song might later have evolved into the more complex forms of music we hear in our modern world...
Mehr and Krasnow took the idea of parent-offspring conflict and applied it to attention. They predict that children should 'want' a greater share of their parents' attention than their parents 'want' to give them. But how does the child know she has her parent's attention? The solution, Krasnow said, is that parents were forced to develop some method of signaling to their offspring that their desire for attention was being met.
"I could simply look at my children, and they might have some assurance that I'm attending to them," Krasnow said. "But I could be looking at them and thinking of something else, or looking at them and focusing on my cell phone, and not really attending to them at all. They should want a better signal than that."
Remember from my previous post that signals are a way for the informed party to reveal private information to the uninformed party. In this case, the private information is about parents' attention, the informed party is the parent (they know how much attention they are giving to their child), and the child is the uninformed party (they don't know for sure how much attention they are receiving). I also mentioned that in order for private information to be a problem it must result in some market failure. In this case, there isn't a market but there is a failure - if the infant feels like they aren't receiving sufficient attention from their parents, they respond by crying. Nowadays, crying children might be sleep-deprivation-inducing or mildly annoying, but for our ancient ancestors trying to hide from furry saber-toothed death machines, the consequences could be pretty serious. [*]

Now, for a signal to be effective it must be: (1) costly; and (2) costly in a way that makes it unattractive for those with lower quality attributes (in this case, parents who aren't paying attention to their child) to attempt. Mehr and Krasnow offer this:
What makes such signals more honest, Mehr and Krasnow think, is the cost associated with them -- meaning that by sending a signal to an infant, a parent cannot be sending it to someone else, sending it but lying about it, etc. "Infant directed song has a lot of these costs built in. I can't be singing to you and be talking to someone else," Krasnow said. "It's unlikely I'm running away, because I need to control my voice to sing. You can tell the orientation of my head, even without looking at me, you can tell how far away I am, even without looking."
Mehr notes that infant-directed song provides lots of opportunities for parents to signal their attention to infants: "Parents adjust their singing in real time, by altering the melody, rhythm, tempo, timbre, of their singing, adding hand motions, bouncing, touching, and facial expressions, and so on. All of these features can be finely tuned to the baby's affective state -- or not. The match or mismatch between baby behavior and parent singing could be informative for whether or not the parent is paying attention to the infant." 
Which seems to suggest that singing would provide a good signal, since it involves a cost, and if you're not actually paying attention to the child it would be difficult or unattractive to put in the additional effort required to "[alter] the melody, rhythm, tempo, timbre, of their singing, adding hand motions, bouncing, touching, and facial expressions, and so on".

Of course, if you use your phone or tablet as a babysitting aid and avoid the singing, it might just be giving the opposite signal.

[HT: Marginal Revolution, back in February]

*****

[*] Actually, that's not the best example, since singing would probably be just as likely to attract the unwelcome attention of a predator as would the child crying, unless you really believe that music soothes savage beasts (it doesn't soothe savage stock markets).