Friday, 30 October 2015

Why divestment doesn't punish firms' share price (and why you can't reward ethical firms that way either)

Some of you will have been following the campaign to urge the University of Otago to adopt a policy of not investing in fossil fuels (following a similar policy adopted by Victoria University and other organisations). The University has deferred the decision for now.

I haven't been following the Otago debate very closely, but a recent article in The New Yorker by William MacAskill caught my attention. In the article, MacAskill lays out the argument against divestment, if the purpose of the divestment is to punish the companies you are divesting from:
if the aim of divestment campaigns is to reduce companies’ profitability by directly reducing their share prices, then these campaigns are misguided. An example: suppose that the market price for a share in ExxonMobil is ten dollars, and that, as a result of a divestment campaign, a university decides to divest from ExxonMobil, and it sells the shares for nine dollars each. What happens then?
Well, what happens is that someone who doesn’t have ethical concerns will snap up the bargain. They’ll buy the shares for nine dollars apiece, and then sell them for ten dollars to one of the other thousands of investors who don’t share the university’s moral scruples. The market price stays the same; the company loses no money and notices no difference. As long as there are economic incentives to invest in a certain stock, there will be individuals and groups—most of whom are not under any pressure to act in a socially responsible way—willing to jump on the opportunity. These people will undo the good that socially conscious investors are trying to do.
Consider the efficient markets hypothesis - all information (good and bad) about the firm's future cash flows is already captured in the firm's share price. Essentially, the firm's market capitalisation (the value of all of its shares) should be the discounted value of all of its future cash flows (this isn't exactly true in practice, but for firms listed on Western share markets it usually isn't too far off). So, when the Fossil Free petitioners try to argue:
We urge the University of Otago and associated Foundation Trust to avoid fossil fuel extraction investments as they are economically insecure in the long-term.
That information is already captured in the share prices of fossil fuel extraction firms. This information is not a big secret. Holders of shares in those firms are already aware of this information, and how it will impact future cash flows for the firms, and have evaluated whether it makes sense to hold onto those shares.
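To make the valuation logic concrete, here's a minimal sketch in Python of the discounted cash flow idea, with entirely made-up numbers: the value of the firm is the discounted sum of its expected future cash flows, so any expected decline in those cash flows is priced in as soon as the information becomes public, not when the decline eventually arrives.

```python
# A minimal sketch with made-up numbers: the firm's value as the
# discounted sum of its expected future cash flows.

def present_value(cash_flows, discount_rate):
    """Discounted value of a stream of future annual cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

discount_rate = 0.08

# Expected annual cash flows (in $ millions) if extraction carries on as usual...
business_as_usual = [100] * 30

# ...versus expectations that extraction becomes largely uneconomic after year 15.
stranded_assets = [100] * 15 + [20] * 15

print(round(present_value(business_as_usual, discount_rate), 1))  # ~1125.8
print(round(present_value(stranded_assets, discount_rate), 1))    # ~909.9

# Under the efficient markets hypothesis, the share price already embeds the
# lower valuation as soon as the 'stranded assets' information is public -
# divestment itself adds nothing new for the market to price in.
```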

If a fund divests itself of fossil fuel shares, then that increases the supply of shares to the market, and the price will fall slightly. Given that the previous price had already accounted for the future decreases in cash flows, the shares are now under-priced relative to their 'true' value. Buyers will swoop in to buy those shares, increasing demand and pushing the price back up until it returns to its original level. So, we shouldn't expect any impact of divestment on the share price. Which is indeed what research has found, as MacAskill notes in the article:
Studies of divestment campaigns in other industries, such as weapons, gambling, pornography, and tobacco, suggest that they have little or no direct impact on share prices. For example, the author of a study on divestment from oil companies in Sudan wrote, “Thanks to China and a trio of Asian national oil companies, oil still flows in Sudan.” The divestment campaign served to benefit certain unethical shareholders while failing to alter the price of the stock.
This argument applies equally (albeit in reverse) to "ethical investment". It's not possible to reward green firms through a higher share price. Since the price already reflects the true value of the firm, if a fund wants to buy shares in Corporation Green-Yay, that will slightly increase the demand for shares in Corporation Green-Yay, and slightly increase the share price. Other holders of those shares will recognise they are now overvalued compared to their 'true' value, and sell them. This increases the supply of those shares and lowers the price back to where it was before. Again, no impact on the share price.

Finally, MacAskill notes that divestment may have a positive effect in the long run:
Campaigns can use divestment as a media hook to generate stigma around certain industries, such as fossil fuel. In the long run, such stigma might lead to fewer people wanting to work at fossil-fuel companies, driving up the cost of labor for those corporations, and perhaps to greater popular support for better climate policies.
This is a much better argument in favor of divestment than the assertion that you’re directly reducing companies’ share price. If divestment campaigns are run, it should be with the aim of stigmatization in mind.
And similarly, there may be long-run benefits for green firms if campaigns to invest in them are used as a media hook to encourage more people to work at those firms (thus lowering their labour costs). Essentially, this would create a compensating differential between the wage paid to workers (in the same job) at the green firm, and those at the fossil fuel extraction firm. Because working at the fossil fuel extraction firm is less attractive than working at the green firm, workers must be compensated (through a higher wage) for working at the fossil fuel extraction firm. One might argue that this compensating differential already exists, but perhaps divestment campaigns could increase its size?

[HT: Marginal Revolution]

Wednesday, 28 October 2015

20 common cognitive biases, plus more

I've written a number of posts about behavioural economics and the implications of various cognitive biases (you can see a selection of them under the 'behavioural economics' tag). Business Insider Australia recently provided the following handy summary of 20 common cognitive biases that affect our decision-making:


I note, though, that they've missed some pretty important ones from a behavioural economics perspective, like the illusion of knowledge (we think we know more than we actually do), present bias (we tend to heavily discount things that will happen in the future relative to things that will happen now), loss aversion (we value losses much more than we do equivalent gains), mental accounting (we keep mental accounts that we like to keep in positive balances), status quo bias (we don't like to change our minds), and framing (how a problem is framed can make a difference for the choices we make). Still, it's a handy reference list for some of the many biases we hold.

Monday, 26 October 2015

Lord of the Rings, tourism arrivals and harvesting

I have to say that I have been quite skeptical of the anticipated impacts of movie production on tourist arrivals. Most of what has been written, like the Tourism New Zealand site, seems to be based on anecdote, seasoned with a generous dose of excessive optimism. For example, this report from NZIER shows some evidence of a rise in tourist arrivals following the release of The Hobbit, although the analysis is fairly weak and it doesn't demonstrate causality.

I just read a paper (ungated version here) that has been sitting in my 'to-be-read' pile since 2012, by Heather Mitchell and Mark Fergusson Stewart (both from RMIT). In the paper, they look at time series data on tourist arrivals in New Zealand around the time of the release of the Lord of the Rings films, as well as in Australia around the time of the release of the Mad Max films and the Crocodile Dundee films. For good measure, they look at data on employment in hotels and restaurants in Kazakhstan around the time of the release of Borat.

Focusing on the New Zealand-specific results, they find no significant impact of the first Lord of the Rings film on tourist arrivals, but significant impacts following the release of the second and third films. Specifically, after the third film they find that monthly tourist arrivals were six percent higher. However, this was offset by a decrease in the upward trend in tourist arrivals. This slower trend increase was enough to offset the short-term boost in tourist arrivals in less than two years.
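To illustrate how a level boost can be swamped by a slower trend, here's a small back-of-the-envelope sketch in Python. The numbers are entirely hypothetical (they are not taken from the Mitchell and Stewart paper); they are just chosen so that a 6 percent level shift is offset within two years.

```python
# Entirely hypothetical numbers (not from the paper): a 6% level shift in
# monthly arrivals after release, offset by a slower underlying trend.

baseline_level = 100_000   # monthly arrivals at the time of release
boost = 6_000              # the 6% level shift after the film's release
baseline_growth = 1_000    # pre-release trend: extra arrivals per month
post_growth = 400          # assumed slower post-release trend

cumulative_gap = 0
for month in range(1, 61):
    counterfactual = baseline_level + baseline_growth * month
    actual = baseline_level + boost + post_growth * month
    cumulative_gap += actual - counterfactual
    if cumulative_gap <= 0:
        print(f"Extra arrivals fully offset after {month} months")
        break
# With these (made-up) numbers, the boost is 'harvested' back within two years.
```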

In the epidemiology literature (specifically the literature on mortality) there is a term called 'harvesting', which refers to events that are brought forward in time by the effect of exposure to some stimulus. For instance, a spell of extremely hot weather might temporarily increase mortality, but much of that mortality would be among the frail, who are at high risk of death already. Short-term mortality may be higher as a result, but overall mortality might barely change.

I suspect that we probably observe a harvesting effect on tourist arrivals. Potential tourists who have an interest in New Zealand might be induced by the Lord of the Rings or The Hobbit to visit earlier than they otherwise would have. That would lead to a short-term increase in tourist arrivals, but since those tourists would otherwise have visited later anyway, the longer-term effect might be close to zero.

Unsurprisingly, the NZIER report doesn't even mention the possibility of harvesting. I'm not convinced that the Mitchell and Stewart paper does the best job of evaluating this either, since they look at only a relatively short time series after the release of the films - it would be interesting to know whether the decrease in trend arrivals growth is persistent, or whether it eventually tapers out and there is a return to the long-run trend growth. Perhaps there's an opportunity here for an honours project - the required data are easily obtainable from Statistics New Zealand.

The question of whether there is sustained growth in tourist arrivals is important. It is often trumpeted as a reason to subsidise movie production (in addition to direct job creation in movie production and related industries). However, if there is only a short-term tourism impact and the long-term tourism impact is negligible, then that changes the cost-benefit evaluation of movie subsidies. That assumes, of course, that the government even carefully considered the costs and benefits of the deal it did with Warner Bros, and the other movie subsidies it provides.

Sunday, 25 October 2015

Uber's surge pricing works

Chris Nosko (University of Chicago) has access to some pretty privileged data - Uber's internal data on app usage, requests, completed journeys, and pricing. In a recent paper (PDF) he, along with Jonathan Hall and Cory Kendrick (both Uber), uses the data to illustrate how Uber's surge pricing works to manage demand:
Uber is a platform that connects riders to independent drivers (“driver­partners”) who are nearby. Riders open the Uber app to see the availability of rides and the price and can then choose to request a ride. If a rider chooses to request a ride, the app calculates the fare based on time and distance traveled and bills the rider electronically. In the event that there are relatively more riders than driver­partners such that the availability of driver­partners is limited and the wait time for a ride is high or no rides are available, Uber employs a “surge pricing” algorithm to equilibrate supply and demand. The algorithm assigns a simple “multiplier” that multiplies the standard fare in order to derive the “surged” fare. The surge multiplier is presented to a rider in the app, and the rider must acknowledge the higher price before a request is sent to nearby drivers.
Essentially, surge pricing is used to manage excess demand - when the quantity of Uber rides demanded by users exceeds the quantity of rides available from drivers at that time. In other words, there is a shortage of available Uber drivers. Simple demand-and-supply tells us that this occurs when the price is below equilibrium, as in the diagram below. At the price P0, the quantity of Uber rides demanded is QD, but the number of Uber rides available is just QS, and the difference is the shortage (or excess demand). In this situation, the price should rise towards the equilibrium price (P1), which is what surge pricing achieves.


A traditional taxi firm would typically remain at their standard pricing, which means the excess demand is managed instead by people waiting for a taxi to become available. With surge pricing, two things happen: (1) since the price is higher, fewer people demand rides from Uber (they choose some alternative form of transport instead); and (2) more Uber drivers make themselves available (to take advantage of the higher potential earnings). Notice that at the price P1, the quantity of Uber rides demanded (Q1) is less than QD, and the quantity of Uber rides supplied (Q1) is greater than QS.
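Here's a toy illustration in Python of how a surge multiplier can clear the market. The demand and supply functions and all of the numbers are made up for illustration - this is not Uber's actual algorithm, just the textbook equilibration logic described above.

```python
# A toy linear model of surge pricing (not Uber's actual algorithm): demand
# falls and driver supply rises as the surge multiplier increases.

def rides_demanded(multiplier, base_demand=1000, slope=400):
    """Rides requested at a given surge multiplier."""
    return max(base_demand - slope * (multiplier - 1), 0)

def rides_supplied(multiplier, base_supply=600, slope=300):
    """Driver-partners available at a given surge multiplier."""
    return base_supply + slope * (multiplier - 1)

# At the standard fare (multiplier = 1) there is excess demand:
print(rides_demanded(1.0) - rides_supplied(1.0))   # 400.0 rides short

# Raise the multiplier in 0.1 steps until quantity supplied meets quantity demanded.
multiplier = 1.0
while rides_demanded(multiplier) > rides_supplied(multiplier):
    multiplier += 0.1

print(round(multiplier, 1))   # 1.6: the (approximate) market-clearing multiplier
```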

Anyway, the Hall et al. paper shows both of these effects, using data from two nights in the area surrounding Madison Square Garden in New York: (1) March 21, 2015, the night of a sold-out Ariana Grande concert at the Garden; and (2) New Year's Eve 2014-15, when a software problem caused the surge pricing to fail for 26 minutes between 1am and 2am.

They find, for the first night:
...efficiency gains came from both an increase in the supply of driver­partners on the road and from an allocation of supply to those that valued rides the most.
And for the second night:
...we saw that in the absence of surge pricing, key indicators of the health of the marketplace deteriorated dramatically. Drivers were likely less attracted to the platform while, at the same time, riders requested rides in increasing numbers because the price mechanism was not forcing them to make the proper economic tradeoff between the true availability of driver­partners and an alternative transportation option. Because of these problems, completion rates fell dramatically and wait times increased, causing a failure of the system from an economic efficiency perspective.
[HT: Marginal Revolution]

Friday, 23 October 2015

Megan McArdle on network effects

A recent Bloomberg article by Megan McArdle does a great job of explaining network effects:
So just what is a network effect? The term describes a product that gets more valuable as more people adopt it, a system that becomes stronger as more nodes are added to the network. The classic example of network effects is a fax machine. The first proud owner of a fax machine has a very expensive paperweight. The second owner can transmit documents to the guy with the pricey paperweight. The thousandth owner has a useful, but limited, piece of equipment. The millionth owner has a pretty handy little gadget.
McArdle's article also does a nice job of explaining switching costs, and how they are different from network effects. Both concepts are covered in ECON100 at Waikato because of their importance to business decision-making. As McArdle notes, network effects are really important because they can create a situation where the equilibrium number of firms in the market is one (a monopoly), which confers a large degree of market power on that firm.

How does this arise? Normally, the demand curve for a good is downward sloping. Consider a good where each person can only buy one unit. At a given price, only the people who value it more than the price will buy it. As the price falls, the number of buyers increases because the marginal value of the good to those additional buyers is now above the (lower) price.

However, a good with network effects works differently. The value to the buyer depends on two things: (1) the standard downward-sloping price effect described above; and (2) the number of other users, with value increasing as the number of users increases. So, the demand curve for a good with network effects looks like the figure below (MV is marginal value). For the first few buyers the value is low (but not zero - some people like to have expensive paperweights), but as more users buy the good its marginal value to each additional user rises. However, eventually the first effect offsets this (some potential users are not attracted by your product, no matter how many users it has), and we end up with the more standard downward-sloping demand curve.


Now consider how you choose to price the good with network effects. Let's say you priced your new network-effect good at the price P0. No consumer would buy this product. Why? Because the price is above the marginal value for the first consumer. Buying this product would make them worse off. This is why firms with network-effect goods often start by giving their product away for free. To get to the point where the marginal value is greater than the price of P0, you would need to give away at least Q0 units of the good. After that the marginal value for every additional buyer is greater than the price, until we get to the equilibrium quantity at Q1. In other words, once you've got past the tipping point, market demand for your network-effect good will accelerate, potentially generating large profit opportunities.
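A small sketch in Python of that tipping-point logic, using a made-up marginal value function (the numbers and functional form are purely illustrative): marginal value starts below the price, rises with the installed base, and eventually falls again.

```python
# A made-up marginal value curve for a network-effect good: value rises with
# the number of existing users, but the usual downward-sloping price effect
# eventually dominates.

def marginal_value(n):
    """Value of the good to the n-th buyer (purely illustrative functional form)."""
    network_benefit = 0.8 * n             # worth more when more users are on board
    standalone_value = 5 - 0.005 * n ** 2 # later adopters value it less, all else equal
    return max(network_benefit + standalone_value, 0)

price = 10.0

# How many units must be given away (or sold at a loss) before the marginal
# value to the next buyer exceeds the price?
n = 1
while marginal_value(n) < price:
    n += 1
print(n)   # the 'tipping point' (Q0 in the figure) - 7 with these numbers

# Beyond the tipping point every additional buyer values the good above the
# price, until marginal value falls back to the price at the equilibrium (Q1).
```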

However, that's not the end of the story. McArdle cautions:
When your network is growing rapidly, things are splendid! Every new user increases the value of your network and encourages even more people to join. But there’s a small catch: What happens if your network starts shrinking? Suddenly, it’s getting less valuable, which means more people are likely to leave, which makes it even less valuable. Rinse and repeat all the way to the court-appointed receiver’s office.
So, while network effects can be a source of market power and monopoly rents, these benefits are not permanent, and not guaranteed. One look at the roll-call of failed network goods (MySpace, Bebo, Betamax tapes), or formerly successful network goods (Microsoft operating systems, landline telephones, VHS tapes) should be enough to tell you that.

Tuesday, 20 October 2015

The optimal open road speed limit

A couple of years ago, I wrote a post about speed enforcement. However, it remains an open empirical question what the optimal open road speed limit is. When a driver increases their speed, the risk of an accident increases and so does the severity of any accident. That imposes a cost on the driver, and because of the non-linearity of accident damage, the marginal cost is increasing (i.e. the difference in accident severity between an accident at 110 km/hr and one at 100 km/hr is larger than the difference between an accident at 60 km/hr and one at 50 km/hr, even though the speed differences are the same in absolute terms). That cost may be offset by benefits in the form of lower travel times (i.e. saving the time cost of travel). The marginal benefit decreases with speed (every extra km/hr saves some time, but each extra km/hr saves less additional time as you go faster). The optimal speed for the driver is the speed where the marginal benefit of driving faster is exactly equal to the marginal cost. Beyond that point, driving a little bit faster entails a higher additional cost than the additional benefit the driver would receive (which is why they will drive no faster).
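The diminishing marginal benefit is easy to check with some quick arithmetic on travel times over a fixed distance (a minimal sketch in Python):

```python
# Time to drive 100 km at various speeds: going 10 km/hr faster always saves
# time, but each extra 10 km/hr saves less time than the previous 10 did.

distance_km = 100
for speed in range(50, 120, 10):
    minutes = 60 * distance_km / speed
    minutes_faster = 60 * distance_km / (speed + 10)
    print(f"{speed} -> {speed + 10} km/hr saves {minutes - minutes_faster:.1f} minutes")

# 50 -> 60 km/hr saves 20.0 minutes ... 110 -> 120 km/hr saves only 4.5 minutes
```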

However, when setting a speed limit a benevolent social planner doesn't only care about the costs and benefits to drivers. Speeding drivers place additional costs on others (externalities), including risk to other drivers and pedestrians, and increased pollution and its associated health costs. So, if we want to know what the optimal open road speed limit is for society (not just for drivers themselves), we need to consider those additional external costs. The optimal speed limit for society will therefore be lower than the optimal speed for each individual driver (which might be one explanation for the number of drivers who consistently drive over the current limits).

Cost-benefit analyses of the speed limit are rare. Which is why I was very interested to read this paper in the Journal of Public Economics earlier this year (sorry I don't see an ungated version online) by Arthur van Benthem (Wharton School, University of Pennsylvania). In the paper, van Benthem undertakes a fairly exhaustive evaluation of the costs and benefits of a series of speed limit changes that occurred in 1987 and 1996 in the U.S. (specifically, he looks at the effects in California, Oregon and Washington states). Unlike past studies that have mostly evaluated the costs purely in terms of road accident fatalities (valued using the value of a statistical life, or VSL), this paper looks at a wider range of private and external costs, as shown in the figure below.


van Benthem first evaluates the impact of the speed limit changes on each of the variables above (fatal accidents, non-fatal accidents, infant health, etc.) by comparing roads where the speed limits were raised (mostly from 55 mph to 65 mph) with control roads where speed limits remained unchanged. He tests for and doesn't find any substitution effects, so drivers weren't induced into driving more on the roads with higher speed limits, when compared with the control roads (which is unsurprising because they are generally located in different areas).

What were the effects? In terms of the outcome variables noted in the figure above, he finds:
...that a 10 mph speed limit increase leads to a 3–4 mph increase in travel speed, 9–15% more accidents, 34–60% more fatal accidents, a shift towards more severe accidents, and elevated pollution concentrations of 14–24% (carbon monoxide), 8–15% (nitrogen oxides) and 1–11% (ozone) around the affected freeways. The increased pollution leads to a 0.07 percentage point (9%) increase in the probability of a third trimester fetal death, and a positive but small and statistically insignificant increase in the probability of infant death.
van Benthem then goes on to evaluate the costs and benefits of the speed limit changes, using common measures of the VSL (for accident-related and health-related costs), value of time (for travel time savings) and petrol prices (for increased fuel costs). He finds:
Annual net social benefits are estimated at −$189 million excluding adult health impacts, with a standard deviation of $94 million. The social costs ($345 million) exceed the benefits ($156 million) by a factor of 2.2. Using the adult health impacts from the central health impact scenario, which are admittedly uncertain, the net benefits decrease to −$390 million, with a standard deviation of $102 million... The social costs exceed the benefits 3.5 times.
In other words, the costs of increasing the speed limit outweighed the benefits by a substantial margin - the speed limit should not have been raised. And the results appear to be very robust, even when you consider a range of values for the VSL and the value of time. Of course, any cost-benefit evaluation is necessarily incomplete. It isn't possible to include every cost and benefit that might arise, and many of the costs and benefits will be fairly uncertain (as the quote above notes in the case of adult health impacts). In the paper, van Benthem notes that his analysis omits "marginal excess tax burden from changes in speeding ticket and gas tax revenues, changes in enforcement costs and increased driving pleasure at higher speeds".
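For what it's worth, the headline arithmetic in the quoted passage is easy to reproduce:

```python
# Reproducing the headline arithmetic from the quoted results ($ millions per
# year, excluding adult health impacts).
benefits = 156   # mainly the value of travel time savings
costs = 345      # accident, fuel, and pollution-related costs

print(benefits - costs)             # -189: annual net social benefit
print(round(costs / benefits, 1))   # 2.2: costs exceed benefits by this factor
```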

So, what was the optimal speed limit? van Benthem suggests that it was not much below 55 mph. Finally, this evaluation was based on somewhat dated data. If the analysis were re-run with more up-to-date data, we might get something different - cars are now more fuel-efficient, safer to drive (even at high speeds), health care has improved (which might reduce some fatal accidents to non-fatal), and petrol prices are higher. van Benthem suggests that:
today's gap between private and social net benefits will be smaller for a 55 to 65 mph speed limit increase. For higher speed limits, the gap is likely to remain substantial because of the steeper speed-emission profile in that range and the external cost component of accidents.
Finally, the paper has a number of additional bits I found interesting:

  • The treatment effect of the speed limit changes on travel speeds increased over time - people adjusted their speeds towards the new limit, but this adjustment was not immediate. 
  • Carbon monoxide (CO) emissions triple as vehicle speeds increase from 55 mph to 65 mph!
  • van Benthem also draws a conclusion about the optimal Pigovian tax on speed, which:
would consist of a combination of a gasoline tax for climate damages, emissions taxes for local air pollutants in exhaust gas (which vary with speed), plus a speed-dependent tax to internalize accident risk imposed on others (which is also a function of traffic conditions).

Saturday, 17 October 2015

How to beat the fitness trackers (for cheaper health insurance)

A couple of weeks ago, I wrote about a Swiss insurer trialling the use of fitness tracking data (such as from a FitBit or Apple Watch) to separate the high-risk and low-risk policy-holders. It didn't take long for someone to come up with ways to beat the system:
At Unfit Bits, we are investigating DIY fitness spoofing techniques to allow you to create walking datasets without actually having to share your personal data. These techniques help produce personal data to qualify you for insurance rewards even if you can't afford a high exercise lifestyle. 
Our team of experts are undertaking an in-depth Fitbit Audit to better understand how the Fitbit and other trackers interpret data. With these simple techniques using everyday devices from your home, we show you how to spoof your walking data so that you too can qualify for the best discounts.
Watch the video on the site, and you will get a feel for what they are proposing. This is the sort of reaction that we might expect - as soon as fitness tracking becomes worth real money (in terms of lower health insurance premiums), that creates an incentive to try and beat the system.

The trouble is that it puts us right back where we started, with a pooling equilibrium (the usual consequence of adverse selection). The fit and healthy won't be able to use fitness tracker data to separate themselves from the unfit and less healthy. This will lead to higher insurance premiums for the fit and healthy, as the insurance companies will have to assume that everyone is relatively high risk. Essentially, the fit and healthy end up subsidising the insurance of the less healthy.

Note that this is also an example of moral hazard - where, after an agreement is reached, one of the parties (the insured) has an incentive to modify their behaviour (by employing the Unfit Bits' tricks) to gain an advantage (lower insurance premiums) based on the terms of the agreement (if premiums are based on how much fitness activity you engage in), and to the detriment of the other party (the insurance company).

Note also that FitBit doesn't have any incentive to stop people from 'cheating'. They aren't paid by the insurance companies (yet!). But how long before the insurance companies start to develop their own proprietary fitness trackers? Or maybe one of them will buy out FitBit?

[HT: Thomas Lumley at StatsChat]


Friday, 16 October 2015

More evidence of declining global inequality

Last week I wrote the latest in a series of posts on global inequality, based on the work of Branko Milanovic. However, as you might expect, Milanovic isn't the only one working on global inequality. In a new paper published in Applied Economics (sorry, I don't see an ungated version anywhere), Rati Ram (Illinois State University) adds to the evidence that global inequality has been declining recently.

In the paper, Ram uses the latest data from the International Comparison Program (ICP), a World Bank programme that produces consistent estimates of GDP across countries every few years. This allows inequality in per capita GDP between countries to be measured accurately (in the years between ICP rounds, the data quality is not as high). The latest two rounds of ICP data are for 2005 and 2011, covering 140 countries, so that is what Ram uses.

Since Ram calculates inequality based on per capita GDP, this is similar to Milanovic's "Concept 1" or "Concept 2" inequality. He uses three measures of inequality: the Gini coefficient, and two variants of the Theil index. As an aside, the Theil index has recently become my preferred measure of inequality - it doesn't offer the relative simplicity of the Gini coefficient, but it is decomposable into 'within groups' and 'between groups' components, which makes it a much more useful tool for policy.
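To show what that decomposability means in practice, here's a minimal sketch in Python of the Theil T index and its between/within decomposition, using made-up incomes for two groups (this is not Ram's code or data):

```python
from math import isclose, log

def theil_t(incomes):
    """Theil T index for a list of (positive) incomes."""
    n = len(incomes)
    mean = sum(incomes) / n
    return sum((x / mean) * log(x / mean) for x in incomes) / n

def theil_decomposition(groups):
    """Split the overall Theil T index into within-group and between-group parts."""
    everyone = [x for g in groups.values() for x in g]
    total = sum(everyone)
    mean = total / len(everyone)
    # Each group is weighted by its share of total income.
    within = sum((sum(g) / total) * theil_t(g) for g in groups.values())
    between = sum((sum(g) / total) * log((sum(g) / len(g)) / mean)
                  for g in groups.values())
    return within, between

# Made-up incomes for two 'countries'.
groups = {"A": [10, 12, 15, 20], "B": [40, 55, 70, 95]}

within, between = theil_decomposition(groups)
overall = theil_t([x for g in groups.values() for x in g])

print(isclose(within + between, overall))   # True: the components add up to the total
print(round(between / overall, 2))          # the share of inequality that is between groups
```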

Ram finds a huge drop in income inequality between countries from 2005 to 2011. The Gini coefficient falls from 58.0 to 48.4 (or from 55.1 to 49.6 when you exclude China and India). Both Theil indices decrease as well (and by proportionally more). That equates to a nearly 17 percent reduction in between-country inequality (or 10 percent if you exclude China and India) in just six years. If you look back at the figure from Milanovic's 2013 paper (in my post here), you'll see that the declines over that period are similar.

To back up this evidence, Ram also compares GDP growth for six high income countries (Canada, France, Germany, Italy, the UK, and the USA) with that of three large middle income countries (Brazil, China, and India). Aggregate GDP in the three middle income countries was 43.2% of aggregate GDP in the six high income countries in 2005, but this had grown to 82.0% by 2011. Ram labels this a "dramatic rebalancing of global economic power", and I'm inclined to agree, although the recent slowdowns in China and Brazil will hinder the continuation of this rebalancing.

However, one downside of the paper is that it concentrates only on the between-country component of inequality, and doesn't tell us what is happening to inequality within countries, or inequality between individuals. If economic growth is biased towards the wealthy, then between-country inequality can fall but overall global inequality might stay the same or increase (and depending on how you read the figure from the Milanovic paper, you might conclude either of those things is happening).

So even though the greatest share of global inequality is between countries rather than within countries, and Ram's evidence supports a decline in that component, it doesn't necessarily mean that global inequality (between individuals) is declining overall. To answer that question, you need more survey data (like the household survey data championed in the work of new Nobel Prize winner Angus Deaton).


Wednesday, 14 October 2015

Why transport chaos can be a good thing

People are creatures of habit. In behavioural economics, we refer to status quo bias. As an example, think about the route you take to get to work or school each day. You probably haven't rigorously evaluated all of the possible alternative routes - you probably tried a few different ones out, found one that appeared to work well, and have stuck to it ever since.

There's a good reason why you don't continue to improve your route selection. Herbert Simon (1978 Nobel Prize winner) suggested that humans do not optimise, but instead satisfice. That is, we look around at some of the options (not necessarily all of the options) and find one that is 'good enough'. A more rigorous interpretation of this is provided by search theory. Search theory (which, among other things, Peter Diamond, Dale Mortensen, and Christopher Pissarides won the 2010 Nobel Prize for) says that it is costly for us to search for things (e.g. buyers searching for sellers, commuters searching for the optimal route, etc.). We will continue to search for a better option only up to the point where the marginal benefit of continuing the search is equal to the marginal cost of searching. The marginal benefit is the benefit gained from finding a slightly better option. In the case of the commuter, it is the time saved from finding a faster route to work or school. Under search theory, the 'optimal' route is the one where any additional searching would make us worse off (marginal cost > marginal benefit).
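Here's a rough sketch in Python of that stopping rule for the commuting example. All of the numbers (the distribution of route times, the cost of trying a new route) are invented purely for illustration:

```python
import random

random.seed(1)

# Illustrative numbers only: untried routes take a uniformly-distributed 30-60
# minutes, and trying out a new route 'costs' the equivalent of 5 minutes.
search_cost = 5.0

def expected_gain(best_so_far, draws=20_000):
    """Expected minutes saved by trying one more random route (by simulation)."""
    saved = 0.0
    for _ in range(draws):
        candidate = random.uniform(30, 60)
        saved += max(best_so_far - candidate, 0)
    return saved / draws

best = 60.0   # the commuter's current route takes an hour
routes_tried = 0
while expected_gain(best) > search_cost:
    best = min(best, random.uniform(30, 60))   # try one more route, keep the best
    routes_tried += 1

print(routes_tried, round(best, 1))
# The commuter stops once the expected saving from trying one more route no
# longer exceeds the search cost - even though a faster route may still exist.
```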

If we were optimisers, we would (eventually through trial and error) find the fastest route to work or school, and there would be no gains from taking an alternative route. However, if we are satisficers or if we conform to search theory, then if our current preferred route is not available to us, we might experiment with other routes and find a new one that is even faster.

Which brings me to the point of this post. A recent paper by Shaun Larcom (University of Cambridge), Ferdinand Rauch (University of Oxford), and Tim Willems (University of Oxford) looks at exactly this question. The paper is summarised in a non-technical way here. Essentially they looked at public transport card data before and after the London tube strike in February 2014, and compared commuters who were affected by the strike with those who weren't.

They find:
...that those who were forced to explore alternative routes during the strike (‘the treated’) were significantly less likely to return to their pre-strike modal commute in the post-strike period, relative to the non-treated control group...
In terms of magnitude, the fraction of post-strike switchers is about five percentage points higher among the treated.
In other words, it's evidence that London commuters are not optimisers. So, are they satisficers, or is this search theory at work? The authors investigate:
Using conservative numbers for the estimated time saving and its monetary equivalent, we calculate that if commuters were adhering to the optimal search strategy, the cost of trying the most attractive untried alternative would have to be greater than £380. Given this implausibly large number, it seems that commuters in our dataset were experimenting less than what is described by the standard rational model. Instead, agents seem to satisfice in a way that is not straightforward to rationalise.
So, it appears that not only are London commuters not optimisers, they aren't optimising under search theory either.

In related news, Thomas Lumley over at Stats Chat pointed me to this Transport Blog post about the disruption to the Hutt Valley rail line in June 2013. The disruption allowed the Ministry of Transport to estimate the benefit of the rail line to commuters, at $330 million per year (in saved travel costs). Lumley quite rightly points out that if the rail line didn't exist, many people would live somewhere else instead (it might be better to live in downtown Wellington or in Porirua or Petone rather than commute from the Hutt Valley each day).

Note that in the Wellington case, the commuters probably didn't continue driving into Wellington when the rail line was reopened. There was little to be gained from the alternative route (hence the large cost savings of the rail line).

[HT: Marginal Revolution for the former study]



Tuesday, 13 October 2015

Nobel Prize for Angus Deaton

The 2015 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (aka Nobel Prize in Economics) has been awarded to Angus Deaton. I won't write too much here, but if you want to know a good deal about Deaton and his work see the links in this post by Tyler Cowen, this one by Alex Tabarrok, and additional links here.

Chris Blattman has an excellent piece, which I will largely echo. My own intersection with Deaton's work was during my PhD, where I undertook a fairly large (~680 households) household survey in Northeast Thailand. The Analysis of Household Surveys was one of many guidebooks that helped greatly with setting up the survey (along with several World Bank publications on the survey methods for the Living Standards Measurement Surveys which Deaton was also involved in), and the book was invaluable in the analysis phase (as you would expect from the title). In reading Deaton's work, I have come to realise just how much of his thinking had already been indirectly a part of my development economics training, even if my lecturers were not always explicit about their sources.

As others have mentioned, this is a very well deserved award for a wide body of work that has greatly enhanced our understanding of poverty, inequality, consumption, and development economics more generally. His contributions span the theoretical, the empirical, and the analytical. Deaton's name had no doubt been on the shortlist for a number of years.

Sunday, 11 October 2015

Two books on economics and romance

Economists aren't exactly known for their romantic tendencies. Which is why it is unusual to see several books recently on the topic of economics and love, romance, etc. I've read two of them in the last month or so.

The first book was "Everything I Ever Needed to Know about Economics I Learned from Online Dating" by Paul Oyer, a professor at Stanford. This book is a delightful treatment of how economics can apply to a wide range of activities, not just online dating (though obviously, that is the central theme of the book). The topics that Oyer covers are mostly unsurprising, including: search theory; cheap talk; network externalities; signalling; statistical discrimination (where Oyer takes a stand much closer to mine than do Gneezy and List in their book); thick vs. thin markets; adverse selection; assortative matching; and the returns to skills.

There are some highlights to this book, notably on the second page when Oyer remarks:
Match.com, eHarmony, and OkCupid, it turns out, are no different from eBay or Monster.com. On all these sites, people come together trying to find matches. Sure there are a lot of differences between someone selling a used bowling ball on eBay and someone signing up for Match.com, but the basic idea is the same. The bowler needs to think about how to present his bowling ball to get what he wants (money, presumably) just as the Match.com participant needs to present himself to get what he wants (a partner in most cases, casual sex in others). It's really not that different.
As you read through the book, Oyer may just convince you of this. Although he couldn't convince me that online dating isn't adverse selection personified (literally!), and my ECON110 students will continue to laugh their way through the tutorial example on online dating. But, [potential spoiler alert!], at least there is a happy ending to this book.

The second book is "The Romantic Economist - A Story of Love and Market Forces" by William Nicolson. While Oyer (understandably) takes a fairly research-based approach in his book, Nicolson's book is much more a narrative. It's the story of Will's (unsuccessful) attempts to apply rational economic thinking to his love life. This book was a lot of fun, even if you won't necessarily learn as much about the applications of economics in it.

The book does cover a similar set of topics to Oyer's: supply and demand; the efficient market hypothesis; market power; game theory; signalling; bargaining power; investment; credible threats; sunk costs and opportunity costs; and backwards induction. The book also has some highlights, like a model of sweethearts and dickheads (where sweethearts find it difficult to signal that they are sweethearts rather than dickheads), the game theory of toilet seats (I'll be using this example in ECON100 next year for sure!), and the opportunity costs of being in a relationship. Unfortunately, the book doesn't have nearly as happy an ending.

One small criticism of The Romantic Economist is that it is written from the perspective of a man, and does come across in a lot of places as pretty sexist (although if you can suspend your indignation for the duration of the read, it is pretty funny too). Nicolson does provide a disclaimer early in the book, and, rightly I think, it invites an opportunity for a similar book from a woman's perspective!

Both books are recommended - Oyer's if you want to learn some real-world applications of economics (backed by robust research in most cases), and Nicolson's if you're looking for some light reading related to economics.

Saturday, 10 October 2015

Are teaching and research substitutes or complements?

A couple of weeks ago I wrote a post on adjuncts being better teachers than tenured or tenure-track professors. My argument there was that the results were not particularly surprising:
Teaching and research both require investment in terms of time and effort. While some may argue that teaching and research are complementary, I'm not convinced. I think they're substitutes for most (but not all) faculty (I'll cover more on that in a post in the next week). Faculty who do teaching but little or no research can be expected to put more effort into teaching than faculty who do a lot of (particularly high quality, time intensive) research. So, contingent teaching faculty should do a better job of teaching on average. These results seem to support that.
However, whether teaching and research are substitutes or complements remains a fairly open question. The theoretical framework that underlies a lot of the work in this area dates from this excellent William Becker paper from 1975 (gated).

Which brings me to this paper in Applied Economics (ungated earlier version here) by Aurora Garcia-Gallego (Universitat Jaume I), Nikolaos Georgantizis (Universidad de Granada and Universitat Jaume I), Joan Martin-Montaner (Universitat Jaume I), and Teodosio Perez-Amaral (Universidad Complutense). In the paper, the authors use yearly panel data from Universitat Jaume I in Spain over the period 2002-2006 to investigate whether better researchers are also better teachers. They look at administrative work as well, but I want to focus instead on the teaching/research data and results.

I've been in contact with the authors, as I was sorely tempted to write a comment on their paper for publication in the journal [*]. The authors were kind enough to provide me with some additional analysis that doesn't appear in their paper, that I will share with you below.

Anyway, I'll first explain what my issue is with the paper. Here's what the authors found:
Summarizing our results, we find that professors with a typical research output are somewhat better teachers than professors with less research. Moreover, nonresearchers are 5 times more likely than researchers to be poor teachers. In general, the quality of university-level teaching is positively related with published research across most levels of research output.
Those seem like fairly strong results, but of course that depends on how things are measured. The authors' measure of research is based on an internal measure of research quality, while teaching quality "is obtained from students' responses to an overall satisfaction survey question using a 0-9 Likert scale", and the teaching quality measure is calculated for each professor as the average evaluation across all of their courses. I know what you're thinking but no, my issue isn't with conflating teaching quality with popularity. However, I do think the teaching measure is a problem. Here's a density plot of the measure of teaching quality (provided to me by the authors):


So, what you have there is a reasonably normal distribution, centred on five. Which is what you would expect from Likert scale data, especially if you are averaging the scores across many courses. But wait! What's that huge bar at zero? Are there really a large number of teachers with zero teaching quality? That seems very implausible. I highly suspect that missing data has been treated as zeroes - not necessarily by the authors themselves, but probably in the administrative database they are using.

When the authors separate their data into researchers and non-researchers, this is what the histograms look like:


So, that large spike at zero is a feature of the non-researchers sub-sample, but not so much in the researchers sub-sample. Which supports my argument that this is a missing data problem - it's more likely that you would have missing data for fixed-term or part-time (adjunct) staff, who are also more likely to be non-researchers. Unfortunately, that probably drives the results that the authors get.
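To see how much damage that kind of miscoding can do, here's a small simulation in Python (hypothetical data, not the authors'): true teaching quality has the same distribution for researchers and non-researchers, but evaluations are missing - and coded as zero - more often for non-researchers, which manufactures a positive research-teaching relationship out of nothing.

```python
import random

random.seed(42)

# Hypothetical data (not the authors'): true teaching quality has the same
# distribution for researchers and non-researchers, but evaluations are
# missing - and miscoded as zero - far more often for non-researchers.

def research_teaching_gap(code_missing_as_zero):
    data = []
    for _ in range(2000):
        researcher = random.random() < 0.5
        teaching = random.gauss(5, 1)                        # same for everyone
        missing = random.random() < (0.02 if researcher else 0.15)
        if missing:
            if not code_missing_as_zero:
                continue                  # properly treated: observation dropped
            teaching = 0.0                # miscoded as a zero evaluation
        data.append((researcher, teaching))
    # With a binary regressor, the OLS coefficient is just the difference in means.
    res = [t for r, t in data if r]
    non = [t for r, t in data if not r]
    return sum(res) / len(res) - sum(non) / len(non)

print(round(research_teaching_gap(code_missing_as_zero=True), 2))   # spurious positive gap
print(round(research_teaching_gap(code_missing_as_zero=False), 2))  # close to zero
```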

In fact, the authors were kind enough to re-run their analysis for me, excluding the 85 observations where teaching quality was less than one. Here are their results (the first regression shows the original results, and the second regression excludes the observations with teaching quality less than one):


For those of you who lack a keen eye for econometrics, I'll summarise. The key result from the first regression is that the variable research1 has a large, positive, and statistically significant relationship with the dependent variable (teaching quality). This shows that better researchers have higher teaching quality. However, when you remove the hokey data (in the second regression), the coefficient on research1 halves in size and becomes statistically insignificant. Which doesn't necessarily mean there is no effect - the regression might simply be underpowered to identify what could be a very small effect (although with nearly 2,000 observations and lots of degrees of freedom you'd expect it to have reasonable statistical power).

All of which means that this paper doesn't provide strong evidence for research and teaching being complements. Neither does it provide evidence for research and teaching being substitutes (for which there would have had to be a significant negative relationship between research and teaching in the regression model).

I've invited the authors to respond - I'll keep you posted.

*****

[*] The editors of the journal didn't respond to my query as to whether they would consider publishing a comment, so instead I summarised much of what I would have written in this post.

Thursday, 8 October 2015

Economists are susceptible to framing too

Back in August, I blogged about a paper showing that philosophers suffer the same cognitive biases as everyone else. Now, a recent NBER Working Paper (pdf) by Daniel Feenberg, Ina Ganguli, Patrick Gaule, and Jonathan Gruber has shown that economists (or at least, the readers of NBER Working Papers) are affected by framing too. Neil Irwin also wrote about it at the Upshot last month.

The authors looked at how the order in which NBER Working Papers appear in the Monday "New This Week" email update affects the number of downloads, and subsequently the number of citations, that each paper receives. Now, since the papers are listed in numerical order, which papers appear at the top of the list is essentially random. If all readers of "New This Week" were rational, the order in which papers appear in the list would make no difference to which ones they chose to read.

However, it appears the order does matter. The authors write:
Our findings are striking: despite the effectively random allocation of papers to the NTW ranking, we find much higher hits, downloads and citations of papers presented earlier in the list. The effects are particularly meaningful for the first paper listed, with a 33% increase in views, a 29% increase in downloads, and a 27% increase in citations from being listed first. For measures of downloads and hits, although not for citations, there are further declines as papers slide down the list. However, the very last position is associated with a boost in views and downloads.
On top of that, these effects aren't limited to the general readership of the NBER email. The framing effects remain significant when the authors restrict the sample just to 'experts', being those in academia.

Why would these framing effects occur? A rational reader would weigh up the costs and benefits of reading the email (or the rest of the email) to identify papers that interest them. Given that the order of papers is essentially random, the first paper has the same chance of being of interest as the tenth paper (i.e. the costs and benefits are the same for every link in the NTW email). So, if a rational person reads the first link, they should read every link.

However, people are not purely rational. Framing can make a difference. I can think of two reasons why framing might be important in the case of NTW emails. First, perhaps we suffer from a short attention span. So, when reading through the NTW email, the early papers have our close attention but by the time we get towards the end of the email, we are mainly skimming the titles very quickly. I sometimes catch myself doing this when reading eTOCs sent by journals, especially if I get a lot of them on the same day. However, the authors test the effect of the length of the list, and the effects are not significant.

Second, perhaps we only have limited time available to read NBER Working Papers each week. Think of it as a time budget, which is exhausted once we've read one or two (or n) papers. So, we stop paying attention once we have opened the first one or two (or n) links because we know we won't have enough time to read any more. Clearly I wish I had this problem. Instead I print them out and they sit in an ever-increasing pile of "gee-that-would-be-interesting-to-read" papers (which is why I occasionally blog about some paper that is quite dated - you can tell I picked a random paper from the middle of my pile). The authors actually test the opposite - whether having a first paper that has a 'star' author encourages people to read more papers that appear later in the list (i.e. that people decide whether the whole list is worth perusing based on the quality of the first link). They find some fairly weak evidence that having a star author on the first paper reduces the favouritism of the last paper. So maybe the latter of my two explanations explains the framing effect here.

So, knowing that this is a problem, what to do about it? I guess it depends on what your goal is. If you're an author of an NBER Working Paper, you want to ensure your paper gets to the top of the list so it will be downloaded and cited more. So, maybe there's an incentive for side-payments to whoever puts the NTW list together, or whoever assigns the working paper numbers? More seriously, the authors suggest that randomising the order of papers in the list would improve things, from the authors' standpoint. It wouldn't solve the framing problem, but at least it would ensure that authors couldn't game the system.

[HT: Marginal Revolution]

Tuesday, 6 October 2015

Changing global inequality, migration and open borders

In the last couple of months, I've written a couple of blog posts on Branko Milanovic's research on global inequality over recent centuries (see here and here). In this post, I want to turn to some of his latest research on global income inequality, published in Global Policy in 2013 (ungated version here).

In the paper, Milanovic again looks at how global inequality has changed over time. This time the important comparisons are only from 1952-2011 (rather than over past centuries) - his Figure 2 is reproduced below.


"Concept 1" inequality is inequality between countries, i.e. differences in average incomes between countries (without weighting by country size, so for example India and Israel would count the same). "Concept 2" inequality takes the population size into account, but still measures inequality as if everyone had the average income in their country, i.e. it ignores inequality within countries. "Concept 3" inequality is closest to true global inequality because it is between individuals the world over (or at least, those covered adequately by household surveys). Concept 3 is obviously the best measure of inequality of the three, and I guess you can see almost whatever pattern you want to see from those dots (maybe it is trending upwards; or maybe you focus on the downward trend of the last three dots?). Milanovic labels this diagram "the mother of all inequality disputes".

Anyway, regardless of what you think is happening to "Concept 3" inequality, it is clear that "Concept 1" inequality increased substantially from the 1950s to 2000, and has declined since then, while "Concept 2" inequality has been declining slowly over most of the period, with the decline accelerating since the 1990s. If you remember my post on Milanovic's earlier work, he showed that inequality between countries had grown substantially over the last two centuries. The latest data show a reversal of that long-term trend. The poorest countries are growing the fastest, and this is lowering between-country inequality substantially. And when you consider that the most populous poor countries (China, India) have been growing fast, then the decrease in "Concept 2" inequality has been substantial.

What's surprising, then, is that "Concept 3" inequality remains persistently high, which must be due to increases in within-country inequality, particularly in fast-growing populous countries like China. So, while the growing middle class in poor countries is getting much better off, the poorest in those countries may not be much better off than they were years ago.

Milanovic notes that there are three ways in which to reduce global inequality:

  1. Increasing the growth rates of poor countries (relative to rich countries), especially those of populous poor countries like China, India, Indonesia, Nigeria, etc.
  2. Introducing global redistributive schemes, e.g. through much-increased development assistance for poor countries
  3. Migration.
Migration would enable the poor to improve their living standards. Even if they are within the poorest sections of the population in a rich country, they may be better off than being around the median (or below) in a poorer country. Michael Clemens has made a similar case for the benefits of freer migration. However, completely opening borders for economic migrants is unlikely to happen any time soon, even though rich western nations could benefit greatly from an influx of young migrants to offset their rapidly ageing labour forces (more on that in a later post). Milanovic notes:

...there are seven points in the world where rich and poor countries are geographically closest to each other, whether it is because they share a border, or because the sea distance between them is minimal. You would not be surprised to find out that all these seven points have mines, boat patrols, walls and fences to prevent free movement of people.
For the record, those seven borders are on land: U.S.-Mexico; Greece-Macedonia/Albania; Saudi Arabia-Yemen; North Korea-South Korea; Israel-Palestine; and by sea: Spain-Morocco; and Indonesia-Malaysia.



Sunday, 4 October 2015

Video on signalling

One of my favourite topics to teach in ECON100 and ECON110 involves information asymmetry, including adverse selection and moral hazard. Long-time readers of my blog will have noticed that a number of my posts cover similar topics. That's both because these topics are difficult for students to understand (so blogging about them helps), and because there are lots of real-world applications that I can discuss (such as yesterday's blog post on technology and health insurance).

Anyway, the point of this post was to draw attention to one of the latest MRUniversity videos, which is on signalling:


Enjoy!

[HT: Marginal Revolution]

Saturday, 3 October 2015

Why your Fitbit or Apple Watch could soon get you cheaper health insurance

Last year I wrote a post about the impact of technology on car insurance, noting that car insurers had started offering discounts to car owners who installed a black box in their car that could monitor their driving behaviour. Now, a Swiss insurer is looking to introduce the health insurance version:
Swiss health insurers could demand higher premiums from customers who live sedentary lifestyles under plans to monitor people’s health through wearable digital fitness devices.
CSS, one of Switzerland’s biggest health insurers, said on Saturday it had received a “very positive” response so far to its pilot project, launched in July, which is monitoring its customers’ daily movements...
The pilot also aims to discover to what extent insured people are willing to disclose their personal data, and whether self-monitoring encourages them to be more active in everyday life, pushing them to take 10,000 steps a day.
I noted in the car insurance example last year that the companies were using the black box mainly to overcome moral hazard. However, in this case the technology is probably much more effective for solving adverse selection problems.

An adverse selection problem arises because the uninformed party cannot tell those with 'good' attributes from those with 'bad' attributes. To minimise the risk to themselves of engaging in an unfavourable market transaction, it makes sense for the uninformed party to assume that everyone has 'bad' attributes. This leads to a pooling equilibrium - those with 'good' and 'bad' attributes are grouped together because they can't easily differentiate themselves. This creates a problem if it causes the market to fail.

In the case of insurance, the market failure may arise as follows (this explanation follows Stephen Landsburg's excellent book The Armchair Economist). Let's say you could rank every person from 1 to 10 in terms of risk (the least risky are 1's, and the most risky are 10's). The insurance company doesn't know who is high-risk or low-risk. Say that they price the premiums based on the 'average' risk ('5' perhaps). The low risk people (1's and 2's) would be paying too much for insurance relative to their risk, so they choose not to buy insurance. This raises the average risk of those who do buy insurance (to '6' perhaps). So, the insurance company has to raise premiums to compensate. This causes some of the medium risk people (3's and 4's) to drop out of the market. The average risk has gone up again, and so do the premiums. Eventually, either only high risk people (10's) buy insurance, or no one buys it at all. This is why we call the problem adverse selection - the insurance company would prefer to sell insurance to low risk people, but it's the high risk people who are most likely to buy.
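Here's a small simulation in Python of that unravelling story. The assumption that people will pay up to 20 percent more than their own expected claims cost is mine (standing in for risk aversion), and all of the numbers are illustrative:

```python
# Risk types 1-10, each with an expected claims cost of (risk x $100). The 20%
# willingness-to-pay markup over one's own expected cost is an assumption
# standing in for risk aversion; all numbers are illustrative.

risk_types = list(range(1, 11))
markup = 1.2

insured = risk_types[:]
premium = 0.0
while insured:
    premium = 100 * sum(insured) / len(insured)   # priced at the pool's average risk
    still_buying = [r for r in insured if 100 * r * markup >= premium]
    if still_buying == insured:
        break                                     # no one else drops out
    insured = still_buying
    print(f"Premium ${premium:.0f} -> {len(insured)} risk types still buy")

print(f"Final pool: {insured}, premium ${premium:.0f}")
# Each premium rise drives out the lowest remaining risks, until only the
# highest-risk types are left in the pool.
```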

How does a pedometer help solve this problem? Well, if the insurance company provides a short-term insurance contract conditional on wearing the pedometer, then they can use that time to gather information about the wearer. The pedometer won't tell the insurance company about your eating habits, but it will tell them how active your lifestyle is, allowing them to some extent to separate the high risk and low risk people, and then price future insurance accordingly.

Couldn't you just refuse to be part of this and not wear the pedometer? I guess you could. But think about it from the insurance company's perspective. Who is going to refuse the pedometer? The low risk people will pay a lower premium by agreeing to wear it, since it will show they are low risk. So, only high risk people will refuse. Which the article picks up as well:
The implication is that people who refuse to be monitored will be subject to higher premiums, said Blick.
And, if you think that you could just avoid this completely, or attach the pedometer to your dog, or some other workaround:
Fitness wristbands such as Fitbit are just the beginning of a revolution in healthcare, believes Ohnemus.
“Eventually we will be implanted with a nano-chip which will constantly monitor us and transmit the data to a control centre,” he said.
Which sounds very much like the future that Adam Ozimek is foreseeing:
Constant measurement will include many things that to our eyes look like serious encroachments on privacy. Our health, spending, and time use will be easily and often measured. These will start off as opt-in systems, but the better they work the more economic incentive people will have to sign up. For example, for a big enough discount on health insurance you will probably agree to swallow the health tracking devices. Eventually, it probably won’t be a choice. The good new is after opting in to so much voluntary tracking this won’t seem like as big of a deal to people in the future as it does to us.
In a way, this will make us much less free as we are faced with prices for many behaviors that used to be costless to us. But it will also mean that the costs that we bare for other people’s behaviors will decline and the dollar cost of government will shrink, which will make us more free in a sense.
Finally, what about moral hazard? Moral hazard is the tendency for someone who is imperfectly monitored to take advantage of the terms of a contract (a problem of post-contractual opportunism). I'm not sure the case for moral hazard being a problem in health insurance is nearly as strong. However, this is how we would explain it. In countries that have a private or insurance-based healthcare system, people without health insurance have a large financial incentive to eat healthily, exercise, and so on, because if they get sick they must cover the full cost of their healthcare themselves (or go without care, substantially lowering their quality of life). Once they are insured though, people have less financial incentive to eat healthily and exercise because they have transferred part or all of the financial cost of any illness onto the insurer (though they would still face the opportunity cost of lost income while they are in hospital, etc.). The insurance contract creates a problem of moral hazard - the insured person's behaviour could change after the contract is signed.

Now, health insurers aren't stupid and insurance markets have developed in order to reduce moral hazard problems. This is why we have excesses (deductibles) and co-payments - paying an excess or a co-payment puts some of the financial burden of any illness back on the insured person and increases the financial incentive for eating healthily and exercising. The pedometer clearly gives the insurer the ability to more closely monitor people's behaviour, but would people exercise more if they know their insurer is watching? That's harder to say.

Overall though, wearable technology is going to make it easier for health insurance companies to price their premiums according to risk. So, if you're the healthy type your Fitbit will likely earn you a lower health insurance premium.

[HT: Marginal Revolution]
