Thursday 24 December 2015

The black market in WINZ payment cards

One of the common examples I use in illustrating the role of incentives for my ECON110 class is the black market in WWII Great Britain. In short, many products (e.g. meat) were rationed. Essentially, each household registered with their local shops, and the shops were provided with only enough meat for their registered customers. However, some households would prefer less meat and more sugar, so a complex system of black market trades arose, whereby households could obtain the goods they actually wanted, rather than those the authorities deemed they should have (you can read more here).

Black markets tend to arise whenever the government limits what citizens are allowed to spend their money on. For instance, in the U.S. food stamp programme (a.k.a. Supplemental Nutrition Assistance Program, or SNAP) many recipients sell their SNAP vouchers for cash, often with the complicity of shopkeepers (see here and here for example).

And now we have a local example, with payment cards from Work and Income New Zealand (WINZ) showing up for sale on Facebook. The New Zealand Herald reports:
Work and Income payment cards are showing up for sale on Facebook trading groups.
In a screenshot provided to the Herald, one person offers a $100 payment card for sale for $40 on the "Buy and Sell Hamilton" Facebook group.
And this is in spite of WINZ's attempts to make this sort of abuse difficult:
When a card is issued, the recipient must sign it and payments are verified by matching the signature on the receipt to the back of the card.
Grants for food and hardship must be used within three days. 
Of course, setting rules on what payment cards can be used for makes them less valuable to the recipients than cash. It also increases the costs to the government, because of the need to enforce the rules. And there needs to be some form of sanction for recipients who break the rules. Sanctions increase the cost of abuse for payment card recipients, and enforcement makes abuse more difficult by increasing the chance of being caught. Presumably there are also penalties for the person who buys and tries to use a payment card in the name of someone else (under fraud laws, I expect).

The payment card can only be sold for less than its face value, partly because the recipient (seller) probably wants cash fast (and so is willing to give up some of the face value of the payment card for cash in hand now), and partly to compensate the buyer for the risk they face (of penalties for fraudulently using a payment card in the name of someone else).

The more urgent the sale (within less than three days), or the more costly the penalties for the buyer, the greater the difference will be between the face value of the payment card and the price it will be sold for. And so, we end up with the situation where a $100 WINZ payment card is being sold for $40.
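To put rough numbers on that reasoning, here is a minimal sketch in Python. The discount rates are entirely made up, chosen only so the arithmetic matches the Herald example; they are not estimates of anything.

```python
# Back-of-the-envelope pricing of a black-market payment card.
# The discounts below are hypothetical illustrations, not estimates.
face_value = 100          # face value of the WINZ payment card ($)
urgency_discount = 0.25   # value the seller gives up to get cash in hand now
risk_discount = 0.35      # compensation the buyer demands for the fraud risk

sale_price = face_value * (1 - urgency_discount - risk_discount)
print(f"Implied sale price: ${sale_price:.0f}")  # $40, as in the Herald example
```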

Saturday 19 December 2015

China's zombie companies are playing chicken

Earlier this month, Andrew Batson wrote an interesting blog article about China's zombie companies:
One of the more interesting developments in official Chinese discussions about the economy has been the appearance of the term “zombie companies”... money-losing companies that seem to stay alive far longer than economic fundamentals warrant. This problem is particularly acute in the commodity sectors: a global supply glut has driven down prices of iron ore and coal to multi-year lows, levels where China’s relatively low-quality and high-cost mines have difficulty being competitive. And yet they continue operating despite losing money, because it is easier to keep producing than to completely shut down.
In ECON100 we no longer cover cost curves in detail, so we also don't talk about the section of the firm's marginal cost curve (between average variable cost and average total cost) where the firm makes losses but prefers to continue trading, because the losses from trading are smaller than the losses from shutting down. However, this is exactly the situation for China's zombie firms. As Batson notes:
An excellent story this week in the China Economic Times on the woes of the coal heartland of Shanxi quoted one executive saying, “If we produce a ton of coal, we lose a hundred yuan. If we don’t produce, we lose even more.”
Another aspect of the reluctance of China's zombie companies to shut down is strategic, and in this case the zombie companies may continue to operate even if their losses would be smaller if they shut down. What the zombie companies are doing is playing a form of the 'chicken game'. In the classic version of the game of chicken, the two players line up their cars at opposite ends of the street and accelerate towards each other. If one of the drivers swerves out of the way, the other wins. If they both swerve, neither wins, and if neither of them swerves, then both die horribly in a fiery car accident.

Now consider the game for zombie companies, as expressed in the payoff table below (assuming for simplicity that there are only two zombie firms, A and B). If either firm shuts down, they incur a small loss (including if both firms shut down). However, if either firm continues operating while the other firm shuts down, the remaining firm is able to survive and return to profitability. Finally, if both firms continue operating, both incur a big loss.


Where are the Nash equilibriums in this game? To identify them, we can use the 'best response' method. To do this, we work out, for each strategy that one player could choose, the best response of the other player. Where both players are selecting a best response, they are doing the best they can, given the choice of the other player (which is the definition of Nash equilibrium).

For our game outlined above:
  1. If Zombie Company A continues operating, Zombie Company B's best response is to shut down (since a small loss is better than a big loss) [we track the best responses with ticks, and not-best-responses with crosses; Note: I'm also tracking which payoffs I am comparing with numbers corresponding to the numbers in this list];
  2. If Zombie Company A shuts down, Zombie Company B's best response is to continue operating (since survival and profits is better than a small loss);
  3. If Zombie Company B continues operating, Zombie Company A's best response is to shut down (since a small loss is better than a big loss); and
  4. If Zombie Company B shuts down, Zombie Company A's best response is to continue operating (since survival and profits is better than a small loss).
Note that there are two Nash equilibriums, where one of the companies shuts down, and the other continues operating. However, both firms want to be the firm that continues operating. This is a type of coordination game, and it is likely that both firms will try to continue operating, in the hopes of being the only one left (but leading both to incur big losses in the meantime!).
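For anyone who wants to see the best-response method done mechanically, here is a minimal sketch in Python. The payoff numbers are my own illustrative assumptions (a small loss from shutting down, a big loss if both keep operating, and a return to profit for a sole survivor); they are not figures from the Batson article.

```python
# Best-response method for the zombie-company game of chicken.
# Payoffs are (A, B); the numbers are illustrative assumptions only.
payoffs = {
    ("operate", "operate"): (-100, -100),    # both keep operating: big losses
    ("operate", "shut down"): (50, -10),     # A survives and returns to profit
    ("shut down", "operate"): (-10, 50),     # B survives and returns to profit
    ("shut down", "shut down"): (-10, -10),  # both exit: small losses
}
strategies = ["operate", "shut down"]

def best_responses(player):
    """For each strategy of the other player, find this player's best response(s)."""
    responses = {}
    for other_choice in strategies:
        def payoff(own_choice):
            profile = (own_choice, other_choice) if player == "A" else (other_choice, own_choice)
            return payoffs[profile][0 if player == "A" else 1]
        top = max(payoff(s) for s in strategies)
        responses[other_choice] = {s for s in strategies if payoff(s) == top}
    return responses

br_A, br_B = best_responses("A"), best_responses("B")

# A Nash equilibrium is a strategy profile where each player's choice is a
# best response to the other player's choice.
equilibria = [(a, b) for a in strategies for b in strategies
              if a in br_A[b] and b in br_B[a]]
print(equilibria)  # [('operate', 'shut down'), ('shut down', 'operate')]
```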

What's the solution? To avoid the social costs of the zombie companies continuing to operate and generating large losses, the government probably needs to intervene. Or, as noted at the bottom of the Batson blog article, mergers of these firms will remove (or mitigate) the strategic element, which is the real problem in this case.

Wednesday 16 December 2015

The average gains from trade may depend on human capital

The existence of gains from trade is one of the (few?) propositions of economic theory that virtually all economists support. At least, the theory holds for individuals trading with each other. Despite this broad agreement, there is still no consensus on whether international trade is equally good for all countries. Sceptics like Dani Rodrik and Nobel prizewinner Joseph Stiglitz argue that globalisation (and by extension, free trade) may have gone too far.

The basic argument that free trade is good for all countries relies on increases in aggregate wellbeing (gains in consumer and/or producer surplus) that are shared by consumers and producers in each country, making them better off. Also, openness places pressure on governments and households to invest more in human capital in order to compete in the global market, making the average citizen better off. The argument against this suggests that, because poorer countries rely on tariffs on traded goods for revenue, openness to trade undermines government income and makes it harder for the government to pay for the welfare state and for health and education spending. Reductions in these areas make the average citizen worse off.

A recent paper (sorry, I don't see an ungated version anywhere) by Stephen Kosack (University of Washington) and Jennifer Tobin (Georgetown University) suggests that both of these arguments may be true. How can that be? It all depends on the level of human capital of the country. Kosack and Tobin explain:
One consequence of the increased trade is the reallocation of economic activities that require high levels of human capital to countries with capable, productive workers. In countries that already have such work-forces, increasing trade appears to reinforce human development, adding both resources and rationale for further improvements in people's lives and capacities. But in most countries, the workforce has not yet developed such capacities, and in these countries, increased trade tends to undermine the incentives and the ability of governments to invest in citizens and citizens to invest in themselves.
Kosack and Tobin use data since 1980 from the Human Development Index, and find that trade is overall negatively associated with human development. However, this relationship holds strongly for low-HDI countries, whereas for high-HDI countries the relationship is positive (i.e. trade further increases human development). Their results are robust to a number of different specifications and data.

However, I do have one concern. Many of the lowest-HDI countries are also the sub-Saharan African countries that have, since the 1980s, suffered the brunt of the HIV/AIDS epidemic. While my reading is that there is only weak evidence of an impact of the epidemic on GDP, there has certainly been a large impact on life expectancy (one of the components of HDI). Kosack and Tobin do not explicitly control for this. Having said that, when they exclude the health component of the HDI from their dependent variable, their key results do not change significantly. So, perhaps there is something to this.

Interestingly, New Zealand did rate an explicit mention in the paper. At New Zealand's level of HDI, the results suggest that openness to trade is good - "a one standard deviation increase in trade openness is associated with an increase in HDI Rate of nearly 2%". The turning point between 'good' and 'bad' is at the level of HDI of Mauritius or Brazil.

It is important to note that the average citizens of low-HDI countries were not made worse off by openness to trade in absolute terms, only relative to high-HDI countries with a similar level of openness to trade. My conclusion from reading the paper is that, while openness to trade is a good thing, it is better for some countries than others. I don't think anyone would have disagreed with that even before this paper was published. My take is that the debate on whether globalisation has gone too far is still not settled (and I'll continue to have spirited discussions of it in my ECON110 class each year, no doubt).

Monday 14 December 2015

Levitt and Lin can help catch cheating students

In a recent NBER Working Paper (sorry I don't see an ungated version anywhere), Steven Levitt (University of Chicago - better known as co-author of Freakonomics) and Ming-Jen Lin (National Taiwan University) demonstrate a simple algorithm that identifies students likely to be cheating in exams. The authors were brought in to investigate cheating in an "introductory national sciences course at a top American university", where a number of students had reported cheating on a midterm.

The algorithm essentially compares the number of shared incorrect answers between students sitting next to each other with the number shared between students not sitting next to each other. Importantly, in the midterm students were free to sit immediately next to each other (i.e. they were not spread out around the room). Levitt and Lin find:
Students who sit next to one another on the midterm have an additional 1.1 shared incorrect answers... students who sat next to each other have roughly twice as many shared incorrect answers as would be expected by chance.
Essentially, they find that "upwards of ten percent of the students cheated on the midterm in a manner that is detectable using statistics". Then, for the final exam, they changed things up, and cheating fell to essentially nil (the analysis suggested that only four students cheated in the final exam).
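A rough sketch of the core idea, using simulated answer data, looks like the code below. This is not Levitt and Lin's code, and real implementations adjust for question difficulty and student ability; the point is simply the comparison between adjacent and non-adjacent pairs.

```python
import random
from itertools import combinations

random.seed(1)

# Simulated exam: 60 students, 30 multiple-choice questions.
# The correct answer is coded 0; wrong answers are coded 1-4.
n_students, n_questions = 60, 30
answers = [[random.choice([0, 0, 0, 1, 2, 3, 4]) for _ in range(n_questions)]
           for _ in range(n_students)]

def shared_incorrect(a, b):
    """Count questions where students a and b chose the same wrong answer."""
    return sum(1 for x, y in zip(answers[a], answers[b]) if x == y and x != 0)

# Compare pairs seated next to each other with all other pairs.
adjacent = {(i, i + 1) for i in range(n_students - 1)}
all_pairs = list(combinations(range(n_students), 2))
non_adjacent = [p for p in all_pairs if p not in adjacent]

avg_adjacent = sum(shared_incorrect(*p) for p in adjacent) / len(adjacent)
avg_other = sum(shared_incorrect(*p) for p in non_adjacent) / len(non_adjacent)

# With honest (randomly simulated) answers the two averages are similar;
# cheating shows up as an excess of shared wrong answers for adjacent pairs.
print(f"Adjacent pairs:     {avg_adjacent:.2f} shared incorrect answers")
print(f"Non-adjacent pairs: {avg_other:.2f} shared incorrect answers")
```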

What did they do? A number of things changed:

  • There were four teaching assistants invigilating the exam, rather than one (for a class of over 240 students);
  • There were two different versions of the exam, which were randomly allocated to students; and
  • Students were randomly allocated to seats.
One last point - it is not straightforward to attribute the relative decrease in cheating to the randomisation alone. This is because the professor sent a series of emails to students calling for confessions of cheating following the midterm and reminding students that "cheating is morally wrong". So, students were probably primed for enhanced attention to be paid to their exam behaviour and extra-vigilant not to cheat (or to be perceived as cheating). No doubt this was reinforced by the change in procedures noted above.

Having said that, having students seated immediately next to each other for a test is just stupid (and having 100% multiple-choice exams probably isn't a good idea either). If you don't want your students to cheat on exams, don't make it easy for them to cheat. Increasing the costs of cheating (the penalties remain the same, but the probability of being caught increases when you have more supervision) and lowering the benefits (randomising seating so it is harder to sit beside or behind your friends) reduces the incentives for cheating. Then you wouldn't need Levitt and Lin to find the cheaters.

Saturday 12 December 2015

Why you should take a copy of your credit report on a first date

A long-standing result in the economics of relationships is assortative matching (or mating) - that people tend to match with others who are like them, whether in terms of age, ethnicity, income, education, or whatever. But what about unobservable characteristics, like trustworthiness - are people similar to their partners in terms of how trustworthy they are?

The interesting thing about unobservable characteristics is that they are, well, unobservable. They are a form of asymmetric information - you know how trustworthy you are, but potential partners do not know. And hidden characteristics like trustworthiness can lead to adverse selection, if some people can take advantage of the information asymmetry (which less trustworthy people, almost by definition, are likely to do).

As I've noted before in the context of dating:
An adverse selection problem arises because the uninformed parties cannot tell high quality dates from low quality dates. To minimise the risk to themselves of going on a horrible date, it makes sense for the uninformed party to assume that everyone is a low-quality date. This leads to a pooling equilibrium - high-quality and low-quality dates are grouped together because they can't easily differentiate themselves. Which means that people looking for high-quality dates should probably steer clear of online dating.
What you need is a way of signalling that you are trustworthy. With signalling, the informed party finds some way to credibly reveal the private information (that they are trustworthy) to the uninformed party. There are two important conditions for a signal to be effective: (1) it needs to be costly; and (2) it needs to be more costly to those with lower quality attributes (those who are less trustworthy). These conditions are important, because if they are not fulfilled, then those with low quality attributes (less trustworthy people) could still signal themselves as having high quality attributes (being more trustworthy). But what would make a good signal of trustworthiness?

Coming back to assortative matching for the moment, a recent working paper by Jane Dokko (Brookings), Geng Li (Federal Reserve Bank), and Jessica Hayes (UCLA) provides some interesting evidence suggesting that people match on their credit scores (as well as education, etc.). Credit scores are a measure of creditworthiness (and an indirect measure of trustworthiness), and the paper considers how credit scores are associated with relationship 'success' or 'failure'.

The authors use data from the Federal Reserve Bank of New York Consumer Credit Panel, which has about 12 million 'primary sample' records and about 30 million other consumer records (of people who live with those in the primary sample). Given some constraints in the dataset, the authors use an algorithmic approach to identify the formation of new cohabiting relationships (and the dissolution of previous relationships), giving them a sample of nearly 50,000 new couples.

The authors find that:
...conditional on observable socioeconomic and demographic characteristics, individuals in committed relationships have credit scores that are highly correlated with their partners’ scores. Their credit scores tend to further converge with their partners’, particularly among those in longer-lasting relationships. Conversely, we find the initial match quality of credit scores is strongly predictive of relationship outcomes in that couples with larger score gaps at the beginning of their relationship are more likely to subsequently separate. While we find that part of such a correlation is attributable to poorly matched couples’ lower chances of using joint credit accounts, acquiring new credit, and staying away from financial distress, the mismatch in credit scores seems to be important for relationship outcomes beyond these credit channels.
We also provide suggestive evidence that credit scores reveal information about one’s underlying trustworthiness in a similar way as subjective, survey-based measures. Moreover, we find that survey-based measures of trustworthiness are also associated with relationship outcomes, which implies that differentials in credit scores may also reflect mismatch in couples’ trustworthiness.
In other words, there is assortative matching in terms of credit scores, and since credit scores reveal information about trustworthiness, that suggests there is also assortative matching in terms of trustworthiness. That is, more trustworthy people are likely looking for partners who are also more trustworthy.

Which brings us back to signalling - if you want to provide a signal of your trustworthiness, you might want to consider providing your potential date with a verified copy of your credit report. It's costly to provide (perhaps not in monetary terms, but there is an intrinsic cost of revealing your credit information to someone else!), and more costly if you have a lower credit score (since you face a higher probability of your date deciding they have something better to do with their Saturday night).

[HT: Marginal Revolution]

Thursday 10 December 2015

Keytruda, and why Pharmac looks for the best value treatments

I was privileged to attend a presentation by Professor Sir Michael Marmot on Tuesday. It was on inequality and health (which I'm not going to talk about in this post), and one of the points he made struck me - Sir Michael suggested that he doesn't make the economic case for reducing health inequality, he makes the moral case.

The reason that comment struck me is that economists are often unfairly characterised as having no regard for the moral case, particularly in the context of the allocation of health care spending. However, I'm not convinced that the moral case and the economic case for how health care spending is allocated are necessarily different. Toby Ord makes an outstanding moral argument in favour of allocating global health resources on the basis of cost-effectiveness (I recommend this excellent essay). I'll spend the rest of the post demonstrating why, using the example of Pharmac's funding (or rather non-funding) of the new cancer drug Keytruda, which is currently big news in New Zealand (see here and here and here).

First, it is worth noting that Pharmac essentially has a fixed budget, which has increased from about $635 million in 2008 to $795 million in 2015. Pharmac uses that money to provide treatments free or at a subsidised cost to New Zealanders. However, Pharmac can't provide an unlimited number of treatments, because its funding is limited. So, naturally, it looks for the best value treatments.

What are the best value treatments? In the simplest terms, the best value treatments are the treatments that provide the most health benefits per dollar spent. A low-cost treatment that provides a large increase in health for patients is considered to be superior to a high-cost treatment that provides a small improvement in health.

Low-cost-high-benefit vs. high-cost-low-benefit is an easy comparison to make. But what about low-cost-low-benefit vs. high-cost-high-benefit? That is a little trickier. Economists use cost-effectiveness analysis to measure the cost of providing a given amount of health gains. If the health gains are measured in a common unit called the Quality-Adjusted Life Year (QALY), then we call it cost-utility analysis (you can read more about QALYs here, as well as DALYs - an alternative measure). QALYs are essentially a measure of health that combines length of life and quality of life.

Using the gain in QALYs from each treatment as our measure of health benefits, a high-benefit treatment is one that provides more QALYs than a low-benefit treatment, and we can compare them in terms of the cost-per-QALY. The superior treatment is the one that has the lowest cost-per-QALY.

Following this model in the context of a limited pool of funds to pay for health care, the treatments that are funded with the highest priority are the ones with the lowest cost-per-QALY. This is essentially the model that Pharmac follows, as do agencies in other countries such as the UK. The National Institute for Health and Care Excellence (NICE) sets a funding threshold of £30,000 per QALY - treatments that cost less than £30,000 per QALY are more likely to be funded, and those that cost more are less likely. In New Zealand, the effective cost-per-QALY for Pharmac-funded treatments was $35,714 in the last financial year.
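To make the prioritisation logic concrete, here is a stylised sketch. The treatments, costs, and QALY gains are invented for illustration; they are not Pharmac figures.

```python
# Rank treatments by cost-per-QALY and fund the best value ones first,
# subject to a fixed budget. All numbers are invented for illustration.
treatments = [
    # (name, cost of funding the treatment, total QALYs gained)
    ("Treatment A", 2_000_000, 400),   # $5,000 per QALY
    ("Treatment B", 5_000_000, 250),   # $20,000 per QALY
    ("Treatment C", 1_000_000, 10),    # $100,000 per QALY
    ("Treatment D", 3_000_000, 100),   # $30,000 per QALY
]
budget = 7_000_000

ranked = sorted(treatments, key=lambda t: t[1] / t[2])  # lowest cost-per-QALY first

funded, total_qalys = [], 0
for name, cost, qalys in ranked:
    if cost <= budget:
        budget -= cost
        funded.append(name)
        total_qalys += qalys

print("Funded:", funded)                   # ['Treatment A', 'Treatment B']
print("Total QALYs gained:", total_qalys)  # 650
```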

Now consider Keytruda, a new 'wonder drug' for treating melanoma. The downside is that Keytruda is extremely expensive - $300,000 per patient for a two-year course of treatment. Of course the cost-per-QALY isn't calculated as simply as dividing that cost by two because patients may gain many years of healthy life as a result of treatment, but Pharmac rated Keytruda as "low priority", in part because of the high cost.

Andrew Little has suggested that Labour would override Pharmac's decision not to fund Keytruda if elected, and John Key has also wavered in the face of public demand for the drug. Would that be a good thing? Of course, it would be good for the melanoma patients who would receive Keytruda free or heavily subsidised. But, in the context of a limited funding pool for Pharmac, forcing the funding of Keytruda might mean that savings need to be made elsewhere [*], including treatments that provide a lower cost-per-QALY. So at the level of the New Zealand population, some QALYs would be gained from funding Keytruda, but even more QALYs would be lost because of the other treatments that would no longer be able to be funded.

And so, I hope you can see why the economic case and the moral case for the allocation of health care spending need not necessarily be different. By allocating scarce health care resources using an economic case, we ensure the greatest health for all New Zealanders.

[*] Fortunately, neither political party is suggesting that funding for Keytruda would come out of Pharmac's existing limited budget. However, that doesn't mitigate the issue of overriding Pharmac's decision-making. Even if Pharmac's budget is increased to cover the cost of providing Keytruda to all eligible patients, there may be other treatments that have lower cost-per-QALY than Keytruda that are currently not funded but could have been within a larger Pharmac budget.

Wednesday 9 December 2015

Decreasing access to alcohol might make drug problems worse

Earlier in the year, Alex Tabarrok at Marginal Revolution pointed to this new paper by Jose Fernandez, Stephan Gohmann, and Joshua Pinkston (all University of Louisville). I thought it was interesting at the time, and I've finally gotten around to reading it. In the paper, the authors essentially compare 'wet' counties and 'dry' counties in Kentucky in terms of the number of methamphetamine (meth) labs (there's also an intermediate category, the 'moist' counties, where alcohol is available in some areas but not others). I was surprised to read that more than a quarter of all counties in Kentucky are dry (where the sale of alcohol is banned).

Anyway, it's an interesting analysis, with the hypothesis that in counties where alcohol is less available (and so more expensive to obtain), drugs like meth become a relatively cheaper substitute, which increases the quantity of meth consumed (and produced). At least, this is what we would expect from simple economic theory. The authors use a number of different methods, including OLS regression and propensity-score matching, but their preferred method is an instrumental variables approach (which I have earlier discussed here). The instrument of choice is religious affiliation in 1936 (which had a large impact on whether a county became 'dry' after prohibition was ended, and probably doesn't affect the number of meth labs today).
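For readers unfamiliar with instrumental variables, here is a minimal two-stage least squares sketch on simulated county-level data. The variable names and numbers are mine, not the authors', and running the two stages by hand like this gives the right point estimate but not the right standard errors.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1_200  # simulated county-year observations (120 counties pooled over several years)

# Simulated data: the 1936 religious-affiliation instrument shifts dry status,
# an unobserved confounder affects both dry status and meth labs, and the
# 'true' causal effect of being dry is +2 meth lab seizures.
religious_1936 = rng.normal(size=n)
confounder = rng.normal(size=n)
dry = (religious_1936 + confounder + rng.normal(size=n) > 0).astype(float)
meth_lab_seizures = 2.0 * dry + confounder + rng.normal(size=n)

# First stage: predict dry status from the instrument.
first_stage = sm.OLS(dry, sm.add_constant(religious_1936)).fit()
dry_hat = first_stage.fittedvalues

# Second stage: regress meth lab seizures on the predicted dry status.
second_stage = sm.OLS(meth_lab_seizures, sm.add_constant(dry_hat)).fit()
print(second_stage.params)  # slope close to 2; naive OLS is biased upwards by the confounder
```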

My main concern on reading the paper was that the number of meth labs was measured as the number of meth lab seizures, which probably depends on the degree of enforcement activity by police. However, among their robustness checks the authors look at the rate of property crime and don't find it related to their measure of meth lab seizures (although it would be interesting to see whether property crime also differed systematically between the wet and dry counties).

Onto their results, they find:
relative to wet counties, dry counties have roughly two additional meth lab seizures annually per 100,000 population. This suggests that, if all counties were to become wet, the total number of meth lab seizures in Kentucky would decline by about 25 percent.
The results appear to be quite robust to the choice of measure of alcohol availability (including alcohol outlet density), and estimation method. If you dispense with the meth lab data and look at data on emergency room visits for burns (a likely consequence of amateur meth labs), then the results are similar. And they're not driven by unobserved health trends (no relationship between alcohol availability and either childhood obesity or infant mortality).

Many local councils would probably like to reduce access to alcohol. However, if Fernandez et al.'s analysis holds up for other areas, it suggests that reducing access to alcohol might make drug problems worse.

[HT: Marginal Revolution]

Tuesday 8 December 2015

Reason to be increasingly skeptical of survey-based research

I've used a lot of different surveys in my research, dating back to my own PhD thesis research (which involved three household surveys in the Northeast of Thailand). However in developed countries, the willingness of people to complete surveys has been declining for many years. That in itself is not a problem unless there are systematic differences between the people who choose to complete surveys and those who don't (which there probably are). So, estimates of many variables of interest are likely to be biased in survey-based research. Re-weighting surveys might overcome some of this bias, but not completely.

A recent paper (ungated) in the Journal of Economic Perspectives by Bruce Meyer (University of Chicago), Wallace Mok (Chinese University of Hong Kong), and James Sullivan (University of Notre Dame), makes the case that things are even worse than that. They note that there are three sources of declining accuracy for survey-based research:

  1. Unit non-response - where participants fail to answer the survey at all, maybe because they refuse or because they can't be contacted (often we deal with this by re-weighting the survey);
  2. Item non-response - where participants fail to answer specific questions on the survey, maybe because the survey is long and they get fatigued or because they are worried about privacy (often we deal with this by imputing the missing data); and
  3. Measurement error - where participants answer the question, but do not give accurate responses, maybe because they don't know or because they don't care (unfortunately there is little we can do about this).
Meyer et al. look specifically at error in reporting transfer receipts (e.g. social security payments, and similar government support). The scary thing is that they find that:
...although all three threats to survey quality are important, in the case of transfer program reporting and amounts, measurement error, rather than unit nonresponse or item nonresponse, appears to be the threat with the greatest tendency to produce bias.
In other words, the source of the greatest share of bias is likely to be measurement error, the one we can do the least to mitigate. So, that should give us reason to be increasingly skeptical of survey-based research, particularly for survey questions where there is high potential for measurement error (such as income). It also provides a good rationale for increasing use of administrative data sources where those data are appropriate, especially integrated datasets like Statistics New Zealand's Integrated Data Infrastructure (IDI), which I am using for a couple of new projects (more on those later).
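To see why measurement error is the hardest of the three problems to deal with, here is a small simulation of my own (not from the Meyer et al. paper). It uses classical mean-zero error for simplicity, whereas misreporting of transfer receipts is more likely to be systematic under-reporting, but the basic point is the same: unlike non-response, error in the responses themselves can't be fixed by re-weighting or imputation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True transfer receipts and an outcome that depends on them (true slope = 0.5).
true_transfers = rng.gamma(shape=2, scale=500, size=n)
outcome = 0.5 * true_transfers + rng.normal(0, 300, size=n)

# Survey respondents misreport transfers with classical (mean-zero) error.
reported_transfers = true_transfers + rng.normal(0, 400, size=n)

slope_true = np.polyfit(true_transfers, outcome, 1)[0]
slope_reported = np.polyfit(reported_transfers, outcome, 1)[0]

print(f"Slope using true transfers:     {slope_true:.3f}")     # about 0.50
print(f"Slope using reported transfers: {slope_reported:.3f}")  # attenuated towards zero
```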

Finally, I'll finish on a survey-related anecdote which I shared in a guest presentation for the Foundations of Global Health class at Harvard here today. Unit non-response might be an increasing issue for survey researchers, but there is at least one instance I've found where unit non-response is not an issue. When doing some research in downtown Hamilton at night with drinkers, we had the problem of too many people wanting to participate. Although in that case, maybe measurement error is an even larger problem? I'll blog in more detail on that research at a later date.

[HT: David McKenzie at the Development Impact blog]

Sunday 6 December 2015

Climate change, violence and crime

Last year I wrote a post about climate change and violence, based on this paper (ungated here) by Hsiang et al. (the same Solomon Hsiang who was a co-author on the paper in Nature I discussed in my last post). Like most papers, the Hsiang et al. paper looks at cross-country differences in conflict. Within-country evaluations are much less common. Which makes this recent paper by Jean-Francois Maystadt (Lancaster University), Margherita Calderone (World Bank), and Liangzhi You (IFPRI and Chinese Academy of Agricultural Sciences) of interest. In the paper, Maystadt et al. look at local warming and conflict in North and South Sudan.

The authors use data measured at the 0.05 degree level (latitude and longitude) over the period 1997 to 2009. I strongly recommend reading the data section of the article, as it has pointers to a number of excellent sources of global spatially-explicit data that would be useful for a number of projects, not just in the context of climate change.

They use time and grid-cell fixed effects to "be able to draw causal inferences", but I probably wouldn't characterise their findings as necessarily causal. Or at least not definitively so. They find:
A change in temperature anomalies of 1 standard deviation is found to increase the frequency of violent conflict by 32%... temperature variations may have affected about one quarter (26%) of violent events in Sudan. On the contrary, no significant impact is found for rainfall anomalies...
Temperature anomalies (deviations from mean temperature) were associated with over a quarter of conflict events in Sudan, which is a large number. The authors investigate the mechanisms for this (it is unlikely to be water stress, because rainfall is not significant), and find that pastoralist areas (where livestock are an important source of income) are particularly affected by temperature. The authors conclude that conflict over natural resources (i.e. water) is exacerbated in these areas, but if that were the case you would expect rainfall to be a bigger factor. I'd be more inclined to believe that heat stress affects livestock in negative ways (weight loss, dehydration, mortality), which in turn affects income security for pastoralists.

On a different but related note, earlier in the year I read this 2014 paper (ungated earlier version here) by Matthew Ranson (Abt Associates), but I hadn't had a chance to blog about it until now. In the paper, Ranson looks at the relationship between monthly weather patterns and crime in the U.S. This paper is interesting because Ranson doesn't stop at looking simply at the relationship, but projects the change in crime over the rest of the century and estimates the social costs of the additional crimes. He finds a number of interesting things:
Across a variety of offenses, higher temperatures cause more crime. For most categories of violent crimes, this relationship appears approximately linear through the entire range of temperatures experienced in the continental United States. However, for property crimes (such as burglary and larceny), the relationship between temperature and crime is highly non-linear, with a kink at approximately 50 °F...
...in the year 2090, crime rates for most offense categories will be 1.5-5.5% higher because of climate change... The present discounted value of the social costs of these climate-related crimes is between 38 and 115 billion dollars.
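As a sketch of what a 'kinked' temperature-crime relationship looks like econometrically, here is a piecewise-linear (hinge) regression on simulated monthly data. The numbers are invented, not Ranson's.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000
temp = rng.uniform(0, 100, n)  # monthly average temperature (degrees Fahrenheit)

# Simulated property crime rate: flat below 50F, rising linearly above it.
crime = 100 + 0.8 * np.maximum(temp - 50, 0) + rng.normal(0, 5, n)

# Fit crime ~ b0 + b1*temp + b2*max(temp - 50, 0); the hinge term lets the
# slope change at the 50F kink.
X = np.column_stack([np.ones(n), temp, np.maximum(temp - 50, 0)])
b0, b1, b2 = np.linalg.lstsq(X, crime, rcond=None)[0]

print(f"Slope below 50F: {b1:.2f}")        # close to 0
print(f"Slope above 50F: {b1 + b2:.2f}")   # close to 0.8
```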
I would suggest that, if you did a similar analysis for New Zealand, we might see something similar (with the magnitude of social costs being much lower of course due to smaller population). Both papers provide additional reasons to hope for some agreement in Paris.

Friday 4 December 2015

Climate change, economic growth and inequality

This week and next, world leaders (and activists) are gathered in Paris to negotiate an agreement to combat climate change. So it seems timely to do a couple of posts on climate change. I'd been holding off on writing about this, and among NZ bloggers Mark Johnson has beaten me to it, but I'm going to do this post anyway.

Last month, the journal Nature published a new article looking at the impacts of temperature change on economic production, by Marshall Burke (Stanford), Solomon Hsiang and Edward Miguel (both UC Berkeley). The paper was covered by media back in October (see The Economist or The Washington Post).

The problem with most studies that attempt to evaluate the impact of temperature on economic output (or growth or productivity) is that they compare warmer countries with cooler countries. But of course there are other differences between countries that are difficult to control for. Burke et al. try an alternative approach - comparing each country in cooler years with the same country in warmer years. In short, they:
analyse whether country-specific deviations from growth trends are non-linearly related to country-specific deviations from temperature and precipitation trends, after accounting for any shocks common to all countries.
The key results are most usefully summarised in the figure below, which shows the change in growth rates at different annual average temperatures. The panel on the left shows the overall results, while the panels on the right disaggregate the results between rich and poor countries, early (pre-1990) and later periods, and the effects on agricultural and non-agricultural GDP.


Essentially, Burke et al. have demonstrated that temperature has a non-linear effect on economic growth - growth is maximised at an annual average temperature of about 13 degrees Celsius. However, the important results have to do with the implications for rich and poor countries (see panel b in the figure). Most rich countries are in cooler climates (think Europe, or North America for example), while poorer countries are typically in climates that already have average annual temperatures well in excess of the optimum (think Africa or South or Southeast Asia, for example). So, climate change is very likely to have unequal impacts on economic growth between rich and poor countries. It may even make some rich countries initially better off as they approach the optimum temperature (Chris Mooney suggests Canada or Sweden as examples), while simultaneously making most poor countries significantly worse off.
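As a minimal illustration of how a 'growth-maximising temperature' falls out of this kind of estimation, here is a sketch on simulated data. The coefficients are chosen so the turning point sits near 13 degrees Celsius; this is not the authors' specification, which uses country and year fixed effects and much more.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
temp = rng.uniform(0, 30, n)  # annual average temperature (degrees Celsius)

# Simulated growth rates with an inverted-U temperature effect peaking at 13C.
growth = 0.02 + 0.0026 * temp - 0.0001 * temp**2 + rng.normal(0, 0.02, n)

# Fit a quadratic: growth ~ b0 + b1*temp + b2*temp^2
b2, b1, b0 = np.polyfit(temp, growth, 2)

# With a concave quadratic, growth is maximised where the derivative is zero,
# i.e. at temp = -b1 / (2 * b2).
optimum = -b1 / (2 * b2)
print(f"Estimated growth-maximising temperature: {optimum:.1f} C")  # about 13
```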

So, unfortunately it seems likely that global (between country) inequality may be exacerbated by the changing climate, because "hot, poor countries will probably suffer the largest reduction in growth". This would undo some of the significant progress that has been made over the last half century in particular.

Importantly, technological change to date hasn't modified the relationship between temperature and growth (see panel c in the figure), which suggests that world leaders need to limit temperature change directly, rather than trying to modify the relationship between temperature and growth, if the projected outcome (77% of countries being poorer in per capita terms by 2100) is to be avoided. The authors conclude:
If societies continue to function as they have in the recent past, climate change is expected to reshape the global economy by substantially reducing global economic output and possibly amplifying existing global economic inequalities, relative to a world without climate change.

Wednesday 2 December 2015

What coffee farmers do with certification

This is my third post on Fair Trade this week (see the earlier posts here and here). Having discussed what Fair Trade does for prices (not much) and incomes (also not much), it's worth asking whether there is a better alternative.

In a new paper in the journal World Development (sorry, I don't see an ungated version anywhere), Bart van Rijsbergen, Willem Elbers (both Radboud University in the Netherlands), Ruerd Ruben (Wageningen University), and Samuel Njuguna (Jomo Kenyatta University of Agriculture and Technology in Kenya) compare Utz certification with Fair Trade certification, based on data from 218 coffee farmers in Kenya. Importantly, they use a propensity score matching approach (which I discussed briefly in this earlier post) to make the comparisons.

The key difference between Utz certification and Fair Trade is that Utz focuses on improved agricultural practices (that should result in higher coffee quality), while Fair Trade instead provides a price premium, as discussed in my earlier posts. Some of the surveyed coffee farmers held both certifications, some were Fair Trade only, and some were non-certified. Here's what they find:
Whereas Fairtrade clearly enhances further specialization in coffee production and more engagement in dry coffee processing, Utz-certified enhances input-intensification of coffee production and multi-certification opts for coffee renovation and the diversification of coffee outlets...
...household welfare and livelihood effects of coffee certification remain generally rather disappointing.
Neither certification scheme had much effect on income, mainly because farms only derived one-quarter to one-third of their income from coffee sales, and certified coffee sales were only around one-third of that income.

Which brings me to this other new paper I read today from the journal Food Policy (ungated here), by Wytse Vellema (Ghent University), Alexander Buritica Casanova (CIAT), Carolina Gonzalez (CIAT and IFPRI), and Marijke D'Haese (Ghent University). In the paper the authors use data from coffee farmers in Colombia, where they also note that coffee is only one (of several) income sources for most farm households. Their comparisons rely on cross-sectional mean differences (rather than a matching approach), so their results need to be treated with a little bit of caution.

Again, they find little effect of coffee certification (this time the certification is mostly Starbucks C.A.F.E. and Nespresso AAA) on household incomes. However, what caught my eye about the paper was the discussion of income and substitution effects for coffee farmers:
Having farm certification reduced income from on-farm agricultural production [MC: excluding coffee production] and agricultural wage labour, likely indicating re-allocation of resources away from these activities. Such re-allocation of labour is driven by substitution and income effects. As total household labour is fixed, increased returns to one activity cause households to substitute labour away from other activities. Households are most likely to substitute labour away from activities with low return or which are considered less ’satisfying’... Income effects lead to an increased consumption of leisure, reducing overall hours worked. The negative combined impact on incomes from on-farm agricultural production and agricultural labour shows that substitution effects dominate income effects.
As we discuss in ECON100, because coffee production is more rewarding with certification, farmers reallocate their labour time towards it and away from farming other products and from agricultural wage labour (the substitution effect). Higher income from coffee also leads them to consume more leisure, reducing their labour allocation to all farming activities (the income effect). The overall effect on farm household income is negligible (Vellema et al. note that coffee income increased, but overall income did not).

One last point is important here. There appears to be an increasing proliferation of certification schemes, and many farmers hold multiple certifications (in the Vellema et al. paper, about 29% of the sample held two certifications, and 6% held three certifications). For farmers, the marginal benefit of an additional certification probably decreases as they get more certified (since a good proportion of their crop is able to be sold through their existing certified channels), while the marginal cost probably increases (the opportunity costs involved with spending time maintaining multiple, and possibly conflicting, certifications). I wonder, what is the optimal number of certifications for a given coffee farmer?


Tuesday 1 December 2015

Comparative advantage and trade, Dr Seuss style

Justin Wolfers must have some interesting classes. He just posted this poem in the style of Dr Seuss by one of his students. It covers comparative advantage, task allocation, and trade. My favourite bit:

If tasks were switched amongst our three friends,
Opportunity cost would rise to no end.
Each should focus on that which they do best,
And trade for those products made by the rest.

Nice. It reminds me of this Art Carden effort from a few years ago, "How Economics Saved Christmas", which is an assigned reading in my ECON110 class.


Monday 30 November 2015

Fair trade, coffee prices, and farmer incomes

This is my second post on Fair Trade (see my earlier post here). Having established in the previous post that sustainability labels matter for consumers (at least according to the study I reviewed in that post), the question is whether Fair Trade matters for farmers. Does it increase the prices they receive by enough to offset the certification costs? Does it increase farmer incomes?

The basic mechanics of the Fair Trade pricing system work like this: farmers receive a stated minimum price for coffee or the market price, whichever is the greater. They also receive an additional premium, which is to be used only for social or business development purposes, as democratically determined by members of each cooperative.
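A minimal sketch of those mechanics is below. The minimum price and premium used here are placeholder numbers of mine, not official Fairtrade figures.

```python
def fair_trade_price(market_price, minimum_price=1.40, social_premium=0.20):
    """Price per pound received for coffee sold as Fair Trade: the minimum
    (floor) price or the market price, whichever is greater, plus the social
    premium earmarked for cooperative development. Placeholder numbers only."""
    return max(market_price, minimum_price) + social_premium

print(fair_trade_price(0.63))  # low market price: the floor binds -> 1.60
print(fair_trade_price(1.80))  # high market price: only the premium is added -> 2.00
```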

A recent paper (ungated longer and earlier version here) by Alain de Janvry (UC Berkeley), Craig McIntosh (UC San Diego), and Elisabeth Sadoulet (UC Berkeley) argues that easy (not quite free) entry into the supply of fair trade coffee eliminates any premium that coffee producers receive from being Fair Trade certified. Since the barriers to entry into Fair Trade production are relatively low, if the Fair Trade premium is high many producers seek to become certified, competing with the existing suppliers for a limited market for Fair Trade coffee, and reducing the proportion of each supplier's coffee that is sold at the Fair Trade premium (effectively reducing the average price received).

This outcome is driven by the actions of the certifiers. Because the certifiers' only source of income is providing certification, their incentive is to over-certify relative to the number of certified farmer cooperatives that would maximise farmer profits. Since the farmers pay for the certification, de Janvry et al. argue that the result is:
...that the price premiums in the FT system have largely flowed toward certifiers rather than producers as intended by the consumers of FT coffee.
They demonstrate their results using data from Coffeecoop, a Central American association of coffee cooperatives where all coffee is Fair Trade certified, over the period 1997 to 2009. However, because not all Fair Trade certified coffee is sold as Fair Trade, the authors can compare Fair Trade prices with non-Fair-Trade prices for the same batch of coffee, which allows them to control for quality (which is an important determinant of coffee prices).

They show that:
...the nominal premium was quite significant in the years 2001 to 2004 with low NYC price, reaching an average of 60c to 64c per pound over a market price of 63c per pound but falling to 5c to 9c per pound over a market price of 126c per pound from 2006 to 2008, even though the social premium in these years should have been at least 10c per pound.
That seems quite good overall, until you consider what happens to the proportion of coffee sold as Fair Trade, where they confirm that the sales share moves inversely with the Fair Trade premium (i.e. when the premium is high, the cooperative is able to sell a lower share of coffee as Fair Trade). The overall result is that:
The average effective premium over the thirteen years of our data is 4.4c per pound over an average NYC price of 107c per pound, and only 1.8c per pound over the last five years, 2005 to 2009.
And when they consider the costs of certification, which they estimate at 3c per pound:
...the average result of participating in the FT market has been only 2.5c per pound for the whole period of observation and with a loss of 1.2c per pound over the last five years.
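You can see how the pieces fit together with some simple arithmetic (the numbers below are illustrative, not the paper's estimates): the premium a cooperative effectively earns across its whole crop depends on the share it can actually sell as Fair Trade, less the certification cost it pays on everything.

```python
def net_gain_per_pound(nominal_premium, share_sold_as_ft, certification_cost):
    """Average net gain per pound across the cooperative's whole crop.
    Only the share sold as Fair Trade earns the premium, while certification
    costs are spread over every pound. Illustrative numbers only."""
    return nominal_premium * share_sold_as_ft - certification_cost

# A high premium attracts entry, which shrinks the share each cooperative
# can sell as Fair Trade...
print(f"{net_gain_per_pound(0.60, 0.15, 0.03):.3f}")  # 0.060: high premium, small FT share
# ...while a low premium leaves little to gain even when the share is higher.
print(f"{net_gain_per_pound(0.07, 0.50, 0.03):.3f}")  # 0.005: low premium, larger FT share
```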
Of course, the de Janvry et al. findings are only one result based on a single case study, and while they look at prices at the cooperative level, they don't look at effects at the farmer level.

Ana Dammert and Sarah Mohan (both Carleton University) recently reviewed the literature on the economics of Fair Trade in the Journal of Economic Surveys (ungated earlier version here). They point out that establishing the impacts of Fair Trade for farmers is not straightforward - there is likely to be a selection bias, since farmers who expect to do well out of certification are more likely to choose to become certified. Few studies have accounted for this well, and those that have done so have mostly used some form of propensity-score matching, where they essentially compare farmers that are certified with those who aren't but otherwise look very similar to the certified farmers (e.g. same size farm, same land quality, same education, same farm assets, etc.).

In terms of these higher-quality studies, the one study using this approach to look at prices found no evidence of increases in prices received by farmers. And for incomes, there are some gains but those gains are small relative to alternative income generation activities for rural dwellers like migration or employment in the rural non-farm economy. Having said that, they also find that:
Another strand of the literature accounts for selection bias and shows that Fair Trade producers have better assets, higher rates of savings and higher levels of animal stocks and perceive their land as having high renting value...
That is difficult to reconcile with little increase in incomes. So, it's possible that there are small gains for coffee farmers from Fair Trade certification. However, it's hard to say that those gains justify the much higher prices that coffee consumers pay (although the consumers do receive a 'warm glow' benefit as well).

Finally, maybe some of the other labelling initiatives are better for farmers than Fair Trade? Consumers were willing to pay more for Rainforest Alliance labelling than Fair Trade, after all. I'll return to this last point in a future post.



Sunday 29 November 2015

Douglass C. North, 1920-2015

I'm a bit late to this due to Thanksgiving-related activities here in the U.S., but Nobel laureate Douglass C. North passed away earlier this week. North shared the 1993 Nobel Prize with Robert Fogel (who passed away in 2013) for "having renewed research in economic history by applying economic theory and quantitative methods in order to explain economic and institutional change". North's work led to the development of both cliometrics (the quantitative study of economic history) and new institutional economics.

I use a little bit of North's work in my ECON110 class, where we spend half a topic on property rights and their historical development in western countries. He is also one of the co-authors of the required textbook for that class, which is up to its 19th edition.

Washington University in St Louis has an obituary here, and the New York Times also has an excellent obituary. Tyler Cowen has collected a number of links on Douglass North here.

It is really sad - we seem to be going through a bad few years in terms of the loss of economics Nobel winners.

Thursday 26 November 2015

Sustainability labels do matter

I've been reading a few papers on aspects of Fair Trade recently (I'll blog some of them over the coming days). Like many economists, I'm not sold on the positive effects of fair trade - but more on that later. In this post, I want to focus on this recent paper by Ellen Van Loo (Ghent University), Vincenzina Caputo (Korea University), Rodolfo Nayga Jr. and Han-Seok Seo (both University of Arkansas), and Wim Berbeke (Norwegian Institute for Bioeconomy Research), published in the journal Ecological Economics (sorry I don't see an ungated version anywhere).

In the paper, the authors do a couple of interesting things. First and foremost, they use discrete choice modelling (a type of non-market valuation technique, where you repeatedly present people with different hypothetical options and they choose the option they prefer - a technique I've used, for example, in this paper (ungated earlier version here), and in a forthcoming paper in the journal AIDS and Behavior that I'll blog about later) to investigate people's willingness-to-pay (WTP) for different sustainability labels on coffee. The labels they look at are USDA Organic certified, Fair Trade certified, country-of-origin labelling, and Rainforest Alliance certified. If people truly value these different labels (presumably because of the certification), then they should be willing to pay more for products that carry them.
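For the curious, here is a toy version of how willingness-to-pay comes out of a discrete choice model: simulate choices over hypothetical coffees, estimate a conditional logit by maximum likelihood, and take the ratio of the label and price coefficients. This is simulated data and a hand-rolled estimator, not the authors' model or their estimates.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)

# Simulated choice tasks: a respondent picks one of 3 coffees that differ in
# price and in whether they carry a sustainability label.
# True preferences: beta_price = -0.5, beta_label = 0.4, so true WTP = 0.4/0.5 = $0.80.
n_tasks = 3_000
price = rng.uniform(8, 12, size=(n_tasks, 3))
label = rng.integers(0, 2, size=(n_tasks, 3))
utility = -0.5 * price + 0.4 * label + rng.gumbel(size=(n_tasks, 3))
choice = utility.argmax(axis=1)

def neg_log_likelihood(beta):
    b_price, b_label = beta
    v = b_price * price + b_label * label    # deterministic utility of each option
    v = v - v.max(axis=1, keepdims=True)     # for numerical stability
    log_prob = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n_tasks), choice].sum()

result = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="BFGS")
b_price, b_label = result.x

# Willingness-to-pay for the label = -(label coefficient) / (price coefficient).
print(f"Estimated WTP for the label: ${-b_label / b_price:.2f}")  # close to $0.80
```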

Second, the authors use eye-tracking technology to follow which characteristics of the products people pay the most attention to when making these hypothetical choices. Eye-tracking involves following the movement of the eyes, so that you can identify where the subject is looking and how long they concentrate on the elements they are looking at. I'd say it's quite an exciting thing to do in the context of discrete choice modelling, since there is always the chance that people don't pay attention to all of the attributes of the hypothetical products (or scenarios) they are presented with.

Anyway, the authors found a number of things of interest, starting with:
When evaluating the coffee attributes, participants attached the highest importance to the flavor followed by the price, type of roast and in-store promotions... the sustainability labels are perceived as less important compared to other coffee attributes, with USDA Organic and Fair Trade being more important than Rainforest Alliance.
They also discovered there were three distinct types of consumers:

  1. "Indifferent" - consumers who didn't pay attention to either price or sustainability labelling (9.9% of the sample);
  2. "Sustainability and price conscious" - consumers who attached a high importance to both sustainability labelling and price (58.0% of the sample); and
  3. "Price conscious" - consumers who attached a high importance to price but not sustainability labelling (32.1% of the sample).
The eye-tracking results confirmed that the consumers who said they attached higher importance to sustainability labelling (or price) did indeed pay more attention to those attributes when selecting their choice in the discrete choice exercises. But did they want to pay more for them? The authors find that:
USDA Organic is the highest value attribute [among the sustainability labels]... followed by Rainforest Alliance and Fair Trade...
USDA Organic had the highest WTP among all the sustainability labels examined, resulting in a WTP premium of $1.16 for a package of 12 oz. This is followed by the Rainforest Alliance label and the Fair Trade label ($0.84 and $0.68, respectively).
So, people are willing to pay more for sustainable coffee. How does that compare with what they actually pay though? The authors note:
The actual price premium for coffee with a sustainability label ranges from $1.5 to $2.3/12 oz. when comparing coffee products with and without the label from the same brand.
Which suggests that the retailers are over-charging for these sustainable coffee products, relative to what the average consumer is willing to pay. However, the results overall do suggest that sustainability labels do matter. It doesn't tell us whether that is a good thing overall though - a point I'll no doubt come back to later.

Wednesday 25 November 2015

Try this: Regional activity report

Last week I wrote a post about tourism, that looked at data from the Ministry of Business, Innovation and Employment (MBIE)'s Regional Activity Report. This online data is a treasure-trove of summary statistics for all of the regions and territorial authorities.

If you scroll down you can also see how the regions and territorial authorities compare across eight sets of indicators:

  1. Social and Income - including household income, household income distribution, earnings by industry, deprivation index, and internet;
  2. Housing - including mean weekly rent, median house price, mean house value, and new dwellings;
  3. Workforce - including employment rate, labour force participation rate, NEET rate, unemployment rate, quarterly turnover rate, employment by industry, and employment by occupation;
  4. Education - including national standards achievement, and NCEA Level 2;
  5. Population - including population estimates, population projections, international migration, population by ethnicity, population by age group, and rural-urban proportions;
  6. Economic - including GDP per capita, GDP by industry, businesses by employees, new building consents, and new car registrations;
  7. Agriculture - including agricultural share of regional GDP, and area in farms; and
  8. Tourism - including guest nights per capita, accommodation occupancy rate, tourism spend, international guest nights, and international visits.
In most cases the data tracks changes over time as well, some of it back to 2001. 

While most of this data was already freely available (mostly from Statistics New Zealand), having it all collected in a single place, with a very user-friendly interface, makes it an excellent resource. Even better, you can easily download any of the data as CSV files to play with yourself.

I won't provide an example of what you can do with it. I'm sure you're all more than capable of playing with the data yourselves. Enjoy!

Monday 23 November 2015

Streams aren't drowning music sales

A couple of weeks back I wrote a post about digital music sales, specifically Radiohead's short-lived free offering of their album In Rainbows, and how it didn't harm their digital music sales. A broader question, which we discuss in ECON110, is how new business models are affecting more traditional revenue streams in media markets. Take the example of streaming music (or streaming video), which has exploded in popularity in recent years (although not everyone is a fan - witness Radiohead frontman Thom Yorke's notorious views on Spotify).

The important question, from the point of view of the artists, is: does streaming pay off? Does it cannibalise music sales (digital or otherwise)? A recent paper (ungated version here) by Luis Aguiar (Institute for Prospective Technological Studies in Spain) and Joel Waldfogel (University of Minnesota) looks specifically at this question.

There are three main possibilities with digital streaming. First, it might cannibalise music sales, because listeners find streaming cheaper or more convenient than owning their own catalogue of music. Second, it might stimulate music sales, perhaps because it allows people to sample music they otherwise would not have heard. Third, it might increase revenue despite having little effect on music sales, because consumers who previously downloaded music from illegitimate sources instead decide to stream it. Or some combination of these.

The biggest problem with looking at this econometrically is that if you simply regress music sales on streaming, you are likely to observe a positive relationship, because more popular tracks attract both more sales and more streams. You can tell a similar story about music piracy. Indeed, when Aguiar and Waldfogel look at song-level data, they observe positive relationships.

So, instead they move to aggregated data, which is less vulnerable to song-level popularity effects. Specifically, they use weekly data for 21 countries over 2012-2013: song-level digital sales and artist-level piracy via torrents, both aggregated to the country level, plus a country-level index of Spotify use. They find several important results:
First, when we use aggregate data, we find sales and piracy displacement by Spotify. Second, when we use data on the US covering the period of substantial change in Spotify use, we find smaller sales displacement than when we use 2013 data. Third, the coefficients relating Spotify use to recorded music sales differ by format. The coefficients for digital track sales are generally higher than the coefficients for albums, but many of the album coefficients are also negative and significant. Our best estimate indicates that an additional 137 streams displaces one track sale.
Now, that isn't the end of the story. Artists should be less worried about the degree of cannibalisation per se than about what it does to their revenue. Streaming adds to artist revenue, while lost sales obviously reduce it. What is happening to revenue overall? Aguiar and Waldfogel show that, based on reasonable assumptions about revenue per stream, Spotify is essentially revenue-neutral for artists.
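A quick back-of-the-envelope calculation shows why that is plausible. The per-stream payment and per-sale revenue figures below are illustrative assumptions on my part, not numbers from the paper.

```python
# Back-of-the-envelope check on revenue neutrality. The payment and revenue
# figures are illustrative assumptions, not the paper's numbers.
streams_per_lost_sale = 137     # the paper's displacement estimate
payment_per_stream = 0.007      # assumed payment per stream (dollars)
revenue_per_track_sale = 1.00   # assumed revenue per digital track sold (dollars)

streaming_revenue = streams_per_lost_sale * payment_per_stream
net_effect = streaming_revenue - revenue_per_track_sale
print(f"Streaming revenue per displaced sale: ${streaming_revenue:.2f}")
print(f"Net revenue effect per displaced sale: ${net_effect:+.2f}")
# Roughly $0.96 gained from streams against $1.00 lost per sale: close to neutral.
```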

Finally, there are a few reasons to be wary of these results. First, as Andrew Flowers has noted, the per-stream artist revenue that Spotify claims is contested. So, perhaps it is lower and leads to a net decrease in artist revenue. Second, the results are based on data from streams of top-50 songs, which are extrapolated to all streams. There is no way to know whether this extrapolation is reasonable without access to Spotify's proprietary data (and they're not volunteering it - if I were cynical, I might point out that this is awfully convenient!). Third, as Andrew Flowers points out, we don't know about the distribution of artist revenue. It's likely to be concentrated among a few high-profile artists, with the little guys getting very little. So, streaming might be net positive for the big artists and net negative for the little artists (or the other way around), and net neutral overall. Again, this is something we couldn't evaluate without more disaggregated Spotify data (and a good identification strategy to overcome the issues noted above).

[HT: Marginal Revolution]

Sunday 22 November 2015

The gender bias in economics

A fair amount has been written over the last couple of weeks about gender bias in economics. This Justin Wolfers piece in the New York Times was one of the catalysts, and it was followed up by Dan Diamond at Forbes and Jeff Guo at the Washington Post. The storm has been mostly about the new paper by Anne Case and Angus Deaton, which has more often than not been reported as a Deaton paper, with Case mentioned almost as an afterthought (so it's worth noting that Anne Case is a well-respected economics professor at Princeton in her own right).

Tyler Cowen at Marginal Revolution then pointed to this new paper by Heather Sarsons (a PhD candidate at Harvard, where I am based for the next month), entitled "Gender differences in recognition for group work" (which was also reported on by Jeff Guo in his article). In the paper, Sarsons first notes that there is a well-established literature showing that women are less likely to be promoted than men, and that "over 30% of the observed gap in tenure rates can not be accounted for by observable productivity differences or family commitments". She then looks specifically at tenure decisions for economics faculty, testing whether co-authoring papers prior to tenure decisions has different effects for male and female academic economists.

The choice of economics as the field to study is deliberate. In many fields, the order of authorship on co-authored papers is meaningful - the first author is typically the largest contributor, the second author the second-largest, and so on (how 'large' relative contributions are is open to negotiation, I guess). In contrast, in economics co-authors most often appear in alphabetical order, regardless of their relative contributions [*]. So, while in other fields the order of authors' names provides a signal of their relative contributions to co-authored papers, this typically isn't the case in economics. Sarsons's hypothesis is that this forces employers (universities) to make judgment calls about the relative contributions of co-authors, and that these judgment calls tend to go against women (because the employers' prior belief is that female economists are lower quality than male economists).

Using data from 552 economists over the period 1975 to 2014, she first finds that:
Approximately 70% of the full sample received tenure at the first institution they went up for tenure at but this masks a stark difference between men and women. Only 52% of women receive tenure while 77% of men do. There is no statistically significant difference in the number of papers that men and women produce although men do tend to publish in slightly better journals...
An additional paper is associated with a 5.7% increase in the probability of receiving tenure for both men and women but a constant gender gap between promotion rates persists. Women are on average 18% less likely to receive tenure than a man, even after controlling for productivity differences. 
So, women received tenure at a lower rate than men, but why? Men and women publish similar numbers of papers, an additional paper raises the probability of tenure by the same amount for both, and the difference in journal quality is rather small (even though it is statistically significant). Turning to her specific hypothesis about co-authorship, she finds that:
an additional coauthored paper for a man has the same effect on tenure as a solo-authored paper. An additional solo-authored paper is associated with a 7.3% increase in tenure probability and an additional coauthored paper is associated with an 8% increase.
For women, a solo-authored paper has an effect that is not statistically significantly different from the effect for men, but an additional co-authored paper is associated with an increase in tenure probability that is nearly 6 percentage points lower (i.e. a 2% increase, rather than an 8% increase).

To test the robustness of her findings, she does a similar analysis with sociologists (where the social norm is authorship by order of relative contributions), and finds no significant differences in tenure decisions between men and women. She concludes:
The data are not in line with a traditional model of statistical discrimination in which workers know their ability and anticipate employer discrimination...
The results are more in line with a model in which workers do not know their ability or do not anticipate employer discrimination, and where employers update on signals differently for men and women.
So, there is a bias against women. How far does this bias extend? There is a conventional wisdom that men are more suited to economics than women (to which I say: they should attend one of my ECON110 classes, where the female students are more often than not at the top of the class). This recent paper by Marianne Johnson, Denise Robson, and Sarinda Taengnoi (all from University of Wisconsin Oshkosh) presents a meta-analysis of the gender gap in economics performance at U.S. universities. Meta-analysis involves combining the results of many previous studies to generate a single (and usually more precise) estimate of the effect size, and (hopefully) overcomes the biases inherent in any single study of, in this case, the gender gap in economics performance.
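To illustrate the basic mechanics (before getting to the paper's results), here is a bare-bones fixed-effect meta-analysis that pools several estimates using inverse-variance weights. The effect sizes and standard errors are invented for illustration, and the paper's own approach is richer than this.

```python
# Bare-bones fixed-effect meta-analysis: pool estimates using inverse-variance
# weights. All numbers are invented for illustration.
import math

# (effect size, standard error) for four hypothetical studies of the gender gap
studies = [(-0.15, 0.08), (0.05, 0.10), (-0.20, 0.06), (0.02, 0.12)]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} (standard error {pooled_se:.3f})")
# The pooled estimate has a smaller standard error than any single study above.
```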

In the paper, the authors [**] take results from 68 studies, containing 325 regressions. They find a number of things of note:
...only 30.7% of regressions actually conform to the conventional wisdom - that men statistically significantly outperform women... In 9.2% of regressions, we find that women performed statistically significantly better...
Notable is the increase in regressions that find that women outperforming [sic] men after 2005...
We find a negative coefficient on year, which would indicate that the performance gap is narrowing over time, regardless of whether we use the year of data collection or the year of publication as our time measure.
So, the observed performance gap is declining over time. Which brings me back to Sarsons's paper. She used data from 1975 to 2014, a wide span of time over which things have (surely?) gotten better for female academics. I wonder what happens if she accounts for year of tenure? I doubt the effect goes away, but at least we might know a bit more about the changes over time, and whether things are indeed getting better.

[Update: I just had one of those wake-up-and-realise-you-said-something-wrong moments. Sarsons does include year fixed effects in her econometric model, so is already controlling for changes over time somewhat, but not changes in the difference in tenure probability between men and women over time (which would involve interacting that fixed effect with the female dummy or with one or more of the other variables in her model)].
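For what it's worth, the kind of specification I have in mind would interact the female dummy with time (or with the year fixed effects). A rough sketch of a linear probability model along those lines is below - the data file and variable names are all hypothetical, and this is not Sarsons's actual specification or code.

```python
# Sketch of a linear probability model for tenure with year fixed effects and
# interactions that let the gender gap vary. Hypothetical data and variable
# names; not Sarsons's actual specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tenure_cases.csv")  # hypothetical file: one row per tenure case

model = smf.ols(
    "tenured ~ female + solo_papers + coauthored_papers"
    " + female:coauthored_papers"   # the co-authorship penalty for women
    " + female:tenure_year"         # does the gender gap change over time?
    " + C(tenure_year)",            # year fixed effects
    data=df,
)
results = model.fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(results.summary())
```

The coefficient on the female-by-year interaction would then tell us whether the gender gap in tenure probability has been shrinking over time.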

[HT: Shenci Tang and others]

*****

[*] There are many exceptions to this co-authoring rule. My rule-of-thumb in co-authoring is that the first author is the one whose contributions were the greatest, with all other authors listed alphabetically.

[**] All of the co-authors are women, which relates to Sarsons's paper, since one of her other results was that co-authoring with other women didn't result in as large a penalty as co-authoring with men.

Saturday 21 November 2015

Consumer technology and insurance fraud

This week is International Fraud Awareness Week. So, this story in the New Zealand Herald last week was a bit early. It notes:
Insurance fraud spikes whenever a new technology upgrade, such as the release of a new iPhone, occurs, says Dave Ashton, head of the Insurance Council's Insurance Claims Register...
Speaking at the council's annual conference this week, Ashton said when new technology was released, many people want to upgrade their model and therefore claimed their older models had been stolen, lost, or accidentally damaged. The ensuing insurance pay-out funded the new upgrade.
This type of insurance fraud is an example of what economists call moral hazard. Moral hazard is the tendency for someone who is imperfectly monitored to take advantage of the terms of a contract (a problem of post-contractual opportunism). Smartphone owners who are uninsured have a large financial incentive to look after their devices and avoid dropping them in the toilet (which I note one of our previous ECON100 tutors did, twice), because if they damage the device they must cover the full cost of repair or replacement themselves (or use a damaged phone, etc.). Once the phone is insured, the owner has less financial incentive to look after the device, because they have transferred part or all of the financial cost of any accident onto the insurer. The insurance contract creates a problem of moral hazard - the smartphone owner's behaviour could change after the contract is signed.

Things are worse in the case of the type of insurance fraud noted in the article. This goes beyond simply being less careful, and extends to deliberate misrepresentation to take advantage of the terms of the insurance contract.

If you want to understand why, you only need to consider the incentives. A rational (or quasi-rational) person makes decisions based on the costs and benefits, e.g. the costs and benefits of claiming that your old phone was stolen. The benefit is the added benefit from the new phone (compared with the old phone). The costs are the moral cost (from lying to the insurance company), as well as the expected cost associated with being caught (which depends on the probability of being caught, and the financial penalty you face if caught).

Most people don't engage in insurance fraud because the combined costs (moral plus potential financial punishment) are greater than the perceived benefits. However, when a new phone is released the benefits of fraud increase significantly (because the new phone provides a much larger benefit). Again, this doesn't mean that everyone will engage in insurance fraud, but at least some people will.
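To make the incentive story concrete, here's a tiny expected-value sketch. All of the numbers are made up; the point is just how a new phone release shifts the benefit side of the calculation.

```python
# Illustrative expected cost-benefit of filing a false claim. All numbers are
# made up; the point is how a new release shifts the benefit side.
def net_gain_from_fraud(benefit, moral_cost, p_caught, penalty):
    """Expected net gain = benefit - moral cost - expected punishment."""
    return benefit - moral_cost - p_caught * penalty

# Ordinary times: modest benefit from replacing a phone that still works fine
print(net_gain_from_fraud(benefit=100, moral_cost=200, p_caught=0.05, penalty=2000))  # -200.0 (don't do it)

# Just after a new model is released: the perceived benefit jumps
print(net_gain_from_fraud(benefit=600, moral_cost=200, p_caught=0.05, penalty=2000))  # 300.0 (now it looks tempting)
```

Because moral costs differ from person to person, only those with low enough moral costs (or a low enough perceived chance of being caught) tip over into fraud when the benefit rises.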

Insurance companies are not stupid though, and they are moving on this, according to the Herald article:
The council this week launched upgraded technology on the register which allows more 'big data' analysis, including hotspots such as the number of burglary claims in a particular neighbourhood. It can also provide predictive analysis around likely claims for things like weather events based on the collective claims history.
So you can probably expect your next claim for a damaged iPhone to come under a little more scrutiny, if you are claiming just after a new phone is released.


Thursday 19 November 2015

Uber just told us there is elastic demand for rides

The New Zealand Herald ran a story on Tuesday about Uber cutting fares by 10 per cent in New Zealand. In the story, they quote the general manager for Uber in New Zealand, Oscar Peppitt:
"When we are able to cut our prices, that actually leads to an increase in earnings for drivers...
When we're able to drop those prices, earnings for drivers increase, and increase disproportionately to what we drop."
Essentially, Peppitt is telling us that demand for rides on Uber is what economists term price elastic. When demand is elastic, that means that an increase in price will result in a more than proportional decrease in the quantity sold (e.g. a 10% increase in price might lead to a 20% decrease in quantity). The reverse is also true - a decrease in price will result in a more than proportional increase in the quantity sold (e.g. a 10% decrease in price might lead to a 20% increase in quantity).

When demand is elastic, decreasing your price will increase total revenue. This is because for a simple firm (like a taxi) total revenue is simply price multiplied by quantity. If price decreases, but quantity increases by a greater proportion, then total revenue will increase.

Here's where things get tricky, because firms don't operate in isolation. Elasticities have a strategic element, as we teach in ECON100. Uber might face elastic demand for rides when it lowers prices and other taxi firms don't match the price decrease. Taxi customers would suddenly find that Uber's rides are much cheaper than those of other taxi firms (they are already, but we won't worry about that for now). Many customers would switch to Uber, leading to a large increase in the quantity of rides demanded (i.e. relatively elastic demand). This is illustrated in the diagram below. When Uber lowers its price from P0 to P1, and no other firm matches the new price, Uber faces the demand curve D1. Quantity increases by a lot, from Q0 to Q1. Total revenue, which is the rectangle under the price and to the left of the quantity, increases from P0*Q0 to P1*Q1 (it should be clear that the total revenue rectangle becomes larger).

[Figure: Uber's demand curves D1 (price cut not matched by rivals) and D2 (price cut matched), with total revenue rectangles P0*Q0, P1*Q1, and P1*Q2]
However, if the other taxi firms also lower their prices, Uber doesn't have the same price advantage. There will be some increase in customers overall (because prices are lower at all firms), but Uber won't capture all of those new customers because the other firms are cheaper now too. That means that the quantity demanded from Uber will increase, but not by as much. In the diagram above, when Uber lowers its price from P0 to P1, and other firms also lower their prices, Uber faces the demand curve D2. Quantity increases a little, from Q0 to Q2. Total revenue decreases from P0*Q0 to P1*Q2 (it should be clear that the total revenue rectangle becomes smaller).
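A numerical version of the diagram's argument might help. Suppose a 10% price cut raises Uber's quantity by 30% if rivals don't match it, but only by 5% if they do - both figures are illustrative assumptions, not Uber's actual numbers.

```python
# Illustrative revenue comparison for a 10% price cut. The quantity responses
# for the two demand curves are assumptions, not Uber's actual figures.
p0, q0 = 10.0, 100.0   # initial price and quantity (arbitrary units)
p1 = p0 * 0.90         # price after a 10% cut

q1 = q0 * 1.30         # D1: rivals don't match, quantity rises 30%
q2 = q0 * 1.05         # D2: rivals match, quantity rises only 5%

print(f"Original revenue (P0*Q0): {p0 * q0:.0f}")  # 1000
print(f"Cut not matched (P1*Q1):  {p1 * q1:.0f}")  # 1170 - revenue rises
print(f"Cut matched (P1*Q2):      {p1 * q2:.0f}")  # 945  - revenue falls
```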

To make matters worse, if marginal costs are increasing then not only does total revenue decrease, but profits for taxi drivers will decrease too (note that this is a possibility even if total revenue increases slightly). And we haven't even considered the distributional impacts (some Uber drivers might get lots more rides, while others get the same, depending on when they are available to drive, where they are located, and so on). So it's probably overly simplistic to assume that lower prices and more rides will make all Uber drivers better off. It certainly shouldn't attract more drivers to sign up for Uber.

So maybe Uber has other motives for lowering prices? Market penetration pricing perhaps? Or just good publicity to try and capture market share?

Tuesday 17 November 2015

More on hobbits and tourism

A few weeks back, I posted about the impact of Lord of the Rings on tourism arrivals in New Zealand. The conclusion was that there was a short-term rise in tourist arrivals to New Zealand after the films, but that the effect did not persist in the longer term.

Last week the Herald ran a story about the impact of the Hobbit films on local tourism, specifically tourist spending in the Matamata-Piako District (where the Hobbiton Movie Set is located). The story was backed up by an impressive data visualisation on the new Herald Insights site. From the story:
The Hobbit film trilogy has catalysed a spending surge in the Matamata region by tourists from the likes of Australia, Germany, United Kingdom and North America.
The amount spent in the area by those tourists has risen at a greater magnitude over the past five years, relative to 2009 spending, than in any other region in New Zealand.
Of course, this is great news for the Matamata-Piako District, as it means more tourist spending, and more jobs in tourism, accommodation, and other services. However, it doesn't mean that overall tourist arrivals have increased (thankfully the Herald story doesn't imply this either), and one might rightly wonder which areas may have lost tourism spending as a result of tourists flocking to Matamata instead.

MBIE's Regional Activity Report is an outstanding interactive tool for taking at least an initial look at these questions. Expanding on the Herald's example, German tourists' spending in Matamata-Piako increased by 535% between 2009 and 2014. The big losers (of German tourist spending) over the same period appear to be Porirua City (down 50%), Hauraki District (down 40%), and Palmerston North City (down 28%). See here for details.

It is also worth noting that German tourists were responsible for just 1.9% of tourist spending in Matamata-Piako in 2014. The trend in increased spending is apparent across many groups for Matamata-Piako - there are similar (but not as large in relative terms) spikes for spending by tourists from the rest of Europe (excluding Germany and the UK), the U.S., Canada, and Australia. But not for China or Japan.

So, an overall win for Matamata-Piako, but hard to say whether it is a net win for New Zealand.

Monday 16 November 2015

Joseph Stiglitz on high frequency trading

I’ve had the occasional disagreement with my more left-leaning students about my views on markets. Usually, they’ve gotten the wrong impression of where I stand, which is that although markets are usually a good thing, there are limits. One of the things that has always quietly concerned me has been the way that financial markets work (or sometimes, fail to work). So, when I read this piece by Joseph Stiglitz (2001 Nobel Prize winner), I was quite happy to see that we share some disquiet in this area. Overall, Stiglitz appears to be quite negative on the effects of high frequency trading.

The paper is very readable (not uncommon for Stiglitz, based on other writing of his I have read), even for those who are not technically inclined. Some highlights:
High frequency trading… is mostly a zero sum game—or more accurately, a negative sum game because it costs real resources to win in this game...
A market economy could not operate without prices. But that does not mean that having faster price discovery, or even more price discovery necessarily leads to a Pareto improvement… it is not obvious that more trading (e.g. flash trading) will result in the markets performing price discovery better...
…there may be little or no social value in obtaining information before someone else, though the private return can be quite high… And because the private return can exceed the social return, there will be excessive investments in the speed of acquisition of information.
…if sophisticated market players can devise algorithms that extract information from the patterns of trades, it can be profitable. But their profits come at the expense of someone else. And among those at whose expense it may come can be those who have spent resources to obtain information about the real economy… But if the returns to investing in information are reduced, the market will become less informative.
…managing volatility and its consequences diverts managerial attention from other activities, that would have led to faster and more sustainable growth.
Stiglitz concludes:
While there are no easy answers, a plausible case can be made for tapping the brakes: Less active markets can not only be safer markets, they can better serve the societal functions that they are intended to serve.
In other words, when it comes to financial markets, sometimes less is more.

[HT: Bill Cochrane]

Sunday 15 November 2015

Corporate prediction markets work well, given time

Prediction markets were all the rage in the 2000s. The Iowa Electronic Markets were at their peak, and James Surowiecki wrote the bestseller The Wisdom of Crowds. The basic idea is that the average forecast of a bunch of people is better than the forecast of most (or sometimes all) experts. I was quite surprised that prediction markets appeared to go away in recent years, or at least they weren't in the news much (probably crowded out by stories about big data). I was especially surprised we didn't see many stories about corporate prediction markets, which suggested they weren't particularly prevalent. It turns out that wasn't the case at all.

This paper in the Review of Economic Studies (ungated earlier version here) by Bo Cowgill (UC Berkeley) and Eric Zitzewitz (Dartmouth College) shows that corporate prediction markets have been far more widespread than the news coverage suggested, as demonstrated by their Table 1:

[Table 1 from the Cowgill and Zitzewitz paper]
The paper has much more of interest of course. It looks at three prediction markets, at Google, Ford, and an unnamed basic material and energy conglomerate (Firm X), and tests whether these markets are efficient. They find:
Despite large differences in market design, operation, participation, and incentives, we find that prediction market prices at our three companies are well calibrated to probabilities and improve upon alternative forecasting methods. Ford employs experts to forecast weekly vehicle sales, and we show that contemporaneous prediction market forecasts outperform the expert forecast, achieving a 25% lower mean-squared error... At both Google and Firm X market-based forecasts outperform those used in designing the securities, using market prices from the first 24 hours of trading so that we are again comparing forecasts of roughly similar vintage.
In other words, the prediction markets perform well. There are some inefficiencies though - for instance, Google's market exhibits an optimism bias, which is driven by traders who are overly optimistic about their own projects (and their friends' projects), as well as new hires being especially optimistic. However, the inefficiencies disappear over time, and:
Improvement over time is driven by two mechanisms: first, more experienced traders trade against the identified inefficiencies and earn higher returns, suggesting that traders become better calibrated with experience. Secondly, traders (of a given experience level) with higher past returns earn higher future returns, trade against identified inefficiencies, and trade more in the future. These results together suggest that traders differ in their skill levels, they learn about their ability over time, and self-selection causes the average skill level in the market to rise over time.
So, prediction markets work well for firms, given enough time for inefficiencies to be driven out of the markets. This is what you would expect - traders who are consistently poor forecasters either drop out of the market or they learn to be better forecasters.
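For anyone who wants to try a comparison like the Ford one themselves, the basic forecast-evaluation step is just comparing mean-squared errors against realised outcomes. Here's a minimal sketch with hypothetical stand-in numbers (not the paper's data):

```python
# Comparing forecast accuracy by mean-squared error (MSE). The outcomes and
# forecasts below are hypothetical stand-ins, not the paper's data.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]                  # realised events (1 = happened)
market = [0.8, 0.2, 0.7, 0.6, 0.3, 0.9, 0.1, 0.4]    # prediction market prices as probabilities
experts = [0.6, 0.4, 0.5, 0.5, 0.5, 0.7, 0.3, 0.5]   # expert probability forecasts

def mse(forecasts, actuals):
    return sum((f - y) ** 2 for f, y in zip(forecasts, actuals)) / len(actuals)

mse_market, mse_expert = mse(market, outcomes), mse(experts, outcomes)
print(f"Market MSE: {mse_market:.3f}, expert MSE: {mse_expert:.3f}")
print(f"Market improvement over experts: {100 * (1 - mse_market / mse_expert):.0f}%")
```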

However, as the authors note, they were limited to looking at just three prediction markets, and only at firms that would share data with them. There is likely to be some survivorship bias here - prediction markets that don't work well won't last long, and are unlikely to be observed. On the other hand, by the time the paper was written, the markets at Google and Ford had closed down in spite of their good overall predictive performance. On this last point, the authors note that "decisions about the adoption of corporate prediction markets may... depend on factors other than their utility in aggregating information". Other forecasters don't like being shown up.

[HT: Marginal Revolution]