Thursday, 24 December 2015

The black market in WINZ payment cards

One of the common examples I use in illustrating the role of incentives for my ECON110 class is the black market in WWII Great Britain. In short, many products (e.g. meat) were rationed. Essentially, each household registered with their local shops, and the shops were provided with only enough meat for their registered customers. However, some households would prefer less meat and more sugar, so a complex system of black market trades started to occur, whereby households could obtain the goods they actually wanted, rather than those the authorities deemed they should have (you can read more here).

Black markets tend to arise whenever the government limits what citizens are allowed to spend their money on. For instance, in the U.S. food stamp programme (a.k.a. the Supplemental Nutrition Assistance Program, or SNAP), many recipients sell their SNAP vouchers for cash, often with the complicity of shopkeepers (see here and here for example).

And now we have a local example, with payment cards from Work and Income New Zealand (WINZ) showing up for sale on Facebook. The New Zealand Herald reports:
Work and Income payment cards are showing up for sale on Facebook trading groups.
In a screenshot provided to the Herald, one person offers a $100 payment card for sale for $40 on the "Buy and Sell Hamilton" Facebook group.
And this is in spite of WINZ attempts to make it difficult for this sort of abuse:
When a card is issued, the recipient must sign it and payments are verified by matching the signature on the receipt to the back of the card.
Grants for food and hardship must be used within three days. 
Of course, setting rules on what payment cards can be used for makes them less valuable to the recipients than cash. It also increases the costs to the government, because of the need to enforce the rules. And there needs to be some form of sanction for recipients who break the rules. Having sanctions increases the cost of abuse for payment card recipients, by making abuse more costly when caught (while enforcement increases the chance of being caught). Presumably there are also penalties for the person who buys and tries to use a payment card in the name of someone else (under fraud laws, I expect).

A payment card will only sell for less than its face value, partly because the recipient (seller) probably wants cash fast (so is willing to give up some of the face value of the card in exchange for cash in hand now), and partly to compensate the buyer for the risk they face (of penalties for fraudulently using a payment card in the name of someone else).

The more urgent the sale (within less than three days), or the more costly the penalties for the buyer, the greater the difference will be between the face value of the payment card and the price it will be sold for. And so, we end up with the situation where a $100 WINZ payment card is being sold for $40.
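To make that arithmetic concrete, here's a minimal sketch of the buyer's side of the trade. All of the numbers (the chance of being caught, the size of the penalty) are invented for illustration - they are not from the Herald story.

```python
# A minimal sketch, with made-up numbers, of why a $100 card might sell
# for only $40. The chance of being caught and the penalty are
# assumptions for illustration, not figures from the Herald story.

def max_buyer_offer(face_value, p_caught, penalty):
    """The most a risk-neutral buyer would pay: the expected value of
    using the card, less the expected penalty if caught."""
    return (1 - p_caught) * face_value - p_caught * penalty

offer = max_buyer_offer(face_value=100.0, p_caught=0.3, penalty=100.0)
print(f"The buyer will pay at most ${offer:.2f}")  # $40.00

# A seller who needs cash before the card's three-day expiry will accept
# any price above their (low) reservation value, so the card trades well
# below face value.
```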

Saturday, 19 December 2015

China's zombie companies are playing chicken

Earlier this month, Andrew Batson wrote an interesting blog article about China's zombie companies:
One of the more interesting developments in official Chinese discussions about the economy has been the appearance of the term “zombie companies”... money-losing companies that seem to stay alive far longer than economic fundamentals warrant. This problem is particularly acute in the commodity sectors: a global supply glut has driven down prices of iron ore and coal to multi-year lows, levels where China’s relatively low-quality and high-cost mines have difficulty being competitive. And yet they continue operating despite losing money, because it is easier to keep producing than to completely shut down.
In ECON100 we no longer cover cost curves in detail, so we also don't talk about the section of the firm's marginal cost curve (between average variable cost and average total cost) where the firm makes losses but prefers to continue trading, because the losses from trading are smaller than the losses from shutting down (which are the firm's fixed costs). However, this is exactly the situation for China's zombie firms. As Batson notes:
An excellent story this week in the China Economic Times on the woes of the coal heartland of Shanxi quoted one executive saying, “If we produce a ton of coal, we lose a hundred yuan. If we don’t produce, we lose even more.”
Another aspect of the reluctance of China's zombie companies to shut down is strategic, and in this case it may be that the zombie companies continue to operate even if their losses would be smaller if they shut down. What the zombie companies are doing is playing a form of the 'chicken game'. In the classic version of the game of chicken, the two players line up their cars at opposite ends of the street. They accelerate towards each other, and if one of the drivers swerves out of the way, the other wins. If they both swerve, neither wins, and if neither of them swerves then both die horribly in a fiery car accident.

Now consider the game for zombie companies, as expressed in the payoff table below (assuming for simplicity that there are only two zombie firms, A and B). If either firm shuts down, they incur a small loss (including if both firms shut down). However, if either firm continues operating while the other firm shuts down, the remaining firm is able to survive and return to profitability. Finally, if both firms continue operating, both incur a big loss.
[Payoff table - each cell shows (payoff to A, payoff to B):]

                           B: Continue operating             B: Shut down
A: Continue operating      (big loss, big loss)              (survive + profit, small loss)
A: Shut down               (small loss, survive + profit)    (small loss, small loss)
Where are the Nash equilibriums in this game? To identify them, we can use the 'best response' method. For each strategy of each player, we identify the best response of the other player. Where both players are selecting a best response, they are doing the best they can, given the choice of the other player (this is the definition of Nash equilibrium).

For our game outlined above:
  1. If Zombie Company A continues operating, Zombie Company B's best response is to shut down (since a small loss is better than a big loss) [we track the best responses with ticks, and not-best-responses with crosses; Note: I'm also tracking which payoffs I am comparing with numbers corresponding to the numbers in this list];
  2. If Zombie Company A shuts down, Zombie Company B's best response is to continue operating (since survival and profits is better than a small loss);
  3. If Zombie Company B continues operating, Zombie Company A's best response is to shut down (since a small loss is better than a big loss); and
  4. If Zombie Company B shuts down, Zombie Company A's best response is to continue operating (since survival and profits is better than a small loss).
Note that there are two Nash equilibriums, where one of the companies shuts down, and the other continues operating. However, both firms want to be the firm that continues operating. This is a type of coordination game, and it is likely that both firms will try to continue operating, in the hopes of being the only one left (but leading both to incur big losses in the meantime!).
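For those who like to see the method done mechanically, here's a minimal sketch of the best-response check in Python. The payoff numbers are arbitrary; only their ordering (big loss < small loss < survive and profit) matters.

```python
# A minimal sketch of the best-response method for the 2x2 game above.
# The payoff numbers are arbitrary; only their ordering matters
# (big loss < small loss < survive and profit).

BIG_LOSS, SMALL_LOSS, SURVIVE = -10, -2, 5

strategies = ["continue operating", "shut down"]

# payoffs[(a, b)] = (payoff to Zombie Company A, payoff to Zombie Company B)
payoffs = {
    ("continue operating", "continue operating"): (BIG_LOSS, BIG_LOSS),
    ("continue operating", "shut down"):          (SURVIVE, SMALL_LOSS),
    ("shut down", "continue operating"):          (SMALL_LOSS, SURVIVE),
    ("shut down", "shut down"):                   (SMALL_LOSS, SMALL_LOSS),
}

def is_nash(a, b):
    """(a, b) is a Nash equilibrium if neither firm can do better by
    changing its own strategy, holding the other firm's strategy fixed."""
    a_best = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in strategies)
    b_best = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in strategies)
    return a_best and b_best

for a in strategies:
    for b in strategies:
        if is_nash(a, b):
            print(f"Nash equilibrium: (A: {a}, B: {b})")
# Prints only the two asymmetric outcomes: one firm operates, the other shuts down.
```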

What's the solution? To avoid the social costs of the zombie companies continuing to operate and generating large losses, the government probably needs to intervene. Or, as noted at the bottom of the Batson blog article, mergers of these firms will remove (or mitigate) the strategic element, which is the real problem in this case.

Wednesday, 16 December 2015

The average gains from trade may depend on human capital

That there are gains from trade is one of the (few?) propositions of economic theory that is supported by virtually all economists. At least, the theory holds for individuals trading with each other. Despite this overall agreement, there is still no consensus on whether international trade is equally good for all countries. Sceptics like Dani Rodrik and Nobel prizewinner Joseph Stiglitz argue that globalisation (and by extension, free trade) may have gone too far.

The basic argument that free trade is good for all countries relies on increases in aggregate wellbeing (gains in consumer and/or producer surplus) that are shared by consumers and producers in each country, making them better off. Also, openness places pressure on governments and households to invest more in human capital in order to compete in the global market, making the average citizen better off. The argument against this suggests that, because poorer countries rely on tariffs on traded goods for revenue, openness to trade undermines government income and makes it harder for the government to pay for the welfare state, health, and education expenditures. Reductions in these areas make the average citizen worse off.

A recent paper (sorry, I don't see an ungated version anywhere) by Stephen Kosack (University of Washington) and Jennifer Tobin (Georgetown University) suggests that both of these arguments may be true. How can that be? It all depends on the level of human capital of the country. Kosack and Tobin explain:
One consequence of the increased trade is the reallocation of economic activities that require high levels of human capital to countries with capable, productive workers. In countries that already have such work-forces, increasing trade appears to reinforce human development, adding both resources and rational for further improvements in people's lives and capacities. But in most countries, the workforce has not yet developed such capacities, and in these countries, increased trade tends to undermine the incentives and the ability of governments to invest in citizens and citizens to invest in themselves.
Kosack and Tobin use data since 1980 from the Human Development Index, and find that trade is overall negatively associated with human development. However, this negative relationship holds strongly only for low-HDI countries; for high-HDI countries the relationship is positive (i.e. trade further increases human development). Their results are robust to a number of different specifications and data sources.
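To see how a single model can generate opposite effects at different levels of HDI, here's a minimal sketch using an interaction term. This is my guess at the general form of such a specification, estimated on simulated data - it is not Kosack and Tobin's actual model.

```python
# A minimal sketch (not Kosack and Tobin's actual specification) of how an
# interaction term lets the estimated effect of trade openness flip sign
# with the level of HDI. All data here are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
hdi = rng.uniform(0.3, 0.95, n)          # initial HDI level
openness = rng.normal(0, 1, n)           # standardised trade openness
# Assumed data-generating process: openness hurts at low HDI, helps at high HDI
hdi_growth = 0.01 + openness * (hdi - 0.7) * 0.02 + rng.normal(0, 0.005, n)

X = sm.add_constant(np.column_stack([openness, hdi, openness * hdi]))
fit = sm.OLS(hdi_growth, X).fit()
b_open, b_inter = fit.params[1], fit.params[3]
# The marginal effect of openness is b_open + b_inter * HDI, so it
# switches from negative to positive at the turning point below.
print(f"Effect of openness flips sign at HDI = {-b_open / b_inter:.2f}")
```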

However, I do have one concern. Many of the lowest-HDI countries are also the sub-Saharan African countries that have, since the 1980s, suffered the brunt of the HIV/AIDS epidemic. While my reading is that there is only weak evidence of an impact of the epidemic on GDP, there has certainly been a large impact on life expectancy (one of the components of HDI). Kosack and Tobin do not explicitly control for this. Having said that, when they exclude the health component of the HDI from their dependent variable, their key results do not change significantly. So, perhaps there is something to this.

Interestingly, New Zealand did rate an explicit mention in the paper. At New Zealand's level of HDI, the results suggest that openness to trade is good - "a one standard deviation increase in trade openness is associated with an increase in HDI Rate of nearly 2%". The turning point between 'good' and 'bad' is at the level of HDI of Mauritius or Brazil.

It is important to note that the average citizens of low-HDI countries were not made worse off by openness to trade in absolute terms, only relative to high-HDI countries with a similar level of openness to trade. My conclusion from reading the paper is that, while openness to trade is a good thing, it is better for some countries than others. I don't think anyone would have disagreed with that even before this paper was published. My take is that the debate on whether globalisation has gone too far is still not settled (and I'll continue to have spirited discussions of it in my ECON110 class each year, no doubt).

Monday, 14 December 2015

Levitt and Lin can help catch cheating students

In a recent NBER Working Paper (sorry, I don't see an ungated version anywhere), Steven Levitt (University of Chicago - better known as co-author of Freakonomics) and Ming-Jen Lin (National Taiwan University) demonstrate a simple algorithm that identifies students likely to be cheating in exams. The authors were brought in to investigate cheating in an "introductory natural sciences course at a top American university", where a number of students had reported cheating on a midterm.

The algorithm essentially compares the number of shared incorrect answers between students sitting next to each other with the number shared between students not sitting next to each other. Importantly, on the midterm students could sit immediately next to each other (i.e. they were not spread out around the room). Levitt and Lin find:
Students who sit next to one another on the midterm have an additional 1.1 shared incorrect answers... students who sat next to each other have roughly twice as many shared incorrect answers as would be expected by chance.
Essentially, they find that "upwards of ten percent of the students cheated on the midterm in a manner that is detectable using statistics". Then for the final exam, they changed things up, and cheating fell to essentially zero (the analysis suggested that only four students cheated in the final exam).
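A minimal sketch of the core calculation - comparing shared incorrect answers between adjacent and non-adjacent pairs - might look like this. The data structures and the simple difference in means are my assumptions; the paper's actual procedure is more sophisticated.

```python
# A minimal sketch of a Levitt-and-Lin-style comparison: do adjacent
# students share more incorrect answers than non-adjacent students?
from itertools import combinations

def shared_incorrect(ans_a, ans_b, key):
    """Count questions where both students gave the SAME wrong answer."""
    return sum(a == b != k for a, b, k in zip(ans_a, ans_b, key))

def mean_shared(pairs, answers, key):
    return sum(shared_incorrect(answers[i], answers[j], key)
               for i, j in pairs) / len(pairs)

def excess_sharing(answers, key, adjacent):
    """Difference in mean shared wrong answers, adjacent vs non-adjacent."""
    all_pairs = list(combinations(answers, 2))
    non_adjacent = [p for p in all_pairs if p not in adjacent]
    return (mean_shared(list(adjacent), answers, key)
            - mean_shared(non_adjacent, answers, key))

# Tiny example: three students, a four-question exam, students 1 and 2 adjacent.
key = ["A", "C", "B", "D"]
answers = {1: ["A", "B", "B", "C"],   # shares two wrong answers with student 2
           2: ["A", "B", "B", "C"],
           3: ["A", "C", "B", "A"]}
print(excess_sharing(answers, key, adjacent={(1, 2)}))  # 2.0
```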

What did they do? A number of things changed:

  • There were four teaching assistants invigilating the exam, rather than one (for a class of over 240 students);
  • There were two different versions of the exam, which were randomly allocated to students; and
  • Students were randomly allocated to seats.
One last point - it is not straightforward to attribute the relative decrease in cheating to the randomisation alone. This is because the professor sent a series of emails to students calling for confessions of cheating following the midterm and reminding students that "cheating is morally wrong". So, students were probably primed for enhanced attention to be paid to their exam behaviour and extra-vigilant not to cheat (or to be perceived as cheating). No doubt this was reinforced by the change in procedures noted above.

Having said that, having students seated immediately next to each other for a test is just stupid (and having 100% multiple choice exams is probably not a good idea either). If you don't want your students to cheat on exams, don't make it easy for them to cheat on exams. Increasing the costs of cheating (the penalties remain the same, but the probability of being caught increases when you have more supervision) and lowering the benefits (randomising seating so it is harder to sit beside or behind your friends) reduces the incentives for cheating. Then you wouldn't need Levitt and Lin to find the cheaters.

Saturday, 12 December 2015

Why you should take a copy of your credit report on a first date

A long-standing result in the economics of relationships is assortative matching (or mating) - that people tend to match with others who are like them, whether in terms of age, ethnicity, income, education, or whatever. But what about unobservable characteristics, like trustworthiness - are people similar to their partners in terms of how trustworthy they are?

The interesting thing about unobservable characteristics is that they are, well, unobservable. They are a form of asymmetric information - you know how trustworthy you are, but potential partners do not know. And hidden characteristics like trustworthiness can lead to adverse selection, if some people can take advantage of the information asymmetry (which less trustworthy people, almost by definition, are likely to do).

As I've noted before in the context of dating:
An adverse selection problem arises because the uninformed parties cannot tell high quality dates from low quality dates. To minimise the risk to themselves of going on a horrible date, it makes sense for the uninformed party to assume that everyone is a low-quality date. This leads to a pooling equilibrium - high-quality and low-quality dates are grouped together because they can't easily differentiate themselves. Which means that people looking for high-quality dates should probably steer clear of online dating.
What you need is a way of signalling that you are trustworthy. With signalling, the informed party finds some way to credibly reveal the private information (that they are trustworthy) to the uninformed party. There are two important conditions for a signal to be effective: (1) it needs to be costly; and (2) it needs to be more costly to those with lower quality attributes (those who are less trustworthy). These conditions are important, because if they are not fulfilled, then those with low quality attributes (less trustworthy people) could still signal themselves as having high quality attributes (being more trustworthy). But what would make a good signal of trustworthiness?
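Here's a minimal sketch of those two conditions at work, with invented numbers. A separating equilibrium (where only trustworthy people send the signal) requires the signal to pay off for the trustworthy type but not for the untrustworthy type.

```python
# A minimal sketch of the two signalling conditions, with invented
# numbers. The signal separates the two types only because it is costly,
# and more costly for the less trustworthy type.

benefit = 50             # assumed value of being seen as trustworthy
cost_trustworthy = 10    # cheap to show a clean credit report
cost_untrustworthy = 80  # costly if your report is bad (the date walks out)

def will_signal(cost):
    """A rational dater sends the signal only if the benefit exceeds the cost."""
    return benefit > cost

print(will_signal(cost_trustworthy))    # True: trustworthy types signal
print(will_signal(cost_untrustworthy))  # False: untrustworthy types don't
# Because only trustworthy types send the signal, it credibly reveals type.
```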

Coming back to assortative matching for the moment, a recent working paper by Jane Dokko (Brookings), Geng Li (Federal Reserve Bank), and Jessica Hayes (UCLA) provides interesting evidence suggesting that people match on their credit scores (as well as education, etc.). Credit scores are a measure of creditworthiness (and an indirect measure of trustworthiness), and the paper considers how credit scores are associated with relationship 'success' or 'failure'.

The authors use data from the Federal Reserve Bank of New York Consumer Credit Panel, which has about 12 million 'primary sample' records and about 30 million other consumer records (of people who live with those in the primary sample). Given some constraints in the dataset, the authors use an algorithmic approach to identify the formation of new cohabiting relationships (and the dissolution of previous relationships), giving them a sample of nearly 50,000 new couples.

The authors find that:
...conditional on observable socioeconomic and demographic characteristics, individuals in committed relationships have credit scores that are highly correlated with their partners’ scores. Their credit scores tend to further converge with their partners’, particularly among those in longer-lasting relationships. Conversely, we find the initial match quality of credit scores is strongly predictive of relationship outcomes in that couples with larger score gaps at the beginning of their relationship are more likely to subsequently separate. While we find that part of such a correlation is attributable to poorly matched couples’ lower chances of using joint credit accounts, acquiring new credit, and staying away from financial distress, the mismatch in credit scores seems to be important for relationship outcomes beyond these credit channels.
We also provide suggestive evidence that credit scores reveal information about one’s underlying trustworthiness in a similar way as subjective, survey-based measures. Moreover, we find that survey-based measures of trustworthiness are also associated with relationship outcomes, which implies that differentials in credit scores may also reflect mismatch in couples’ trustworthiness.
In other words, there is assortative matching in terms of credit scores, and since credit scores reveal information about trustworthiness, that suggests there is also assortative matching in terms of trustworthiness. That is, more trustworthy people are likely looking for partners who are also more trustworthy.

Which brings us back to signalling - if you want to provide a signal of your trustworthiness, you might want to consider providing your potential date with a verified copy of your credit report. It's costly to provide (perhaps not in monetary terms, but there is an intrinsic cost of revealing your credit information to someone else!), and more costly if you have a lower credit score (since you face a higher probability of your date deciding they have something better to do with their Saturday night).

[HT: Marginal Revolution]

Thursday, 10 December 2015

Keytruda, and why Pharmac looks for the best value treatments

I was privileged to attend a presentation by Professor Sir Michael Marmot on Tuesday. It was on inequality and health (which I'm not going to talk about in this post), and one of the points he made struck me - Sir Michael suggested that he doesn't make the economic case for reducing health inequality, he makes the moral case.

The reason that comment struck me is that economists are often unfairly characterised as not having regard for the moral case, particularly in the context of the allocation of health care spending. However, I'm not convinced that the moral case and the economic case for how health care spending is allocated are necessarily different. Toby Ord makes an outstanding moral argument in favour of allocating global health resources on the basis of cost-effectiveness (I recommend this excellent essay). I'll spend the rest of the post demonstrating why, using the example of Pharmac's funding (or rather, non-funding) of the new cancer drug Keytruda, which is currently big news in New Zealand (see here and here and here).

First, it is worth noting that Pharmac essentially has a fixed budget, which has increased from about $635 million in 2008 to $795 million in 2015. Pharmac uses that money to provide treatments to New Zealanders, free or at a subsidised cost. However, Pharmac can't provide an unlimited number of treatments, because its funding is limited. So, naturally, it looks for the best value treatments.

What are the best value treatments? In the simplest terms, the best value treatments are the treatments that provide the most health benefits per dollar spent. A low-cost treatment that provides a large increase in health for patients is considered to be superior to a high-cost treatment that provides a small improvement in health.

Low-cost-high-benefit vs. high-cost-low-benefit is an easy comparison to make. But what about low-cost-low-benefit vs. high-cost-high-benefit? That is a little trickier. Economists use cost-effectiveness analysis to measure the cost of providing a given amount of health gains. If the health gains are measured in a common unit called the Quality-Adjusted Life Year (QALY), then we call it cost-utility analysis (you can read more about QALYs here, as well as DALYs - an alternative measure). QALYs are essentially a measure of health that combines length of life and quality of life.

Using the gain in QALYs from each treatment as our measure of health benefits, a high-benefit treatment is one that provides more QALYs than a low-benefit treatment, and we can compare them in terms of the cost-per-QALY. The superior treatment is the one that has the lowest cost-per-QALY.
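To illustrate, here's a minimal sketch of a fixed budget being allocated to the lowest cost-per-QALY treatments first. The treatments and all of the numbers are invented - they are not Pharmac figures.

```python
# A minimal sketch of ranking treatments by cost-per-QALY under a fixed
# budget. Treatments and numbers are invented for illustration; they are
# not Pharmac figures.

treatments = [
    # (name, cost per patient, QALYs gained per patient, eligible patients)
    ("Treatment A", 5_000, 0.5, 2_000),
    ("Treatment B", 60_000, 1.2, 500),
    ("Treatment C", 300_000, 2.0, 300),   # expensive, Keytruda-like profile
]

budget = 50_000_000

# Fund the lowest cost-per-QALY treatments first, until the budget runs out.
for name, cost, qalys, patients in sorted(treatments, key=lambda t: t[1] / t[2]):
    spend = min(cost * patients, budget)
    funded = int(spend // cost)
    budget -= funded * cost
    print(f"{name}: ${cost / qalys:,.0f}/QALY, fund {funded} patients, "
          f"{funded * qalys:,.1f} QALYs gained")
```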

Following this model, in the context of a limited pool of funds to pay for health care, the treatments that are funded with the highest priority are the ones that have the lowest cost-per-QALY. This is essentially the model that Pharmac follows, as do agencies in other countries such as the UK. The National Institute for Health and Care Excellence (NICE) sets a funding threshold of £30,000 per QALY - treatments that cost less than £30,000 per QALY are more likely to be funded, and those that cost more are less likely. In New Zealand, the effective cost-per-QALY for Pharmac-funded treatments was $35,714 for the last financial year.

Now consider Keytruda, a new 'wonder drug' for treating melanoma. The downside is that Keytruda is extremely expensive - $300,000 per patient for a two-year course of treatment. Of course, the cost-per-QALY isn't as simple as dividing that cost by the two years of treatment, because patients may gain many years of healthy life as a result of treatment. Even so, Pharmac rated Keytruda as "low priority", in part because of the high cost.

Andrew Little has suggested that Labour would override Pharmac's decision not to fund Keytruda if elected, and John Key has also wavered in the face of public demand for the drug. Would that be a good thing? Of course, it would be good for the melanoma patients who would receive Keytruda free or heavily subsidised. But, in the context of a limited funding pool for Pharmac, forcing the funding of Keytruda might mean that savings need to be made elsewhere [*], including treatments that provide a lower cost-per-QALY. So at the level of the New Zealand population, some QALYs would be gained from funding Keytruda, but even more QALYs would be lost because of the other treatments that would no longer be able to be funded.

And so, I hope you can see why the economic case and the moral case for the allocation of health care spending need not necessarily be different. By allocating scarce health care resources using an economic case, we ensure the greatest health gains for all New Zealanders.

[*] Fortunately, neither political party is suggesting that funding for Keytruda would come out of Pharmac's existing limited budget. However, that doesn't mitigate the issue of overriding Pharmac's decision-making. Even if Pharmac's budget is increased to cover the cost of providing Keytruda to all eligible patients, there may be other treatments that have lower cost-per-QALY than Keytruda that are currently not funded but could have been within a larger Pharmac budget.

Wednesday, 9 December 2015

Decreasing access to alcohol might make drug problems worse

Earlier in the year, Alex Tabarrok at Marginal Revolution pointed to this new paper by Jose Fernandez, Stephan Gohmann, and Joshua Pinkston (all University of Louisville). I thought it interesting at the time, and I've finally gotten around to reading it. In the paper, the authors essentially compare 'wet' counties and 'dry' counties in Kentucky in terms of the number of methamphetamine (meth) labs (there's also an intermediate category, the 'moist' counties, where alcohol is available in some areas but not others). I was surprised to read that more than a quarter of all counties in Kentucky are dry (meaning the sale of alcohol is banned).

Anyway, it's an interesting analysis, with the hypothesis that in counties where alcohol is less available (and so more expensive to obtain), drugs like meth become a relatively cheaper substitute, which increases the quantity of meth consumed (and produced). At least, this is what we would expect from simple economic theory. The authors use a number of different methods, including OLS regression and propensity-score matching, but their preferred method is an instrumental variables approach (which I have earlier discussed here). The instrument of choice is religious affiliation in 1936 (which had a large impact on whether a county became 'dry' after prohibition was ended, and probably doesn't affect the number of meth labs today).
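For readers unfamiliar with instrumental variables, here's a minimal sketch of the two-stage least squares logic, using simulated data. This is not the authors' actual model - the 'true effect' and all the parameters are assumptions, and a real analysis would use proper IV software so that the standard errors come out right.

```python
# A minimal sketch of two-stage least squares (2SLS) by hand, with
# simulated data. The 'true effect' of 2 and everything else here are
# assumptions for illustration, not the authors' estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120  # counties
religion_1936 = rng.normal(0, 1, n)          # the instrument
confound = rng.normal(0, 1, n)               # unobserved county traits
dry = (religion_1936 + confound + rng.normal(0, 1, n) > 0).astype(float)
meth_labs = 2.0 * dry + confound + rng.normal(0, 1, n)  # assumed true effect = 2

# Stage 1: predict 'dry' status using only the instrument.
stage1 = sm.OLS(dry, sm.add_constant(religion_1936)).fit()
dry_hat = stage1.fittedvalues

# Stage 2: regress meth labs on the predicted (exogenous) part of 'dry'.
stage2 = sm.OLS(meth_labs, sm.add_constant(dry_hat)).fit()
naive = sm.OLS(meth_labs, sm.add_constant(dry)).fit()
print(f"Naive OLS estimate: {naive.params[1]:.2f}")   # biased by the confound
print(f"2SLS estimate:      {stage2.params[1]:.2f}")  # should be closer to 2.0
# (Standard errors from this two-step shortcut are wrong; real IV software
# corrects them.)
```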

My main concern on reading the paper was that the number of meth labs was measured as the number of meth lab seizures, which probably depends on the degree of enforcement activity by police. However, among their robustness checks the authors look at the rate of property crime and don't find it related to their measure of meth lab seizures (although it would be interesting to see whether property crime also differed systematically between the wet and dry counties).

On to their results - they find:
relative to wet counties, dry counties have roughly two additional meth lab seizures annually per 100,000 population. This suggests that, if all counties were to become wet, the total number of meth lab seizures in Kentucky would decline by about 25 percent.
The results appear to be quite robust to the choice of measure of alcohol availability (including alcohol outlet density), and estimation method. If you dispense with the meth lab data and look at data on emergency room visits for burns (a likely consequence of amateur meth labs), then the results are similar. And they're not driven by unobserved health trends (no relationship between alcohol availability and either childhood obesity or infant mortality).

Many local councils would probably like to reduce access to alcohol. However, if Fernandez et al.'s analysis holds up for other areas, it suggests that reducing access to alcohol might make drug problems worse.

[HT: Marginal Revolution]

Tuesday, 8 December 2015

Reason to be increasingly skeptical of survey-based research

I've used a lot of different surveys in my research, dating back to my own PhD thesis research (which involved three household surveys in the Northeast of Thailand). However, in developed countries the willingness of people to complete surveys has been declining for many years. That in itself is not a problem, unless there are systematic differences between the people who choose to complete surveys and those who don't (which there probably are). So, estimates of many variables of interest are likely to be biased in survey-based research. Re-weighting surveys might overcome some of this bias, but not completely.

A recent paper (ungated) in the Journal of Economic Perspectives by Bruce Meyer (University of Chicago), Wallace Mok (Chinese University of Hong Kong), and James Sullivan (University of Notre Dame), makes the case that things are even worse than that. They note that there are three sources of declining accuracy for survey-based research:

  1. Unit non-response - where participants fail to answer the survey at all, maybe because they refuse or because they can't be contacted (often we deal with this by re-weighting the survey);
  2. Item non-response - where participants fail to answer specific questions on the survey, maybe because the survey is long and they get fatigued or because they are worried about privacy (often we deal with this by imputing the missing data); and
  3. Measurement error - where participants answer the question, but do not give accurate responses, maybe because they don't know or because they don't care (unfortunately there is little we can do about this).
Meyer et al. look specifically at error in reporting transfer receipts (e.g. social security payments, and similar government support). The scary thing is what they find:
...although all three threats to survey quality are important, in the case of transfer program reporting and amounts, measurement error, rather than unit nonresponse or item nonresponse, appears to be the threat with the greatest tendency to produce bias.
In other words, the source of the greatest share of bias is likely to be measurement error, the one we can do the least to mitigate. So, that should give us reason to be increasingly skeptical of survey-based research, particularly for survey questions where there is high potential for measurement error (such as income). It also provides a good rationale for increasing use of administrative data sources where those data are appropriate, especially integrated datasets like Statistics New Zealand's Integrated Data Infrastructure (IDI), which I am using for a couple of new projects (more on those later).
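A small simulation illustrates why measurement error is the hardest of the three to deal with. Everything here (the response rates, the share of recipients, the extent of under-reporting) is an invented assumption.

```python
# A minimal sketch of why re-weighting can fix unit non-response but not
# measurement error, when estimating mean transfer receipts. All
# parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
income = rng.lognormal(10, 0.5, n)
transfer = np.where(income < np.quantile(income, 0.3), 5_000.0, 0.0)

# Unit non-response: transfer recipients respond less often...
respond = rng.random(n) < np.where(transfer > 0, 0.4, 0.7)
# ...but re-weighting by (assumed known) inverse response rates fixes it.
weights = np.where(transfer[respond] > 0, 1 / 0.4, 1 / 0.7)
print(f"True mean transfer:        {transfer.mean():,.0f}")
print(f"Respondents only (biased): {transfer[respond].mean():,.0f}")
print(f"Re-weighted (fixed):       {np.average(transfer[respond], weights=weights):,.0f}")

# Measurement error: a third of recipients report receiving nothing.
# No re-weighting can recover the truth from these reports.
keep = rng.random(respond.sum()) > np.where(transfer[respond] > 0, 1 / 3, 0.0)
reported = transfer[respond] * keep
print(f"With under-reporting:      {np.average(reported, weights=weights):,.0f}")
```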

Finally, I'll finish with a survey-related anecdote, which I shared in a guest presentation for the Foundations of Global Health class at Harvard today. Unit non-response might be an increasing issue for survey researchers, but there is at least one instance I've found where unit non-response is not an issue. When doing some research in downtown Hamilton at night with drinkers, we had the problem of too many people wanting to participate. Although in that case, maybe measurement error is an even larger problem? I'll blog in more detail on that research at a later date.

[HT: David McKenzie at the Development Impact blog]

Sunday, 6 December 2015

Climate change, violence and crime

Last year I wrote a post about climate change and violence, based on this paper (ungated here) by Hsiang et al. (the same Solomon Hsiang who was a co-author on the paper in Nature I discussed in my last post). Like most papers in this literature, the Hsiang et al. paper looks at cross-country differences in conflict. Within-country evaluations are much less common, which makes this recent paper by Jean-Francois Maystadt (Lancaster University), Margherita Calderone (World Bank), and Liangzhi You (IFPRI and Chinese Academy of Agricultural Sciences) of interest. In the paper, Maystadt et al. look at local warming and conflict in North and South Sudan.

The authors use data measured at the 0.05 degree level (latitude and longitude) over the period 1997 to 2009. I strongly recommend reading the data section of the article, as it has pointers to a number of excellent sources of global spatially-explicit data that would be useful for a number of projects, not just in the context of climate change.

They use time and grid-cell fixed effects to "be able to draw causal inferences", but I probably wouldn't characterise their findings as necessarily causal. Or at least not definitively so. They find:
A change in temperature anomalies of 1 standard deviation is found to increase the frequency of violent conflict by 32%... temperature variations may have affected about one quarter (26%) of violent events in Sudan. On the contrary, no significant impact is found for rainfall anomalies...
Temperature anomalies (deviations from mean temperature) were associated with over a quarter of conflict events in the Sudan, which is a large number. The authors investigate the mechanisms for this (it is unlikely to be water stress because rainfall is not significant), and find that pastoralist areas (where livestock are an important source of income) are particularly affected by temperature. The authors conclude that conflict over natural resources (i.e. water) is exacerbated in these areas, but if that were the case you would expect rainfall to be a bigger factor. I'd be more inclined to believe that heat stress affects livestock in negative ways (weight loss, dehydration, mortality), that affects income security for pastoralists.

On a different but related note, earlier in the year I read this 2014 paper (ungated earlier version here) by Matthew Ranson (Abt Associates), but I hadn't had a chance to blog about it until now. In the paper, Ranson looks at the relationship between monthly weather patterns and crime in the U.S. This paper is interesting because Ranson doesn't stop at looking simply at the relationship, but projects the change in crime over the rest of the century and estimates the social costs of the additional crimes. He finds a number of interesting things:
Across a variety of offenses, higher temperatures cause more crime. For most categories of violent crimes, this relationship appears approximately linear through the entire range of temperatures experienced in the continental United States. However, for property crimes (such as burglary and larceny), the relationship between temperature and crime is highly non-linear, with a kink at approximately 50 °F...
...in the year 2090, crime rates for most offense categories will be 1.5-5.5% higher because of climate change... The present discounted value of the social costs of these climate-related crimes is between 38 and 115 billion dollars.
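To make the shape of that property crime relationship concrete, here's a minimal sketch of a kinked (piecewise linear) response. The kink at around 50°F comes from the paper; the slopes are illustrative stand-ins, not Ranson's estimates.

```python
# A minimal sketch of a kinked temperature-property-crime relationship.
# Only the 50 F kink comes from Ranson's paper; the slopes are invented.
def property_crime_effect(temp_f, kink=50.0, slope_below=0.6, slope_above=0.05):
    """Percent change in property crime, relative to a day at the kink."""
    if temp_f < kink:
        return slope_below * (temp_f - kink)  # crime rises steeply with warmth below 50 F
    return slope_above * (temp_f - kink)      # and is nearly flat above it

for temp in (20, 40, 50, 70, 90):
    print(f"{temp} F: {property_crime_effect(temp):+.1f}%")
```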
I would suggest that, if you did a similar analysis for New Zealand, we might see something similar (with the magnitude of social costs being much lower of course due to smaller population). Both papers provide additional reasons to hope for some agreement in Paris.

Friday, 4 December 2015

Climate change, economic growth and inequality

This week and next, world leaders (and activists) are gathered in Paris to negotiate an agreement to combat climate change. So it seems timely to do a couple of posts on climate change. I'd been holding off on writing about this, and among NZ bloggers Mark Johnson has beaten me to it, but I'm going to do this post anyway.

Last month, the journal Nature published a new article looking at the impacts of temperature change on economic production, by Marshall Burke (Stanford), Solomon Hsiang and Edward Miguel (both UC Berkeley). The paper was covered by media back in October (see The Economist or The Washington Post).

The problem with most studies that attempt to evaluate the impact of temperature on economic output (or growth or productivity) is that they compare warmer countries with cooler countries. But of course there are other differences between countries that are difficult to control for. Burke et al. try an alternative approach - comparing each country in cooler years with the same country in warmer years. In short, they:
analyse whether country-specific deviations from growth trends are non-linearly related to country-specific deviations from temperature and precipitation trends, after accounting for any shocks common to all countries.
The key results are most usefully summarised in the figure below, which shows the change in growth rates at different annual average temperatures. The panel on the left shows the overall results, while the panels on the right disaggregate the results between rich and poor countries, early (pre-1990) and later periods, and the effects on agricultural and non-agricultural GDP.
[Figure from Burke et al.: the left panel shows the non-linear overall effect of annual average temperature on growth; the right panels disaggregate the results for rich vs. poor countries, pre-1990 vs. later periods, and agricultural vs. non-agricultural GDP.]
Essentially, Burke et al. have demonstrated that temperature has a non-linear effect on economic growth - growth is maximised at an annual average temperature of about 13 degrees Celsius. However, the important results have to do with the implications for rich and poor countries (see panel b in the figure). Most rich countries are in cooler climates (think Europe, or North America for example), while poorer countries are typically in climates that already have average annual temperatures well in excess of the optimum (think Africa or South or Southeast Asia, for example). So, climate change is very likely to have unequal impacts on economic growth between rich and poor countries. It may even make some rich countries initially better off as they approach the optimum temperature (Chris Mooney suggests Canada or Sweden as examples), while simultaneously making most poor countries significantly worse off.
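As a minimal sketch of what that non-linearity implies, consider a quadratic growth response that peaks at 13°C. The curvature and the illustrative country temperatures below are my assumptions, not Burke et al.'s estimates.

```python
# A minimal sketch of a quadratic growth-temperature response peaking at
# 13 C, in the spirit of Burke et al. The curvature and the country
# temperatures are rough assumptions, not the paper's estimates.
OPTIMUM_C = 13.0

def growth_effect(temp_c, curvature=-0.0005):
    """Growth-rate effect (percentage points) relative to the optimum."""
    return curvature * (temp_c - OPTIMUM_C) ** 2 * 100

for country, temp in [("Canada", 4), ("New Zealand", 10),
                      ("India", 24), ("Mali", 28)]:
    # Warming of +2 C moves each country along the same curve:
    # cool countries move towards the peak, hot countries away from it.
    change = growth_effect(temp + 2) - growth_effect(temp)
    print(f"{country} ({temp} C): {change:+.2f} pp change in growth from +2 C")
```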

So, unfortunately it seems likely that global (between country) inequality may be exacerbated by the changing climate, because "hot, poor countries will probably suffer the largest reduction in growth". This would undo some of the significant progress that has been made over the last half century in particular.

Importantly, technological change to date hasn't modified the relationship between temperature and growth (see panel c in the figure), which suggests that world leaders need to change the temperature directly, rather than trying to modify the relationship between temperature and growth, if the projected outcome (77% of countries being poorer in per capita terms by 2100) is to be avoided. The authors conclude:
If societies continue to function as they have in the recent past, climate change is expected to reshape the global economy by substantially reducing global economic output and possibly amplifying existing global economic inequalities, relative to a world without climate change.

Wednesday, 2 December 2015

What coffee farmers do with certification

This is my third post on Fair Trade this week (see the earlier posts here and here). Having discussed what Fair Trade does for prices (not much) and incomes (also not much), it's worth asking whether there is a better alternative.

In a new paper in the journal World Development (sorry, I don't see an ungated version anywhere), Bart van Rijsbergen, Willem Elbers (both Radboud University in the Netherlands), Ruerd Ruben (Wageningen University), and Samuel Njuguna (Jomo Kenyatta University of Agriculture and Technology in Kenya) compare Utz certification with Fair Trade certification, based on data from 218 coffee farmers in Kenya. Importantly, they use a propensity score matching approach (which I discussed briefly in this earlier post) to make the comparisons.

The key difference between Utz certification and Fair Trade is that Utz focuses on improved agricultural practices (that should result in higher coffee quality), while Fair Trade instead provides a price premium, as discussed in my earlier posts. Some of the surveyed coffee farmers held both certifications, some were Fair Trade only, and some were non-certified. Here's what they find:
Whereas Fairtrade clearly enhances further specialization in coffee production and more engagement in dry coffee processing, Utz-certified enhances input-intensification of coffee production and multi-certification opts for coffee renovation and the diversification of coffee outlets...
...household welfare and livelihood effects of coffee certification remain generally rather disappointing.
Neither certification scheme had much effect on income, mainly because farms only derived one-quarter to one-third of their income from coffee sales, and certified coffee sales were only around one-third of that income.

Which brings me to this other new paper I read today from the journal Food Policy (ungated here), by Wytse Vellema (Ghent University), Alexander Buritica Casanova (CIAT), Carolina Gonzalez (CIAT and IFPRI), and Marijke D'Haese (Ghent University). In the paper, the authors use data from coffee farmers in Colombia, where they also note that coffee is only one of several income sources for most farm households. Their comparisons rely on cross-sectional mean differences (rather than a matching approach), so their results need to be treated with a little bit of caution.

Again, they find little effect of coffee certification (this time the certification is mostly Starbucks C.A.F.E. and Nespresso AAA) on household incomes. However, what caught my eye about the paper was the discussion of income and substitution effects for coffee farmers:
Having farm certification reduced income from on-farm agricultural production [MC: excluding coffee production] and agricultural wage labour, likely indicating re-allocation of resources away from these activities. Such re-allocation of labour is driven by substitution and income effects. As total household labour is fixed, increased returns to one activity cause households to substitute labour away from other activities. Households are most likely to substitute labour away from activities with low return or which are considered less ’satisfying’... Income effects lead to an increased consumption of leisure, reducing overall hours worked. The negative combined impact on incomes from on-farm agricultural production and agricultural labour shows that substitution effects dominate income effects.
As we discuss in ECON100, because certification makes coffee production more rewarding, farmers reallocate their labour time towards that activity and away from farming other products and from agricultural wage labour (the substitution effect). Higher income from coffee also leads them to consume more leisure, reducing their labour allocation to all farming activities (the income effect). The overall effect on farm household income is negligible (Vellema et al. note that coffee income increased, but overall income did not).

One last point is important here. There appears to be an increasing proliferation of certification schemes, and many farmers hold multiple certifications (in the Vellema et al. paper, about 29% of the sample held two certifications, and 6% held three certifications). For farmers, the marginal benefit of an additional certification probably decreases as they get more certified (since a good proportion of their crop is able to be sold through their existing certified channels), while the marginal cost probably increases (the opportunity costs involved with spending time maintaining multiple, and possibly conflicting, certifications). I wonder, what is the optimal number of certifications for a given coffee farmer?
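Here's a minimal sketch of that marginal reasoning, with invented functional forms and numbers - the farmer keeps adding certifications until the marginal benefit falls below the marginal cost.

```python
# A minimal sketch of the marginal benefit / marginal cost logic for
# holding multiple certifications. The functional forms and numbers are
# assumptions for illustration only.

def marginal_benefit(n):
    """Extra revenue from the n-th certification: falls as more of the
    crop is already sold through certified channels."""
    return 100 / n

def marginal_cost(n):
    """Extra time spent maintaining the n-th certification: rises with
    overlapping and possibly conflicting requirements."""
    return 20 * n

# Keep certifying while the next certification is worth more than it costs.
n = 1
while marginal_benefit(n) >= marginal_cost(n):
    n += 1
print(f"Optimal number of certifications: {n - 1}")  # 2, with these numbers
```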


Tuesday, 1 December 2015

Comparative advantage and trade, Dr Seuss style

Justin Wolfers must have some interesting classes. He just posted this poem, written by one of his students in the style of Dr Seuss. It covers comparative advantage, task allocation, and trade. My favourite bit:

If tasks were switched amongst our three friends,
Opportunity cost would rise to no end.
Each should focus on that which they do best,
And trade for those products made by the rest.

Nice. It reminds me of this Art Carden effort from a few years ago, "How Economics Saved Christmas", which is an assigned reading in my ECON110 class.
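And since the poem is really about opportunity cost, here's a minimal sketch of the same logic in code - the three friends and their productivity numbers are entirely invented.

```python
# A minimal sketch of comparative advantage: whoever gives up the fewest
# cakes to make a toy should make the toys. All numbers are invented.

# Units each person can produce in an hour.
productivity = {"Sally": {"toys": 6, "cakes": 2},
                "Cindy": {"toys": 4, "cakes": 4},
                "Horton": {"toys": 1, "cakes": 3}}

def toy_opportunity_cost(person):
    """Cakes given up per toy produced."""
    p = productivity[person]
    return p["cakes"] / p["toys"]

for person in productivity:
    print(f"{person}: 1 toy costs {toy_opportunity_cost(person):.2f} cakes")

# The lowest-opportunity-cost producer specialises in toys and trades
# for cakes with the rest.
toymaker = min(productivity, key=toy_opportunity_cost)
print(f"{toymaker} should focus on toys and trade for cakes.")
```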