Tuesday, 31 December 2019

Marijuana legalisation and the displacement of drug dealers

What do marijuana dealers do when their illegal product becomes legal? Perhaps they become legitimate business people and open a store. However, they have a particular set of skills that is more aligned with criminal enterprise than with legitimate retail, so perhaps they move into selling other illegal goods, like harder drugs. That is the question that Heyu Xiong (now at Case Western Reserve University) addresses in his 2018 job market paper, which, as far as I can tell, has not yet been published.

The research question, of how drug dealers respond to legalisation, is important, particularly in countries that have yet to legalise, such as New Zealand. The idea that legalisation simply eliminates criminal activity is overly simplistic, and Xiong's results show why. He uses data on prison admissions and releases from three U.S. states that have legalised marijuana (Colorado, Washington, and Oregon) over the period from 2000-2016 (for Colorado and Washington) or 2007-2017 (for Oregon). He looks at the recidivism rate for those convicted of marijuana-related crimes before and after legalisation, using several different methods and several comparison populations. He finds that:
...state adoption of marijuana legalization is associated with a significant increase in the risk of recidivism for marijuana dealers. Following legalization, marijuana offenders become 4 to 5 percentage points more likely to re-enter prison within 9 months of release. The effect is sizable, corresponding to a near 50% increase from a baseline rate of 10 percent. When decomposed by crime categories, I find the overall increase masks two countervailing effects. One, marijuana offenders became less likely to commit future marijuana offenses. Two, this reduction is offset by the transition to the trafficking of other drugs. As a result, the observed criminality of former marijuana traffickers increased. Because participation in other type of crimes did not vary significantly, the revealed patterns are consistent with the importance of drug-industry specific human capital in explaining the persistence of criminal choices.
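The headline magnitude in that quote is easy to verify with back-of-the-envelope arithmetic (my calculation, not the paper's):

```python
# Back-of-the-envelope check on the quoted effect size:
# a 4-5 percentage point increase on a 10 percent baseline recidivism rate.
baseline = 0.10
for increase_pp in (0.04, 0.05):
    relative_increase = increase_pp / baseline
    print(f"+{increase_pp:.0%} points on a {baseline:.0%} baseline "
          f"= a {relative_increase:.0%} relative increase")
# A 5 point increase is exactly the "near 50%" increase the paper describes.
```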
So, marijuana legalisation doesn't reduce crime by marijuana dealers, but increases it. There were significant increases in offences related to other (harder) drugs, and a smaller (but still significant) increase in property crime. Why did that happen? Xiong notes that:
...a 1% increase in marijuana crime within the county corresponds to a .2% increase in number of establishments...
The spatial pattern of entry reveal that legal dispensaries entered precisely in locations where illegal dealers operated. Given their close proximity, this implies that legal entrants directly competed with the incumbent illegal dealers. Hence, illegal retailers was supplanted by legitimate trade.
It should be no surprise that the legitimate marijuana sellers would want to locate in the places where demand is highest, which just happens to be where the drug dealers were previously located. Xiong also finds that:
...the retail prices of marijuana dropped significantly following legalization. Additionally, owing to the large-scale legal entry and lower search frictions, much of the within-state price dispersion disappeared.
The combination of lower (and more stable) prices suggests that competition in the market was high. That competition, especially being in the neighbourhood of the drug dealers, pushed the drug dealers out of the market, and they had to find something else to do. Xiong also finds that they don't shift into legal work. Looking at whether the offenders use employment services after they are released, he finds only a:
...small effect on utilization of these employment services, even amongst sub-populations consisting only of people who are eligible and have not returned to incarceration. With the NLSY data, I fail to detect any increase in weeks worked or income from wages. Altogether, the transition to legitimate employment resulting from criminal displacement is evidently low.
Overall, this paper is helpful in thinking about the issues, and the costs and benefits, associated with marijuana legalisation. While legalisation has many evident benefits, not least a reduction in the amount of resources devoted to policing the illegal market, that must be weighed up against the costs. Those costs may include the diversion of marijuana dealers into the selling of harder drugs that come with higher social costs.

[HT: Marginal Revolution, back in November 2018]

Monday, 30 December 2019

Erotic capital and happiness

If you have a research paper that investigates the correlation between self-reported attractiveness and subjective wellbeing (or happiness), how do you ensure that it gets read by more people? If you're the author of this 2017 article published in the journal Research in Social Stratification and Mobility (sorry, I don't see an ungated version online), you refer to attractiveness as 'erotic capital'. For the record, the author was Felix Requena (University of Malaga).

The idea of referring to attractiveness as a form of capital isn't crazy. In economics, capital is a factor of production that is used to produce other goods and services. Requena takes a Bourdieuian view of capital, but extends it:
Like social capital, erotic capital focuses on the benefits received by individuals who participate in groups and in the deliberate construction of sociability in order to create resources that generate relational and other benefits, such as power or influence...
People constantly use the different types of power available to them to obtain the resources they desire... According to their circumstances and particular assets, some use economic power, others use acquired skills and abilities (human capital), others their social relations (social capital), and others use erotic power (beauty or appeal). Like other types of capital, erotic capital serves as another way to influence the social environment.
In that case, attractiveness is a form of capital because it can be used to produce things of value to the person through the power to influence the social environment. Others may quibble as to whether this is simply an extension of human or social capital, or whether it sits between those two types of capital, but such definitional issues are not important. The real question that the article addresses is the relationship of erotic capital to happiness.

Requena uses data from the Spanish General Social Survey in 2013, which had a sample size of a bit more than 5000 people. Erotic capital is self-reported attractiveness on a 0-10 scale. Happiness is measured on a 0-6 scale. Controlling for gender, income (economic capital), education (human capital), and the size of a person's social network (social capital), he finds that:
For men and women, erotic capital ranked first (men: 0.104, women: 0.105), followed by social capital (men: 0.017, women: 0.018), human capital (men: 0.014, women: 0.014) and economic capital (men: 0.001, women: 0.0001). Contrary to what we predicted in H2, the analysis showed no significant differences in the importance of erotic capital for men and women.
In other words, erotic capital had the strongest correlation with happiness, of the four capitals included in the regression analysis. Of course, other than the correlation vs. causation issue, there is a problem of multicollinearity in this analysis. It has previously been established that more attractive people earn more (see for example here, or my book review of Daniel Hamermesh's excellent book Beauty Pays), and that attractiveness is related to educational choices (see here). So, it isn't clear that Requena's measure of erotic capital isn't also picking up some of the variation in happiness that is actually due to differences in economic capital or human capital. This article is far from the last word on attractiveness and happiness.

Finally, if you're reading the article, I would recommend ignoring all of the analysis related to self-reported importance of the different types of capital to success in life, and their relationship to happiness. I don't think it tells us what the author thinks it does, and it certainly doesn't contribute anything to our understanding of the relationship between attractiveness and happiness.

Sunday, 29 December 2019

E-bike commuting, happiness, and survivorship bias

If I told you that e-bike commuters are happier than drivers, would you conclude that travelling to work by e-bike made people happier? Perhaps you would, but if you did, you would be confusing correlation with causation. Perhaps happier people are more likely to commute by e-bike? Or perhaps only higher-income people can afford an e-bike, and higher-income people are happier? It is difficult to say. Either way, jumping straight from the observed correlation to studying why e-bike commuters are happier is premature, especially if you are going to base your study on talking to only 24 e-bike commuters.

However, that's exactly what this study published in the Journal of Transport & Health (sorry, I don't see an ungated version online), by Kirsty Wild and Alistair Woodward (both University of Auckland), did. It was covered in the New Zealand Herald earlier this year, but I held off on writing about it until I had a chance to read the research myself.

The problem isn't so much the research itself - interviewing e-bike commuters about what they like about e-bike commuting is fine. However, extrapolating that to answer the question about what should be put in place to encourage e-bike commuting, as this study does, is fraught. The reason is survivorship bias.

Almost by definition, if you interview current e-bike commuters, then you're interviewing people who tried e-bike commuting, and liked it. However, there are a bunch of people who tried e-bike commuting and hated it - they don't commute by e-bike any more, and they didn't get interviewed. In other words, the current e-bike commuters are the survivors from a larger group of people that have tried e-bike commuting at some time.

The problem in this case is that those two groups (survivors and non-survivors) are different. At the very least, the survivors like e-bike commuting, and the non-survivors don't (or, at least, they don't like it enough to continue commuting by e-bike). Interviewing the survivors tells you nothing about what the non-survivors liked or didn't like about e-bike commuting. It could be that the things that the survivors like about commuting by e-bike are exactly the things that the non-survivors hated about it. And you can't tell from this research, because former e-bike commuters (the non-survivors) were not interviewed.

So, if you decide to base decisions about cycling infrastructure on what current e-bike commuters like about it, there is no guarantee that e-bike commuting would increase as a result. The people who want to commute by e-bike with the current infrastructure are already commuting by e-bike. What you really want to know is, what do the people who don't currently commute by e-bike want?
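The selection problem is easy to see in a stylised simulation (the numbers below are entirely made up for illustration): if the people who dislike the current infrastructure are the ones who quit, then surveying only current commuters badly overstates how well that infrastructure is liked.

```python
import random

random.seed(42)

# Hypothetical population of people who have tried e-bike commuting.
# Each person has an (invented) rating of the current infrastructure,
# and people who dislike it are more likely to give up commuting.
population = []
for _ in range(10_000):
    likes_infrastructure = random.random()  # 0 = hates it, 1 = loves it
    still_commuting = random.random() < likes_infrastructure
    population.append((likes_infrastructure, still_commuting))

survivors = [rating for rating, still in population if still]
non_survivors = [rating for rating, still in population if not still]

# Interviewing only the survivors gives a much rosier picture.
print(f"Survivors' mean rating:     {sum(survivors) / len(survivors):.2f}")
print(f"Non-survivors' mean rating: {sum(non_survivors) / len(non_survivors):.2f}")
```

With these made-up numbers, the survivors rate the current infrastructure around twice as highly as the non-survivors do, even though both groups were drawn from the same underlying population of people who tried e-bike commuting.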

I'd find research like this a lot more plausible if the authors had interviewed people who gave up on commuting by e-bike, or if they had interviewed both groups. Or, better still, if they had run an experiment where they distributed e-bikes randomly to some people and asked them to use them for commuting, and then interviewed that experimental group about what they liked and did not like.

Otherwise, you simply get more of what the survivors already like, and don't necessarily create the right environment to increase e-bike commuting at all.

Wednesday, 18 December 2019

Randomising research publications

If you've ever had the misfortune of being drawn into a conversation with me about research funding, then you will have heard my view that, after an initial cull of low-quality research funding applications, all remaining applications should be assigned ping-pong balls, which are then drawn randomly from a bin to allocate the available funding. You could even make a big event of it - the researchers' equivalent of a live lotto draw.

In fact, as this Nature article notes, this approach has begun to be adopted, including in New Zealand:
Albert Einstein famously insisted that God does not play dice. But the Health Research Council of New Zealand does. The agency is one of a growing number of funders that award grants partly through random selection. Earlier this year, for example, David Ackerley, a biologist at Victoria University of Wellington, received NZ$150,000 (US$96,000) to develop new ways to eliminate cells — after his number came up in the council’s annual lottery.
“We didn’t think the traditional process was appropriate,” says Lucy Pomeroy, the senior research investment manager for the fund, which began its lottery in 2015. The council was launching a new type of grant, she says, which aimed to fund transformative research, so wanted to try something new to encourage fresh ideas...
...supporters of the approach argued that blind chance should have a greater role in the scientific system. And they have more than just grant applications in their sights. They say lotteries could be used to help select which papers to publish — and even which candidates to appoint to academic jobs.
The latest issue of the journal Research Policy has an interesting article by Margit Osterloh (University of Zurich) and Bruno Frey (University of Basel), which argues for randomisation of the selection of which papers to publish (it seems to be open access, but just in case, here is an ungated version). Their argument relies on the fact that Journal Impact Factors (JIFs) are a poor measure of the quality of individual research, and yet a lot of research is evaluated in terms of the impact factor of the journal in which it is published. Moreover, they note that:
...many articles whose frequency of citation is high were published in less well-ranked journals, and vice versa... Therefore, it is highly problematic to equate publication in “good” academic journals with “good” research and to consider publication in low-ranked journals automatically as signifying less good research.
Despite this problem with impact factors, they continue to be used. Osterloh and Frey argue that this is because the incentives are wrong. Researchers who publish in a high impact factor journal are benefiting from 'borrowed plumes', because the journal impact factor is largely driven by a small number of highly cited papers:
It is exactly the skewed distribution of citations that is beneficial for many authors. As argued, the quality of two thirds to three quarters of all articles is overestimated if they are evaluated according to the impact factor of the journal in which they were published. Thus, a majority of authors in a good journal can claim to have published well even if their work has been cited little. They are able to adorn themselves with borrowed plumes...
Osterloh and Frey present three alternatives to the current journal publication system, before presenting their own fourth alternative:
When reviewers agree on the excellent quality of a paper, it should be accepted, preferably on an “as is” basis (Tsang and Frey, 2007). Papers perceived unanimously as valueless are rejected immediately. Papers that are evaluated differently by the referees are randomized. Empirical research has found reviewers’ evaluations to be more congruent with poor contributions (Cicchetti, 1991; Bornmann, 2011; Moed, 2007; Siler et al., 2015) and fairly effective in identifying extremely strong contributions (Li and Agha, 2015). However, reviewers’ ability to predict the future impact of contributions has been shown to be particularly limited in the middle range in which reviewers’ judgements conform to a low degree (Fang et al., 2016). Such papers could undergo a random draw.
In other words, the best papers are accepted immediately, the worst papers are rejected immediately, and the papers where the reviewers disagree are accepted (or rejected) at random. Notice the similarity to my proposal for research grant funding at the start of this post.

The journal issue that the Osterloh and Frey article was published in also has three comments on the article. The first comment is by Ohid Yaqub (University of Sussex), who notes a number of unresolved questions about the proposal, and essentially argues for more research before any radical proposal to shake up the journal publication system is implemented:
Randomisation in the face of JIF may carry unintended consequences and may not succeed in dislodging the desire for journal rankings by some other measure(s). It should wait until we have more widely appreciated theory on peer review and citation, more inclusive governing bodies that can wield some influence over rankings and their users, and a stronger appetite for investing in evaluation.
The second comment is by Steven Wooding (University of Cambridge), who also argues that impact factors are a heuristic (a rule of thumb) used for judging research quality. Like Yaqub, he argues for more evidence, but in his case he argues for more evidence on why people are using journal impact factors and testing and evaluating the alternatives:
If you want people to stop using a heuristic, you need to ask what they are using it for, why they are using it, and to understand what their other options are. We agree that JIF use needs to be curbed. Our difference of opinions about trends in JIF use and the best way to reduce it should be settled by good evidence on whether its use is increasing or falling; where and why JIF is still used; and by testing and evaluating different approaches to curb JIF use.
The third comment is by Andrew Oswald (University of Warwick), who presents a mathematical case in favour of randomisation. Oswald shows that, if the distribution of research papers' value is convex (as would be the case if there are a few blockbuster papers, and many papers of only marginal significance), then randomisation is preferable:
Consider unconventional hard-to-evaluate papers that will eventually turn out to be either (i) intellectual breakthroughs or (ii) valueless. If the path-breaking papers are many times more valuable than the poor papers are valueless, then averaging across them will lead to a net gain for society. The plusses can be so large that the losses do not matter. This is a kind of convexity (of scientific influence). Averaging across the two kinds of papers, by drawing them randomly from a journal editor's statistical urn, can then be optimal.
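Oswald's argument can be made concrete with a stylised expected-value calculation (the numbers here are invented purely for illustration):

```python
# Stylised version of Oswald's convexity argument, with made-up numbers.
# An ambiguous paper turns out to be either a breakthrough or valueless.
p_breakthrough = 0.1        # say, 1 in 10 ambiguous papers is a breakthrough
value_breakthrough = 100.0  # a breakthrough is worth a great deal...
value_dud = -1.0            # ...while a dud costs society only a little

expected_value = (p_breakthrough * value_breakthrough
                  + (1 - p_breakthrough) * value_dud)
print(f"Expected value of randomly accepting an ambiguous paper: {expected_value}")
# Positive (9.1 here): the rare huge win outweighs the many small losses.
```

As long as the upside of the rare breakthrough is large relative to the cost of publishing a dud, drawing ambiguous papers from the editor's "statistical urn" has positive expected value, which is Oswald's point.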
Finally, Osterloh and Frey offer a response to the comments of the other three authors (at least for now, all of the comments and the response are open access).

A huge amount of under-recognised time and effort is spent on the research quality system. Journal reviewers are typically unpaid, as often are reviewers of research grants and those on appointments committees. However, there is a large opportunity cost of their time spent in reviewing manuscripts or applications. It isn't clear that the benefits of this effort outweigh the costs. As Osterloh and Frey note, there seems little correlation between the comments and ratings of different reviewers. Every academic researcher can relate stories about occasions where they have clearly lost in the 'reviewer lottery'. In the face of these issues, it is time to reconsider key aspects of the way that research is funded and published, and the way that academic appointments are made. If executed well, an approach based on randomisation would save significantly on cost, while potentially increasing the benefits of ensuring that high quality research is funded and published, and that high quality applicants are not overlooked.

[HT: Marginal Revolution for the Nature article]

Monday, 16 December 2019

Book review: Poor Economics

When Abhijit Banerjee and Esther Duflo (along with Michael Kremer) won the Nobel Prize a couple of months ago, I resolved to push Banerjee and Duflo's 2011 book Poor Economics a lot closer to the top of my pile of books-to-be-read. I finally finished reading it last week and I have to say, it might be one of the most thoroughly researched and well-written summaries of a research area that I have ever read. Every page is packed with details, which makes it incredibly difficult to excerpt. However, consider this bit on the work of their future Nobel partner Kremer:
In the early 1990s, Michael Kremer was looking for a simple test case to perform one of the first randomized evaluations of a policy intervention in a developing country. For this first attempt, he wanted a noncontroversial example in which the intervention was likely to have a large effect. Textbooks seemed to be perfect: Schools in western Kenya (where the study was to be conducted) had very few of them, and the near-universal consensus was that the books were essential inputs. The results were disappointing. There was no difference in the average test scores of students who received textbooks and those who did not. However, Kremer and his colleagues did discover that the children who were initially doing very well (those who had scores near the top in the test given before the study began) made marked improvements in the schools where textbooks were given out. The story started to make sense. Kenya's language of education is English, and the textbooks were, naturally, in English. But for most children, English is only the third language (after their local language and Swahili, Kenya's national language), and they speak it very poorly. Textbooks in English were never going to be very useful for the majority of children... This experience has been repeated in many places with other inputs (from flip charts to improved teacher ratios). As long as they're not accompanied by a change in pedagogy or in incentives, new inputs don't help very much.
The book covers a wide range of randomised controlled trials (RCTs) in developing countries, which is exactly the work for which Banerjee and Duflo (and Kremer) won the Nobel Prize. They don't limit themselves to discussing their own research, though. The reader will receive a thorough grounding in the state of knowledge (or at least, the state of knowledge in 2011, as this is a field that has been moving quickly). If you want to gain an understanding of why the authors won the Nobel Prize and what their contributions have been, this is a good place to start. All of the examples are linked through Banerjee and Duflo's philosophy, which can be summarised as:
...attend to the details, understand how people decide, and be willing to experiment...
Banerjee and Duflo don't shy away from the criticisms of RCTs in development economics either. They engage with critics like William Easterly and Angus Deaton, and with arguments that small experiments don't necessarily scale up well, or are not useful unless they identify or test some underlying theory that can be applied in a wider context. Unlike many authors, Banerjee and Duflo are quite realistic about their ambitions, but at the same time they have a strong counter-argument:
We may not have much to say about macroeconomic policies or institutional reform, but don't let the apparent modesty of the enterprise fool you: Small changes can have big effects. Intestinal worms might be the last subject you want to bring up on a hot date, but kids in Kenya who were treated for their worms at school for two years, rather than one (at the cost of $1.36 USD PPP per child and per year, all included), earned 20 percent more as adults every year, meaning $3,269 USD PPP over a lifetime... But to scale this number, note that Kenya's highest sustained per capita growth rate in modern memory was about 4.5 percent in 2006-2008. If we could press a macroeconomic policy lever that could make that kind of unprecedented growth happen again, it would still take four years to raise average incomes by the same 20 percent. And, as it turns out, no one has such a lever.
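The growth comparison in that passage is simple compounding, and easy to check (my calculation, not the authors'):

```python
# Four years of Kenya's best-ever sustained growth rate (4.5% per year),
# compared with the 20% earnings gain from the deworming experiment.
growth_rate = 0.045
cumulative_growth = (1 + growth_rate) ** 4 - 1
print(f"Income growth after four years at 4.5% p.a.: {cumulative_growth:.1%}")
# Roughly 19%: it takes about four years of record macroeconomic growth
# to match the 20% gain from a $1.36-per-child intervention.
```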
I learned a lot in reading this book. This included why microinsurance doesn't seem to work even when microfinance does (there is an entire chapter on this, but it boils down to adverse selection and moral hazard, which is exactly the problem of insurance in the rest of the world), and why there are so many unfinished buildings in developing countries (the poor use them as a way of committing to saving, because subtracting bricks that have been partially built into a house is much more difficult than raiding a savings account). I even learned about a real-world example of a Giffen Good (my ECONS101 tutors will know that I have been a Giffen Good skeptic for a long time), which comes from this 2008 paper (ungated version here).

The book doesn't strike me as a textbook, although I know some who are using it as such. It is very empirical, and that is a good thing. This is an excellent place to start, if you want to understand the economics of the poor, and the state of research as it was in 2011. Highly recommended, and I look forward to also reading their new book, Good Economics for Hard Times.

Sunday, 15 December 2019

Alcohol consumption worldwide

Our World in Data recently updated their excellent article on alcohol consumption, with interactive graphics showing cross-sectional differences between countries, and trends over time. The data are pretty clear - New Zealand is neither the least, nor the most, negatively affected by alcohol consumption. Here are a few of the graphs that stuck out for me (though I encourage you to browse through the whole article and have a play with the graphs yourself). First, total alcohol consumption per capita over time:

Notice that New Zealand is towards the bottom of this group of Western countries all the way through the series from 1890 to 2014. Next, consumption by type of beverage for New Zealand, over time:

There are only a few data points in this one, but the massive increase in wine consumption since the 1970s is apparent. Finally, an illustration of why alcohol is such a concern for many public health researchers:

The big four mortality risk factors are high blood pressure, smoking, high blood sugar, and obesity. Then there's a big gap, but alcohol leads the rest of the risk factors in terms of the number of deaths, contributing to over 1200 deaths per year. Once you factor in the other costs of alcohol-related harms, it's easy to see why this is a focus.

Anyway, Our World in Data is an excellent site, and this is just one of many articles on topics as diverse as income inequality and renewable energy. It's a great place to visit to visualise some data, and the best part is that the graphs are (somewhat) customisable and the data are freely accessible as well. Enjoy!

Wednesday, 11 December 2019


What does an economics professor do when they have three cats and too much time on their hands? Create a website called Kittyconomics, of course. People love cat videos. So, what better way to help people love economics than to use cat videos to do so?

You can also view the videos on the Kittyconomics YouTube channel. At the moment, there are only four short (about 90 seconds long) videos, on: (1) Preferences; (2) Value; (3) Opportunity cost; and (4) Scarcity. They don't quite explain things the way I would, and I'd question the ordering of the videos (for instance, in ECONS101 and ECONS102, I talk about scarcity before talking about opportunity cost).

However, they are fun and useful, and a good way to make some lazy cats earn their keep. Enjoy!

[HT: Marginal Revolution]

Thursday, 5 December 2019

Summer reading list for the Prime Minister

Earlier this week, NZIER released their first Summer Reading List for the Prime Minister, which they "hope the Prime Minister, her advisors, and anyone interested in economics and public policy, will find both informative and enjoyable to read". I had a small (and uncredited) role in the list, recommending one of the books (2014 Nobel Prize winner Jean Tirole's Economics for the Common Good) that made it onto the final list.

The reading list includes:

You can read a brief synopsis of each book on the reading list PDF file. Here was my short list of recommendations (with some brief notes):
  • Factfulness, by Hans Rosling. Rosling developed Gapminder, and was a strong proponent of using data rather than intuition to answer questions (I reviewed it here);
  • Reinventing Capitalism in the Age of Big Data, by Viktor Mayer-Schonberger and Thomas Ramge. This one is on my soon-to-read list, but it seems important for understanding the period of creative destruction we are going through right now;
  • Economics Rules: The Rights and Wrongs of the Dismal Science, by Dani Rodrik. This one is a more sensible version of Doughnut Economics, by an author who is less in love with their own metaphors. Rodrik himself notes that "this book both celebrates and critiques economics" (I reviewed it here);
  • Scarcity: The New Science of Having Less and How It Defines Our Lives, by Sendhil Mullainathan and Eldar Shafir. I haven't read this one yet, but it looks like it strikes a good balance between empirical research and real-world application, while telling an important story about how people respond to having less than they need;
  • Economics for the Common Good, by Jean Tirole. This one is probably a bit dense, and long (560+ pp.), but I had to include something from a recent Nobel laureate. It presents Tirole's reflections on the most important contributions that economics can make to society in the future.
My list is clearly more contemporary, with all the books released in the last few years. However, if you're looking for some summer reading for yourself, the books on either list are a good place to start. Enjoy!

Monday, 2 December 2019

Sunday opening hours and alcohol consumption

According to the research literature (nicely summarised in the book Alcohol: No Ordinary Commodity), one of the most effective policies for reducing alcohol consumption and related harm is to reduce the opening hours of licensed outlets. However, that policy advice seems to apply to on-licence outlets (bars, nightclubs, etc.). If bars are open more hours, people drink more alcohol at bars - that seems pretty uncontroversial. However, the literature is much less clear on the effects of off-licence outlets' (bottle stores, supermarkets, etc.) opening hours on consumption. If off-licence outlets are open longer, do people purchase and consume more alcohol?

The reason for the lack of clarity may be because people purchase alcohol from off-licence outlets to consume at home, and they can also store alcohol at home for future consumption. So a forward-planning consumer can purchase alcohol for consumption at home when they find themselves close to an off-licence outlet, and then store it for later consumption.

However, are consumers really forward planning in this way? I have a report coming out soon that looks at the purchasing behaviour of pre-drinkers (people who had been drinking before going out to the CBD). I can't release the details of that report just yet, but in the meantime, consider this 2016 article by Douglas Bernheim (Stanford), Jonathan Meer (Texas A&M), and Neva Novarro (Pomona College), published in the American Economic Journal: Economic Policy (ungated earlier version here).

Bernheim et al. are actually investigating (somewhat indirectly) whether consumers regulate the amount of alcohol they store at home, in response to the availability of retail alcohol. If consumers restrict the availability of alcohol to themselves by not storing quantities of alcohol at home, then restricting alcohol sales on Sundays would reduce alcohol consumption (and liberalising Sunday sales would increase consumption).

Bernheim et al. investigate this by looking at regulations across U.S. states that either restricted or liberalised the number of hours that alcohol could be sold on Sundays, over the period from 1970 to 2007, and how those changes related to changes in alcohol consumption. They found that:
Widening the allowable on-premise Sunday sales window by 1 hour is associated with a statistically significant 0.94 percent (standard error = 0.29 percent) increase in sales. Taking the linear specification literally, the point estimate implies that allowing 12 hours of Sunday on-premises sales increases total liquor consumption by roughly 11 percent...
In contrast, expanding the allowable off-premise Sunday sales window by 1 hour is associated with a small and statistically insignificant 0.08 percent (standard error = 0.24 percent) increase in sales. Formally, we cannot reject the hypothesis that the effect of off-premises sales hours is zero.
In other words, shorter Sunday hours for on-licences (bars, etc.) reduce state-level consumption, but shorter Sunday hours for off-licences have no effect. Bernheim et al. conclude that:
...the observed pattern coincides with predictions for time-consistent or naïvely time-inconsistent consumers who have reasonably good memories and low costs of carrying inventories.
In other words, consumers do seem to stock up on alcohol, whether purposefully or accidentally (this research is unable to distinguish between those two situations). So, regulating opening hours for off-licences as a way of reducing consumption is likely to be frustrated by consumer behaviour.

Friday, 29 November 2019

Rent control and inequality in San Francisco

Rent control is a staple in introductory economics courses. The idea that a policy with popular public support nevertheless has negative impacts on the very tenants it aims to help is an important story to tell (see here and here and here for previous posts on rent controls). The negative impacts of rent controls are predicted by a simple supply and demand model of the market for rental housing. However, they are also supported by empirical data.

A new article by Rebecca Diamond, Tim McQuade, and Franklin Qian (all Stanford), published in the journal American Economic Review, provides support in the case of San Francisco. The authors looked at what happened to tenants and properties affected by a 1994 change in rent control laws:
Rent control in San Francisco began in 1979, when acting Mayor Dianne Feinstein signed San Francisco’s first rent control law... This law capped annual nominal rent increases to 7 percent and covered all rental units built before June 13, 1979 with one key exemption: owner-occupied buildings containing 4 units or less... These “mom and pop” landlords were cast as being less profit-driven than large-scale, corporate landlords, and more similar to the tenants being protected. These small multi-family structures made up about 44 percent of the rental housing stock in 1990, making this a large exemption to the rent control law.
While this exemption was intended to target “mom and pop” landlords, in practice small multi-families were increasingly purchased by larger businesses who would then sell a small share of the building to a live-in owner so as to satisfy the rent control law exemption. This became fuel for a new ballot initiative in 1994 to remove the small multi-family rent control exemption. This ballot initiative barely passed in November 1994. Suddenly, all multi-family structures with four units or less built in 1979 or earlier were now subject to rent control. These small multi-family structures built prior to 1980 remain rent-controlled today, while all of those built from 1980 or later are still not subject to rent control.
Diamond et al. essentially compare properties (and tenants living in properties) before and after the law change, comparing those that were (the treatment group) and were not (the control group) newly subjected to rent control. This 'difference-in-differences' analysis allows them to extract the impact of rent control. They find a number of interesting things, including that:
...on average, in the medium to long term the beneficiaries of rent control are between 10 and 20 percent more likely to remain at their 1994 address relative to the control group and, moreover, are more likely to remain in San Francisco. Further, we find the effects of rent control on tenants are stronger for racial minorities, suggesting rent control helped prevent minority displacement from San Francisco... On the other hand, individuals in areas with quickly rising house prices and with few years at their 1994 address are less likely to remain at their current address, consistent with the idea that landlords try to remove tenants when the reward is high, through either eviction or negotiated payments.
On the latter point, they note that there are a number of ways that landlords can subvert rent control, such as:
First, landlords could try to legally evict their tenants by, for example, moving into the properties themselves, known as owner move-in eviction. Alternatively, landlords could evict tenants according to the provisions of the Ellis Act, which allows evictions when an owner wants to remove units from the rental market: for instance, in order to convert the units into condos or a tenancy in common. Finally, landlords are legally allowed to negotiate with tenants over a monetary transfer convincing them to leave. In this way, tenants may “bring their rent control with them” in the form of a lump sum tenant buyout.
On top of all that, they also found that:
...landlords actively respond to the imposition of rent control by converting their properties to condos and TICs or by redeveloping the building in such a way as to exempt it from the regulations. In sum, we find that impacted landlords reduced the supply of available rental housing by 15 percent. Further, we find that there was a 25 percent decline in the number of renters living in units protected by rent control, as many buildings were converted to new construction or condos that are exempt from rent control.
This is a point that I made in this earlier post. Diamond et al. also note that their results imply interesting effects of rent control on inequality:
In the short run, rent control prevents displacement of the initial 1994 tenants from San Francisco, especially among racial minorities. To the extent that these 1994 tenants are of lower income than those moving into San Francisco over the following years, rent control increases income inequality. However, this short-term effect decays over time. Eight years after the law change, 4.5 percent of the tenants treated by rent control were able to remain in San Francisco because of rent control. However, five years later, this effect had decayed to 3.7 percent, and will likely continue to decline in the future.
In the long run, on the other hand, landlords are able to respond to the rent control policy change by substituting toward types of housing exempt from rent control price caps, upgrading the housing stock, and lowering the supply of rent-controlled housing. Indeed, the prior section showed that as of 2015, the average property treated by rent control has higher income residents than similar market rate properties. The long-term landlord response thus offsets rent control’s initial effect of keeping lower income tenants in the city by replacing them with residents of above-average income. In this way, rent control works to increase income inequality in both the short run and in the long run, but through different means. Rent control’s short-term effects increases the left tail of the income distribution, while the long-term effects increase the right tail.
I'm not sure that this is what advocates of rent controls would be expecting. However, it serves as another cautionary point on the effects of rent controls.
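The 'difference-in-differences' logic that Diamond et al. apply can be sketched in a few lines. The retention rates below are purely illustrative numbers, not estimates from the paper (which uses tenant-level panel data):

```python
# Difference-in-differences with made-up illustrative numbers.
# Share of 1994 tenants still at their address some years later:
treated = {"before": 0.50, "after": 0.62}  # pre-1980 small multi-family (newly rent-controlled)
control = {"before": 0.50, "after": 0.48}  # post-1980 small multi-family (never controlled)

change_treated = treated["after"] - treated["before"]  # change for the treatment group
change_control = control["after"] - control["before"]  # change for the control group

# The DiD estimate nets out the trend common to both groups,
# leaving the effect attributable to rent control itself.
did_estimate = change_treated - change_control
print(round(did_estimate, 2))  # 0.14
```

Regression versions of this comparison add controls and fixed effects, but the core of the estimate is this double difference.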

[HT: Marginal Revolution]

Thursday, 28 November 2019

Making individual actions to reduce climate change count

My Waikato colleague Zack Dorner had an article in The Spinoff back in September:
Regardless of how doomed you think we are, you may still think individual actions are pointless. You’re one of seven billion people in the world; your decisions are a drop in the ocean that won’t make a difference. I agree that policy change is the most important tool when it comes to climate action. But where does that leave individual actions? Do they also make a difference?...
The bottom line: when you take individual actions on climate change you are contributing to a global public good, which benefits 7 billion people now and many more in the future. And done right, you are encouraging others to change too, by helping to shift social norms. So don’t let anyone tell you your individual actions on climate change are not making a difference.
Zack's argument rests on three points: (1) global public goods; (2) the social cost of carbon; and (3) establishing new social norms. However, I think there is a stronger case to be made for individual climate action, based on social preferences (such as altruism).

I wrote a related post back in 2016, about the Paris agreement on climate change. Traditional game theoretical approaches would suggest that action on climate change is an example of the prisoners' dilemma - while every decision-maker would be better off if everyone works together, each decision-maker individually is better off if they act in their own best interests (and not with everyone else). So, in the case of individual actions to reduce climate change, while we would all be better off if we drove our cars a bit less, none of us individually has a strong enough incentive to do so. Unless, as I pointed out in relation to the Paris agreement:
...the prisoners' dilemma looks quite different if the players have social preferences. For example, if players care not only about their own payoff, but also about the payoff of the other player...
The game now changes substantially, and reducing emissions becomes a dominant strategy for both players!
These points don't just apply to countries deciding whether to reduce carbon emissions. They also apply to individuals deciding whether to take individual action on climate change. If we care about other people, whether that be people living right now or people living in the future, then it starts to make sense to take individual action on climate change right now. As I discussed in the earlier post, it doesn't take much in the way of altruistic preferences for taking climate action to become the dominant strategy (this is a point I used to make in my old ECON100 class, which was unfortunately cut out when we made the transition to ECONS101 and needed to include more macroeconomics content instead).
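To see how social preferences flip the game, here is a minimal sketch of a two-player emissions game. The payoff numbers (3, 0, 4, 1) and the altruism weight are hypothetical, chosen only to illustrate the mechanism:

```python
# Hypothetical payoffs for a two-player emissions game:
# each entry is (row player's payoff, column player's payoff).
payoffs = {
    ("reduce", "reduce"): (3, 3),
    ("reduce", "pollute"): (0, 4),
    ("pollute", "reduce"): (4, 0),
    ("pollute", "pollute"): (1, 1),
}

def best_response(other_action, alpha):
    """Row player's best action when they weight the other player's
    payoff by alpha (alpha=0 is a purely selfish player)."""
    def utility(my_action):
        mine, theirs = payoffs[(my_action, other_action)]
        return mine + alpha * theirs
    return max(["reduce", "pollute"], key=utility)

# Selfish players: polluting is the dominant strategy (classic dilemma).
assert best_response("reduce", alpha=0) == "pollute"
assert best_response("pollute", alpha=0) == "pollute"

# Moderately altruistic players: reducing becomes the dominant strategy.
assert best_response("reduce", alpha=0.5) == "reduce"
assert best_response("pollute", alpha=0.5) == "reduce"
```

With these payoffs, an altruism weight of one-half is enough to make reducing emissions the best response no matter what the other player does.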


Wednesday, 27 November 2019

Are people willing to pay to avoid harm from international honey laundering?

You probably had to read the title of this post a couple of times. Yes, it does say honey laundering, with an "h". It's taken from the title of this paper, by Chian Jones Ritten (University of Wyoming) and co-authors, published in the Australian Journal of Agricultural and Resource Economics earlier this year (sorry, I don't see an ungated version). The paper focuses on food fraud, which refers to "the intentional substitution, addition, tampering or misrepresentation of food, food ingredients, or food packaging". They note that:
Evidence suggests that Chinese honey is being transshipped and relabelled to mask the true origin of the honey to avoid large tariffs and potential bans, also known as honey laundering... The practice of honey laundering is so prolific that an estimated one-third of honey available for sale in the United States is illegally imported from China and may contain illegal antibiotics and heavy metals...
The focus on Chinese honey is important because:
Chinese honey has the potential to contain illegal and unsafe antibiotics (specifically, Chloramphenicol, enrofloxacin, and ciprofloxacin) and high levels of herbicides and pesticides...
But do honey consumers care? Essentially, Jones Ritten et al. ran an experiment to test whether consumers were willing to pay a US$2.48 premium for an eight-ounce jar of locally produced honey, and tested whether consumers who were first given information about "the negative health implications of honey laundering" were more willing to pay the premium. They found that:
In total, 53.38% of participants across the treatments chose local honey at a $2.48 premium over honey of unknown origin...
Once we control for only honey preferences and use, access to honey laundering information significantly increases the probability (P < 0.10) of participants being willing to pay a $2.48 premium for an 8-ounce jar of local honey... When also including the influence of demographic variables on the probability of paying the premium, honey laundering information still significantly increases the probability (P < 0.05) of participants choosing local honey...
The results suggest that providing honey laundering information increases the probability of participants being willing to pay the premium by as much as 27 percentage points...
So, consumer information does affect consumers' stated preferences for honey, but not for everyone. However, the word 'stated' highlights the problem with this study. Participants were only asked what they would do (their stated preference), and were not required to actually purchase honey (which would have revealed their preference). So, we don't know whether consumers would follow through on their stated preferences. Perhaps the authors could have required an actual honey purchase?

It also turns out that older consumers were more willing to pay the premium, which led the authors to conclude that:
Targeting older consumers will most likely be successful at garnering more local consumers that are willing to pay for local honey than targeting younger consumers.
However, if older consumers are already more willing to buy locally produced honey, that doesn't mean that the information had a bigger effect on them. The authors could have tested that directly with their data, by running separate analyses for different age groups, or interacting the experimental treatment variable with age, but for some reason they chose not to.

Aside from those issues, it is a nice study. If you want to avoid international honey laundering, buy local.

Tuesday, 26 November 2019

The future impact of climate change on inequality in the U.S.

I just finished reading this 2017 article published in the journal Science, by Solomon Hsiang (UC Berkeley) and co-authors, which investigates the economic impact of climate change in the U.S. The headline results are unsurprising:
The combined value of market and nonmarket damage across analyzed sectors—agriculture, crime, coastal storms, energy, human mortality, and labor—increases quadratically in global mean temperature, costing roughly 1.2% of gross domestic product per +1°C on average.
Interestingly, the biggest contributor to economic impact is mortality:
The greatest direct cost for GMST [Global Mean Surface Temperature] changes larger than 2.5°C is the burden of excess mortality, with sizable but smaller contributions from changes in labor supply, energy demand, and agricultural production...
However, what was more interesting was the spatial impacts and their distribution, summarised in the following map:

The counties that suffer the greatest impacts of climate change are those in the South and Midwest, where the mortality impacts are likely to be the greatest due to higher summer temperatures. In contrast, in the North and Northwest, higher summer mortality is offset by lower winter mortality due to milder winters. However, the areas projected to suffer the greatest impacts are also the areas that include most of the poorest counties in the U.S. This is likely to increase inequality over time. As Hsiang et al. explain:
In general (except for crime and some coastal damages), Southern and Midwestern populations suffer the largest losses, while Northern and Western populations have smaller or even negative damages, the latter amounting to net gains from projected climate changes. Combining impacts across sectors reveals that warming causes a net transfer of value from Southern, Central, and Mid-Atlantic regions toward the Pacific Northwest, the Great Lakes region, and New England... Because losses are largest in regions that are already poorer on average, climate change tends to increase preexisting inequality in the United States.
The last thing the U.S. needs is another contributor to income inequality, but it seems like climate change is set to make a bad situation worse.

It would be interesting to do a similar analysis for New Zealand, not necessarily in terms of inequality, but simply looking at the impacts of climate change on mortality. On that research question, we currently know very little. To what extent will increased summer mortality, predominantly in the north of the country, offset lower winter mortality in the south? Does a wetter west and a drier east of the country matter? These and related questions might make a good project for a motivated Masters or Honours student.

Monday, 25 November 2019

Christmas tree decorations and happiness

The New Zealand Herald reported yesterday:
A psychologist says Christmas decorations bring a sense of nostalgia for happier times and, as such, do make people happier.
"In a world full of stress and anxiety people like to associate with things that make them happy and Christmas decorations evoke those strong feelings of the childhood," psychologist Steve McKeown told Unilad.
According to McKeown, those decorations work as visual cues and a pathway back to those feelings of excitement of childhood...
A study published in the Journal of Environmental Psychology also showed that there is a correlation between decorating your home for Christmas and seeming friendlier and more social to neighbours.
The key word in that last sentence is correlation. Just because you can tell a cool story that seems to explain some observed relationship between two variables, it doesn't make that relationship causal. In this case, just because people who put up Christmas decorations are happier, it doesn't mean that decorating causes people to be happier (regardless of nostalgia or whatever).

Perhaps happier people are more likely to decorate (reverse causation). Perhaps people who are less stressed at work and have a better work-life balance are both happier, and more likely to find the time to decorate for Christmas (a third variable, work-life balance causes both happiness and decorating). Or perhaps, the observed correlation is just spurious (like the excellent selection at Tyler Vigen's site, Spurious Correlations).

As an aside, I can't find the Journal of Environmental Psychology article that McKeown refers to, unless it is this one from 1989 (gated), which doesn't mention happiness at all! All in all, this story is a bit of a fail, and as always, it pays to take these claims of causal relationships with more than a pinch of salt.

Saturday, 23 November 2019

Superstar effects and inequality in the music industry

Following on from Thursday's post on superstar and tournament effects in social media, the Wall Street Journal earlier this year (gated) gave us some good data on superstar effects in the music industry:
A small number of superstars like Beyoncé and Taylor Swift is gobbling up an increasingly outsize share of concert-tour revenues, as music’s biggest acts dominate the business like never before.
Sixty percent of all concert-ticket revenue world-wide went to the top 1% of performers ranked by revenue in 2017, according to an analysis by Alan Krueger, a Princeton University economist. That’s more than double the 26% that the top acts took home in 1982.
Just 5% of artists took home nearly the entire pie: 85% of all live-music revenue, up from 62% about three decades earlier, according to Mr. Krueger’s research. “The middle has dropped out of music, as more consumers gravitate to a smaller number of superstars,” he writes in a new book, “Rockonomics,” set to come out in June. (Mr. Krueger died in March.)
Performers’ royalties—for acts big and small—are generally much smaller on streaming than on records, CDs or download sales, so artists have to turn to concert revenue for more of their income. And it’s only the superstars who have the ability to charge significantly more for tickets than their predecessors did a generation ago. That leaves non-superstar performers competing for a shrinking share of the concert pie...
Meanwhile, at the bottom of the industry, the lowest 2,500 acts ranked by revenue grossed an average of about $2,500 in 2017 from concert tickets, out of the 10,808 touring acts that year that Mr. Krueger studied. There were 109 acts in the top 1%.
As I noted in the earlier post, superstar effects occur because top performers are paid (in part) based on the amount of value that they generate. Usually this is the value generated for their employer, but in the case of (self-employed) music stars, it might also be the value they generate for their legions of fans. The more fans whose demand they can satisfy, the more they will earn.

Also readily apparent from the WSJ article quoted above is that superstar effects contribute to inequality. If the top 5% of artists are earning 85% of all live-music revenue, then that represents a pretty high level of inequality. A back-of-the-envelope calculation [*] leads to an estimated Gini coefficient for the music industry of 80. That is much higher than the Gini coefficient for any country - according to the World Bank, South Africa is the most unequal country, with a Gini coefficient of 63.

And note that inequality among music artists has been increasing over time. Of course, superstar effects are only one of many contributors to inequality, and inequality among musicians will be a trivial contributor to overall inequality. However, superstar effects are present in many industries, including art, books, entertainment, and even academia.

Finally, as a side note, I'm really looking forward to reading the late Alan Krueger's book Rockonomics. I trust that it builds substantially on the paper that I discussed in this 2017 post.

[HT: The Dangerous Economist]


[*] Assuming that the income share of the bottom 95% of the population is 15%, and the income share of the top 5% is 85%, and assuming a kinked-line Lorenz curve, leads to an area under the Lorenz curve of 0.1, and a Gini Coefficient of 0.8. See here for more on how to calculate the Gini coefficient, using a Lorenz curve made up of kinked lines.
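The footnote's back-of-the-envelope calculation can be checked in a few lines, using the income shares quoted above (the bottom 95% of artists earning 15% of live-music revenue):

```python
# Gini coefficient from a kinked-line Lorenz curve.
# Each point is (cumulative population share, cumulative income share).
points = [(0.0, 0.0), (0.95, 0.15), (1.0, 1.0)]

# Area under the Lorenz curve, summed trapezoid by trapezoid.
area = sum(
    (x1 - x0) * (y0 + y1) / 2
    for (x0, y0), (x1, y1) in zip(points, points[1:])
)

# Gini = 1 minus twice the area under the Lorenz curve.
gini = 1 - 2 * area
print(round(area, 3), round(gini, 2))  # area 0.1, Gini 0.8, as in the footnote
```

The same trapezoid approach works for any Lorenz curve made up of straight-line segments; adding more points just means adding more trapezoids.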

Thursday, 21 November 2019

Social media influencers and superstar effects

In my ECONS102 class, we talk about why earnings differ between different jobs. However, even within jobs that are ostensibly the same, workers may have different wages. Putting aside the gender wage gap and discrimination, two reasons for differences in wages are superstar effects and tournament effects.

Superstar effects, described by Sherwin Rosen in the 1980s [*], occur because top performers are paid (in part) based on the amount of value that they generate for their employer. If a top performer generates a lot of value, they will be paid more. This explains much of the rise in earnings over time for top sportspeople or entertainers - as television (and more recently internet) viewership has grown, the value generated by a top sportsperson or entertainer (in terms of the number of viewers they attract) has grown, and their salaries or earnings have grown as a result.

Tournament effects, described by Rosen and Ed Lazear in the 1980s, occur when people are paid a 'prize' for their relative performance (that is, for winning the 'tournament'). The prize may take the form of a bonus, a raise, or a promotion. The point is that each worker only needs to be a little bit better than the second best worker in order to 'win' the tournament.

These effects are nicely illustrated in the case of social media influencers, as described in this article in The Conversation by Natalya Saldanha (RMIT University):
As people consume less traditional media and spend more time on social platforms, advertisers are increasingly using these influencers to spruik their products. A mega-influencer like Kylie Jenner, with 139 million followers on Instagram, can reportedly charge more than US$1 million for a single promotional post...
So far most of the indications are that the new economics of influencer marketing are not too different to the old economics of marketing.
As in the acting, modelling or music industry, there’s a tiny A-list of superstar influencers making millions. Then there’s a somewhat larger B-list making a handsome living. But the vast bulk of influencers would be better off getting an ordinary job.
In 2018 a professor at the Offenburg University of Applied Sciences in Germany, Mathias Bärtl, published a statistical analysis of YouTube channels, uploads and views over a decade. His results showed that 85% of traffic went to just 3% of channels, and that 96.5% of YouTubers wouldn’t make enough money to reach the US federal poverty line (US$12,140, or about A$17,900).
There are elements of both superstar effects and tournament effects here. If an influencer promoting a product can increase sales, then it makes sense that they will be paid more if they have more followers. So, a superstar with millions of followers will be paid substantially more than one with just hundreds or thousands.

And, influencers are competing for a scarce advertising spend, where successful influencers will attract paid work from many willing advertisers. Being slightly better than the second best influencer is likely to result in a disproportionate number of advertising contracts, increasing their earnings by a lot (and 'winning' the tournament). In contrast, slightly less successful influencers could end up earning less than the poverty line. This probably plays out separately in 'markets' for influencers with a broad appeal, and those whose followers are in a particular niche that advertisers want to target. Interestingly, the tournament effects here are little different to the effects for drug dealers, as Steven Levitt and Stephen Dubner describe in the excellent book Freakonomics (also described in this LA Times article from 2005).

Being a social media influencer isn't going to be a path to riches for the majority of aspiring Kylie Jenners. The best advice might be to try to exploit a very particular niche audience that advertisers are seeking, and one that is not already occupied by one (or many) successful influencers. However, most of these wannabes are going to need a day job.


[*] This was not a new insight, as Alfred Marshall had made a similar point as early as 1875.

Wednesday, 13 November 2019

Bucks as money on the American frontier

This week I'm in Pittsburgh, and yesterday I got the opportunity to do a bit of sightseeing, including the excellent Fort Pitt Museum. It was very enlightening in terms of the early history of the city as a frontier fort town, including its role as a trading post in the fur trade. This exhibit in particular caught my attention:

In ECONS101, we talk about the roles of money, as: a medium of exchange (you give it up when you buy goods or services, and you can receive it when you sell goods or services); a unit of account (you can measure the value of something using the amount of money it is worth); and a store of value (you can keep it and it will retain its value into the future). The exhibit caught my attention because of this note on the wall:

It shows the use of deer skins as a unit of account. Notice that, on the left, it shows how much the skins of different animals are worth, measured in "bucks", where one buck is one deer skin. So, six raccoon skins are equal to one buck, and two otter skins are equal to one buck. On the right, it shows what you can buy, again measured in "bucks". So, one pound of gunpowder is one buck, and 12 flints are one raccoon skin (which is 1/6 of a buck).
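The exhibit's conversion table works exactly like any unit of account. A quick sketch, using the exchange rates quoted above:

```python
# Exchange rates from the exhibit: how many skins of each
# animal are worth one buck (one deer skin).
skins_per_buck = {"deer": 1, "raccoon": 6, "otter": 2}

def in_bucks(quantity, skin):
    """Value of a bundle of skins, measured in bucks (the unit of account)."""
    return quantity / skins_per_buck[skin]

# Six raccoon skins, or two otter skins, are worth one buck...
assert in_bucks(6, "raccoon") == 1.0
assert in_bucks(2, "otter") == 1.0
# ...and 12 flints cost one raccoon skin, i.e. one-sixth of a buck.
assert in_bucks(1, "raccoon") == 1 / 6
```

Once every good and every skin has a price in bucks, traders can compare values directly without negotiating a separate exchange rate for every pair of goods.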

Of course, money existed in the 18th Century. But coins and other money were in short supply on the American frontier, so deer skins were a useful alternative. Interestingly, this also shows the origin of our use of the term "bucks" to refer to money!

Thursday, 7 November 2019

Fire protection as a private good, rather than a club good or public good

Two years ago, I wrote a post entitled "Why fire protection is (or was) a club good":
Some goods or services that are categorised as club goods may be contentious. For instance, according to the table fire protection is a club good - it is non-rival and excludable. Provided there aren't large numbers of fires, if the fire service attends one fire, that doesn't reduce the fire protection available to everyone else... So, fire protection is non-rival. Is fire protection excludable? In theory, yes. People can be prevented from benefiting from fire protection. Say there was some sort of fire service levy, and the fire service decided to only respond to fires at homes or businesses that were fully paid up.
Although my earlier post made the case that firefighting could be a club good, public firefighting is usually a public good - a good that is non-rival (one person’s use of the good doesn't diminish the amount of the good that is available for other peoples' use) and non-excludable (a person can't be prevented from using or benefiting from the service). However, now it turns out that some fire protection may be a private good - a good that is rival and excludable. According to this article from the AFP:
Kris Brandini and his crew had just returned from four intense, non-stop days battling fires in western Los Angeles.
They dashed to the neighborhood where wealthy residents like Arnold Schwarzenegger were fleeing their homes, then to the inferno that threatened the Ronald Reagan Presidential Library, then back again.
But unlike state firefighters, Brandini was not concerned with protecting most of the exclusive residences lining these valleys.
He and his team are private firefighters.
"I only protect the houses that are on my list," he told AFP. "I don't just go there randomly -- that's the difference between me and the state firefighters.
"They go out and protect every house. I protect the houses that are actually enrolled in the program."
If private firefighters will only protect houses that "are actually enrolled in the program", then that makes private fire protection an excludable good. Of course, private firefighting is excludable on the basis of price - not everyone can afford to pay for their own private firefighters. It's not time to do away with public firefighters just yet, because I don't think we would be willing as a society to price some people out of the market for receiving fire protection.

Unlike public firefighting, private firefighting is also a rival good, since there are only a limited number of houses that a private firefighter can protect (so, if they are protecting House A, they may not have enough time or resources to also protect House B). However, in the case of large wildfires like those in the AFP article, even public firefighting becomes a rival good, since public firefighters also can't be in more than one place at a time. A good that is non-excludable but rival is a common resource.

That makes firefighting an interesting case study for my ECONS102 class - it is a good that can be characterised as any of the four classes of goods - private good, public good, common resource, or club good - depending on the circumstances.

[HT: Marginal Revolution]

Thursday, 31 October 2019

You won't find meth labs in places where you're not looking for them

I just read this 2018 article by Ashley Howell, David Newcombe, and Daniel Exeter (all University of Auckland), published in the journal Policing (gated, but there is a summary of some of it here). The authors report on the locations of clandestine methamphetamine labs in New Zealand, based on data from police seizures between 2004 and 2009.

It's an interesting dataset and paper, and they report that:
In the unadjusted spatial scan, there were five locations in the study area with significantly high clandestine methamphetamine laboratory rates (Fig. 2). The ‘most likely’ cluster, centred in Helensville (north-west of Auckland), had a RR of 4.14 with 59 observed CLRT incidents compared to 15.1 expected incidents. A similarly high cluster (RR = 4.09, P = 0.000) was found in the Far North TA.
In other words, there were around four times as many lab seizures in Helensville and the Far North as would be expected if lab seizures were randomly distributed everywhere. The other clusters were centred on Hamilton, West Auckland, and Central Auckland, and there was a sixth cluster centred on Papakura in some of their analyses. This bit also caught my eye (emphasis mine):
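The 'four times' figure follows straight from the reported counts: a cluster's relative risk is essentially observed incidents divided by the incidents expected under the baseline distribution. A minimal sketch, using the Helensville figures quoted above (the paper's reported RR of 4.14 comes from its full spatial scan, so the back-of-the-envelope ratio is close but not identical):

```python
# Relative risk for the Helensville cluster: observed lab seizures
# divided by the number expected if seizures followed the baseline
# (population-adjusted) distribution. Figures as quoted in the paper.
observed = 59
expected = 15.1

relative_risk = observed / expected
print(round(relative_risk, 2))  # about 3.9, close to the reported RR of 4.14
```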
In addition, 26 laboratories (2%) were found at storage units, 21 (2%) discovered in motel or hotel rooms, and another 27 were abandoned in public areas, including cemeteries, parks, roadsides to school yard dumpsters and even the parking lot of a police station.
I wonder how much effort it took for police to find that last one? The paper gives some insights into where the most meth labs have been seized by police. However, we should be cautious about over-interpreting the results, because by definition, you can only seize labs in locations where you are looking for them. So, if police are more diligent or exert more effort in searching for meth labs in Hamilton or the Far North, we would expect to see more lab seizures there, even if there were actually fewer labs than in other locations.

To be fair, the authors are aware of this, and in the Discussion section they note that:
Reports of clandestine laboratory seizures may also be prone to subjectivity. There is no way to be certain that CLRT incident density is not a symptom of a greater police presence or different policing priorities.
However, that doesn't stop them from noting in the abstract that:
Identifying territorial authorities with more clandestine laboratories than expected may facilitate community policing and public health interventions.
It is true that identifying areas with more meth labs than expected would give information about resource allocation. The problem is that this paper doesn't tell us where meth labs are, it only tells us where police have found them.

[HT: The inimitable Bill Cochrane]

Wednesday, 30 October 2019

Book review: Economic Fables

One of the things I tell my first-year economics students, in the very first week of class, is that economics is about telling stories. And when there is a diagram involved, then it is an illustrated story. That idea comes through strongly in Ariel Rubinstein's 2012 book, Economic Fables. Here's what Rubinstein has to say in the Introduction:
Economic theory formulates thoughts via what we call "models". The word model sounds more scientific than the word fable or tale, but I think we are talking about the same thing.
The author of a tale seeks to impart a lesson about life to his readers. He does this by creating a story that hovers between fantasy and reality. It is possible to dismiss any tale on the grounds that it is unrealistic, or that it is too simplistic. But this is also its advantage. The fact that it hovers between fantasy and reality means that it can be free from irrelevant details and unnecessary diversions. This freedom can enable us to broaden our outlook, make us aware of a repressed emotion and help us learn a lesson from the story. We will take the tale's message with us when we return from the world of fantasy to the real world, and apply it judiciously when we encounter situations similar to those portrayed in the tale.
Rubinstein would have us treat economic models in this way, which I think is a fair goal to have. The book itself is partly a memoir, partly an exposition of some economic fables that are clearly favourites of Rubinstein's, and partly a discussion of some interesting interdisciplinary research that Rubinstein has been involved in. At times, the fables become more mathematical than is probably necessary, making them into more abstract models. The real highlights of the book are the way Rubinstein links the fables to his own experiences, and his discussion of interdisciplinary research in, surprisingly, linguistics.

Rubinstein starts this latter part of the book by describing interdisciplinary work, and especially the 'colonisation' of other disciplines by economists and the tools of economics. I particularly appreciated this bit:
But, in general, it seems to me that the spread of economics to other areas derives from the view expressed by the economist Steven Levitt: "Economics is a science with excellent tools for gaining answers, but a serious shortage of interesting questions."
The interdisciplinary research (in linguistics) that Rubinstein describes relates to persuasion - how one person persuades another as to the truth of something. It makes for interesting reading, but is difficult to excerpt here. Let's just say that it involves a lot of applied game theory, but thankfully is not too math-centric.

The book has lots of interesting asides, and I made a number of notes that will come in handy in both of my first-year papers next year. One bit got me thinking about the difference between income taxes and inheritance taxes (emphasis mine):
Nonetheless, and despite the fact that inheritance tax is imposed in nearly all of the countries that we envy, there is enormous opposition to instituting this tax in Israel. The tax is perceived by most people, including those who are not affluent, as crueler than income tax. This is because income tax takes something that is not yet owned, while inheritance tax takes a bite out of something that has already found a home among a person's assets.
I had never thought about inheritance taxes in this way, but it makes sense. The opposition to an inheritance tax is an endowment effect - we are much more willing to give up something we don't yet own (some of our income, as income tax), than to give up something we already have (some of our wealth). Unlike income tax, an inheritance tax is a loss to us, and we are loss averse - we are very motivated to avoid losses, and that would be expressed in an unwillingness to have an inheritance tax (and, like Israel, New Zealand currently has no inheritance tax).

This is an interesting book, and well worth reading. I can see why Diane Coyle recommended it as "a great book for economics students", and I would share that recommendation.

Tuesday, 29 October 2019

Hamilton won't be our second largest city any time soon

I was interviewed for the Waikato Times last week, and the story appeared on the Stuff website yesterday:
Hamilton city might be growing, but not as explosively as some people might dream. 
Commentators have recently said Waikato is 'ready to go pop' with development, citing the hundreds of millions of dollars spent on community facilities and transport infrastructure. 
Labour list MP Jamie Strange said he believes Hamilton's growth is so significant it could become New Zealand's second biggest city in the next 30 years. 
But a Waikato University professor said it would take drastic change for Hamilton to surpass Wellington and Christchurch in three decades...
"If you are talking about is Hamilton going to be a city of 200,000 - sure it's going to be that.
"But Wellington is certainly going to be big and bigger and Christchurch is also going to be big and bigger."
Read the full story for more from me. The overall point is that the population of Hamilton is growing. But the populations of Wellington and Christchurch are growing as well, and they have a large head start. How long will it take Hamilton to catch up? I gave the reporter (Ellen O'Dwyer) some quick back-of-the-envelope calculations.

Based on Census Usually Resident Population counts (available here), Hamilton City grew from 129,588 in 2006 to 141,612 in 2013 and 160,911 in 2018. Wellington City grew from 179,466 in 2006 to 190,956 in 2013 and 202,737 in 2018. Christchurch City declined from 348,456 in 2006 to 341,469 in 2013, then grew to 369,006 in 2018. Hamilton grew faster than Wellington in both absolute and relative terms between each of the last two Censuses, and faster than Christchurch in relative terms (though Christchurch's absolute growth between 2013 and 2018 was larger, as its population rebounded after the earthquakes).

If the absolute rates of growth of both Hamilton and Wellington (from the previous paragraph) between 2013 and 2018 continued, Hamilton would catch up to Wellington in the early 2040s. Based on the absolute rates of growth between 2006 and 2018, this wouldn't happen until the 2080s. As for Christchurch, forget it - the comparable catch-up time is measured in centuries.
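For the curious, the back-of-the-envelope projection can be sketched as a simple linear extrapolation. The figures are the Census counts quoted above; the function is just an illustration of the method, not the exact calculation I gave the reporter:

```python
# Back-of-the-envelope catch-up calculation: extrapolate each city's
# absolute (linear) growth rate and find when Hamilton draws level.
# Census Usually Resident Population counts, as quoted above.
hamilton = {2006: 129_588, 2013: 141_612, 2018: 160_911}
wellington = {2006: 179_466, 2013: 190_956, 2018: 202_737}

def catch_up_year(chaser, leader, start, end):
    """Year the chaser catches the leader, assuming both keep adding
    the same number of people per year as between start and end."""
    chaser_rate = (chaser[end] - chaser[start]) / (end - start)
    leader_rate = (leader[end] - leader[start]) / (end - start)
    gap = leader[end] - chaser[end]
    return end + gap / (chaser_rate - leader_rate)

print(round(catch_up_year(hamilton, wellington, 2013, 2018)))  # 2046 (mid-2040s)
print(round(catch_up_year(hamilton, wellington, 2006, 2018)))  # 2080
```

Plugging in Christchurch's 2006-2018 numbers instead of Wellington's gives a catch-up time of over two centuries, which is why I told the reporter to forget it.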

None of those calculations take into account the fact that Wellington City is only one part of a larger urban agglomeration that includes Porirua City, Lower Hutt City, and Upper Hutt City. Once you factor those areas in as well, the Wellington urban area is far larger than Hamilton, and it would take something spectacular for the Hamilton urban area (even if you include fast-growing Te Awamutu, Cambridge, and Ngaruawahia) to catch up.

Sorry Jamie. Hamilton isn't going to catch Wellington (and definitely not Christchurch) any time soon.

Monday, 28 October 2019

The latest NZ research on social media and mental health is nothing special

If there are two lessons that I wish journalists could learn, they are:

  1. That the latest published research is not necessarily the best research, and just because it is newer, it doesn't overturn higher quality older research; and
  2. That correlation is not the same as causation.
This article from Jamie Morton of the New Zealand Herald last week fails on both counts:

We blame friends' posts about weddings, babies and holidays for driving "digital depression" - but is social media really that bad for mental health?
A new study that dug deep into how platforms like Facebook, Twitter and Instagram influence our psychological wellbeing suggests not.
In fact, the weak link the Kiwi researchers found was comparable to that of playing computer games, watching TV or just minding kids.
The study is by Samantha Stronge (University of Auckland) and co-authors, and was published in the journal Cyberpsychology, Behavior, and Social Networking (it appears to be open access). The authors used data from one wave of the New Zealand Attitudes and Values Survey, which is a large panel study that is representative of the New Zealand population. Using a sample of over 18,000 survey participants, they found that:
After adjusting for the effects of relevant demographic variables, hours of social media use correlated positively with psychological distress... every extra hour spent using social media in a given week was associated with an extra .005 units on the psychological distress scale. Notably, social media use was the second strongest predictor of psychological distress out of the other habitual activities, at approximately half the effect size of sleeping...
The coefficient is tiny. However, there are a couple of problems with this study. The first is pretty non-technical - this study is pure correlation, and tells you nothing about causation at all. That would matter less if the true effect really were zero, but this study can't establish that either.

However, the second problem with this study is that the authors simply dump a whole bunch of variables into their regression, without considering that many of these variables are correlated with each other. That creates a high risk of multicollinearity. Multicollinearity doesn't bias the coefficients, but it inflates their standard errors and makes individual estimates unstable, so it becomes hard to attribute an effect to any one variable. And to the extent that some of the control variables (like other screen-based activities) move together with social media use, controlling for them soaks up part of any effect of social media use itself. Either way, the tiny coefficient may be as much an artefact of how the data were analysed as a feature of the data.
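To see why a kitchen-sink regression is a problem, here is a small simulation (illustrative only; nothing here uses the NZAVS data) of what a nearly duplicate control variable does to the precision of an individual coefficient:

```python
import numpy as np

# Sketch of why highly correlated regressors make individual coefficient
# estimates unreliable: the same data-generating process, estimated with
# an orthogonal control versus a control correlated at ~0.99 with x1.
rng = np.random.default_rng(0)
n = 1_000
x1 = rng.normal(size=n)
x2_ortho = rng.normal(size=n)              # unrelated control
x2_collin = x1 + 0.1 * rng.normal(size=n)  # nearly duplicates x1
y = 0.5 * x1 + rng.normal(size=n)          # true effect of x1 is 0.5

def ols_se_x1(x2):
    """OLS of y on [1, x1, x2]; return the standard error of x1's coefficient."""
    X = np.column_stack([np.ones(n), x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return np.sqrt(cov[1, 1])

print(ols_se_x1(x2_ortho))   # small standard error
print(ols_se_x1(x2_collin))  # roughly ten times larger
```

With the collinear control included, the estimate of x1's effect is still unbiased on average, but its standard error balloons, so any single estimate is unreliable.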

This research paper basically adds nothing to our understanding of whether social media is good or bad or otherwise for mental health. There are much higher quality studies available already (such as this one and this one).

This seems to me to be a real problem with the NZAVS research. They have a very large panel of data, and a huge number of measures covering lots of different domains. However, their approach to using this data appears to be 'throw everything at the wall and see what sticks'. That approach does not lead to high quality research, and even if it leads to a large number of publications, they are mostly of dubious value, like this one.

Journalists could do with a bit more understanding on what constitutes a genuine contribution to the research literature.

Saturday, 26 October 2019

A sobering report on the culture in the economics profession

Long-time readers of this blog will know that I have written many times on the gender gap in economics (see the bottom of this post for a list of links). However, I haven't written anything since this post in February. That's not because there wasn't anything to say - the news was all bad up to that point. The latest news doesn't get any better, with the report from the Committee on Equity, Diversity and Professional Conduct of the American Economic Association on the professional climate in economics released last month. It was picked up by the New York Times:
This month the American Economic Association published a survey finding that black women, compared to all other groups, had to take the most measures to avoid possible harassment, discrimination and unfair or disrespectful treatment. Sixty-two percent of black women reported experiencing racial or gender discrimination or both, compared to 50 percent of white women, 44 percent of Asian women and 58 percent of Latinas. Twenty-nine percent and 38 percent of black women reported experiencing discrimination in promotion and pay, respectively, compared to 26 percent and 36 percent for whites, 28 percent and 36 percent for Asians and 32 percent and 40 percent for Latinas.
“I would not recommend my own (black) child to go into this field,” said one of the black female respondents. “It was a mistake for me to choose this field. Had I known that it would be so toxic, I would not have.”
The report is available here, and it makes for sobering reading. It was based on a survey sent to all current and recent (within nine years) members of the American Economic Association, and received over 10,000 responses (a response rate of 22.9%). It collected responses to a mixture of closed-ended and open-ended questions about the general climate in economics, experiences of discrimination, avoidance behaviour, exclusion and harassment. Here are some highlights (actually, they're more like lowlights):
Women very clearly have a different perception of the climate in the economics profession... It is particularly notable that, when asked about satisfaction with the overall climate within the field of economics, men were twice as likely as women to agree or strongly agree with the statement “I am satisfied with the overall climate within the field of economics” (40% of men vs. 20% of women). This large gender disparity is consistent across a variety of related statements about the field broadly: women are much less likely to feel valued within the field, much less likely to feel included socially, and much more likely to have experienced discrimination in the field of economics...
Female respondents are also much more likely to report having experienced discrimination or unfair treatment as students with regard to access to research assistantships, access to advisors, access to quality advising, and on the job market...
When we examine experiences of discrimination in academia... we see that, again, women face significantly more discrimination or unfair treatment than men along all dimensions (again, this gap is larger than the gap in discrimination faced by non-whites relative to whites). Most notably, women are much more likely to report personal experiences of discrimination or unfair treatment in promotion decisions and compensation, 27% and 37% respectively, compared to only 11% and 12% for men. Women are also significantly more likely to report personal experiences of discrimination or unfair treatment in teaching assignments and service obligations, course evaluations, publishing decisions and funding decisions...
Personal experiences of discrimination are also quite common among women working outside of academia...
...close to a quarter of female respondents report not having applied for or taken a particular employment position to avoid unfair, discriminatory or disrespectful treatment, compared to 12% of male respondents...
And that's just from the bits relating to the women. This bit also caught my attention, as it is both negative and affects everyone:
Experiences of exclusion are strikingly common in economics, both among male and female respondents... For example, 65% of female respondents report feeling of social exclusion at meetings or events in the field; 63% report having felt disrespected by economist colleagues; 69% report feeling that their work was not taken as seriously as that of their economist colleagues; 59% report feeling that the subject or methodology of their research was not taken seriously. The corresponding shares among men are smaller but still strikingly large: 40%, 38%, 43% and 40%, respectively.
It's also not entirely bad (depending on how you look at it):
More than 80% of female respondents and 60% of male respondents agree that economics would be a more vibrant discipline if it were more inclusive...
As the New York Times article suggests, there are also a lot of intersectional issues, with minority ethnic groups, and people of minority sexual orientations and gender identities, facing similar and overlapping issues of discrimination and exclusion.

The problems seem larger than in other disciplines, and the report provides some comparisons. However, it is worth noting that these issues have become very visible of late, and that might affect people's responses to questions such as those in this survey. It was interesting to note, for example, that both self-identified liberals and self-identified conservatives reported discrimination on the basis of their political beliefs.

Notwithstanding the issues with the survey though, it does highlight that there is a problem (as if we didn't know), and provides a baseline against which we can measure progress as we try to improve the culture within the profession. Read the report, and you'll get a sense of just how much work needs to be done.

Read more: