Wednesday 30 May 2018

It seems it's better to publish economics papers with U.S. data

This post follows on from yesterday's post about the length of article titles, which showed a strong negative correlation between title length and research quality (so papers with shorter titles were more likely to be published in better journals, and attracted more citations). There are, of course, a lot of other factors that affect where journal articles get published. One gripe for many researchers outside the U.S. or the U.K. is how hard it is to get published in top journals using data from outside the U.S. or the U.K. Until relatively recently, that gripe was based on purely anecdotal evidence. However, a 2013 article by Jishnu Das, Quy-Toan Do, Karen Shaines (all from the World Bank), and Sowmya Srikant (FI Consulting), published in the Journal of Development Economics (ungated version here) provides some empirical evidence on this.

Das et al. use data on over 76,000 papers published in 202 economics journals over the period from 1985 to 2005, and the disparity in data sources for published economics papers is clear:
Over the 20 year span of the data, there were 4 empirical economics papers on Burundi, 9 on Cambodia and 27 on Mali. This compares to the 37,000 or so empirical economics papers published on the U.S. over the same time-period. This variation is also reflected among the highly selective top-tier general interest journals (henceforth top-tier journals) of the economics profession (American Economic Review, Econometrica, The Journal of Political Economy, The Quarterly Journal of Economics and The Review of Economic Studies). American Economic Review has published one paper on India (on average) every 2 years and one paper on Thailand every 20 years. The first-tier journals together published 39 papers on India, 65 papers on China, and 34 papers on all of Sub-Saharan Africa. This compares to 2383 papers on the U.S. over the same time period.
They then go on to show that about 75 percent of the cross-country variation in the geographical focus of research is explained by GDP per capita and by population. Countries that have higher GDP per capita (and larger populations) are more likely to be the focus of research. This is a disappointing result if you are interested in developing countries (as the authors of the paper clearly are). Surprisingly though:
...the U.S. is not an outlier in the volume of research that is produced on it... the volume of research for the U.S. lies well within the predicted confidence interval and excluding the U.S. leads to the same coefficient estimates as its inclusion. In other words, a lot more is produced on the U.S. because it is rich and it is big; the natural comparator for the U.S. would be all of Europe and here, the volume of research is very similar.
However, when it comes to the elite journals, the U.S. is a clear outlier:
The difference between the U.S. and the rest of the world is substantial — 6.5% of all papers published on the U.S. are in the top tier journals relative to 1.8% of papers from other countries.
For comparative purposes, over their 20-year period there were 2,383 papers on the U.S. published in the top five economics journals, and one on New Zealand (yes, you read that right, just one - I don't know which article it was, sorry).
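To give a flavour of what 'explained by GDP per capita and by population' means in practice, here's a minimal sketch of that kind of cross-country regression. The numbers below are invented purely for illustration (they are not Das et al.'s data or estimates); the point is simply that the 75 percent figure is the R-squared from a regression along these lines.

```python
# Illustrative sketch only: invented data, in the spirit of Das et al. (2013).
# Regress (log) paper counts on log GDP per capita and log population; the
# R-squared is the share of cross-country variation 'explained'.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 150                                             # hypothetical countries
log_gdp_pc = rng.normal(8.5, 1.2, n)                # log GDP per capita
log_pop = rng.normal(16.0, 1.5, n)                  # log population
log_papers = -20 + 1.2 * log_gdp_pc + 0.8 * log_pop + rng.normal(0, 1.0, n)

X = sm.add_constant(np.column_stack([log_gdp_pc, log_pop]))
fit = sm.OLS(log_papers, X).fit()
print(fit.rsquared)   # share of cross-country variation explained
```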

So, is this discrimination against non-U.S. research? Perhaps. Or, it could be as simple as the U.S. having a greater density of top-quality researchers, who are more likely to publish in top-quality journals, and who, because they are located in the U.S., have readier access to U.S. data. Or, perhaps the quality of U.S. data is higher. Das et al. point out that data quality is subject to a superstar effect (similar to the superstar effects in the labour market that I have written about before):
Researchers converge on the “best” dataset even if it is 1% better than other data available, and the initial work creates a focal point for further research with the same data.
Again, like the paper I discussed yesterday, there isn't necessarily a causal interpretation to these results (doing research on the U.S. doesn't necessarily cause papers to be accepted into top journals). But it is disappointing, particularly given the quality of linked administrative data that we have in New Zealand through the Integrated Data Infrastructure, which (I think) should be attractive for publication in top journals.

Tuesday 29 May 2018

Title length and research quality

There are a number of quirks that seem to be related to research quality, perceived research quality, or the academic rewards from research. In a new paper published in the Journal of Economic Behavior and Organization (sorry I don't see an ungated version online), Yann Bramoullé (Aix-Marseille University) and Lorenzo Ductor (Middlesex University London) explore the relationship between the length of a journal article's title and measures of its quality. Using a dataset of half a million articles in economics journals over the period from 1970 to 2011, they demonstrate a number of interesting things:
Articles with shorter titles tend to be published in better journals. They tend to be more cited and to get higher novelty scores. Moreover, these tendencies are more pronounced in better journals... Moreover, including novelty in the regressions on journal quality and citations has essentially no impact on the estimates. This means that the observed relation between title length and citations is not explained by the fact that novel articles tend to have shorter titles and to be more cited. Together, these results show that title length correlates well with the overall scientific quality of a paper.
So, there is a strong negative relationship between title length and the quality of the article, even after controlling for the quality of the authors, and the quality of the publication. Bramoullé and Ductor have obviously taken this on board, as their article title ("Title length") is as short as possible. But why would title length be related to research quality? Bramoullé and Ductor suggest a couple of possible explanations:
On one hand, title length could have a causal impact on journal quality or citations. A short title could make an article easier to memorize, affecting citations and, possibly, editorial decisions... On the other hand, title length could proxy for the true, unobserved qualities of the article. Articles with a strong potential to influence subsequent research could thus both generate more citations and have shorter titles.
The second explanation seems more intuitive. Many (maybe all?) highly cited papers are those that steer research in new directions. For example, the Kahneman and Tversky paper that I mentioned yesterday, entitled "Prospect Theory: An Analysis of Decision under Risk" (note: 51 characters including spaces and the colon), is the most cited paper in economics (with over 8000 citations in Bramoullé and Ductor's dataset, and over 50,000 in Google Scholar). Subsequent papers in the new research area opened up by the highly-cited papers are likely to have slightly longer titles, as they build on the original theory, refute it, apply it to new areas or new sub-fields, and so on. However, that is of course a causal interpretation, and the authors' results don't establish causality here. Nevertheless, an interesting quirk - I wonder if it extends to blog posts?

[Update]: I forgot to mention that the authors do attempt to control for novelty in their analysis, which would seem to militate against the intuitive explanation in my last paragraph. However, I don't find their measure of novelty (the number of 'atypical' keywords the article uses) to be particularly convincing, since novel articles may well use keywords that themselves are common, even though the content is not.

[HT: Marginal Revolution]

Monday 28 May 2018

Book Review: The Undoing Project

I've never read a Michael Lewis book before, not even Flash Boys or Moneyball (though I have seen the movie of the latter). To be clear, I haven't been actively avoiding his writing, but, like Malcolm Gladwell's, his books get so much press that I feel like I've read them without actually reading them. The Undoing Project is different. I have read a few short reviews, but not enough to give away the bulk of the content. In it, Lewis tells the story of two psychologists who have had a substantial impact on economics - 2002 Nobel Prize winner Daniel Kahneman, and Amos Tversky, who surely would have shared Kahneman's Nobel if not for his untimely passing in 1996.

I really enjoyed Lewis's writing style, which I would describe as a flowing biographical narrative. It is easy to see why so many of his books have been made into movies, and this one would also lend itself to the screen. There are a lot of quirky stories embedded within it. For instance, take this bit:
Apart from that short note, Amos seldom mentioned his army experiences, in print or conversation, unless it was to tell a funny or curious story - how, for instance, during the Sinai campaign, his battalion captured a train of Egyptian fighting camels. Amos had never ridden a camel, but when the military operation ended, he won the competition to ride the lead camel home. He got seasick after fifteen minutes and spent the next six days walking the caravan across the Sinai.
Similarly, Lewis paints a hilarious picture of Kahneman and Tversky tearing around the Sinai desert in a jeep, surveying Israeli troops during the war. The anecdotes add a lot of value to the story.

However, the book didn't add much, if anything, to my understanding of behavioural economics. If you want a good treatment of that, you would be much better off with Richard Thaler's excellent Misbehaving (which I reviewed earlier this year). However, Lewis does offer a really clear and thorough description of the development of Kahneman and Tversky's collaboration, from how it began, through its peak with the development of Prospect Theory, and onto the decline in their relationship after they both moved from Hebrew University in Israel to North American institutions. I especially liked this bit, on their choice to move on from regret minimisation as a theory:
Amazingly, Danny and Amos did not so much as pause to mourn the loss of a theory they'd spent more than a year working on. The speed with which they simply walked away from their ideas about regret - many of them obviously true and valuable - was incredible. One day they are creating the rules of regret as if those rules might explain much of how people made risky decisions; the next, they have moved on to explore a more promising theory, and don't give regret a second thought.
That new theory was Prospect Theory, which Lewis notes was named as such "purely for marketing purposes", so that it would be distinct (they originally labelled it "value theory"). Interestingly, Lewis avoided the temptation to talk about their lack of regrets when they moved on from regret as a theory (but see, I couldn't resist - that's lazy blog writing, that is).

One thing that comes clearly through in the book is how surprising (as well as how intense) the partnership between Kahneman and Tversky was. Take this bit on their differences:
Danny was a Holocaust kid; Amos was a swaggering Sabra - the slang term for a native Israeli. Danny was always sure he was wrong. Amos was always sure he was right. Amos was the life of every party; Danny didn't go to parties. Amos was loose and informal; even when he made a stab at informality, Danny felt as if he had descended from some formal place. With Amos you always just picked up where you left off, no matter how long it had been since you last saw him. With Danny there was always a sense you were starting over, even if you had been with him just yesterday. Amos was tone-deaf but would nevertheless sing Hebrew folk songs with great gusto. Danny was the sort of person who might be in possession of a lovely singing voice that he would never discover. Amos was a one-man wrecking ball for illogical arguments; when Danny heard an illogical argument, he asked, What might that be true of? Danny was a pessimist. Amos was not merely an optimist; Amos willed himself to be optimistic, because he had decided pessimism was stupid. When you are a pessimist and the bad thing happens, you live it twice, Amos liked to say. Once when you worry about it, and the second time when it happens.
That last bit made me laugh out loud, because I've had that exact conversation with my wife on more than one occasion, and my view accords with Amos's. The book does jump around a little bit in time, which I found a little disconcerting. However, that betrays my preference for linearity in the storyline. Overall, this was an excellent read, and I'm looking forward to cracking open some other Michael Lewis books in the future (in spite of thinking I already know what they say).

Sunday 27 May 2018

Could closing the gender gap in economics be as simple as providing students with information?

In a new paper published in the journal Economics of Education Review (sorry, I don't see an ungated version anywhere), Hsueh-Hsiang Li (Colorado State University) reports on a randomised controlled trial that she ran in the introductory economics classes at Colorado State. Specifically:
During the semester, treatments such as the provision of information on career prospects, average earnings, and grade distributions were provided to women in the treatment group. A nudging message was also sent to female students in the treatment group with a midterm grade above the median. Additionally, half of the treated female students were invited to attend mentoring activities throughout the semester.
Few eligible female students (around 5%) took up the mentoring, so that doesn't explain the effects, which were that:
The treatment effect of interventions on female students with grades above the median is substantial. The treatments increase the probability of these female students majoring in economics by 5.41 – 6.27 percentage points. The effects are even larger for freshmen and sophomores among these high-performing female students, who are 11.2 – 12.6 percentage points more likely to declare economics as their major.
To summarise, there was no treatment effect for below-median-grade female students, in terms of whether they went on to major in economics. The effect was entirely concentrated among above-median-grade female students. Interestingly, the effect of the intervention was actually negative for male students, which Li explains:
 Conversely, the information treatment appears to reduce male students’ likelihood of declaring economics as their major by 2.67 percentage points. The effect is larger (−5 percentage points) among male students in the lower classes (i.e., freshmen and sophomores). Because male students in the treated group received information about careers in the economics profession as well as the grade distribution information with no nudges, the negative effect is likely attributable to their reaction to the grade information... students were overly optimistic about their grade performance upon entering the class. The overconfidence is particularly pronounced among male students.
Overconfident male students obviously get the message that they aren't nearly as good at economics as they thought, when given more information about how the rest of the class is performing, and respond by being less likely to take an economics major. Unsurprisingly, this effect was concentrated among male students below the median grade (and the effect disappeared once Li controlled for students' GPA). On the other hand, female students (at least, those above the median grade) respond to the information by being more likely to take an economics major.

The results are interesting. However, I was more interested that, prior to the intervention, information on the grade distribution was not available to students. At Waikato, until we shifted to Moodle this year, students in economics papers could see detailed information on the grade distributions for each assessment (Moodle doesn't have this functionality, but for the tests and exam I routinely provide the mean, median, top mark, and pass rate to students, so at least there is some information). Encouraging top students in the introductory economics class to enrol in an economics major also seems like a no-brainer, and something that most schools would already routinely do.

So, in spite of the large and statistically significant effects in this study, it seems to me that the results are not generalisable to other settings because the intervention is so routine. However, if your university isn't already doing these things, then now is a good time to start!



Saturday 26 May 2018

Did Japan or the U.S. lose more lives to atmospheric nuclear blasts in the 20th Century?

The answer may surprise you. At least, it may according to a recent paper by Keith Meyers (who was, as of last year, a PhD candidate at the University of Arizona). Meyers notes that earlier studies have focused on exposure to radioactive fallout deposition, but ignored a potentially more important channel through which fallout harms human health:
Fallout deposition may approximate the presence of fallout in the local food supply, but radiation exposure proxied through deposition becomes more inaccurate if local deposition fails to enter the local food supply. The National Cancer Institute (1997) finds that the consumption of irradiated dairy products served as the primary vector through which Americans ingested large concentrations of radioactive material. During the 1950's most milk was consumed in the local area it was produced. It is through this channel where local fallout deposition would enter the local food supply...
Using data on I-131 concentrations in locally produced milk supplies, and county-level panel data from 1940 to 1988 (noting that atmospheric nuclear testing at the Nevada Test Site occurred from 1951 to 1963), Meyers finds that:
Depending on the regression specified, I-131 in milk contributed between 395,000 and 695,000 excess deaths from 1951 to 1973. The average increase in mortality across counties is between 0.65 and 1.21 additional deaths per 10,000 people for this same period. The estimates from deposition suggest that fallout contributed between 338,000 and 692,000 excess deaths over the same period...
From 1988 to 2000, valuations of human life by U.S. Federal Government agencies ranged between $1.4 million and $8.8 million in 2016$. These values and my estimates from the preferred specification place the value of lost life between $473 billion and $6,116 billion in 2016$.
The range of estimated social costs is very wide, but regardless the social cost is substantial. Meyers notes in his conclusion:
These losses dwarf the $2 billion in payments the Federal Government has made to domestic victims of nuclear testing through the Radiation Exposure Compensation Act and are substantial relative to the financial cost of the United States' nuclear weapons program.
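As a rough back-of-the-envelope check on where that headline dollar range comes from (this is my own arithmetic, not a calculation reproduced from the paper), the endpoints are roughly what you get by multiplying the smallest excess-death estimate by the lowest valuation of a statistical life, and the largest by the highest:

```python
# My back-of-the-envelope check (not from the paper): excess deaths times the
# range of U.S. Federal Government valuations of a statistical life (2016$).
low_deaths, high_deaths = 338_000, 695_000
low_value, high_value = 1.4e6, 8.8e6
print(low_deaths * low_value / 1e9)    # ~473 (billion 2016$)
print(high_deaths * high_value / 1e9)  # ~6,116 (billion 2016$)
```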
Meyers's paper is an excellent example of evaluating population-level mortality differences arising from environmental differences. I have a paper in progress with Matthew Roskruge that adopts a similar approach, looking at the mortality impacts of climate change in New Zealand (more on that in a later post).

However, coming back to the question in the title of this post, it appears that, based on Meyers's models, the number of deaths attributable to fallout from atmospheric nuclear testing at the Nevada Test Site in the U.S. far exceeds the number who died in Japan from the Hiroshima and Nagasaki bombings.

[HT: Marginal Revolution, last December]

Wednesday 23 May 2018

Finland is ending its universal basic income experiment

I've been meaning to write about universal basic incomes for a while, particularly since it is something that we discuss in my ECONS102 class each year (and something I cover in regular courses for officials from the Vietnamese Social Security Administration). There are several current experiments in universal basic income (including GiveDirectly's experiment in Kenya, some pilot programmes in India, and several others). However, one of the headline experiments in Finland may be coming to a close, as the New York Times reported last month:
For more than a year, Finland has been testing the proposition that the best way to lift economic fortunes may be the simplest: Hand out money without rules or restrictions on how people use it.
The experiment with so-called universal basic income has captured global attention as a potentially promising way to restore economic security at a time of worry about inequality and automation.
Now, the experiment is ending. The Finnish government has opted not to continue financing it past this year, a reflection of public discomfort with the idea of dispensing government largess free of requirements that its recipients seek work.
The biggest argument that people make against a universal basic income (UBI) is cost. For instance, paying every adult in New Zealand a weekly income of $200 (note: that's not exactly a generous basic income at all!) would cost around $36 billion per year, which is about 7% of GDP and would more than double government transfers (currently total social security and welfare payments are about $30 billion).
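The arithmetic behind that $36 billion figure is straightforward; here's a rough sketch (the adult population figure of about 3.5 million is my own round number, roughly right for New Zealand in 2018):

```python
# Rough sketch of the UBI cost calculation (assumed adult population figure).
adults = 3.5e6            # approximate number of NZ adults (assumption)
weekly_payment = 200      # $ per adult per week
annual_cost = adults * weekly_payment * 52
print(annual_cost / 1e9)  # ~36.4 billion $ per year
```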

However, an under-appreciated argument against a UBI is how taxpayers (who would foot the bill for the UBI) would feel about it. And it appears that disquiet among taxpayers (and importantly, voters) is at the heart of the reconsideration of the Finland experiment:
Many people in Finland — and in other lands — chafe at the idea of handing out cash without requiring that people work.
“There is a problem with young people lacking secondary education, and reports of those guys not seeking work,” said Heikki Hiilamo, a professor of social policy at the University of Helsinki. “There is a fear that with basic income they would just stay at home and play computer games.”
The Finnish data on the experiment is supposed to come out next year. It will be interesting to see what they find, even though the experiment itself is over. Importantly, are the work disincentive effects as large as people are worried about? Or do the recipients get jobs even though they're being given free money? It's especially important that these experiments are rigorously evaluated, given that many are arguing that a UBI might be a solution to the loss of our jobs to robots. We need to have a good idea of their effects in order to make an informed decision about whether a wider roll-out is appropriate.

Monday 21 May 2018

Book Review: What Money Can't Buy

This is going to be an unusual book review. I don't think I've ever read a book before where I simultaneously disagreed with almost everything the author wrote and still thought that the book was a good read. But that was the case for What Money Can't Buy, by Michael Sandel. The subtitle is "The moral limits of markets", and Sandel essentially spends 203 pages trying to convince the reader that market reasoning has gone too far. Take this quote from the introduction:
The most fateful change that unfolded during the past three decades was not an increase in greed. It was the expansion of markets, and of market values, into spheres of life where they don't belong.
Sandel sees two main objections to the expansion of markets: (1) fairness; and (2) corruption. He provides a useful example of the market for prostitution services that illustrates both objections:
Some people oppose prostitution on the grounds that it is rarely, if ever, truly voluntary. They argue that those who sell their bodies for sex are typically coerced, whether by poverty, drug addiction, or the threat of violence. This is a version of the fairness objection. But others object to prostitution on the grounds that it is degrading to women, whether or not they are forced into it. According to this argument, prostitution is a form of corruption that demeans women and promotes bad attitudes toward sex. The degradation objection doesn't depend on tainted consent; it would condemn prostitution even in a society without poverty, even in cases of upscale prostitutes who liked the work and freely chose it.
The book collects many examples of where market reasoning has gone wrong, such as the case (reported in Uri Gneezy's The Why Axis, which I reviewed back in 2015) of the Israeli childcare centre that began fining parents for late pickups of their children, only to see the number of late pickups increase. When we increase the cost of something, we usually expect less of it to happen, but in this case placing a price on late pickups undermined the moral incentives for parents to pick up their children on time, essentially enabling them to avoid the moral consequences with a small payment. I was also interested that Sandel included several examples that I have blogged about before, including a market for refugee obligations, and farming or licensed hunting of endangered species to save them.

Sandel does make a strong case for why economists need to understand some moral psychology, and in particular how the creation of new markets, or of prices where they previously did not exist, interacts with existing norms and expectations (as in the childcare example) to produce unintended consequences. However, despite the impression that Sandel gives, economists do not routinely ignore unintended consequences. In fact, as one example, unintended consequences are a recurring theme on this blog.

I was also surprised that Sandel didn't mention at all the important work of Karl Polanyi, in particular The Great Transformation, where Polanyi makes the point that the domain of markets is socially determined. Perhaps Sandel would prefer to believe that the social determination of markets happened once, at some time in the past, and that the domain of markets cannot be further refined over time?

Similarly, in spite of the number of objections to market reasoning that Sandel provides, he fails to engage with some of the most compelling arguments for markets, one of which is transparency. To draw on one of the examples that Sandel himself uses, an open market for bribery would at least provide citizens with information on bribe-recipients and bribe-givers, whereas those transactions conducted behind closed doors are much more insidious. Of course, neither Sandel nor I am suggesting that a market for bribery would be a good thing, but price transparency is one virtue of markets that Sandel overlooks.

In the last two sections of the book, Sandel rails against the rampant commercialisation of modern times, including secondary markets for life insurance, naming rights on stadiums, markets for sports memorabilia, and so on. Those two sections are much less convincing and, while the whole book is quite moralistic, I found these sections to be far too 'small-c' conservative for my tastes. While the first sections of the book were quite thought-provoking, and prompted me to re-examine some of my assumptions about the primacy of markets, these later sections struck me as the philosophical equivalent of an old man shouting at the neighbourhood youngsters to "Get off my lawn!". Just because some market innovations have been negative, that doesn't mean that all market innovations are to be resisted.

This is certainly a book that will make you think. I really enjoyed the mix of examples that Sandel uses to illustrate his points, even though I didn't agree with him in most cases. Whether you are a market evangelist or a market sceptic, I suspect there is something of value in this book for all. Highly recommended!

Sunday 20 May 2018

World has just destroyed any premium value for their 'Made in New Zealand' clothing

Following the fallout from the World Made in New Zealand saga (where the New Zealand fashion label World was caught out selling t-shirts with "Fabriqué en Nouvelle Zélande" labels when those shirts were actually made in Bangladesh and Hong Kong), the New Zealand Herald ran a good story on why we're willing to pay for 'Made in New Zealand':
But why would anyone pay $99 for a T-shirt which, it turned out, was materially no different from one sold for a fraction of that price?
When we pay a premium for retail items it's because the branding for that item convinces us it contains some intangible benefit, says Dr Sommer Kapitan, a senior marketing lecturer at Auckland University of Technology.
"Before we knew there was a question about that brand we had this quirky, artistic premium New Zealand fashion brand."
A World shirt was not just a shirt for someone who values artistry or quirkiness, but an expression of those values, Kapitan said.
And the target market for World and most other designer labels were people willing to pay a premium to stand out.
"I [a designer label fan] might not want to see myself as someone who wears $30 stuff. Whether you can tell or not, I want to know that I wear a $100 T-shirt," Kapitan said.
World's decision to write Made in New Zealand in French on its swing tags was an example of the way brands would use symbolic cues to communicate worth to consumers, Kapitan said.
This story, and the explanation, calls to mind two important (and related) concepts from economics. The first concept is hedonic demand theory (or hedonic pricing, which I have written about earlier here in the context of education). Hedonic pricing recognises that when you buy some (or most?) goods you aren't so much buying a single item but really a bundle of characteristics, and each of those characteristics has value. The value of the whole product is the sum of the value of the characteristics that make it up. In the case of a t-shirt from World, you are not only buying a t-shirt, but you are buying something else. As well as the garment, you are buying: (1) the 'warm glow' feeling that you are supporting New Zealand clothing manufacturers; and/or (2) an image that wearing that t-shirt allows you to project to the world ('Look at me! I'm wearing this t-shirt that was made in New Zealand. Aren't I a great person?'). So, people are willing to pay a premium for a World t-shirt that has a 'Made in New Zealand' tag for one or both of those reasons.
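As a stylised illustration of the hedonic idea (the numbers here are entirely invented, just to show the decomposition), the price of the t-shirt can be thought of as the sum of the values of its characteristics:

```python
# Invented numbers: a hedonic decomposition of a $99 'Made in New Zealand' t-shirt.
characteristics = {
    "basic garment": 25,                   # what a comparable unbranded shirt is worth
    "design and brand quirkiness": 40,     # the designer-label premium
    "warm glow of supporting NZ-made": 14,
    "image projected to others": 20,
}
total_value = sum(characteristics.values())
print(total_value)  # 99: the value of the whole is the sum of its characteristics
```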

That brings me to the second concept - conspicuous consumption (which I've written about earlier here). People engage in conspicuous consumption as a form of signalling - they want to signal to other people the type of person that they are (or the type of person that they want other people to think they are). A signal is only effective if it has two characteristics: (1) it is costly; and (2) it is costly in a way that makes it unattractive for those with 'low-quality' attributes to attempt (in this case, it would have to be unattractive for people who aren't the type of person who buys New Zealand-made to pretend to be that type of person).

With a $100 World t-shirt (with a 'Made in New Zealand' tag), the first characteristic is assured. There is a premium for the 'Made in New Zealand' label, which makes the t-shirt more expensive than a regular t-shirt made in Bangladesh. What about the second characteristic? Assume that there are two types of people: (1) those who really do buy New Zealand-made because they want to support local industry; and (2) those who buy things with 'Made in New Zealand' labels because they want to be associated with the first group, even though they don't feel that strongly about it. The first group is (probably) willing to pay a greater premium for 'Made in New Zealand' than the second group, but this can't be assured. Even if the extra cost of buying 'real New Zealand-made' clothing is high, at least some of the second group will not drop out of the market. The signalling value of 'Made in New Zealand' is therefore pretty weak.
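A minimal way to see why the signal is weak (again with invented numbers): for the premium to work as a signal, it would need to be high enough that the second group drops out of the market, but not so high that the first group does too, and there is no guarantee the actual premium lands in that range:

```python
# Invented numbers: does the 'Made in New Zealand' premium separate the two types?
premium = 70             # extra cost of the NZ-made t-shirt over a regular one
wtp_genuine = 90         # type 1: genuinely want to support local industry
wtp_imitator = 75        # type 2: just want to look like type 1

separating = wtp_imitator < premium <= wtp_genuine
print(separating)  # False: the imitators stay in the market, so the signal is weak
```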

The weakness of the signal (and therefore of the conspicuous consumption value) of buying 'Made in New Zealand' is also clear because the 'Made in New Zealand' tag is hidden inside the t-shirt. If you wanted people to know that you are the type of person who buys New Zealand-made, you'd want to project that to the world, which is difficult to do if the tag is hidden. The only way to present that signal, then, is to buy from a designer whose clothing is all New Zealand-made (so you don't have to show the label). And this is where World has clearly gotten things wrong. Any signalling (or faux-signalling, given the weakness of the signal) value is going to be lost from World clothing, now that we know they aren't really selling New Zealand-made t-shirts.

Although, that does still leave the 'warm glow' from buying New Zealand-made. But in order to be willing to pay a premium for the 'warm glow', customers have to believe that the 'Made in New Zealand' tag is credible. And World's credibility has surely taken a huge hit. Why would you believe that their clothing is New Zealand-made based on the tag, when the tag has been shown to be worthless in this case? And this is a seriously weak response from World:
But World co-owner Dame Denise L'Estrange-Corbet told Newstalk ZB tags sewn into the garments said "Made in Bangladesh", and stated they were sourced from AS Colour, so it was not misleading customers.
L'Estrange-Corbet said only a small percentage of her products were manufactured overseas.
She said: "99 per cent of our clothing is made here."
So, if the signalling value (which was limited anyway) of World t-shirts has declined, and the 'warm glow' value has declined due to a loss of credibility, that leaves World in a seriously difficult position.

Saturday 19 May 2018

The shake-out of legalised marijuana growers is underway

The New Zealand Herald reported this week:
A glut of legal marijuana has driven Oregon pot prices to rock-bottom levels, prompting some nervous growers to start pivoting to another type of cannabis to make ends meet — one that doesn't come with a high.
Applications for state licences to grow hemp — marijuana's non-intoxicating cousin — have increased more than twentyfold since 2015, and Oregon now ranks No 2 behind Colorado among the 19 states in America with active hemp cultivation...
It's a problem few predicted when Oregon voters opened the door to legal marijuana four years ago.
The state's climate is perfect for growing marijuana, and growers produced bumper crops.
Under state law, none of it can leave Oregon.
That, coupled with a decision to not cap the number of licences for growers, has created a surplus.
Oregon's inventory of marijuana is staggering for a state its size. There are nearly 1 million pounds (450,000kg) of usable flower in the system, and an additional 350,000 pounds (159,000kg) of marijuana extracts, edibles and tinctures.
The legalisation of marijuana in Oregon (and many other U.S. states) was supposed to bring on a boom time for marijuana growers. But, the removal of barriers to entry into the market caused a massive increase in the number of growers (and the quantities they were growing). Absent the legalisation aspect, this sort of boom-and-bust is common in agricultural markets, where barriers to entry are low (agricultural markets are among the closest to perfectly competitive).

Consider a perfectly competitive market for marijuana, as shown in the diagram on the left below. It's probably not quite perfectly competitive, because growers require licences, but it is pretty close. The diagram on the right tracks changes in growers' profits over time. Initially (at Time 0) the market is at equilibrium (where demand D0 meets supply S0) with price P0, and firms are making profits π0. Now say there is a permanent increase in demand at Time 1, to D1 (this increase in demand derives from the legalisation of the consumption of marijuana). Prices increase to P1, and growers' profits also increase (to π1). There are no barriers to entry (other than growers requiring a licence and some land, so this is a perfectly competitive market), so the higher profits encourage new growers to enter this market. Supply increases to S2 (more growers) at Time 2. Price falls to P2, and growers' profits also fall (to π2).
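For anyone who prefers numbers to diagrams, here's a minimal simulation of that entry-and-exit story, with made-up parameter values (this is my own sketch of the mechanism, not a calibrated model of the Oregon market): a demand increase raises price and profit, profit attracts entrants, entry overshoots and pushes profits below normal, and exit then restores the long-run equilibrium.

```python
# Made-up parameters: linear demand P = a - b*Q, identical growers each supplying
# a fixed quantity q at cost c per period. Growers enter while profits are positive
# and exit while they make losses. Demand jumps at period 3 to mimic legalisation.
a, b, q, c = 100, 0.5, 2, 120
n = 40                                     # number of growers
for t in range(9):
    if t == 3:
        a = 160                            # permanent increase in demand
    price = max(a - b * n * q, 0)
    profit = price * q - c                 # per-grower economic profit
    print(f"period {t}: {n} growers, price {price:.0f}, profit {profit:.0f}")
    if profit > 5:                         # profits attract entry
        n += 25
    elif profit < -5:                      # losses drive exit
        n -= 15
```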


At this point in the story of the market shown in the diagram, profits are low so growers start to exit the market. And indeed, that is what appears to be happening:
"Word on the street is everybody thinks hemp's the new gold rush," said Jerrad McCord, who grows marijuana in southern Oregon and just added 12 acres of hemp.
"This is a business. You've got to adapt, and you've got to be a problem-solver."...
In Oregon, the number of hemp licenses increased from 12 in 2015 to 353 as of last week, and the state now ranks No 2 nationally in licensed acreage.
Growers are exiting the marijuana market and entering the hemp market instead, since most of the growing techniques are the same (so that makes the barriers to exit the marijuana market low, and barriers to entry into the hemp market also low). Once there are fewer growers growing marijuana, the price (and profits) should recover for the remaining growers. In the market diagram above, supply decreases to S3, with prices increasing to P3 and profits increasing to π3. But we should expect the market price for hemp to come under pressure shortly, from the increase in the number of growers in that market!

I've previously argued that this sort of cycle is likely happening in the New Zealand craft beer market, though I'm still waiting for the large-scale exits to happen. In that case, demand increases appear to be just high enough to keep the small craft brewers in the market. For now.

Wednesday 16 May 2018

Maybe the world's shortest published research article

Now and again, I review an article for a journal and my review ends up being quite long. I have joked that the review rivals the article being reviewed in word count. This 1974 article by Dennis Upper (Veterans Administration Hospital, Brockton, Massachusetts), published in the Journal of Applied Behavior Analysis, actually does fit that bill. It is titled "The unsuccessful self-treatment of a case of “writer's block”", and at the risk of copyright violation, I'm going to reproduce the article in full here:
 
Yes. That's it. You can read it in its entirety at this link, along with the reviewer's comments at the bottom of the page, which in relative terms are approximately infinitely longer than the paper itself. The paper even has its own Wikipedia entry, which is also infinitely longer than the paper. There are even three replication studies (see here, here, and here). Yes, two of the replications are hidden behind paywalls, which might be the funniest aspect of this whole exercise!

[HT: Matt Roskruge]

Tuesday 15 May 2018

Cost-benefit analysis and cost-effectiveness analysis are different

In yesterday's New Zealand Herald, Jamie McKay wrote about an interview with Environment Minister David Parker. This bit caught my eye (emphasis mine):
DP: Huh! The industry has been consulted for over a decade here! In terms of cost-benefit you don't actually do an analysis on whether you should have clean rivers, that's a value judgement, and the vast majority of New Zealanders think we should have rivers clean enough to swim in. What you use cost-benefit analysis for is to look at what is the most cost effective way of getting there.
No, that's NOT what you use cost-benefit analysis for. At least, it is not helpful to conflate cost-benefit analysis with cost-effectiveness analysis in this way.

Cost-effectiveness analysis evaluates the cost per unit of benefit for some option, where the benefit need not be measured in dollars (e.g. a reduction in nutrients in a stream). Cost-effectiveness analysis is useful when there is more than one way of obtaining the same benefit, because whichever option achieves a unit of benefit at the lowest cost is the more cost-effective approach.

Cost-benefit analysis is related, but different. It compares the costs of some activity with its benefits, where both the costs and benefits are measured in dollars (so that they are comparable). The headline outcome of a cost-benefit analysis is the benefit-cost ratio - the value of benefits obtained for each dollar of cost (which, since the benefits are in dollars, is essentially the inverse of a cost-effectiveness ratio). If this ratio is greater than one, then the benefits of the activity are greater than the costs. If this ratio is less than one, then the benefits are less than the costs. Simple. Cost-benefit analysis is useful if you want to determine whether or not to undertake some action, or to choose between mutually exclusive alternatives.

Cost-benefit analysis for a single option (e.g. for cleaning up a stream) doesn't tell you what is cost-effective, because that is not its purpose. You need to be comparing multiple options to evaluate cost-effectiveness. In the case of clean streams, there probably are several options for clean-up to choose from. Of course, if you conducted multiple cost-benefit analyses for the different options, then you could argue that the option with the highest ratio of benefits to costs is most cost-effective. But cost-benefit analysis would likely be overkill for this purpose.
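To make the distinction concrete, here's a small worked example with invented numbers for two hypothetical stream clean-up options:

```python
# Invented numbers for two hypothetical stream clean-up options.
options = {
    "riparian planting":    {"cost": 4_000_000, "nutrient_reduction_t": 50},
    "wetland construction": {"cost": 9_000_000, "nutrient_reduction_t": 90},
}

# Cost-effectiveness analysis: cost per unit of benefit, with no need to put a
# dollar value on the benefit itself. The lower the ratio, the more cost-effective.
for name, o in options.items():
    print(name, "costs", o["cost"] / o["nutrient_reduction_t"], "$ per tonne of nutrients removed")

# Cost-benefit analysis: both sides in dollars, so the benefit must be valued.
# Assume (purely for illustration) each tonne of nutrient reduction is worth $120,000.
value_per_tonne = 120_000
for name, o in options.items():
    bcr = o["nutrient_reduction_t"] * value_per_tonne / o["cost"]
    print(name, "benefit-cost ratio:", round(bcr, 2))
```

On these invented numbers, riparian planting is the more cost-effective option ($80,000 vs. $100,000 per tonne), and both options pass a cost-benefit test (ratios of 1.5 and 1.2), but the cost-effectiveness comparison never required putting a dollar value on clean water.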

Cost-effectiveness analysis is easier to conduct than cost-benefit analysis, because for cost-effectiveness analysis you don't need to measure the value of the benefits (in dollars), which can be difficult as it requires non-market valuations of the benefits. Cost-benefit analysis will only be necessary if you have multiple benefits and you want to know the combined benefit (since converting everything to dollar values is a handy way to combine benefits in a single measure). But that is probably not the case for streams, where you can measure the benefits in terms of something like reduced nutrient loads, and converting the benefits to dollar terms would only add an additional source of error to the analysis.

Cost-effectiveness analysis is much more flexible than cost-benefit analysis if, as Parker implies, you've already made the decision to have clean rivers. If cleaning up streams is your sole goal (e.g. based on a measure of a single nutrient load or an index of several nutrient loads), then cost-effectiveness analysis is most likely what you would use to determine the most cost-effective way of getting there, NOT cost-benefit analysis.

Monday 14 May 2018

Money, sex and happiness

On Saturday, I wrote a post about some research that identified happiness researchers as being perceived as happier than economics Nobel Prize winners or other top economists. Maybe it's because they get to do research like this 2004 paper by David Blanchflower (Dartmouth College) and Andrew Oswald (Warwick University), published in the Scandinavian Journal of Economics (ungated version here). [*]

In the paper, Blanchflower and Oswald look at the relationship between sex and happiness, using data on around 16,000 Americans from the General Social Survey between 1989 and 2002 (as an aside the GSS data are available online for you to play with here!). The paper has lots of interesting findings, but here is a quick summary of the headline results:
Having sex at least four times a week is associated with approximately 0.12 happiness points, which is a large effect (it is, very roughly, about one-half of the size of the effect of marriage on happiness)... 
...sex may bring more happiness to the highly educated than to the less-educated...
How many sexual partners in the last year will maximize a person’s happiness?... the simple answer according to these GSS data is one sexual partner...
...people who say they have ever paid for sex are considerably less happy than others. Those who have ever had sex outside their marriage also report notably low happiness scores.
And finally:
We know from these equations that money does seem to buy greater happiness. But it does not buy more sex.
That last conclusion might seem unusual, given that sexual services can be readily purchased. What they really found is that people with higher incomes report higher happiness than people with lower incomes do, but they do not have more sex.

Of course, this research doesn't tell us anything about causality, as Blanchflower and Oswald acknowledge in the paper. Maybe sex makes people happier, or maybe happier people have more sex, or maybe there is some third variable (personality traits?) that makes people both happier and more inclined to have sex? Disentangling those possibilities requires further work. Google Scholar suggests that the Blanchflower and Oswald paper has been cited over 440 times - maybe the answer is in there somewhere?

*****

[*] Although, you may remember Andrew Oswald as being the least happy-looking of all the researchers from that research paper I referred to on Saturday!

Sunday 13 May 2018

The minimum wage, EITCs, and criminal recidivism

Much of the empirical literature on the minimum wage focuses on the employment effects. There is no strong consensus, though my reading of the latest research (see here) is that it broadly supports the dis-employment effects of the minimum wage. However, the minimum wage has a number of other effects. Last month I wrote a post on the effects (or lack thereof) on the cost of living.

On a similar theme of under-recognised effects of the minimum wage, a recent paper by Amanda Agan (Rutgers) and Michael Makowsky (Clemson) looks at the effect of minimum wages on criminal recidivism. This research is interesting, because the theoretical effect of a higher minimum wage is ambiguous, as Agan and Makowsky explain:
A change in the minimum wage could impact the labor market prospects of released prisoners, and thus recidivism, through a change in their likelihood of finding employment and/or through a change in the wage they can expect to earn if they succeed. The first of these, the employment effect, is at the heart of most economic studies of minimum wages... A reduction in labor demand and increase in the likelihood of unemployment stands to reduce the opportunity cost of returning to jail, increasing the probability of recidivism... This simple model also predicts a second wage effect that pushes in the opposite direction.
So, it isn't clear whether a higher minimum wage would decrease criminal recidivism (through higher wages making crime less attractive relative to legal work), or increase criminal recidivism (through jobs being harder to find, especially for ex-convicts).

Agan and Makowsky also look at the effects of earned income tax credits (EITCs), which are paid to parents who are in work (in New Zealand, we have an EITC that is called the "in-work tax credit"). This is in effect a wage subsidy, so it should increase low-skilled employment and wages. The effect of the EITC on criminal recidivism should theoretically be less ambiguous than the effect of the minimum wage, but given that most female convicts are sole parents while most male convicts are not, the EITC effects should be concentrated among women.

The authors have data from 5.8 million prison releases in the U.S. (from 4 million prisoners) over the period 2000-2014. They find that, as expected:
...an 8% increase in the minimum wage (the average increase over our time period) corresponds to a 2.8% decrease in the probability an individual returns to prison within one year over the average, with no discernible difference in effect for men or women. That is, the increased incentive to substitute legal employment for criminal market activity, on net, appears to be greater than any employment effects of reduced labor demand resultant of minimum wage market distortions. While our results are agnostic regarding the debates over the magnitudes of the employment effects of minimum wages, they do serve as evidence that wage effects, on balance, dominate employment effects in the decisions made by would-be recidivists... we find that the availability of state top-ups to the federal EITC corresponds to a 1.6 percentage point (7.1%) lower rate of recidivism amongst women, while having no significant effect on men.
Interestingly, their results imply that the effects (of both higher minimum wages and higher EITCs) are larger for those with less education. So, in evaluating the costs and benefits of higher minimum wages and wage subsidies, we shouldn't focus only on the disemployment effects. Even if higher minimum wages reduce employment, they may also reduce crime.

[HT: Marginal Revolution]

Saturday 12 May 2018

Judge for yourself: do happiness researchers look happier than economists?

There is a large and growing academic literature on happiness (or subjective wellbeing, as it is more technically called). However, almost all of that literature uses self-reported measures of happiness. I recently read this 2008 paper (open access) by Benno Torgler, Nemanja Antic, and Uwe Dulleck (all QUT), published in the journal Kyklos, which takes quite a different approach. Torgler et al. showed photographs of top international happiness researchers, economics Nobel Prize winners, and other top economists to 554 people on the streets of Brisbane in 2007, and asked them how happy the people in the photographs were (on a scale of one to four).

Judge for yourself. Here's the photographs they showed (I've blanked out the names, but if you know your famous economists you can probably pick many of them). One of the rows is top happiness researchers, one is economics Nobel Prize winners, and one is other top economists (two of whom have won the Nobel Prize since this research was conducted, and the other two will probably do so within the next few years). Which row is which, and which row is the happiest? (The answers are below, but no peeking until you've made your guesses!).


.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

Ok, here's the answers. The top row is economics Nobel Prize winners (Edmund Phelps, Daniel Kahneman, Finn Kydland, and Joseph Stiglitz). The second row is other top economists (Paul Krugman, Jean Tirole, Robert Barro, and Paul Romer). Krugman and Tirole have both subsequently won Nobel Prizes. The last row is top happiness researchers (Bruno Frey, Ed Diener, Andrew Oswald, and Richard Easterlin). Which row did you guess was happiest? According to the people that Torgler et al. asked:
We observe a substantial difference between Edmund Phelps, who was perceived to be happiest (with a mean happiness of 3.744) and Andrew Oswald, who was judged least happy (2.045)...
In general, we can see that happiness researchers are perceived to be the most happy. Relative to top economists and Nobel Prize winners, happiness researchers have a higher probability of being adjudged very happy (by 11.1 and 1.8 percentage points, respectively).
So, it appears that happiness researchers are the happiest (at least, as perceived by others based on a single photo taken from their online profile). Seems appropriate. I'm less sure about Torgler et al.'s conclusion:
...the advice for young academics is: if you seek happiness, become a happiness researcher; a Nobel Prize does not make you happier...
Of course, an intrepid researcher could determine whether that last bit is supported by data. Do Krugman and Tirole look happier now in their online profiles than in those earlier photos? You be the judge.

Friday 11 May 2018

Are we reaching peak craft beer yet?

If you keep predicting that something bad will happen, eventually you will be right. Except maybe for those predicting the Nibiru cataclysm. Last year I wrote a post about the rise and fall of craft beer, in which I concluded:
Lots of investors got burned in the kiwifruit industry in the 1980s. Will craft beer burn investors next?
It hasn't happened yet, but there is still cause for concern that we may be approaching peak craft beer. Max Towle wrote in The Wireless this week:
There are now about 200 craft beer brands in New Zealand, with the number of beer companies in New Zealand up almost 300 percent on a decade ago, according to Figure.nz. The amount of high percentage alcohol that’s available (anything over 5 percent) has risen 220 percent in the past 10 years despite, in the same period, less beer overall being produced.
If craft beer is a bubble, it’s getting pretty damn big. Some say it’s ready to burst...
As I argued in my post last year, the craft beer industry is cyclical. We've seen a big increase in demand for craft beer. That pulled a lot of new suppliers into the market, which means that even though demand is higher, craft beer is pretty unprofitable. But clearly not so unprofitable that there are a lot of firms exiting the market. I certainly wouldn't be getting into craft brewing right now (read Towle's article and it's easy to see why), as it appears that profitability still has a little way to fall before the least financially secure and/or highest-cost brewers start to shut down. Once that happens, the dynamic supply and demand model (see my previous post) suggests that is probably a good time to think about getting into the market (a surprising and counter-intuitive result), as after the shake-out there will be fewer craft breweries and lower competition, which will raise prices and profitability for the industry.


Thursday 10 May 2018

Binge drinking and earnings: health or social capital?

There is a well-established literature that demonstrates an inverted U-shaped relationship between alcohol consumption and earnings (which Eric Crampton touches on here). I just finished reading this 2010 paper by Preety Srivastava (Monash University), published in the journal The Economic Record, which is one of several papers that demonstrate this result (ungated earlier version here).

Effectively, the inverted U-shaped relationship means that heavy drinkers have lower earnings than moderate drinkers. It is easy to see why this would be so, due to the negative health and productivity impacts of being a heavy drinker. However, it also means that abstainers (those who don't drink) also have lower earnings than those who drink moderately.
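One common way to capture an inverted U like this in an earnings regression (my generic notation here, not necessarily Srivastava's exact specification) is to include drinking and its square:

```latex
\ln(earnings_i) = \alpha + \beta_1\, drinking_i + \beta_2\, drinking_i^2 + X_i'\gamma + \varepsilon_i,
\qquad \beta_1 > 0,\ \beta_2 < 0
```

With those signs, predicted earnings peak at a moderate level of drinking (at -β1/(2β2)), which is where the result that moderate drinkers out-earn both abstainers and heavy drinkers comes from.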

Several reasons have been proposed for why this might be. First, there is an established literature that demonstrates a J-shaped relationship between drinking and health outcomes (Eric Crampton has many posts on this topic, but this one from 2010 is a good place to start). Moderate drinkers have better health than abstainers, so perhaps they have higher earnings because they are healthier than abstainers. Alternatively, perhaps there are positive networking effects (or improved social capital) associated with moderate drinking. Engaging in moderate drinking with colleagues and peers allows workers to deepen their social networks, giving them access to better jobs over time and thereby improving their earnings. Or perhaps the socialising allows them to signal their commitment to the workplace and their colleagues, which improves their chances of successfully receiving promotions or pay rises.

Srivastava uses data from the 2001 and 2004 waves of the National Drug Strategy Household Survey in Australia, and her results are that:
...an inverted U-shaped relationship is found between drinking and earnings across both male and female workers, with a premium for non-bingers and occasional bingers over abstainers, and an earnings penalty for frequent bingers.
The paper is a little questionable (for technical reasons [*]), but for me it left the biggest question unanswered: is the wage premium for moderate drinkers a result of better health, or of better social capital? This matters because workplace drinking and after-work drinks are becoming less common over time, so the relationship between drinking and social capital is plausibly weakening. If the wage premium for moderate drinkers is primarily driven by health, we might expect it to remain robust over time; if it is primarily driven by social capital, it is probably declining. This is an interesting research question that, as far as I can see, no one has yet adequately addressed.

*****

[*] I also take issue with Srivastava's choice of instrumental variables (for more on instrumental variables, see my post here). She uses three instruments: (1) the price of alcohol (at the State level, which means only six observations for all of Australia, so this instrument is actually dropped from the analysis); (2) whether the person first smoked marijuana before age 16; and (3) whether the person has a tattoo. Instruments are supposed to affect the endogenous explanatory variable (in this case, binge drinking), but have no effect on the dependent variable (in this case, earnings) other than through the endogenous variable. However, people who smoke marijuana at a relatively young age or who have tattoos might be less risk averse, and less risk averse people may invest less in human or financial capital (e.g. see here), leading to lower income. So, it isn't clear to me that any of the instrumental variables are valid.

Tuesday 8 May 2018

Student performance and legal access to marijuana

This post might be viewed as a sequel to this 2016 post on access to alcohol. In a recent article by Olivier Marie (Erasmus University Rotterdam) and Ulf Zölitz (Maastricht University), published in the journal Review of Economic Studies (ungated earlier version here), the authors report on an interesting natural experiment in the Netherlands:
We exploit a temporary policy change in the city of Maastricht in the Netherlands, which locally restricted legal access to cannabis based on individuals’ nationality.
Specifically, access was restricted so that only citizens of the Netherlands, Germany, or Belgium could purchase marijuana, and citizens of other countries could not. The authors then look at the impact of that policy on student performance (for over 4,400 students) in the business school at Maastricht University. They find that:
...the temporary restriction of legal cannabis access had a strong positive effect on course grades of the affected individuals. On average, students performed 10.9% of a standard deviation better and were 5.4% more likely to pass courses when they were banned from entering cannabis shops... Sub-group analysis reveals that these effects are somewhat stronger for women than men and that they are driven by younger and lower performing students.
This is a useful study because it is difficult to find natural experiments where the change in legal access affects some groups but not others - most studies compare differences between areas with and without legal access, where spillover effects are likely to be meaningful. I note that the results suggest an effect that is quite a bit larger than that found in the Lindo et al. paper on the impacts of alcohol on student performance I discussed back in 2016, but not too much larger than that in this 2011 study by Carrell et al. (ungated earlier version here), also on alcohol.
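As an aside, for anyone unsure how to read an effect of '10.9% of a standard deviation', here is a stylised sketch on simulated data (the numbers and the specification are made up; Marie and Zölitz's actual model includes course, period, and student controls). Grades are standardised, then regressed on an indicator for being banned from the cannabis shops:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000

# 1 = student was banned from the cannabis shops after the policy change
banned = rng.integers(0, 2, size=n).astype(float)

# Simulated course grades, built so the true effect is 0.109 grade points
grade = 6.5 + 0.109 * banned + rng.normal(scale=1.0, size=n)

# Express grades in standard deviation units
grade_std = (grade - grade.mean()) / grade.std()

model = sm.OLS(grade_std, sm.add_constant(banned)).fit()
print(model.params[1])  # close to 0.109 standard deviations, up to sampling noise
```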

Some of Marie and Zölitz's other results are interesting as well, such as:
...previous research has documented that cannabis consumption most negatively influences quantitative thinking and math-based tasks... Therefore, we split all courses depending on whether they are described as requiring numerical skills or not and test whether such course grades are differentially affected. We find that the policy effect is 3.5 times larger for courses requiring numerical/mathematical skills: a result in line with the existing evidence on the association between cannabis use and cognitive functioning.
In their other results, they show the good news that there are no teacher effects - that is, there was no difference in outcomes between students whose teachers were unable to purchase marijuana after the policy change and students whose teachers still could. So I guess if you want to pass an economics paper, it pays to stay off drugs, but it doesn't matter if your lecturers do.

[HT: Marginal Revolution, last August]

Saturday 5 May 2018

Floating plug-in power stations are finally here

When I was an undergraduate student, one of the examples that one of my economics lecturers (perhaps John Tressler?) used in class was that of a plug-in power station floating on a barge. This example was used to illustrate how 'specific capital' creates a barrier to entry, especially for natural monopoly firms. Or at least, that's what I remember (who knows? It was a long time ago. It might have been an example used to illustrate a totally different point).

Specific capital is capital that is specific to a particular market (as with many concepts in economics, there's no mystery to the naming of it). If the firm later leaves the market, it loses all of that capital (because it was specific to that market). Some types of capital can be moved or used in other markets, but specific capital cannot. So, if a firm needs to invest in specific capital in order to enter a market, that will make the firm nervous. What will it do if demand suddenly decreases and it starts making a loss in the market, given that it can't easily move its capital elsewhere? The nervous firm might decide that it is better not to enter the market at all if there is a chance that it will waste its specific capital.

Specific capital can be a serious problem for a natural monopoly firm. A natural monopoly has economies of scale, and those economies of scale usually arise because the firm faces some large up-front cost: as it produces more, or services more customers, that up-front cost is spread over a larger market, lowering the firm's average costs. If the large up-front cost is specific capital (which it often is), it is easy to see that firms will be nervous about making the investment. This is why it is not always a good idea for governments to heavily regulate natural monopolies - it reduces the incentive for the up-front investment.
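To make the economies of scale point concrete, here is a trivial numerical sketch (the numbers are invented): with a large up-front cost and a constant cost per unit, average cost falls steadily as output rises.

```python
# A made-up illustration of economies of scale: a large fixed (up-front)
# cost spread over more units of output pushes average cost down.
fixed_cost = 100_000_000   # e.g. the cost of building the power station
marginal_cost = 50         # cost of each additional unit of output

for quantity in [10_000, 100_000, 1_000_000, 10_000_000]:
    average_cost = fixed_cost / quantity + marginal_cost
    print(f"Q = {quantity:>10,}: average cost = {average_cost:,.2f}")
```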

Aside from the threat of regulation, specific capital is also a problem if the firm is worried that the government might expropriate it (and that is especially a problem for natural monopolies). This is a serious issue in many developing countries, where an autocratic leadership might expropriate foreign-owned assets at any time. It makes foreign firms, which have a lot of financial capital, reluctant to invest in large assets in developing countries. A power station is a good example of a large asset conducive to natural monopoly that foreign firms would be unlikely to want to invest in. Once the foreign firm has made the investment, the government could simply expropriate the power station, and there is little the foreign firm could do about it. Given this risk, there is less foreign investment in natural monopolies (e.g. large infrastructure projects) in developing countries than would be optimal.

A solution to the problem of specific capital, at least in the case of power stations, is to put the power station on a barge and float it from country to country, simply plugging it into the electricity network. Then, the firm can move it elsewhere if it feels that the political climate becomes too risky. It's still a large up-front cost, so still a natural monopoly, but at least the capital is not specific to the market that the barge is parked in, because it can be relocated. In the past, this solution was almost purely theoretical. But no more, as the New Zealand Herald reported on Wednesday:
If a Russian state-owned company has its way, remote regions of the world will soon see giant, floating nuclear reactors pumping power to port cities and drilling platforms.
It would be a real-life version of the Soviet reversal joke: In Russia, 70-megawatt nuclear reactor comes to you.
The reactor in question is called Akademic Lomonosov. Once the barge is wired into the electrical grid in the Arctic town of Pevek in 2019, it will be the world's northernmost nuclear reactor, capable of powering a town of 100,000 people with what its manufacturer, Rosatom, calls "a great margin of safety" that is "invincible for tsunamis and natural disaster."...
By 2019, the first-of-its-kind rig will provide power for the port town and for oil rigs.
For Rosatom, it is buoyant proof of concept that a floating sea-based reactor can work. Rosatom is already in talks with potential buyers in Southeast Asia, Latin America and Africa, according to Russian television station RT, which estimates that 15 countries have shown interest in the floating plants.
Critics are focused on the potential environmental downside to a floating nuclear reactor, but there is an upside. This could be exactly what is needed to encourage investment in at least some of the infrastructure needs of developing countries.

Wednesday 2 May 2018

Book Review: Trillion Dollar Economists

I just finished reading Trillion Dollar Economists by Robert E. Litan. The subtitle is "How economists and their ideas have transformed business". The book is separated into three sections, and the first section really does look at how economists' ideas have transformed business. Sometimes. Litan takes a fairly broad view of who counts as an 'economist', and in many instances claims that because someone 'thinks like an economist', they can be counted. So, the claim that economics and economists' ideas have had a trillion-dollar impact on business is probably fair, even if not all of the illustrative cases used in the book are. Despite that issue, the first section of the book does a good job, and by the end of the third chapter (on pricing) it is easy to see that the trillion-dollar claim is probably not overstated.

The second section of the book focuses more on policy, and especially on U.S. policy. This might be interesting to some readers, but I felt that this section took the book in an entirely different direction from where it started. To be fair, regulation and policy are Litan's forte, but the promise of the book (in the subtitle) was about business. And while policy certainly is important to business, the second section had an entirely different vibe from the first. In fact, at that point it almost felt like two separate books glued together. Moreover, the claims about the role of economists are, if anything, even more overplayed in the policy section. Consider this bit:
Likewise, had the build out of the Internet come several years later it is entirely possible that one or more of the Net-based companies that are now household names never would have launched. In some respects, then, these businesses and their founders may have the breakup of AT&T to thank. Economists who played some role in the events and decisions that ultimately led to AT&T's dismantling also deserve some of the credit.
No doubt, economists deserve credit, alongside the manufacturers of energy drinks, and the parents of those Net-based companies' founders, who let them work in their basements or garages. And countless other peripheral players.

In the final section, Litan talks about his views of the future of economics, but is fairly conservative in those views. He seems unwilling to go out on a limb and make any bold predictions. Which is fair - bold predictions do have a habit of coming back to bite the predictor. But if you're going to hold forth on the future of the discipline, why hold back?

Despite some good bits in the first section, I felt the book was distinctly below par. Even the first section had some weird moments, such as the long passage in the fourth chapter that attempted (mostly unsuccessfully, I thought) to explain linear programming and the simplex algorithm (pretty ambitious for a book targeting a general audience!). And there is this bit, which appears to conflate regression analysis with causality (when, except in specific circumstances, regressions only show correlation):
Regression analyses enables economists (or other social and physical scientists) to understand the relationships of different variables. In the farming example above, an economist or statistician would estimate an equation to understand how different independent factors (such as the special supplement, the fertilizer, the amount of rain, the days of sun, the temperature, and so on) cause an effect on crop output - the dependent factor. [emphasis mine]
This misunderstanding about causation vs. correlation was eventually cleared up, but not until five pages later, and even then it drew only a couple of sentences of comment. Litan was perhaps over-cautious in the final section of the book; he could have redistributed some of that caution into his description of regression analysis.
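To see why the emphasised word matters, here is a minimal simulated example in the spirit of Litan's farming illustration (the data are entirely made up): fertiliser has no causal effect on yield at all, but because both fertiliser use and yield are driven by unobserved soil quality, a simple regression still reports a strong positive coefficient on fertiliser.

```python
# Regression picks up correlation, not causation: a confounder example.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000

soil_quality = rng.normal(size=n)                      # unobserved confounder
fertiliser = 0.8 * soil_quality + rng.normal(size=n)   # better soil, more fertiliser used
crop_yield = 2.0 * soil_quality + rng.normal(size=n)   # yield depends only on soil quality

model = sm.OLS(crop_yield, sm.add_constant(fertiliser)).fit()
print(model.params[1])  # clearly positive, despite a causal effect of exactly zero
```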

Overall, I wouldn't recommend this book to the general reader. Readers with a business focus (or business students) might find some value in the first section, but probably won't find it worthwhile to persist past there.

Tuesday 1 May 2018

The shocking truth about retail fuel pricing has been revealed!

Thanks to some outstanding investigative reporting (read: a leaked email), the shocking truth about fuel pricing was revealed yesterday:
A leaked email has exposed the pricing strategy underpinning the petrol industry.
In the email obtained by Stuff, BP pricing manager Suzanne Lucas outlined a plan to counter dwindling sales in Ōtaki, where the price of fuel was 20 cents more expensive than in nearby town Levin.
Instead of reducing the price in Ōtaki to make the station more competitive, Lucas proposed an increase of the fuel price across the entire region, with the expectation that competitors would match the new price.
"We have already increased all three sites mentioned by 5cpl [cents per litre] and have found that the Z [Energy station] in Paraparaumu has already matched our pricing," Lucas wrote.
Imagine that! Fuel pricing is not just based on costs of production, but competition makes a difference! Yawn. Anyone with a modicum of economic literacy could have pointed this out, but politicians and the media are not well known for their understanding of basic economics. And so we end up with stories like this:
A National Party MP and the Automobile Association (AA) are calling for the fuel industry to be more transparent about its pricing after a leaked email from BP lifted the lid on the company's petrol pricing strategy...
Jonathan Young, National's energy spokesman said the BP email was very concerning.
"I think the Commerce Commission would be very concerned about this as well."...
It's time for a reality check. The Commerce Commission has better things to do than follow up on this. To save them some time and effort, I can summarise BP's pricing strategy (not to mention the pricing strategy of every other fuel retailer in the world) in two bullet points:
  • If costs of supply go up, put prices up, and if costs of supply go down, put prices down; and
  • When there is less competition, put prices up, and when there is more competition, put prices down.
In every principles of economics course, students learn that the profit-maximising quantity for a firm with market power is where the marginal revenue curve meets the marginal cost curve, and the profit-maximising price is then read off the demand curve at that quantity. In other words, costs are not the only thing that matters in pricing - revenue matters too. If there is less competition in the market, then consumers will have relatively less elastic demand for the good - they will react less if prices are increased. So, when demand is relatively less elastic, firms can add a higher mark-up over their costs (and so the market price will be higher).
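For a concrete (and entirely made-up) illustration, the standard markup rule for a firm facing constant-elasticity demand is P = MC × e / (e - 1), where e is the absolute value of the price elasticity of demand. Weaker competition means less elastic demand (a smaller e), and therefore a higher price over the same cost:

```python
# A made-up illustration of the constant-elasticity markup rule:
# the profit-maximising price is marginal cost scaled up by e / (e - 1).
def markup_price(marginal_cost: float, elasticity: float) -> float:
    """Profit-maximising price when demand has constant elasticity e > 1."""
    return marginal_cost * elasticity / (elasticity - 1)

marginal_cost = 1.80  # hypothetical cost per litre, in dollars

for elasticity, market in [(5.0, "strong competition"), (2.5, "weak competition")]:
    price = markup_price(marginal_cost, elasticity)
    print(f"{market}: price = ${price:.2f} per litre")
```

The costs and elasticities here are invented purely to show the direction of the effect; the point is only that the same cost of supply can support quite different prices, depending on how much competition the retailer faces.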

So, it is incredibly naive to believe that only costs matter in pricing decisions. Given that, this should come as no surprise:
Motorists are right to feel they are being unfairly treated after revelations about how BP sets its petrol prices, Prime Minister Jacinda Ardern says.
"Certainly what's been revealed today probably wouldn't surprise some motorists," Ardern said at her weekly press conference on Monday.
"But to hear so blatantly that pricing decisions are being made that sit outside of the price of crude oil, that sit outside the exchange rate, or that sit outside operating costs will no doubt be raising eyebrows with consumers."
Ardern is at least right about the fairness aspect though. Research by Nobel Prize winner Daniel Kahneman (and described in his book Thinking, Fast and Slow as well as Richard Thaler's Misbehaving: The Making of Behavioral Economics, which I reviewed here) shows that consumers are willing to pay higher prices when sellers face higher costs (consumers are willing to share the burden), but consumers are unwilling to pay higher prices when they result from higher demand - they see those price increases as unfair. The latter point almost certainly extends to consumers' views of the unfairness of higher prices that would result from less robust competition between firms.

So, we should not be surprised by the behaviour of fuel retailers and their pricing decisions. From the BP email, it doesn't seem that there was any overt cartel behaviour (or collusion between 'competing' retailers), so there was no illegal behaviour that the Commerce Commission would be interested in. However, we might rightly question the fairness of BP's pricing behaviour (especially if you live in Ōtaki!), and BP will probably face negative consequences (if only in terms of adverse publicity) for doing something that it is almost certain every fuel retailer would do in the same circumstances.