Monday, 31 December 2018

Scott Sumner on behavioural economics in introductory economics

In a recent blog post, Scott Sumner argues against a large role for behavioural economics in introductory economics:
The Atlantic has an article decrying the fact that economists are refusing to give behavioral economics a bigger role in introductory economics courses. I’m going to argue that this oversight is actually appropriate, even if behavioral economics provides many true observations about behavior...
Most people find the key ideas of behavioral economics to be more accessible than classical economic theory. If you tell students that some people have addictive personalities and buy things that are bad for them, they’ll nod their heads.  And it’s certainly not difficult to explain procrastination to college students. Ditto for the claim that investors might be driven by emotion, and that asset prices might soar on waves of “irrational exuberance.”  Thus my first objection to the Atlantic piece is that it focuses too much on the number of pages in a principles textbook that are devoted to behavioral economics.  That’s a misleading metric.  One should spend more time on subjects that need more time, not things that people already believe.
The whole post is worth reading, although I don't agree with all of it, as I think behavioural economics does have a role to play in introductory economics. In my ECONS102 class, I use it to illustrate the fragility of the rationality assumption, while pointing out that many of the key intuitions of economics (such as that people respond to incentives, or even the workhorse model of supply and demand) don't require that all decision-makers act with pure rationality. At the introductory level, I think it's much more important that students take away some economic intuition than a collection of mostly ad hoc anecdotes, which is essentially what behavioural economics currently is. That point was driven home by this article by Koen Smets (which I blogged about earlier this year).

Sumner argues in his blog post that we should focus on discouraging people from believing in eight popular myths. I don't agree with all of his choices there. For Myth #2 (Imported goods, immigrant labor, and automation all tend to increase the unemployment rate), I'd say it is arguable. For Myth #3 (Most companies have a lot of control over prices, i.e. oil companies set prices, not “the market”), it depends on what you mean by "a lot". For Myth #7 (Price gouging hurts consumers), Exhibit A is the consumer surplus.

However, one popular myth that should be discouraged is the idea that behavioural economics will (or should) entirely supplant traditional economics. There is space for both, at least until there is a durable core of theory in behavioural economics.

[HT: Marginal Revolution]

Sunday, 30 December 2018

Book review: Prediction Machines

Artificial intelligence is fast becoming one of the dominant features of narratives of the future. What does it mean for business, how can businesses take advantage of AI, and what are the risks? These are all important questions that business owners and managers need to get their heads around. So, Prediction Machines - The Simple Economics of Artificial Intelligence, by Ajay Agrawal, Joshua Gans, and Avi Goldfarb, is a well-timed, well-written and important read for business owners and managers, and not just those in 'technology firms'.

The title of the book invokes the art of prediction, which the book defines as:
[p]rediction takes information you have, often called "data", and uses it to generate information you don't have.
Students of economics will immediately recognise and appreciate the underlying message in the book, which is that:
[c]heaper prediction will mean more predictions. This is simple economics: when the cost of something falls, we do more of it.
So, if we (or to be more precise, prediction machines) are doing more predictions, then complementary skills become more valuable. The book highlights the increased value of judgment, which is "the skill used to determine a payoff, utility, reward, or profit".

The book does an excellent job of showing how AI can be embedded within and contribute to improved decision-making through better prediction. If you want to know how AI is already being used in business, and will likely be used in the future, then this book is a good place to start.

However, there were a couple of aspects where I was disappointed. I really enjoyed Cathy O'Neil's book Weapons of Math Destruction (which I reviewed last year), so it would have been nice if this book had engaged more with O'Neil's important critique. Chapter 18 did touch on it, but I was left wanting more:
A challenge with AI is that such unintentional discrimination can happen without anyone in the organization noticing. Predictions generated by deep learning and many other AI technologies appear to be created from a black box. It isn't feasible to look at the algorithm or formula underlying the prediction and identify what causes what. To figure out if AI is discriminating, you have to look at the output. Do men get different results than women? Do Hispanics get different results than others? What about the elderly or the disabled? Do these different results limit their opportunities?
Similarly, judgment is not the only complement that will increase in value. Data is a key input to the prediction machines, and will also increase in value. The book does acknowledge this, but is relatively silent on the idea of data sovereignty. There is an underlying assumption that businesses are the owners of data, and not the consumers or users of products who unwittingly give up valuable data on themselves or their choices. Given the recent furore over the actions of Facebook, some wider consideration of who owns data and how they should be compensated for their sharing of the data (or at least, how businesses should mitigate the risks associated with their reliance on user data) would have been timely.

The book was heavily focused on business, but Chapter 19 did pose some interesting questions with application to AI's role in wider society. These questions do need further consideration, but it was entirely appropriate that this book highlighted them while leaving the substantive answers to some other authors to address. These questions included, "Is this the end of jobs?", "Will inequality get worse?", "Will a few huge companies control everything?", and "Will some countries have an advantage?".

Notwithstanding my two gripes above, the book has an excellent section on risk. I particularly liked this bit on systemic risk (which could be read in conjunction with the book The Butterfly Defect, which I reviewed earlier this year):
If one prediction machine system proves itself particularly useful, then you might apply that system everywhere in your organization or even the world. All cars might adopt whatever prediction machine appears safest. That reduces individual-level risk and increases safety; however, it also expands the chance of a massive failure, whether purposeful or not. If all cars have the same prediction algorithm, an attacker might be able to exploit that algorithm, manipulate the data or model in some way, and have all cars fail at the same time. Just as in agriculture, homogeneity improves results at the individual level at the expense of multiplying the likelihood of system-wide failure.
Overall, this was an excellent book, and surprisingly free of the technical jargon that infests many books on machine learning or AI. That allows the authors to focus on the business and economics of AI, and the result is a very readable introduction to the topic. Recommended!

Saturday, 29 December 2018

The leaning tower that is PISA rankings

Many governments are fixated on measurement and rankings. However, as William Bruce Cameron wrote (in a line that has wrongly been attributed to Albert Einstein), "Not everything that counts can be counted, and not everything that can be counted counts". And even things that can be measured and are important might not be measured in a way that is meaningful.

Let's take as an example the PISA rankings. Every three years, the OECD tests 15-year-old students around the world in reading, maths, science, and in some countries, financial literacy. They then use the results from those tests to create rankings in each subject. Here are New Zealand's 2015 results. In all three subjects (reading, maths, and science), New Zealand ranks better than the OECD average, but shows a decline since 2006. To be more specific, in 2015 New Zealand ranked 10th in reading (down from 5th in 2006), 21st in maths (down from 10th in 2006), and 12th in science (down from 7th in 2006).

How seriously should we take these rankings? It really depends on how seriously the students take the PISA tests. They are low-stakes tests, which means that the students don't gain anything from doing them. And that means that there might be little reason to believe that the results reflect actual student learning. Of course, students in New Zealand are not the only students who might not take these tests seriously. New Zealand's ranking would only be adversely affected if students here are more likely not to take the test seriously, or if the students here who don't take the test seriously are better students on average than the non-serious students in other countries.

So, are New Zealand students less serious about PISA than students in other countries? In a recent NBER Working Paper (ungated version here), Pelin Akyol (Bilkent University, Turkey), Kala Krishna and Jinwen Wang (both Penn State) crunch the numbers for us. They identify non-serious students as those who left several questions unanswered (by skipping them or not finishing the test) despite time remaining, or who spent too little time on several questions (relative to their peers). They found that:
[t]he math score of the student is negatively correlated with the probability of skipping and the probability of spending too little time. Female students... are less likely to skip or to spend too little time. Ambitious students are less likely to skip and more likely to spend too little time... students from richer countries are more likely to skip and spent too little time, though the shape is that of an inverted U with a turning point at about $43,000 for per capita GDP.
They then adjust for these non-serious students, by imputing the number of correct answers they would have gotten had they taken the test seriously. You can see the more complete results in the paper. Focusing on New Zealand, our ranking would increase from 17th to 13th if all students in all countries took the test seriously, which suggests to me that the low-stakes PISA test is under-estimating New Zealand students' results. We need to be more careful about how we interpret these international education rankings based on low-stakes tests.

[HT: Eric Crampton at Offsetting Behaviour, back in August; also Marginal Revolution]

Friday, 28 December 2018

The beauty premium in the LPGA

Daniel Hamermesh's 2011 book Beauty Pays: Why Attractive People are More Successful (which I reviewed here) makes the case that more attractive people earn more (or alternatively, that less attractive people earn less). However, the mechanism that drives this beauty premium (or even its existence) is still open to debate. It could arise because of discrimination - perhaps employers prefer working with more attractive people, or perhaps customers prefer to deal with more attractive workers. Alternatively, perhaps more attractive workers are more productive - for example, maybe they are able to sell more products.

However, working out whether either of these two effects, or some combination of both, is driving the beauty premium is very tricky. A 2014 article by Seung Chan Ahn and Young Hoon Lee (both Sogang University in Korea), published in the journal Contemporary Economic Policy (sorry, I don't see an ungated version), provides some evidence on the second effect. Ahn and Lee use data from the Ladies Professional Golf Association (LPGA), specifically data from 132 players who played in at least one of the four majors between 1992 and 2010. They argue that:
Physically attractive athletes are rewarded more than unattractive athletes for one unit of effort. Being rewarded more, physically attractive athletes devote more effort to improving their productivity. Consequently they become more productive than less attractive athletes with comparable natural athletic talents.
In other words, more attractive golfers have an incentive to work harder on improving, because they can leverage their success into higher earnings from endorsements, etc. However, Ahn and Lee focus their analysis on tournament earnings, which reflect the golfers' productivity. They find that:
...average performances of attractive players are better than those of average looking players with the same levels of experience and natural talent. As a consequence, attractive players earn higher prize money.
However, in order to get to those results they have to torture the data quite severely, applying spline functions to allow them to estimate the effects for those above the median level of attractiveness. The main effect of beauty in their vanilla analysis is statistically insignificant. When you have to resort to fairly extravagant methods to extract a particular result, and you don't provide some sense of robustness by showing that your results don't arise solely from your choice of method, the results will always be a little questionable.

So, the take-away from this paper is that more attractive golfers might work harder and be more productive. Just like porn actresses.

Wednesday, 26 December 2018

Late-night tweeting is associated with worse performance

In what is one of the least surprising research results of 2018, a new article in the journal Sleep Health (sorry I don't see an ungated version) by Jason Jones (Stony Brook University) and co-authors looks at the relationship between late-night activity on Twitter and next-day game performance of NBA players. Specifically, they had data from 112 players from 2009 to 2016, and limited their analysis to East Coast teams playing on the East Coast, and West Coast teams playing on the West Coast (to avoid any jetlag effects). They found that:
[f]ollowing late-night tweeting, players contributed fewer points and rebounds. We estimate that players score 1.14 fewer points (95% confidence interval [CI]: 0.56-1.73) following a late-night tweet. Similarly, we estimate that players secure 0.49 fewer rebound [sic] (CI: 0.25-0.74). They also commit fewer turnovers and fouls in games following a late-night tweet. We estimate the differences to be 0.15 fewer turnover (CI: 0.06-0.025) and 0.22 fewer foul (CI: 0.12-0.33). These results are consistent with a hypothesis that players are less active in a game following a late-night tweet but not that the general quality of their play necessarily deteriorates.
We noted that, on average, players spent 2 fewer minutes on the court following late-night tweeting (no late-night tweeting: 24.8 minutes, late-night tweeting: 22.8 minutes).
Presumably, coaches realise that the sleep-deprived players are not playing at their usual standard, so the players spend more time on the bench rather than on the court. That explains some of the other effects (fewer points, rebounds, turnovers, and fouls), but Jones et al. also found that shooting percentage was lower after late-night tweeting (and shooting percentage shouldn't be affected by the number of minutes on court). Interestingly:
...infrequent late-night tweeters who late-night tweeted before a game scored significantly fewer points; made a lower percentage of shots; and also contributed fewer rebounds, turnovers, and fouls as compared to nights when they did not late-night tweet. By contrast, these effects were not seen among frequent late-night tweeters...
So, those players who were regular late-night tweeters were less affected than those whose late-night tweeting was uncommon. Of course, the results don't tell us for sure whether those who regularly sleep less are less affected by lost sleep, or why. Late-night tweeting is an imperfect proxy for lack of sleep (at least in part because those not engaging in late-night tweeting need not necessarily be sleeping). However, the results are suggestive that late-night tweeting is associated with worse performance. Which should give us pause when we think about people in positions of power who have made a regular habit of tweeting at odd times.

[HT: Marginal Revolution]

Monday, 24 December 2018

Hazardous drinking across the lifecourse

I have a Summer Research Scholarship student working with me over the summer, looking at changes in drinking patterns by age, over time. The challenge in any work like that is disentangling age effects (the drinking patterns specific to people of a given age), cohort effects (the drinking patterns specific to people of the same birth cohort), and period effects (the drinking patterns specific to a given year). As part of that work though, I came across a really interesting report from September this year, by Andy Towers (Massey University) and others, published by the Health Promotion Agency.
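To see why that disentangling is so tricky, note that for any survey respondent, birth cohort is simply the survey year minus their age, so the three variables are perfectly collinear and a linear model cannot separately identify all three effects without extra assumptions. Here is a minimal illustrative sketch of that identification problem (my own example, not from the report):

```python
# Illustrative sketch of the age-period-cohort identification problem:
# cohort = period - age, so the three effects are perfectly collinear.
import numpy as np

age = np.array([20, 30, 40, 20, 30, 40])
period = np.array([2008, 2008, 2008, 2018, 2018, 2018])  # survey years
cohort = period - age  # exact linear identity

# Design matrix with an intercept and all three variables
X = np.column_stack([np.ones_like(age), age, period, cohort])
print(np.linalg.matrix_rank(X))  # prints 3, not 4: one effect is redundant
```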

In the report, Towers et al. use life history data on 801 people aged 61-81 years, from the Health, Work and Retirement Study, and look at how their self-reported pattern of alcohol consumption (hazardous or non-hazardous drinking [*]) changed over their lifetimes. They found that:
In terms of the nature of hazardous drinking levels across the lifespan of this cohort of older New Zealanders:
  • drinking patterns were largely stable across lifespan, with long periods of hazardous or non-hazardous drinking being the norm
  • one-third of participants (36%) became hazardous drinkers as adolescents or young adults, and remained hazardous drinkers throughout the lifespan
  • only a small proportion (14%) were life-long (i.e., from adolescence onwards) non-hazardous drinkers
  • transition into or out of hazardous drinking was not common (less than 10% in each decade); when it occurred, it was usually a singular event in the lifespan (i.e., no further transitions occurred).
Transitions into hazardous drinking were linked to spells of unemployment prior to mid-life (i.e. before they turned 40), and relationship breakdown in mid-life. Transitions out of hazardous drinking were linked to development of a chronic health condition in young adulthood or mid-life. However, these transitions (in either direction) were pretty uncommon - most hazardous drinkers in one decade remained hazardous drinkers in the following decade, and most non-hazardous drinkers remained non-hazardous drinkers.

The implication of these results is that it might be easier to reduce hazardous drinking among younger people, because it appears to be quite persistent once people start hazardous drinking. Also, interventions to reduce hazardous drinking could be usefully targeted at those facing spells of unemployment or relationship breakups. [**]

Of course, these results tell us a lot about the drinking of people currently aged 61-81 years, but they don't tell us a whole lot about people in younger cohorts, whose drinking across the life course may well be on a different trajectory (see my point above about age effects vs. cohort effects). Also, there is a survivorship bias whenever you interview older people about their life history. Perhaps the heaviest drinkers are not in the sample because they have already died of cirrhosis or liver cancer, etc. So these results might be understating hazardous drinking at younger ages within these cohorts, if non-hazardous drinkers have greater longevity. There is also the problem of recall bias associated with asking older people about their drinking habits from up to six decades earlier - it wouldn't surprise me if the stability and persistence of hazardous drinking were at least partly artefacts of the retrospective life history data collection method. The measure of childhood educational performance they used in some analyses [***] seemed to be a bit dodgy (but it doesn't affect the results I highlighted above).

Still, in the absence of better prospective data from a cohort study, these results are a good place to start. And they raise some interesting questions, such as whether cohorts heavily affected by unemployment during the Global Financial Crisis have been shifted into persistent hazardous drinking, and whether recent cohorts of young people will persist in the higher rates of non-hazardous drinking that have been observed (e.g. see page 25 of the June 2016 edition of AlcoholNZ). More on those points in a future post, perhaps.

*****

[*] Hazardous or non-hazardous drinking was measured using a slightly modified version of a three-item measure called the AUDIT-C, which you can find here. A score of 3 or more for women, or 4 or more for men, is defined as hazardous drinking.
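As a purely illustrative sketch of that threshold rule (the three AUDIT-C items are each scored from 0 to 4, for a maximum total of 12; the item wording and the report's modifications are not reproduced here):

```python
# Illustrative sketch of the AUDIT-C hazardous drinking classification
# described above: three items each scored 0-4, with totals of 3+ (women)
# or 4+ (men) classified as hazardous drinking.
def is_hazardous(item_scores, sex):
    """item_scores: three integers from 0 to 4; sex: 'female' or 'male'."""
    total = sum(item_scores)
    threshold = 3 if sex == 'female' else 4
    return total >= threshold

print(is_hazardous([1, 1, 1], 'female'))  # True: a total of 3 meets the threshold
print(is_hazardous([1, 1, 1], 'male'))    # False: a total of 3 is below 4
```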

[**] Of course, it's all very well to say that this targeting would be useful, but how to actually achieve it is another story.

[***] They asked research participants to rate their performance in English at age 10, compared with other children. I say this is a bit dodgy, because 87 percent rated themselves the same as or better than their peers, and only 13 percent rated themselves worse. I call this out as a Lake Wobegon effect.

Tuesday, 18 December 2018

Stalin and the value of a statistical death

The value of a statistical life (or VSL) is a fairly important concept in cost-benefit evaluations, where some of the benefits are reductions in the risk of death (and/or where the costs include increases in the risk of death). The VSL can be estimated by taking the willingness-to-pay for a small reduction in risk of death, and extrapolating that to estimate the willingness-to-pay for a 100% reduction in the risk of death, which can be interpreted as the implicit value of a life. The willingness-to-pay estimate can be derived from stated preferences (simply asking people what they are willing to pay for a small reduction in risk) or revealed preferences (by looking at data on actual choices people have made, where they trade off higher cost, or lower wages, for lower risk).
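To make that extrapolation concrete, here is a minimal worked sketch with hypothetical numbers (not taken from the paper discussed below):

```python
# Illustrative VSL calculation with hypothetical numbers: if people will pay
# $50 each for a 1-in-100,000 reduction in their risk of death, then 100,000
# such people collectively pay $5 million to avoid one statistical death.
wtp_per_person = 50           # hypothetical willingness-to-pay for the risk reduction
risk_reduction = 1 / 100_000  # hypothetical reduction in the probability of death

vsl = wtp_per_person / risk_reduction
print(f"Implied value of a statistical life: ${vsl:,.0f}")  # $5,000,000
```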

A recent working paper by Paul Castaneda Dower (University of Wisconsin-Madison), Andrei Markevich and Shlomo Weber (both New Economic School, in Russia) takes quite a different approach. Castaneda Dower et al. use data from the Great Terror to estimate Stalin's implicit value of a statistical life for Soviet citizens (or, more specifically, Soviet citizens of German or Polish ethnicity). Before we look at how this worked, a little background on the Great Terror:
Coercion and state violence were important policy elements during the whole period of Stalin’s rule. In an average year between 1921 and 1953, there were about two hundred thousand arrests and twenty-five thousand executions... The years of 1937-38, known as the Great Terror, represent a clear spike in Stalin’s repressions. Approximately, one and a half million Soviet citizens were arrested because of political reasons, including about seven hundred thousand who were executed and about eight hundred thousand sent into Soviet labor camps...
In the late 1930s, the Soviet government considered Poland, Germany and Japan as the most likely enemies in the next war. After the first “national operation” against the Poles (launched by the NKVD decree No. 00485 on 11 August 1937), Stalin gradually expanded “national operations” against almost all ethnic minorities with neighboring mother-states.
Castaneda Dower et al. focus on Poles and Germans because they are the only ethnic groups for which data are available. The key aspect of this paper is the trade-off that Stalin faced:
We assume that Stalin’s objective in implementing the Great Terror was to enhance the chances of the survival of his regime in each region of the country, subject to the direct economic loss of human life. Therefore, we derive Stalin’s decisions from the presumed balance between the loss of economic output and enhancement of regime survival.
So, Stalin trades off the lost economic output of Soviet citizens against the risk of regime change. More correctly then, this is Stalin's value of a statistical death (rather than life). The authors use this trade-off, along with comparisons between border regions (where the risk of regime change is higher because ethnic groups may have closer links with those outside the country) and interior regions, and find that:
...Stalin would have been willing to accept a little more than $43,000 US 1990 for the reduction in citizens’ fatality risk equivalent to one statistical life. This magnitude is stable across a wide variety of robustness checks. While this value is sizeable, it is far below what it could have been had Stalin cared more for the survival of his regime or would stop at nothing to ensure its survival. At the same time, the value is far from what it would be if he had not been willing to put his citizens lives at risk to improve the likelihood of his regime’s survival.
The authors show that VSLs in democracies at a similar level of development are substantially higher. This no doubt reflects Stalin's lack of concern for his people, and the people's lack of power to hold their leader to account.

[HT: Marginal Revolution, back in April]

Saturday, 15 December 2018

Pornography actress productivity

Certain research article titles can make you wonder, "how...?". For example, take this 2017 article by Shinn-Shyr Wang (National Chengchi University, in Taiwan) and Li-Chen Chou (Wenzhou Business College, in China), published in the journal Applied Economics Letters (sorry, I don't see an ungated version), and entitled "The determinants of pornography actress production". It certainly made me wonder, how did they collect the data for that?

It turns out that it wasn't as dodgy as you might first imagine. Wang and Chou used data from 2002 to 2013:
...released by the Japanese ‘Digital Media Mart’, which on a regular basis collects data regarding videos and personal information of the Japanese AV actresses.
Essentially, the study tests whether actresses' physical appearance (proxied by cup size and whether they have a side job [*] as a model or entertainer outside the pornography industry), and engagement in risky sex, affect the number of movies the actresses appear in. They find that:
...the later an actress commences her career, the fewer video shots she produces. Cup sizes and experiences as models or entertainers have positive effects on the number of video shots... having acted in risky sex videos could increase the production of an AV actress by more than 60%, which implies that, if the actress is willing to perform risky sex, her production may be significantly increased.
I'm unconvinced by their proxies for physical appearance, and a more interesting study would have addressed this by rating the actresses' appearance directly (and no, that wouldn't necessitate the researchers watching loads of porn). The finding in relation to risky sex might be the result of an upward-sloping supply curve (there may be greater demand for riskier sex, so riskier sex attracts higher pay, so the actress works more), but of course there would also be a compensating differential to overcome (since riskier sex is probably a negative characteristic of the work, the actress would want to be paid more to compensate her for engaging in it). It would be more interesting to know the results in relation to earnings, rather than production, but I suspect that data is particularly difficult to obtain.

*****

[*] It seems to me that the pornography might well be the side job, and the modelling or entertainment the main job.

Friday, 14 December 2018

The compensating differential for being an academic

It seems obvious that wages differ between different occupations. There are many explanations for why, such as differences in productivity between workers in different occupations. However, there are also less obvious explanations. Or at least, explanations that are less obvious until you start to recognise them (after which, you start to see them everywhere). One such explanation is what economists refer to as compensating differentials.

Consider the same job in two different locations. If the job in the first location has attractive non-monetary characteristics (e.g. it is in an area that has high amenity value, where people like to live), then more people will be willing to do that job. This leads to a higher supply of labour for that job, which leads to lower equilibrium wages. In contrast, if the job in the second area has negative non-monetary characteristics (e.g. it is in an area with lower amenity value, where fewer people like to live), then fewer people will be willing to do that job. This leads to a lower supply of labour for that job, which leads to higher equilibrium wages. The difference in wages between the attractive job that lots of people want to do and the unattractive job that fewer people want to do is the compensating differential.

So to summarise, jobs that have highly positive characteristics will have lower wages than otherwise identical jobs with less positive characteristics. Now let's consider a specific example. Academics in many fields could probably earn substantially more in the private sector than they earn as academics. In some fields (like finance, economics, accounting, computer science, or engineering), the differential is much higher than in others (like education or anthropology). Why don't academics simply shift en masse into the private sector? Is that partially a result of compensating differentials?

An NBER Working Paper from earlier this year (ungated version here) by Daniel Hamermesh (Barnard College, and author of the excellent book Beauty Pays: Why Attractive People are More Successful, which I reviewed here) goes some way towards addressing the latter question. Using data from the U.S. Current Population Survey for 2012-2016, Hamermesh finds that:
...the adjusted pay difference between professors and other advanced-degree-holders shows a disadvantage of about 13 percent.
In other words, holding other factors (like demographic and economic characteristics) constant, academics earn 13 percent less than holders of a PhD (or EdD) degree in other jobs. At the median (the middle of the wage distribution), the difference is 16 percent.

Many people argue that academics like their jobs because they have more flexible use of their time (or, as I have heard it fallaciously claimed, that we have a sweet job because we only have to work when students are on campus). Using data from the American Time Use Survey 2003-2015, Hamermesh finds that:
...professors do much more of their work on weekends than do other advanced-degree-holders, and they do less during weekdays. They put in nearly 50 percent more worktime on weekends than other highly-educated workers (and 50 percent more than less-educated workers too). Professors spread their work effort more evenly over the week than other advanced-degree-holders...
...the spreading of professors’ work time across the week can account for nearly five percentage points of the wage differential, i.e., almost one-third of the earnings difference at the median...
So, more flexibility in work schedules accounts for part of the difference, but not all. Hamermesh also supports this with the results from a survey of 289 academics who specialise in the study of labour markets (I guess this was Hamermesh surveying people he knew would respond to a survey from him!). The survey shows that:
[f]reedom and novelty of research, and the satisfaction of working with young minds, are by far the most important attractions into academe. Only 41 percent of respondents listed time flexibility as a top-three attraction, slightly fewer than listed enjoying intellectual and social interactions with colleagues.
Clearly there are a number of characteristics that make academia an attractive job proposition for well-educated people who could get a higher-paying job in the private sector. Academics are willing to give up some income (and in some cases substantial income) for those characteristics - what they are giving up is largely a compensating differential.

[HT: Marginal Revolution, back in January]

Wednesday, 12 December 2018

Book review: How Not to Be Wrong

As signalled in a post a few days ago, I've been reading How Not to Be Wrong - The Power of Mathematical Thinking by Jordan Ellenberg, which I just finished. Writing an accessible book about mathematics for a general audience is a pretty challenging ask. Mostly, Ellenberg is up to the task. He takes a very broad view of what constitutes mathematics and mathematical thinking, but then again I can't complain, as I take a pretty broad view of what constitutes economics and economic thinking. The similarities don't end there. Ellenberg explains on the second page the importance of understanding mathematics:
You may not be aiming for a mathematically oriented career. That's fine - most people aren't. But you can still do math. You probably already are doing math, even if you don't call it that. Math is woven into the way we reason. And math makes you better at things. Knowing mathematics is like wearing a pair of X-ray specs that reveal hidden structures underneath the messy and chaotic surface of the world... With the tools of mathematics in hand, you can understand the world in a deeper, sounder, and more meaningful way.
I think I could replace every instance of 'math' or 'mathematics' in that paragraph with 'economics' and it would be equally applicable. The book has lots of interesting historical (and recent) anecdotes, as well as applications of mathematics to topics as broad as astronomy and social science (as noted in my post earlier in the week). I do feel that the book is mostly valuable for readers who have some reasonable background in mathematics. There are some excellent explanations, and I especially appreciated what is easily the clearest explanation of orthogonality I have ever read (on page 339 - probably a little too long to repeat here). Just after that is an explanation of the non-transitivity of correlation that provides an intuitive explanation for how instrumental variables regression works (although Ellenberg doesn't frame it that way at all; that was just what I took away from it).

There are also some genuinely funny parts of the book, such as this:
The Pythagoreans, you have to remember, were extremely weird. Their philosophy was a chunky stew of things we'd now call mathematics, things we'd now call religion, and things we'd now call mental illness.
However, there are some parts of the book that don't work quite as well. For instance, there is a whole section on geometry in the middle of the book that I found pretty heavy going. Despite that, if you remember a little bit of mathematics from school, there is a lot of value in this book. It doesn't quite live up to the promise in the title, of teaching the reader how not to be wrong, but you probably wouldn't be wrong to read it.

Monday, 10 December 2018

When the biggest changes in alcohol legislation aren't implemented, you can't expect much to change

Back in October, the New Zealand Herald pointed to a new study published in the New Zealand Medical Journal:
New laws introduced to curb alcohol harm have failed to make a dent on ED admissions, new research has found.
The study, released today, showed that around one in 14 ED attendances presented immediately after alcohol consumption or as a short-term effect of drinking and that rate had remained the same over a four-year period.
Here's the relevant study (sorry I don't see an ungated version, but there is a presentation on the key results available here), by Kate Ford (University of Otago) and co-authors. They looked at emergency department admissions at Christchurch Hospital over three-week periods in November/December 2013 and in November/December 2017, and found that:
...[t]he proportion of ED attendances that occurred immediately after alcohol consumption or as a direct short-term result of alcohol did not change significantly from 2013 to 2017, and was about 1 in 14 ED attendances overall.
The key reason for doing this research was that the bulk of the changes resulting from the Sale and Supply of Alcohol Act 2012 had been implemented between the two data collection periods. The authors note that:
...[a] key part of the Act was a provision to allow territorial authorities to develop their own Local Alcohol Policies (LAPs). The Act was implemented in stages from December 2012 onwards and subsequently, many local authorities attempted to introduce LAPs in their jurisdictions.
However, here's the kicker:
In many cases these efforts met legal obstacles, particularly from the owners of supermarket chains and liquor stores... For example, a provisional LAP in Christchurch was developed in 2013 but by late 2017 it still had not been introduced. This provisional LAP was finally put on hold in 2018... Similar problems have been encountered in other regions...
If you are trying to test whether local alcohol policies have had any effect on alcohol-related harm, it's pretty difficult to do so if you're looking at a place where a local alcohol policy hasn't been implemented. Quite aside from the fact that there is no control group in this evaluation, and that the impact of the earthquakes makes Christchurch a special case over the time period in question, it would have been better to look at ED admissions in an area where a local alcohol policy has actually been implemented (although too few local authorities have been successful in this). To be fair though, the authors are well aware of these issues, and make note of the latter two in the paper.

However, coming back to the point at hand, whether legislation is implemented as intended is a big issue in evaluating the impact of the legislation. A 2010 article by David Humphreys and Manuel Eisner (both Cambridge), published in the journal Criminology and Public Policy (sorry I don't see an ungated version), makes the case that:
Policy interventions, such as increased sanctions for drunk-driving offenses... often are fixed in the nature in which they are applied and in the coverage of their application... To use epidemiological terminology, these interventions are equivalent to the entire population receiving the same treatment in the same dose...
However, in other areas of prevention research... the onset of a policy does not necessarily equate to the effective implementation of evidence-based prevention initiatives...
This variation underlines a critical issue in the evaluation of effects of the [U.K.'s Licensing Act 2003]...
In other words, if you want to know the effect of legislative change on some outcome (e.g. the effect of alcohol licensing law changes on emergency department visits), you need to take account of whether the legislation was fully implemented in all places.

Sunday, 9 December 2018

Slime molds and the independence of irrelevant alternatives

Many people believe that rational decision-making is the sole preserve of human beings. Others recognise that isn't the case, as many studies of animals as varied as dolphins (e.g. see here), monkeys (e.g. see here) or crows show. How far does that extend, though?

I've been reading How Not to Be Wrong - The Power of Mathematical Thinking by Jordan Ellenberg (book review to come in a few days). Ellenberg pointed me to this 2011 article (open access) by Tanya Latty and Madeleine Beekman (both University of Sydney), published in the Proceedings of the Royal Society B: Biological Sciences. You're probably thinking that's a weird source for me to be referring to on an economics blog, but Ellenberg explains:
You wouldn't think there'd be much to say about the psychology of the plasmodial slime mold, which has no brain or anything that could be called a nervous system, let alone feelings or thoughts. But a slime mold, like every living creature, makes decisions. And the interesting thing about the slime mold is that it makes pretty good decisions. In the slime mold world, these decisions more or less come down to "move toward things I like" (oats) and "move away from things I don't like" (bright light)...
A tough choice for a slime mold looks something like this: On one side of the petri dish is three grams of oats. On the other side is five grams of oats, but with an ultraviolet light trained on it. You put a slime mold in the center of the dish. What does it do?
Under those conditions, they found, the slime mold chooses each option about half the time; the extra food just about balances out the unpleasantness of the UV light.
All good so far. But this isn't a post about the rationality of slime mold decision-making. It's actually about the theory of public choice. And specifically, about the independence of irrelevant alternatives. Say that you give a person the choice between chocolate and vanilla ice cream, and they choose chocolate. Before you hand over the ice cream though, you realise you also have some strawberry, so you offer that as a third option. The person thinks for a moment, and says they would like vanilla instead. They have violated the independence of irrelevant alternatives. Whether strawberry is available or not should not affect the person's preference between chocolate and vanilla - strawberry is irrelevant to that choice. And yet, in the example above, it made a difference.

Ok, back to slime molds. Ellenberg writes:
But then something strange happened. The experimenters tried putting the slime mold in a petri dish with three options: the three grams of oats in the dark (3-dark), the five grams of oats in the light (5-light), and a single gram of oats in the dark (1-dark). You might predict that the slime mold would almost never go for 1-dark; the 3-dark pile has more oats in it and is just as dark, so it's clearly superior. And indeed, the slime mold just about never picks 1-dark.
You might also guess that, since the slime mold found 3-dark and 5-light equally attractive before, it would continue to do so in the new context. In the economist's terms, the presence of the new option shouldn't change the fact that 3-dark and 5-light have equal utility. But no: when 1-dark is available, the slime mold actually changes its preferences, choosing 3-dark more than three times as often as it does 5-light!
What's going on here? The slime mold is essentially making collective decisions (which is why I said this was a post about the theory of public choice). And with collective decisions, violations of the independence of irrelevant alternatives can come into play. As Ellenberg notes, in the 2000 U.S. presidential election, the availability of Ralph Nader as a candidate has been credited with George W. Bush's victory over Al Gore. Nader took just enough votes from Gore supporters (who would probably have voted for Gore if Nader had not been on the ballot) to ensure that Bush won the critical state of Florida, and ultimately, the election. Something similar is going on with the slime molds, as Ellenberg explains:
...the slime mold likes the small, unlit pile of oats about as much as it likes the big, brightly lit one. But if you introduce a really small unlit pile of oats, the small dark pile looks better by comparison; so much so that the slime mold decides to choose it over the big bright pile almost all the time.
This phenomenon is called the "asymmetric domination effect," and slime molds are not the only creatures subject to it. Biologists have found jays, honeybees, and hummingbirds acting in the same seemingly irrational way.
Except, it's not irrational. In the case of the slime molds at least, it's a straightforward consequence of collective decision-making.

Friday, 7 December 2018

Arnold Kling on public choice theory and lobbying

In my ECONS102 class, we discuss the lobbying activities of firms with market power. The motivation for that discussion is that firms with market power make a large profit (how large the profit is depends in part on how much market power they have), so they have an incentive to use some of the profits (their economic rent) to maintain their market power. They can do this by lobbying government to avoid excess regulation. However, that simple exposition doesn't explain the full range of lobbying activities that firms engage in, and it doesn't explain why consumers don't engage in lobbying (e.g. for lower prices) to the same extent that producers do.

On Medium last week, Arnold Kling wrote an interesting article on why costs increase faster in some industries than in others. On the above point, it was this bit that caught my attention:
In reality, you do not produce everything in the economy. You are much more specialized in production than in consumption. This makes you much more motivated to affect public policy in the sector where you produce than in the sector where you consume.
In theory, government policy is supposed to promote the general welfare. But as a producer, your goal for government policy is to increase demand and restrict supply in your industry. If you are in the field of education, you want to see more government spending devoted to education, tuition grants and subsidies for student loans, in order to increase demand. You want to make it difficult to launch new schools and colleges, in order to restrict supply. If you run a hospital, you want the government to subsidize demand by providing and/or mandating health insurance coverage. But you want to restrict supply by, for example, requiring prospective hospitals to obtain a “certificate of need.” If you are a yoga therapist, you want the government to mandate coverage for yoga therapy, but only if it is provided by someone with the proper license.
Think about an average consumer (and worker), considering how much effort to put into lobbying the government for a policy change. They might be quite motivated to lobby government over their employment or wages (e.g. for wage subsidies, or the introduction of occupational licensing), because that is where they are a producer and can capture a lot of the gains from a policy change. In that case, the benefit of the policy change to them is large, and may offset the cost of the effort of lobbying. However, they will be much less motivated to lobby government over consumption goods. In that case, the benefit is much lower than for lobbying over employment or wages, while the cost of lobbying is likely to be about the same.

This explanation also relates to the idea of rational ignorance. Consumers individually face only a small cost from a policy (like a subsidy for sugar farmers) that provides a large benefit to producers. The producers have a strong incentive to lobby against losing the policy (or in favour of gaining it), but each consumer has only a small incentive to lobby in favour of eliminating the policy (or against it being implemented in the first place).

There's a lot more of interest in Kling's article. Market power and lobbying is just one of many reasons why costs increase faster in some industries or sectors than others.

Tuesday, 4 December 2018

Summer heat and student learning

Many of us can appreciate that it is difficult to concentrate on hot days, even without the distraction of knowing that we could be hanging out at the beach. That raises some legitimate questions. Does students' learning suffer when it is hotter? Should students avoid summer school if they are trying to maximise their grades? These questions aren't of purely academic interest. Climate change means that we will likely have increasingly hot summers over time, which makes these questions increasingly relevant for the future.

A 2017 article by Hyunkuk Cho (Yeungnam University School of Economics and Finance in South Korea), published in the Journal of Environmental Economics and Management (sorry I don't see an ungated version), provides some indication. Cho used data from 1.3 million Korean students, from 1729 high schools in 164 cities, and looked at the relationship between the number of hot summer days and the students' scores in the Korean college entrance exam, which is held in November (at the end of the northern autumn). He found that:
...an additional day with a maximum daily temperature equal to or greater than 34°C during the summer, relative to a day with a maximum daily temperature in the 28–30°C range, reduced the math and English test scores by 0.0042 and 0.0064 standard deviations. No significant effects were found on the reading test scores. When an additional day with a maximum daily temperature equal to or greater than 34°C reduces the math and English test scores by 0.0042 and 0.0064 standard deviations, ten such days reduce the test scores by 0.042 and 0.064 standard deviations, respectively. The effect size is equivalent to increasing class size by 2–3 students during grades 4–6...
If you're in a temperate climate like New Zealand, you might read that and think, 'we don't have that many days greater than 34°C, so probably there is no effect here'. But, Cho also found that:
...hot summers had greater effects on the test scores of students who lived in relatively cool cities. If cities with an average maximum daily temperature below 28.5°C have one more day with a maximum daily temperature equal to or greater than 34°C, relative to a day with a maximum daily temperature in the 28–30°C range, the reading, math, and English test scores decreased by 0.0073, 0.0124, and 0.0105 standard deviations, respectively...
Interestingly, the analysis was based on the summer temperatures, while the test was taken in the autumn. Cho found no effect of the temperature on the day of the test itself. The results aren't causal, so some more confirmatory work is still needed.

However, if we were to extrapolate from the main results a little, students exposed to temperatures higher than the norm for a particular area may well be made worse off in terms of their learning. If you're wanting to maximise your grades, the beach should be looking like an even more attractive option now.

Monday, 3 December 2018

Public health vs. competition in Australian alcohol regulation

In New Zealand, the Sale and Supply of Alcohol Act has a harm minimisation objective. Specifically, Section 4(1) of the Act specifies that:
The object of this Act is that - 
(a) the sale, supply, and consumption of alcohol should be undertaken safely and responsibly; and
(b) the harm caused by the excessive or inappropriate consumption of alcohol should be minimised.
Section 4(1)(b) clearly puts a public health objective within the legislation. In contrast, the corresponding Australian legislation has no such public health objective. So, I was interested to read this 2017 article (open access) by Janani Muhunthan (University of Sydney) and co-authors, published in the Australian and New Zealand Journal of Public Health. In the article, Muhunthan et al. look at Australian court cases between 2010 and 2015 involving appeals of liquor licensing or planning decisions related to alcohol outlets. In total, they looked at 44 such cases, and found that:
Most decisions (n=34, 77%) resulted in an outcome favourable to the industry actor in the case (n=24 development applications and n=10 liquor licensing decisions). The majority of decisions involving liquor outlets were brought by liquor establishments owned by Australia’s two major grocery chains (n=11/19) and had a success rate of greater than 70% (n=8/11) in disputes. Governments and their agencies were successful in having appeals dismissed in less than a quarter (n=10) of the cases studied.
In the discussion, they note that:
Competition principles underpinned by legislation were highly influential in decisions and it is the presence of such legislation that enabled pro-competition decisions to be the default outcome. A consequence of the lack of explicit legislative support for preventive health arguments is that public health impact is relegated in practice below other considerations including market freedoms, amenity and the compatibility of industry’s proposal with existing planning controls.
The goal of increasing competition in Australia essentially overrides the public health considerations. One of the key problems they identified is that public health evidence was general, rather than specifically related to a given alcohol outlet, whereas industry data supporting an appeal was much more specific. They conclude that:
[t]he ability of government and community groups to better execute a public health case in the area of alcohol regulation can thus be addressed from the top down through the inclusion of explicit public health objectives in existing planning and liquor licensing controls.
As noted at the start of this post, New Zealand legislation already includes the public health objective. However, in spite of the explicit public health objective of the legislation, the problem appears to me to be essentially the same - it is difficult for the public health case to be made when the data do not speak specifically to a given application for a liquor licence. Now that the Sale and Supply of Alcohol Act has been in operation for a few years, it would be really interesting to do some comparative work on a similar basis to the Muhunthan et al. article, but based on New Zealand cases.

Saturday, 1 December 2018

Meta loss aversion

Back in August, this article in The Conversation pointed me to this article by David Gal (University of Illinois at Chicago) and Derek Rucker (Northwestern), published in the Journal of Consumer Psychology (sorry I don't see an ungated version anywhere). Loss aversion is the idea that people value losses much more than equivalent gains (in other words, we like to avoid losses much more than we like to capture equivalent gains). It is a central and defining idea in behavioural economics. In their article, Gal and Rucker present evidence that loss aversion is not real:
In sum, an evaluation of the literature suggests little evidence to support loss aversion as a general principle. This appears true regardless of whether one represents loss aversion in the strong or weak forms presented here. That is, a strong form suggests that, for loss aversion to be taken as a general principle, one should observe losses to generally outweigh gains and for gains to never outweigh losses. The weak form, as we have represented it, might simply be that, on balance, it is more common for losses to loom larger than gains than vice versa. This is not to say that losses never loom larger than gains. Yes, contexts exist for which losses might have more psychological impact than gains. But, so do contexts and stimuli exist where gains have more impact than losses, and where losses and gains have similar impact.
From what I can see, Gal and Rucker's argument rests on a fairly selective review of the literature, and in some cases I'm not convinced. A key aspect of loss aversion is that people make decisions in relation to a reference point, and it is the comparison with that reference point that determines whether they are facing a loss or a gain. Gal and Rucker even recognise this:
For example, one individual who obtained $5 might view the $5 as a gain, whereas another individual that expected to obtain $10, but only obtained $5, might view the $5 obtained as a loss of $5 relative to his expectation... 
It isn't clear to me that the following evidence actually takes account of the reference point:
The retention paradigm involves a comparison of participants’ [willingness-to-pay] to obtain an object (WTP-Obtain) to a condition where participants are asked their maximum willingness-to-pay to retain an object they own (WTP-Retain). The retention paradigm and its core feature—the WTP-retain condition— proposes to offer a less confounded test of the relative impact of losses versus gains...
...in the discrete variant of the retention paradigm, both the decision to retain one’s endowed option and the decision to exchange the endowed option for an alternative are framed as action alternatives. In particular, participants in one condition are informed that they own one good and are asked which of two options they would prefer, either (a) receiving $0 [i.e., the retention option], or (b) exchanging their endowed good for an alternative good. Participants in a second condition are offered the same choice except the endowed good and the alternative good are swapped.
It seems to me that in both the endowed option (where the research participant is paying to retain something they have been given) and the alternative option (where the research participant is paying to obtain that same item), the reference point is not having the item. So, it isn't a surprise when Gal and Rucker report that, in some of their earlier research:
Gal and Rucker (2017a) find across multiple experiments with a wide range of objects (e.g., mugs and notebooks; mobile phones; high-speed Internet, and train services) that WTP-Retain did not typically exceed WTP-Obtain. In fact, in most cases, little difference between WTP-Retain and WTP-Obtain was observed, and for mundane goods, WTP-Obtain exceeded WTP-Retain more often than not.
An interesting aspect of Gal and Rucker's paper is that they try to explain why, in the face of the competing evidence they have accumulated, loss aversion is so widely accepted across many disciplines (including economics, psychology, law, marketing, finance, etc.). They adopt a Kuhnian argument:
Kuhn (1962) argues that as a paradigm becomes entrenched, it increasingly resists change. When an anomaly is ultimately identified that cannot easily be ignored, scientists will try to tweak their models rather than upend the paradigm. They “will devise numerous articulations and ad hoc modifications of their theory in order to eliminate any apparent conflict” (Kuhn, 1970, p. 78).
I want to suggest an alternative (similar to an earlier post I wrote about ideology). Perhaps academics are willing to retain their belief in loss aversion because, if they gave up on it, they would keenly feel its loss? Are academics so loss averse that they are unwilling to give up on loss aversion? If that's the case, is that evidence in favour of loss aversion? It's all getting rather circular, isn't it?

Friday, 30 November 2018

Beer prices and STIs

Risky sexual behaviour is, by definition, sexual behaviour that increases a person's risk of contracting a sexually transmitted infection (STI). Risky sex is more likely to happen when the participants are affected by alcohol. So, if there is less alcohol consumption, it seems reasonable to assume that there will be less risky sex. And if alcohol is more expensive, people will drink less (economists refer to that as the Law of Demand). Putting those four sentences together, we have a causal chain that suggests that when alcohol prices are higher, the incidence of sexually transmitted infections should be lower. But how much lower? And, could we argue that alcohol taxes are a good intervention to reduce STI incidence?

A 2008 article by Anindya Sen (University of Waterloo, in Canada) and May Luong (Statistics Canada), published in the journal Contemporary Economic Policy (ungated here), provides some useful evidence. Sen and Luong used provincial data from Canada for the period from 1981 to 1999, and looked at the relationship between beer prices and gonorrhea and chlamydia rates. Interestingly, over that time beer prices had increased by 10%, while gonorrhea incidence had decreased by 93% and chlamydia incidence had decreased by 28%. They find that:
...higher beer prices are significantly correlated with lower gonorrhea and chlamydia rates and beer price elasticities within a range of -0.7 to -0.9.
In other words, a one percent increase in beer prices is associated with a 0.7 to 0.9 percent decrease in gonorrhea and chlamydia rates. So, if the excise tax on beer increased, then the incidence rate of STIs would decrease. However, it is worth noting that the effect of a tax change will be much less than that implied by the elasticities above. According to Beer Canada, about half of the cost of a beer is excise tax (although that calculation is disputed, I'll use it because it is simple). So, a 1% increase in beer tax would increase the price of beer by 0.5%, halving the effect on STIs to a decrease of 0.35 to 0.45 percent. Still substantial.
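To make that arithmetic explicit, here is a minimal back-of-the-envelope calculation (the tax share and elasticity range come from above; full pass-through of the tax increase to the retail price is my simplifying assumption):

```python
# Back-of-the-envelope: effect of a beer tax increase on STI incidence.
# Assumes the tax increase is fully passed through to the retail price.
tax_share = 0.5              # share of the beer price that is excise tax (Beer Canada figure)
tax_increase = 0.01          # a 1% increase in the excise tax
elasticities = [-0.7, -0.9]  # Sen and Luong's beer price elasticities of STI rates

price_increase = tax_share * tax_increase  # 1% tax rise -> 0.5% price rise
for e in elasticities:
    sti_change = e * price_increase
    print(f"Elasticity {e}: STI rates change by {sti_change:.2%}")
# Prints roughly -0.35% and -0.45%, matching the figures in the text.
```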

Of course, that assumes that Sen and Luong's results are causal, which is far from established (although they do include some analysis based on an instrumental variables approach, which supports their results and comes closer to a causal interpretation). However, in weighing up the optimal tax on alcohol, the impact on STI incidence is a valid consideration.

Thursday, 29 November 2018

The economic impact of the 2010 World Cup in South Africa

The empirical lack of economic impact of mega sports events is reasonably well established. Andrew Zimbalist has a whole book on the topic, titled Circus Maximus: The Economic Gamble behind Hosting the Olympics and the World Cup (which I reviewed here; see also this 2016 post). So, I was interested to read a new study on the 2010 FIFA World Cup in South Africa that purported to find significant impacts.

This new article, by Gregor Pfeifer, Fabian Wahl, and Martyna Marczak (all University of Hohenheim, in Germany), was published in the Journal of Regional Science (ungated earlier version here). World-Cup-related infrastructure spending in South Africa between 2004 (when their hosting rights were announced) and 2010 was:
...estimated to have totaled about $14 billion (roughly 3.7% of South Africa’s GDP in 2010) out of which $11.4 billion have been spent on transportation...
Unsurprisingly, the spending was concentrated in particular cities, which were to host the football matches. To measure economic impact, Pfeifer et al. use night lights as a proxy. They explain that:
...[d]ata on night lights are collected by satellites and are available for the whole globe at a high level of geographical precision. The economic literature using high‐precision satellite data, also on other outcomes than night lights, is growing... The usefulness of high‐precision night light data as an economic proxy is of particular relevance in the case of developing countries, where administrative data on GDP or other economic indicators are often of bad quality, not given for a longer time span, and/or not provided at a desired subnational level.
They find that:
Based on the average World Cup venue on municipality level, we find a significant and positive short‐run impact between 2004 and 2009, that is equivalent to a 1.3 percentage points decrease in the unemployment rate or an increase of around $335 GDP per capita. Taking the costs of the investments into account, we derive a net benefit of $217 GDP per capita. Starting in 2010, the average effect becomes insignificant...
That is pretty well demonstrated in the following figure. Notice that the bold line (the treated municipalities) sits above the dashed line (the synthetic control, see below) only from 2004 up to 2010, where they come back together.

[Figure from Pfeifer et al.: night lights over time for the treated municipalities (bold line) and the synthetic control (dashed line)]

They also find that:
...the average picture obscures heterogeneity related to the sources of economic activity and the locations within the treated municipalities. More specifically, we demonstrate that around and after 2010, there has been a positive, longer‐run economic effect stemming from new and upgraded transport infrastructure. These positive gains are particularly evident for smaller towns, which can be explained with a regional catch‐up towards bigger cities... Contrarily, the effect of stadiums is generally less significant and no longer‐lasting economic benefits are attributed to the construction or upgrade of the football arenas. Those are merely evident throughout the pre‐2010 period. Taken together, our findings underline the importance of investments in transport infrastructure, particularly in rural areas, for longer‐run economic prosperity and regional catch‐up processes.
In other words, the core expenditure on the tournament itself, such as stadiums, had no economic impact after construction ended (which is consistent with the broader literature), while the expenditure on transport infrastructure did. South Africa would have gotten the same effect by simply building the transport infrastructure without the stadiums.

There were a couple of elements of the study that troubled me. The first relates to the synthetic control method they used. You want to compare the 'treated' municipalities (i.e. those where new transport infrastructure or stadiums were built) with 'control' municipalities (where no infrastructure was built, but which are otherwise identical to the treatment municipalities). The problem is that finding control municipalities that are identical to the treatment municipalities is all but impossible. So, instead you construct a 'synthetic control' as a weighted average of several other municipalities, chosen so that the weighted average looks very similar to the treated municipality. This is an approach that is increasingly being used in economics.
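For readers unfamiliar with the method, here is a minimal sketch of how the weights in a synthetic control are typically chosen. The data are invented, and the setup is far simpler than Pfeifer et al.'s actual implementation:

```python
# Minimal synthetic control sketch: choose non-negative weights summing to one
# so that the weighted average of control units matches the treated unit's
# pre-treatment outcomes as closely as possible. Data below are illustrative only.
import numpy as np
from scipy.optimize import minimize

# Pre-treatment outcome paths (e.g. night lights), one row per year.
treated = np.array([10.0, 11.0, 12.5, 13.0])
controls = np.array([
    [8.0, 14.0, 9.0],
    [9.0, 15.0, 10.0],
    [10.0, 17.0, 11.5],
    [10.5, 18.0, 12.0],
])  # columns are candidate control municipalities

def loss(w):
    return np.sum((treated - controls @ w) ** 2)

n = controls.shape[1]
result = minimize(
    loss,
    x0=np.full(n, 1.0 / n),
    bounds=[(0, 1)] * n,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
    method="SLSQP",
)
print("Synthetic control weights:", result.x.round(3))
# The treated unit's post-treatment outcomes are then compared with the
# weighted average of the controls' post-treatment outcomes.
```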

However, in this case basically all of the large cities in South Africa were treated in some way. So, the synthetic control is made up of much smaller municipalities. In fact, the synthetic control is 80.8% weighted to uMhlathuze municipality (which is essentially the town of Richards Bay, northeast of Durban). So, effectively they were comparing the change in night lights in areas with infrastructure development with the change in night lights for Richards Bay (and the surrounding municipality).

Second, they drill down to look at the impacts of individual projects, and find that some of the projects have significant positive effects that last beyond 2010 (unlike the overall analysis, which finds nothing after 2010). Given the overall null effect after 2010, that suggests that there must be some other projects that had negative economic impacts after 2010. Those negative projects are never identified.

The economic non-impact of mega sports events is not under threat from this study. The best you could say is that hosting the FIFA World Cup induced South Africa to make investments in transport infrastructure that might not otherwise have happened. Of course, we will never know.

Wednesday, 28 November 2018

How many zombies are there in New Zealand?

Let's say there is some rare group of people and that you want to know how many people there are in the group. Say, people who own fifteen or more cats, or avid fans of curling. Conducting a population survey isn't going to help much, because if you survey 10,000 people and three belong to the group, that doesn't tell you very much. Now, let's say that not only is the group rare, but people don't want to admit (even in a survey) that they belong to the group. Say, people who enjoyed the movie Green Lantern, or secret agents, or aliens, or vampires, or zombies. How do you get a measure of the size of those populations?

One way you might be able to estimate this is with an indirect method. If you survey a random sample of people, and you know how many people they know (that is, how many people are in their social network), you could simply ask each person in your survey how many Green Lantern lovers, or how many zombies, they know. You could then extrapolate how many there are in the population as a whole, provided you make some assumptions about the overlaps between the networks of the people you surveyed.

It's not a totally crazy idea, but it is thoroughly lampooned by Andrew Gelman (Columbia University) in this article published on arXiv:
Zombies are believed to have very low rates of telephone usage and in any case may be reluctant to identify themselves as such to a researcher. Face-to-face surveying involves too much risk to the interviewers, and internet surveys, although they originally were believed to have much promise, have recently had to be abandoned in this area because of the potential for zombie infection via computer virus...
Zheng, Salganik, and Gelman (2006) discuss how to learn about groups that are not directly sampled in a survey. The basic idea is to ask respondents questions such as, "How many people do you know named Stephen/Margaret/etc." to learn the sizes of their social networks, questions such as "How many lawyers/teachers/police officers/etc. do you know," to learn about the properties of these networks, and questions such as "How many prisoners do you know" to learn about groups that are hard to reach in a sample survey. Zheng et al. report that, on average, each respondent knows 750 people; thus, a survey of 1500 Americans can give us indirect information on about a million people.
If you're interested, the Zheng et al. paper is open access and available here. So, how many zombies are there in New Zealand? To find out, someone first needs to do a random survey asking people how many zombies they know.
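The extrapolation itself is straightforward. Here is a minimal sketch of the basic scale-up calculation, assuming the survey has already been done; the average network size of 750 comes from the quote above, and everything else is invented:

```python
# Basic network scale-up estimator: the share of zombies among everyone known
# by respondents is scaled up to the whole population. Survey numbers are invented.
population = 4_900_000       # approximate population of New Zealand
n_respondents = 1_500
avg_network_size = 750       # Zheng et al.'s estimate of how many people each respondent knows
zombies_reported = 30        # total zombies known across all respondents (hypothetical)

total_alters = n_respondents * avg_network_size
zombie_share = zombies_reported / total_alters
estimated_zombies = zombie_share * population
print(f"Estimated zombies in New Zealand: {estimated_zombies:.0f}")
# This assumes respondents' networks are representative and that zombies are
# as 'visible' in networks as anyone else - strong assumptions in practice.
```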

Sunday, 25 November 2018

The law and economics (and game theory) of Survivor

As I mentioned in a post last year, I really love the reality show Survivor. One day I might even collate some of the cool economics-related examples from the show - comparative advantage, asymmetric information, risk and uncertainty, public goods, common resources, and lots and lots of game theory (coalitions, prisoners' dilemmas, repeated games, coordination games, and so on).

I recently ran across this 2000 working paper by Kimberley Mason and Maxwell Stearns (both George Mason University) on the law and economics of Survivor. It would probably be more accurate if the title said it was about the game theory of Survivor, because that is what it is. It was written soon after the conclusion of the first season of Survivor (the show is currently screening its 37th season - David vs. Goliath). The paper is interesting in that it traces out all the strategic decision-making in the first season, and relates it to game theory and rational decision-making. Mason and Stearns also highlight the masterstroke of the eventual winner, Richard Hatch, in bowing out of the last immunity challenge:
The strenuous nature of the competition helped Richard to justify a decision that was ultimately a well disguised defection from his suballiance with Rudy. Recall that Richard withdrew from the competition, claiming that he knew he would not win. If one construes the Richard/Rudy suballiance as a commitment to do whatever they can to ensure that they emerge as the finalists... then by withdrawing, Richard defected. To see why consider how the game was necessarily played as a result of Richard’s decision. Had Rudy won the competition, he would have voted to keep Richard on as a finalist, consistent with his commitment to the suballiance. Because Kelly preferred Rudy to Richard (as shown in her first vote in cycle 13), this would have risked a 4 to 3 vote for Rudy by the jury. (This assumes that the remaining six jurors vote as they did.). But if Kelly won the game, then she would choose between Rudy and Richard. She knew that either of them would vote for the other as a juror. The only question from her perspective was who was more popular with the remaining jurors. As Richard likely knew, Rudy was more popular, meaning that if Kelly won, Richard would still be selected as a finalist. In contrast, if Richard stayed in the immunity contest and won, he faced another Catch-22. If he voted to keep Rudy, then Kelly would vote for Rudy as a juror, and as a result, Richard would lose (again assuming the other jurors voted as they did). And if he voted for Kelly, then he would violate the express terms of the suballiance with Rudy, and risk Rudy’s retribution. If Rudy also defected, then Kelly would win. The only way that Richard could reduce the likelihood of this result was to withdraw from the game. While he would remain a finalist regardless of whether Rudy or Kelly won, he hoped that Kelly would win because she would eliminate his toughest final competitor.
Kelly won the challenge, and Richard duly won Survivor by a vote of 4-3. Mason and Stearns conclude:
At the beginning of this essay, we posited that Survivor was played in a manner that was consistent with the predictions of rational choice theory. We certainly do not suggest that every player played in a manner that optimized his or her prospects for winning. Indeed, that is largely the point. At each step in the game, those who best positioned themselves to win were the ones who played in a rational and strategic manner.
Interestingly, the paper also contains a discussion of the optimal size of an alliance, based on theory from Gordon Tullock and Nobel Prize winner James Buchanan, which should be familiar to my ECONS102 students:
Professors Buchanan and Tullock present an optimal size legislature as a function of two costs, agency costs, which are negatively correlated with the number of representatives, and decision costs, which are positively correlated with the number of representatives... The optimum point, according to Buchanan and Tullock, is that which minimizes the sum of agency and decision costs...
These two conflicting costs, which are both a function of coalition size, pit the benefits of safety in numbers against the risks of disclosure to non-alliance members... As the size of the coalition increases, the members are increasingly protected against the risk that a member will defect in favor of an alternative coalition. Conversely, as coalition size increases, the members face an increased risk of disclosure, which could lead to a coalition breakdown.
The optimal size of an alliance is one that correctly balances the benefits of being large enough to be safe from the non-allied players voting you off (the marginal benefit of adding one more person to the alliance decreases as the alliance gets larger), against the costs of the alliance being revealed to all players (the marginal cost of adding one more person to the alliance increases as the alliance gets larger). The cost of having a large alliance also relates to the chance of defection - the chance that one or more members of the alliance switch sides and blindside someone. It is easier to maintain trust and cohesion in a smaller alliance.
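As a stylised illustration of the Buchanan and Tullock logic, here is a minimal sketch that finds the alliance size minimising the sum of the two costs. The cost functions and numbers are entirely hypothetical:

```python
# Stylised Buchanan-Tullock trade-off: choose the alliance size that minimises
# the sum of a cost that falls with size (the risk of being outvoted by
# non-members) and a cost that rises with size (the risk of disclosure and
# coordination breakdown). The functional forms and numbers are hypothetical.

def exposure_cost(size, tribe_size=10):
    # Smaller alliances are more exposed to being voted off by non-members.
    return (tribe_size - size) ** 2

def disclosure_cost(size):
    # Larger alliances are harder to keep secret and cohesive.
    return 2 * size ** 2

sizes = range(1, 11)
total_costs = {s: exposure_cost(s) + disclosure_cost(s) for s in sizes}
optimal_size = min(total_costs, key=total_costs.get)
print("Optimal alliance size:", optimal_size)  # 3, given these made-up cost functions
```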

Survivor is a great example of economics in action. If you aren't already a fan, you should start watching it!

Saturday, 24 November 2018

The debate over a well-cited article on online piracy

Recorded music on CDs and recorded music as digital files are substitute goods. So, when online music piracy was at its height in the 2000s, it was natural to expect some negative impact on recorded music sales. For many years, I discussed this with my ECON110 (now ECONS102) class. However, one of the most famous research articles on the topic actually found essentially no statistically significant effect of online piracy on music sales.

That 2007 article was written by Felix Oberholzer-Gee (Harvard) and Koleman Strumpf (Kansas University), and published in the Journal of Political Economy (one of the Top Five journals I blogged about last week; ungated earlier version here). Oberholzer-Gee and Strumpf used 17 weeks of data from two file-sharing servers, matched to U.S. album sales. The key issue with any analysis like this is:
...the popularity of an album is likely to drive both file sharing and sales, implying that the parameter of interest γ will be estimated with a positive bias. The album fixed effects v_i control for some aspects of popularity, but only imperfectly so because the popularity of many releases in our sample changes quite dramatically during the study period.
The standard approach for economists in this situation is to use instrumental variables (which I have discussed here). Essentially, this involves finding some variable that is expected to be related to U.S. file sharing, but shouldn’t plausibly have a direct effect on album sales in the U.S. Oberholzer-Gee and Strumpf use school holidays in Germany. Their argument is that:
German users provide about one out of every six U.S. downloads, making Germany the most important foreign supplier of songs... German school vacations produce an increase in the supply of files and make it easier for U.S. users to download music.
They then find that:
...file sharing has had only a limited effect on record sales. After we instrument for downloads, the estimated effect of file sharing on sales is not statistically distinguishable from zero. The economic effect of the point estimates is also small.... we can reject the hypothesis that file sharing cost the industry more than 24.1 million albums annually (3 percent of sales and less than one-third of the observed decline in 2002).
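For readers unfamiliar with instrumental variables, here is a minimal two-stage least squares sketch using simulated data. The variable names mirror the setup described above, but the data and code are purely illustrative and are not Oberholzer-Gee and Strumpf's:

```python
# Minimal two-stage least squares (2SLS) sketch with simulated data.
# The instrument (German school holidays) shifts downloads but is assumed to
# affect album sales only through downloads. Purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 500

holidays = rng.binomial(1, 0.3, n)            # instrument: German school holidays
popularity = rng.normal(0, 1, n)              # unobserved album popularity (confounder)
downloads = 2 * holidays + popularity + rng.normal(0, 1, n)
sales = 0.0 * downloads + 2 * popularity + rng.normal(0, 1, n)  # true effect of downloads is zero

def ols(y, X):
    # Returns [intercept, slope] from a simple least squares fit.
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS is biased upwards because popularity drives both downloads and sales.
print("OLS estimate:", ols(sales, downloads)[1].round(2))

# Stage 1: predict downloads from the instrument.
stage1 = ols(downloads, holidays)
predicted_downloads = stage1[0] + stage1[1] * holidays

# Stage 2: regress sales on the predicted downloads.
print("2SLS estimate:", ols(sales, predicted_downloads)[1].round(2))
# The 2SLS estimate should be close to the true effect of zero.
```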
Surprisingly, this 2007 article has been a recent target for criticism (although, to be fair, it was also a target for criticism at the time it was published). Stan Liebowitz (University of Texas at Dallas) wrote a strongly worded critique, which was published in the open access Econ Journal Watch in September 2016. Liebowitz criticises the 2007 paper for a number of things, not least of which is the choice of instrument. It is worth quoting from Liebowitz's introduction at length:
First, I demonstrate that the OS measurement of piracy—derived from their never-released dataset—appears to be of dubious quality since the aggregated weekly numbers vary by implausibly large amounts not found in other measures of piracy and are inconsistent with consumer behavior in related markets. Second, the average value of NGSV (German K–12 students on vacation) reported by OS is shown to be mismeasured by a factor of four, making its use in the later econometrics highly suspicious. Relatedly, the coefficient on NGSV in their first-stage regression is shown to be too large to possibly be correct: Its size implies that American piracy is effectively dominated by German school holidays, which is a rather farfetched proposition. Then, I demonstrate that the aggregate relationship between German school holidays and American downloading (as measured by OS) has the opposite sign of the one hypothesized by OS and supposedly supported by their implausibly large first-stage regression results.
After pointing out these questionable results, I examine OS’s chosen method. A detailed factual analysis of the impact of German school holidays on German files available to Americans leads to the conclusion that the extra files available to Americans from German school holidays made up less than two-tenths of one percent of all files available to Americans. This result means that it is essentially impossible for the impact of German school holidays to rise above the background noise in any regression analysis of American piracy.
I leave it to you to read the full critique, if you are interested. Oberholzer-Gee and Strumpf were invited to reply in Econ Journal Watch, but instead they published a response in the journal Information Economics and Policy (sorry, I don't see an ungated version online) the following year. That response is a great example of how not to respond to a critique of your research. They essentially ignored the key elements of Liebowitz's critique, and he responded again in Econ Journal Watch in the May 2017 issue:
Comparing their IEP article to my original EJW article reveals that their IEP article often did not respond to my actual criticisms but instead responded, in a cursorily plausible manner, to straw men of their own creation. Further, they made numerous factual assertions that are clearly refuted by the data, when tested.
In the latest critique, Liebowitz notes an additional possible error in Oberholzer-Gee and Strumpf's data. It seems to me that the data error is unlikely (it is more likely that the figure that represents the data is wrong), but since they haven't made their data available to anyone, it is impossible to know either way.

Overall, this debate is a lesson in two things. First, it demonstrates how not to respond to reasonable criticism - that is, by avoiding the real questions and answering some straw man arguments instead. Related to that is the importance of making your data available. Restricting access to the data (except in cases where the data are protected by confidentiality requirements) makes it seem as if you have something to hide! In this case, the raw data might have been confidential, but the weekly data used in the analysis are derivative and may not be. Second, as Liebowitz notes in his first critique, most journal editors are simply not interested in publishing comments on articles published in their journal, where those comments might draw attention to flaws in the original articles. I've struck that myself with Applied Economics, and ended up writing a shortened version of a comment on this blog instead (see here). It isn't always the case though, and I had a comment published in Education Sciences a couple of months ago. The obstructiveness of authors and journal editors to debate on published articles is a serious flaw in the current peer-reviewed research system.

In the case of Oberholzer-Gee and Strumpf's online piracy article, I think it needs to be seriously down-weighted, at least until they are willing to allow their data and results to be carefully scrutinised.

Thursday, 22 November 2018

Drinking increases the chances of coming to an agreement?

Experimental economics allows researchers to abstract from the real world, presenting research participants with choices where the impact of individual features of the choice can be investigated. Or sometimes the context in which choices are made can be manipulated to test their effects on behaviour.

In a 2016 article published in the Journal of Economic Behavior & Organization (I don't see an ungated version, but it looks like it might be open access anyway), Pak Hung Au (Nanyang Technological University) and Jipeng Zhang (Southwestern University of Finance and Economics) look at the effect of drinking alcohol on bargaining.

They essentially ran an experiment with three treatments, where participants drank either: (1) a can of 8.8% alcohol-by-volume beer; (2) a can of non-alcoholic beer; or (3) a can of non-alcoholic beer (where they were told it was non-alcoholic). They then got participants to play a bargaining game, where each participant was given a random endowment of between SGD1 and SGD10, and then paired up with another participant and:
...chose whether to participate in a joint project or not. If either one subject in the matched pair declined to participate in the joint project, they kept their respective endowments. If both of them agreed to start the joint project, the total output of the project was equally divided between them. The project’s total output was determined by the following formula: 1.2 × sum of the paired subjects’ endowment.
So, if Participant X had an endowment of 4 and Participant Y had an endowment of 8, and they both chose to participate in the joint project, they would each receive $7.20 (12 * 1.2 / 2). It should be easy to see that participants with a low endowment are more likely to be better off participating than participants with a high endowment (who will be made worse off, like Participant Y in the example above, unless the person they are paired with also has a high endowment). Au and Zhang show that participating in the joint project should only be chosen by those with an endowment of 3 or less.
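To see where the cutoff of 3 comes from, here is a minimal sketch of the equilibrium logic. I assume integer endowments from SGD1 to SGD10, each equally likely, and that players follow a common cutoff strategy; the paper's setup may differ in its details:

```python
# Sketch of the participation cutoff in a bargaining game like Au and Zhang's.
# Assumes integer endowments from 1 to 10, each equally likely, and that a player
# only joins the project if their endowment is at or below some common cutoff.

def expected_payoff_if_join(endowment, cutoff, max_endowment=10):
    # Partners above the cutoff decline, so you keep your endowment;
    # partners at or below the cutoff join, so you each get 1.2 * (sum) / 2.
    payoffs = [
        1.2 * (endowment + partner) / 2 if partner <= cutoff else endowment
        for partner in range(1, max_endowment + 1)
    ]
    return sum(payoffs) / len(payoffs)

def best_response_cutoff(cutoff, max_endowment=10):
    # Highest endowment for which joining is at least as good as keeping it.
    return max(
        (e for e in range(1, max_endowment + 1)
         if expected_payoff_if_join(e, cutoff) >= e),
        default=0,
    )

# Start from 'everyone joins' and iterate best responses to a fixed point.
cutoff = 10
while best_response_cutoff(cutoff) != cutoff:
    cutoff = best_response_cutoff(cutoff)
print("Equilibrium cutoff:", cutoff)  # prints 3 under these assumptions
```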

Using data from 114 NTU students, they then find that:
...[w]hen the endowment is between 1 and 4 dollars, almost all subjects participate in the alcohol treatment and a large proportion (around 90–95%) of the subjects join the project in the two nonalcohol treatments. When the endowment is above 8 dollars, the participation ratio drops to around 0.2...
...subjects in the alcohol treatment are more likely to join the project at almost all endowment levels, compared with their counterparts in the two nonalcohol treatments.
Importantly, they show that drinking alcohol has no effect on altruism or risk aversion, ruling out those as explanations for the effect, leading them to conclude that:
...in settings in which skepticism can lead to a breakdown in negotiation, alcohol consumption can make people drop their guard for each others’ actions, thus facilitating reaching an agreement.
Drinking makes people more likely to come to an agreement. I wonder how you could reconcile those results with the well-known effects of alcohol on violent behaviour? Perhaps alcohol makes people more likely to agree to engage in violent confrontation?

Wednesday, 21 November 2018

Tauranga's beggar ban, and the economics of panhandling

Tauranga City Council has just passed a ban on begging. The New Zealand Herald reports:
The council voted 6-5 to ban begging and rough sleeping within five metres of the public entrances to retail or hospitality premises in the Tauranga City, Greerton and Mount Maunganui CBDs.
The bans will become law on April 1, 2019, as part of the council's revised Street Use and Public Places Bylaw...
[Councillor Terry] Molloy said the measures of success would be a marked reduction in beggars and rough sleepers in the targeted areas, the community feeling a higher level of comfort with their security in those areas, happier retailers and no proof the problem had moved elsewhere.
Becker's rational model of crime applies here. Panhandlers (or beggars) weigh up the costs and benefits of panhandling when they decide when (and where) to panhandle. If you increase the costs of panhandling (for example, by penalising or punishing panhandlers), you can expect there to be less of it (at least, in the areas where the penalties are being enforced). How much less? I'll get to that in a moment.
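As a stylised illustration of that Becker-style calculus (all of the numbers below are hypothetical):

```python
# Stylised Becker-style decision: panhandle at a location if the expected benefit
# exceeds the expected cost. All numbers are hypothetical.

def expected_net_benefit(donations_per_hour, hours, prob_penalty, penalty, other_costs):
    expected_benefit = donations_per_hour * hours
    expected_cost = prob_penalty * penalty + other_costs
    return expected_benefit - expected_cost

# Before a ban: no penalty, so the location is worth the effort.
print(expected_net_benefit(donations_per_hour=8, hours=4, prob_penalty=0.0, penalty=0, other_costs=10))
# After an enforced ban: the chance of a fine makes the same location unattractive.
print(expected_net_benefit(donations_per_hour=8, hours=4, prob_penalty=0.4, penalty=150, other_costs=10))
# A rational panhandler shifts to locations (or activities) with a higher net benefit.
```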

You may doubt that panhandlers are rational decision-makers. However, a new paper by Peter Leeson and August Hardy (both George Mason University) shows that panhandlers do act in rational ways. Using data from 258 panhandlers, observed on 242 trips to Washington D.C. Metrorail stations, Leeson and Hardy find that:
Panhandlers solicit more actively when they have more human capital, when passersby are more responsive to solicitation, and when passersby are more numerous. Panhandlers solicit less actively when they encounter more panhandling competition. Female panhandlers also solicit less actively...
Panhandlers are attracted to Metro stations where passersby are more responsive to solicitation and to stations where passersby are more numerous. Panhandlers are also attracted to Metro stations that are near a homeless shuttle-stop and are more numerous at stations that are near a homeless service.
In other words, if the benefits of panhandling increase (because passersby are more responsive, or more numerous), there will be more panhandling. If the costs of panhandling are higher (because the Metro station is further to travel to), there will be less panhandling. This is exactly what you would expect from rational panhandlers. As Leeson and Hardy note:
...panhandlers behave as homo economicus would behave if homo economicus were a street person who solicited donations from passersby in public spaces.
Interestingly, Leeson and Hardy also explain in economic terms how panhandling works:
...panhandler solicitation is generally regarded as a nuisance; it threatens to create “psychological discomfort…in pedestrians,” such as guilt, awkwardness, shame, even fear (Ellickson 1996: 1181; Skogan 1990; Burns 1992). Third, pedestrians are willing to pay a modest price to avoid this discomfort...
By threatening more discomfort, more active panhandler solicitation extracts larger payments through two channels. First, it makes passersby who would have felt the threat of some discomfort and thus paid the panhandler something even if he had solicited less actively feel a still greater threat and thus pay him more. Second, it makes passersby who would not have felt the threat of any discomfort and thus not paid the panhandler anything if he had solicited less actively feel a threat and thus pay him a positive amount.
An interesting question arises though. Panhandlers have imperfect information about how willing passersby are to pay to avoid this discomfort. They find this out through soliciting. So, in areas where passersby pay more (or more often), that might encourage panhandlers to be more active. I'm sure I'm not the only one who has been told not to pay panhandlers, because you just encourage more of the activity. It seems obvious. But is it true?

A new article by Gwendolyn Dordick (City College New York) and co-authors, published in the Journal of Urban Economics (ungated earlier version here), suggests probably not. They collected data from 154 walks through downtown Manhattan (centred around Broadway) in 2014 and 2015, where they observed the location and numbers of panhandlers. Importantly, their data collection occurred before and after several significant changes downtown, including the opening of One World Trade Center and its observation deck, and the increase in foot traffic associated with the September 11 Museum. This allowed them to evaluate how an increase in potential earnings (through more passersby, especially tourists) affects panhandling. They find that:
...the increase in panhandling was small and possibly zero (although our confidence intervals are not narrow enough to rule out relatively large changes). Panhandling moved from around Broadway toward areas where tourists were more common... We tentatively conclude that the supply elasticity of panhandling is low...
The moral hazard involved in giving to panhandlers seems to be small. More generally, the incidence of policy changes in places like Downtown Manhattan is likely to be pretty simple: a fairly constant group of panhandlers gains or loses; there is no “reserve army of panhandlers” to eliminate any rise in returns by flooding in, and no shadowy “panhandling boss” behind the scenes to soak up any gains by asking more money for right to panhandle in various locations (since control over even the best location is not worth much because substitute locations are often vacant). Giving to panhandlers does not to any great extent encourage panhandling or a fortiori homelessness.
In other words, when the expected earnings from panhandling increase, there isn't a sudden influx of new panhandlers, and existing panhandlers don't spend more time soliciting. Interestingly, they also find that:
Because the number of people who want to panhandle, even at the best times and places, is small, space is effectively free. Supply at zero price exceeds demand. Because space is free, so is courtesy, and so is abiding by norms.
There was little fighting among the panhandlers, because space was abundant. The main constraint on panhandling in Manhattan appeared to be that people didn't want to panhandle, not a lack of space to do so. This may also be because the panhandlers are 'target earners' - they only panhandle for long enough to earn what they want for that day, so if the earnings are good they don't have to panhandle for very long (although Dordick et al.'s results cast some doubt on whether panhandlers actually are target earners).

What can Tauranga learn from the results of these two papers? Greater penalties on panhandlers will reduce panhandling, but they might also simply move it elsewhere. In these other locations, panhandlers will have to work even harder (perhaps being even more aggressive), and for longer, to achieve their target earnings. And because the best spaces for panhandling have been taken away from them, there may be even more conflict over the remaining spaces. I guess we'll see in due course.

[HT for the Leeson and Hardy paper: Marginal Revolution]