Monday, 31 December 2018

Scott Sumner on behavioural economics in introductory economics

In a recent blog post, Scott Sumner argues against a large role for behavioural economics in introductory economics:
The Atlantic has an article decrying the fact that economists are refusing to give behavioral economics a bigger role in introductory economics courses. I’m going to argue that this oversight is actually appropriate, even if behavioral economics provides many true observations about behavior...
Most people find the key ideas of behavioral economics to be more accessible than classical economic theory. If you tell students that some people have addictive personalities and buy things that are bad for them, they’ll nod their heads.  And it’s certainly not difficult to explain procrastination to college students. Ditto for the claim that investors might be driven by emotion, and that asset prices might soar on waves of “irrational exuberance.”  Thus my first objection to the Atlantic piece is that it focuses too much on the number of pages in a principles textbook that are devoted to behavioral economics.  That’s a misleading metric.  One should spend more time on subjects that need more time, not things that people already believe.
The whole post is worth reading, although I don't agree with all of it, as I think behavioural economics does have a role to play in introductory economics. In my ECONS102 class, I use it to illustrate the fragility of the rationality assumption, while pointing out that many of the key intuitions of economics (such as that people respond to incentives, or even the workhorse model of supply and demand) don't require that all decision-makers act with pure rationality. At the introductory level, I think it's much more important that students take away some economic intuition than a collection of mostly ad hoc anecdotes, which is essentially what behavioural economics currently is. That point was driven home by this article by Koen Smets (which I blogged about earlier this year).

Sumner argues in his blog post that we should focus on discouraging people from believing in eight popular myths. I don't agree with all of his choices there. For Myth #2 (imported goods, immigrant labor, and automation all tend to increase the unemployment rate), I'd say it is arguable. For Myth #3 (most companies have a lot of control over prices, i.e. oil companies set prices, not "the market"), it depends on what you mean by "a lot". For Myth #7 (price gouging hurts consumers), Exhibit A is the consumer surplus.

However, one popular myth that should be discouraged is the idea that behavioural economics will (or should) entirely supplant traditional economics. There is space for both, at least until there is a durable core of theory in behavioural economics.

[HT: Marginal Revolution]

Sunday, 30 December 2018

Book review: Prediction Machines

Artificial intelligence is fast becoming one of the dominant features of narratives of the future. What does it mean for business, how can businesses take advantage of AI, and what are the risks? These are all important questions that business owners and managers need to get their heads around. So, Prediction Machines - The Simple Economics of Artificial Intelligence, by Ajay Agrawal, Joshua Gans, and Avi Goldfarb, is a well-timed, well-written and important read for business owners and managers, and not just those in 'technology firms'.

The title of the book invokes the art of prediction, which the book defines as:
[p]rediction takes information you have, often called "data", and uses it to generate information you don't have.
Students of economics will immediately recognise and appreciate the underlying message in the book, which is that:
[c]heaper prediction will mean more predictions. This is simple economics: when the cost of something falls, we do more of it.
So, if we (or, to be more precise, prediction machines) are making more predictions, then complementary skills become more valuable. The book highlights the increased value of judgment, which is "the skill used to determine a payoff, utility, reward, or profit".

The book does an excellent job of showing how AI can be embedded within and contribute to improved decision-making through better prediction. If you want to know how AI is already being used in business, and will likely be used in the future, then this book is a good place to start.

However, there were a couple of aspects where I was disappointed. I really enjoyed Cathy O'Neil's book Weapons of Math Destruction (which I reviewed last year), so it would have been nice if this book had engaged more with O'Neil's important critique. Chapter 18 did touch on it, but I was left wanting more:
A challenge with AI is that such unintentional discrimination can happen without anyone in the organization noticing. Predictions generated by deep learning and many other AI technologies appear to be created from a black box. It isn't feasible to look at the algorithm or formula underlying the prediction and identify what causes what. To figure out if AI is discriminating, you have to look at the output. Do men get different results than women? Do Hispanics get different results than others? What about the elderly or the disabled? Do these different results limit their opportunities?
Similarly, judgment is not the only complement that will increase in value. Data is a key input to the prediction machines, and will also increase in value. The book does acknowledge this, but is relatively silent on the idea of data sovereignty. There is an underlying assumption that businesses are the owners of data, and not the consumers or users of products who unwittingly give up valuable data on themselves or their choices. Given the recent furore over the actions of Facebook, some wider consideration of who owns data and how they should be compensated for their sharing of the data (or at least, how businesses should mitigate the risks associated with their reliance on user data) would have been timely.

The book was heavily focused on business, but Chapter 19 did pose some interesting questions about AI's role in wider society. These questions do need further consideration, but it was entirely appropriate that this book highlighted them while leaving the substantive answers for other authors to address. These questions included, "Is this the end of jobs?", "Will inequality get worse?", "Will a few huge companies control everything?", and "Will some countries have an advantage?".

Notwithstanding my two gripes above, the book has an excellent section on risk. I particularly liked this bit on systemic risk (which could be read in conjunction with the book The Butterfly Defect, which I reviewed earlier this year):
If one prediction machine system proves itself particularly useful, then you might apply that system everywhere in your organization or even the world. All cars might adopt whatever prediction machine appears safest. That reduces individual-level risk and increases safety; however, it also expands the chance of a massive failure, whether purposeful or not. If all cars have the same prediction algorithm, an attacker might be able to exploit that algorithm, manipulate the data or model in some way, and have all cars fail at the same time. Just as in agriculture, homogeneity improves results at the individual level at the expense of multiplying the likelihood of system-wide failure.
Overall, this was an excellent book, and surprisingly free of the technical jargon that infests many books on machine learning or AI. That allows the authors to focus on the business and economics of AI, and the result is a very readable introduction to the topic. Recommended!

Saturday, 29 December 2018

The leaning tower that is PISA rankings

Many governments are fixated on measurement and rankings. However, as William Bruce Cameron wrote (in a line that has often been wrongly attributed to Albert Einstein), "Not everything that counts can be counted, and not everything that can be counted counts". And even things that can be measured and are important might not be measured in a way that is meaningful.

Let's take as an example the PISA rankings. Every three years, the OECD tests 15-year-old students around the world in reading, maths, science, and in some countries, financial literacy. They then use the results from those tests to create rankings in each subject. Here are New Zealand's 2015 results. In all three subjects (reading, maths, and science), New Zealand ranks better than the OECD average, but shows a decline since 2006. To be more specific, in 2015 New Zealand ranked 10th in reading (down from 5th in 2006), 21st in maths (down from 10th in 2006), and 12th in science (down from 7th in 2006).

How seriously should we take these rankings? It really depends on how seriously the students take the PISA tests. They are low-stakes tests, which means that the students don't gain anything from doing them. And that means that there might be little reason to believe that the results reflect actual student learning. Of course, students in New Zealand are not the only students who might not take these tests seriously. New Zealand's ranking would only be adversely affected if students here are more likely than students elsewhere to not take the test seriously, or if the students who don't take the test seriously here are better students on average than the non-serious students in other countries.

So, are New Zealand students less serious about PISA than students in other countries? In a recent NBER Working Paper (ungated version here), Pelin Akyol (Bilkent University, Turkey), Kala Krishna and Jinwen Wang (both Penn State) crunch the numbers for us. They identify non-serious students as those who left several questions unanswered (by skipping them or not finishing the test) despite having time remaining, or who spent too little time on several questions (relative to their peers). They found that:
[t]he math score of the student is negatively correlated with the probability of skipping and the probability of spending too little time. Female students... are less likely to skip or to spend too little time. Ambitious students are less likely to skip and more likely to spend too little time... students from richer countries are more likely to skip and spent too little time, though the shape is that of an inverted U with a turning point at about $43,000 for per capita GDP.
They then adjust for these non-serious students, by imputing the number of correct answers those students would have gotten had they taken the test seriously. You can see the fuller results in the paper. Focusing on New Zealand, our ranking would improve from 17th to 13th if all students in all countries took the test seriously, which suggests to me that the low-stakes PISA test is under-estimating New Zealand students' performance. We need to be more careful about how we interpret international education rankings based on low-stakes tests.
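
To make the flagging rule concrete, here is a minimal sketch in Python. The function name and the thresholds (two questions, half the peer-median time) are my own inventions for illustration; the paper's actual criteria are more involved:

    # Stylised sketch of the "non-serious student" flag described above.
    # The thresholds are invented for illustration, not taken from the paper.
    def is_non_serious(answers, times, peer_median_times, time_remaining):
        # answers: list with None for unanswered questions
        # times: seconds this student spent on each question
        # peer_median_times: median seconds peers spent on each question
        skipped = sum(1 for a in answers if a is None)
        rushed = sum(1 for t, m in zip(times, peer_median_times) if t < 0.5 * m)
        return (skipped >= 2 and time_remaining > 0) or rushed >= 2

    # Example: a student who skipped two questions with ten minutes remaining
    print(is_non_serious([1, None, 3, None], [40, 5, 60, 5], [45, 50, 55, 50], 600))  # True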

[HT: Eric Crampton at Offsetting Behaviour, back in August; also Marginal Revolution]

Friday, 28 December 2018

The beauty premium in the LPGA

Daniel Hamermesh's 2011 book Beauty Pays: Why Attractive People are More Successful (which I reviewed here) makes the case that more attractive people earn more (or alternatively, that less attractive people earn less). However, the mechanism that drives this beauty premium (or even its existence) is still open to debate. It could arise because of discrimination - perhaps employers prefer working with more attractive people, or perhaps customers prefer to deal with more attractive workers. Alternatively, perhaps more attractive workers are more productive - for example, maybe they are able to sell more products.

However, working out whether either of these two effects, or some combination of both, is driving the beauty premium is very tricky. A 2014 article by Seung Chan Anh and Young Hoon Lee (both Sogang University in Korea), published in the journal Contemporary Economic Policy (sorry, I don't see an ungated version), provides some evidence on the second effect. Anh and Lee use data from the Ladies Professional Golf Association (LPGA), specifically data on 132 players who played in at least one of the four majors between 1992 and 2010. They argue that:
Physically attractive athletes are rewarded more than unattractive athletes for one unit of effort. Being rewarded more, physically attractive athletes devote more effort to improving their productivity. Consequently they become more productive than less attractive athletes with comparable natural athletic talents.
In other words, more attractive golfers have an incentive to work harder on improving, because they can leverage their success through higher earnings in terms of endorsements, etc. However, Anh and Lee focus their analysis on tournament earnings, which reflect the golfers' productivity. They find that:
...average performances of attractive players are better than those of average looking players with the same levels of experience and natural talent. As a consequence, attractive players earn higher prize money.
However, in order to get to those results they had to torture the data quite severely, applying spline functions to estimate separate effects for those above the median level of attractiveness. The main effect of beauty in their vanilla analysis is statistically insignificant. When you have to resort to fairly elaborate methods to extract a particular result, and you don't show that your results are robust to your choice of method, the results will always be a little questionable.

So, the take-away from this paper is that more attractive golfers might work harder and be more productive. Just like porn actresses.

Wednesday, 26 December 2018

Late-night tweeting is associated with worse performance

In what is one of the least surprising research results of 2018, a new article in the journal Sleep Health (sorry I don't see an ungated version) by Jason Jones (Stony Brook University) and co-authors looks at the relationship between late-night activity on Twitter and the next-day game performance of NBA players. Specifically, they had data on 112 players from 2009 to 2016, and limited their analysis to East Coast teams playing on the East Coast, and West Coast teams playing on the West Coast (to avoid any jetlag effects). They found that:
[f]ollowing late-night tweeting, players contributed fewer points and rebounds. We estimate that players score 1.14 fewer points (95% confidence interval [CI]: 0.56-1.73) following a late-night tweet. Similarly, we estimate that players secure 0.49 fewer rebound [sic] (CI: 0.25-0.74). They also commit fewer turnovers and fouls in games following a late-night tweet. We estimate the differences to be 0.15 fewer turnover (CI: 0.06-0.025) and 0.22 fewer foul (CI: 0.12-0.33). These results are consistent with a hypothesis that players are less active in a game following a late-night tweet but not that the general quality of their play necessarily deteriorates.
We noted that, on average, players spent 2 fewer minutes on the court following late-night tweeting (no late-night tweeting: 24.8 minutes, late-night tweeting: 22.8 minutes).
Presumably, coaches realise that the sleep-deprived players are not playing at their usual standard, so the players spend more time on the bench rather than on the court. That explains some of the other effects (fewer points, rebounds, turnovers, and fouls), but Jones et al. also found that shooting percentage was lower after late-night tweeting (and shooting percentage shouldn't be affected by the number of minutes on court). Interestingly:
...infrequent late-night tweeters who late-night tweeted before a game scored significantly fewer points; made a lower percentage of shots; and also contributed fewer rebounds, turnovers, and fouls as compared to nights when they did not late-night tweet. By contrast, these effects were not seen among frequent late-night tweeters...
So, those players who were regular late-night tweeters were less affected than those whose late-night tweeting was uncommon. Of course, the results don't tell us for sure whether those who regularly sleep less are less affected by lost sleep, or why. Late-night tweeting is an imperfect proxy for lack of sleep (at least in part because those not engaging in late-night tweeting need not necessarily be sleeping). However, the results are suggestive that late-night tweeting is associated with worse performance. Which should give us pause when we think about people in positions of power who have made a regular habit of tweeting at odd times.

[HT: Marginal Revolution]

Monday, 24 December 2018

Hazardous drinking across the lifecourse

I have a Summer Research Scholarship student working with me over the summer, looking at changes in drinking patterns by age, over time. The challenge in any work like that is disentangling age effects (the drinking patterns specific to people of a given age), cohort effects (the drinking patterns specific to people of the same birth cohort), and period effects (the drinking patterns specific to a given year). As part of that work though, I came across a really interesting report from September this year, by Andy Towers (Massey University) and others, published by the Health Promotion Agency.

In the report, Towers et al. use life history data on 801 people aged 61-81 years, from the Health, Work and Retirement Study, and look at how their self-reported pattern of alcohol consumption (hazardous or non-hazardous drinking [*]) changed over their lifetimes. They found that:
In terms of the nature of hazardous drinking levels across the lifespan of this cohort of older New Zealanders:
  • drinking patterns were largely stable across lifespan, with long periods of hazardous or non-hazardous drinking being the norm
  • one-third of participants (36%) became hazardous drinkers as adolescents or young adults, and remained hazardous drinkers throughout the lifespan
  • only a small proportion (14%) were life-long (i.e., from adolescence onwards) non-hazardous drinkers
  • transition into or out of hazardous drinking was not common (less than 10% in each decade); when it occurred, it was usually a singular event in the lifespan (i.e., no further transitions occurred).
Transitions into hazardous drinking were linked to spells of unemployment prior to mid-life (i.e. before they turned 40), and relationship breakdown in mid-life. Transitions out of hazardous drinking were linked to development of a chronic health condition in young adulthood or mid-life. However, these transitions (in either direction) were pretty uncommon - most hazardous drinkers in one decade remained hazardous drinkers in the following decade, and most non-hazardous drinkers remained non-hazardous drinkers.

The implication of these results is that it might be easier to reduce hazardous drinking among younger people, because it appears to be quite persistent once people start hazardous drinking. Also, interventions to reduce hazardous drinking could be usefully targeted at those facing spells of unemployment or relationship breakups. [**]

Of course, these results tell us a lot about the drinking of people currently aged 60-80 years, but they don't tell us a whole lot about people in younger cohorts, whose drinking across the life course may well be on a different trajectory (see my point above about age effects vs. cohort effects). Also, there is a survivorship bias whenever you interview older people about their life history. Perhaps the heaviest drinkers are not in the sample because they have already died of cirrhosis or liver cancer, etc. So these results might be understating hazardous drinking at younger ages within these cohorts, if non-hazardous drinkers have greater longevity. There is also the problem of recall bias associated with asking older people about their drinking habits from up to six decades earlier - it wouldn't surprise me if the stability and persistence of hazardous drinking were at least partly artefacts of the retrospective life history data collection method. The measure of childhood educational performance they used in some analyses [***] seemed to be a bit dodgy (but it doesn't affect the results I highlighted above).

Still, in the absence of better prospective data from a cohort study, these results are a good place to start. And they raise some interesting questions, such as whether cohorts heavily affected by unemployment during the Global Financial Crisis have been shifted into persistent hazardous drinking, and whether recent cohorts of young people will persist in the higher rates of non-hazardous drinking that have been observed (e.g. see page 25 of the June 2016 edition of AlcoholNZ). More on those points in a future post, perhaps.

*****

[*] Hazardous or non-hazardous drinking was measured using a slightly modified version of a three-item measure called the AUDIT-C, which you can find here. A score of 3 or more for women, or 4 or more for men, is defined as hazardous drinking.
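
For the curious, the threshold rule is simple enough to express in a couple of lines of code. This is just a minimal sketch of the rule as described (the function name is mine, and the 0-12 score range of the standard three-item AUDIT-C is assumed, whereas the report used a slightly modified version):

    # Minimal sketch of the hazardous-drinking threshold rule described above.
    # Assumes a total score from the standard three-item AUDIT-C (each item
    # scored 0-4, so 0-12 overall); the report used a slightly modified version.
    def is_hazardous_drinker(audit_c_score, sex):
        threshold = 3 if sex == "female" else 4
        return audit_c_score >= threshold

    print(is_hazardous_drinker(3, "female"))  # True
    print(is_hazardous_drinker(3, "male"))    # False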

[**] Of course, it's all very well to say that this targeting would be useful, but how to actually achieve it is another story.

[***] They asked research participants to rate their performance in English at age 10, compared with other children. I say this is a bit dodgy, because 87 percent rated themselves the same or better than their peers, and 13 percent rated themselves worse. I call this out as a Lake Wobegon effect.

Tuesday, 18 December 2018

Stalin and the value of a statistical death

The value of a statistical life (or VSL) is a fairly important concept in cost-benefit evaluations, where some of the benefits are reductions in the risk of death (and/or where the costs include increases in the risk of death). The VSL can be estimated by taking the willingness-to-pay for a small reduction in risk of death, and extrapolating that to estimate the willingness-to-pay for a 100% reduction in the risk of death, which can be interpreted as the implicit value of a life. The willingness-to-pay estimate can be derived from stated preferences (simply asking people what they are willing to pay for a small reduction in risk) or revealed preferences (by looking at data on actual choices people have made, where they trade off higher cost, or lower wages, for lower risk).
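
To make the arithmetic concrete (the numbers here are mine, purely for illustration): suppose the average person is willing to pay $70 for a policy that reduces their annual risk of death by 1 in 100,000. Then the implied VSL is:

    VSL = \frac{WTP}{\Delta p} = \frac{\$70}{1/100{,}000} = \$7{,}000{,}000

Equivalently, if 100,000 people each paid $70, together they would pay $7,000,000 to avoid one expected (statistical) death among them.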

A recent working paper by Paul Castaneda Dower (University of Wisconsin-Madison), Andrei Markevitch and Shlomo Weber (both New Economic School, in Russia) takes quite a different approach. Castaneda Dower et al. use data from the Great Terror in Russia to estimate Stalin's implicit value of a statistical life for Russian citizens (or, more specifically, Russian citizens of German or Polish ethnicity). Before we look at how this worked, a little background on the Great Terror:
Coercion and state violence were important policy elements during the whole period of Stalin’s rule. In an average year between 1921 and 1953, there were about two hundred thousand arrests and twenty-five thousand executions... The years of 1937-38, known as the Great Terror, represent a clear spike in Stalin’s repressions. Approximately, one and a half million Soviet citizens were arrested because of political reasons, including about seven hundred thousand who were executed and about eight hundred thousand sent into Soviet labor camps...
In the late 1930s, the Soviet government considered Poland, Germany and Japan as the most likely enemies in the next war. After the first “national operation” against the Poles (launched by the NKVD decree No. 00485 on 11 August 1937), Stalin gradually expanded “national operations” against almost all ethnic minorities with neighboring mother-states.
Castaneda Dower et al. focus on Poles and Germans because they are the only ethnic groups for which data are available. The key aspect of this paper is the trade-off that Stalin faced:
We assume that Stalin’s objective in implementing the Great Terror was to enhance the chances of the survival of his regime in each region of the country, subject to the direct economic loss of human life. Therefore, we derive Stalin’s decisions from the presumed balance between the loss of economic output and enhancement of regime survival.
So, Stalin trades off the lost economic output of Russian citizens against the risk of regime change. More correctly then, this is Stalin's value of a statistical death (rather than life). The authors use this trade-off and comparisons between border regions (where the risk of regime change is higher because ethnic groups may have closer links with those outside the country) and interior regions, and find that:
...Stalin would have been willing to accept a little more than $43,000 US 1990 for the reduction in citizens’ fatality risk equivalent to one statistical life. This magnitude is stable across a wide variety of robustness checks. While this value is sizeable, it is far below what it could have been had Stalin cared more for the survival of his regime or would stop at nothing to ensure its survival. At the same time, the value is far from what it would be if he had not been willing to put his citizens lives at risk to improve the likelihood of his regime’s survival.
The authors show that VSLs in democracies at a similar level of development are substantially higher. This no doubt reflects Stalin's lack of concern for his people, and the people's lack of power to hold their leader to account.

[HT: Marginal Revolution, back in April]

Saturday, 15 December 2018

Pornography actress productivity

Certain research article titles can make you wonder, "how...?". For example, take this 2017 article by Shinn-Shyr Wang (National Chengchi University, in Taiwan) and Li-Chen Chou (Wenzhou Business College, in China), published in the journal Applied Economics Letters (sorry, I don't see an ungated version), and entitled "The determinants of pornography actress production". It certainly made me wonder, how did they collect the data for that?

It turns out that it wasn't as dodgy as you might first imagine. Wang and Chou used data from 2002 to 2013:
...released by the Japanese ‘Digital Media Mart’, which on a regular basis collects data regarding videos and personal information of the Japanese AV actresses.
Essentially, the study tests whether actresses' physical appearance (proxied by cup size and whether they have a side job [*] as a model or entertainer outside the pornography industry), and their engagement in risky sex, affect the number of movies the actresses appear in. They find that:
...the later an actress commences her career, the fewer video shots she produces. Cup sizes and experiences as models or entertainers have positive effects on the number of video shots... having acted in risky sex videos could increase the production of an AV actress by more than 60%, which implies that, if the actress is willing to perform risky sex, her production may be significantly increased.
I'm unconvinced by their proxies for physical appearance, and a more interesting study would have addressed this by rating the actresses' appearance directly (and no, that wouldn't necessitate the researchers watching loads of porn). The finding in relation to risky sex might be the result of an upward-sloping supply curve (there may be greater demand for riskier sex, so riskier sex attracts higher pay, so the actress works more), but of course there would also be a compensating differential to overcome (since riskier sex is probably a negative characteristic of the work, the actress would want to be paid more to compensate her for engaging in it). It would be more interesting to know the results in relation to earnings, rather than production, but I suspect those data are particularly difficult to obtain.

*****

[*] It seems to me that the pornography might well be the side job, and the modelling or entertainment the main job.

Friday, 14 December 2018

The compensating differential for being an academic

It seems obvious that wages differ between occupations. There are many explanations for this, such as differences in productivity between workers in different occupations. However, there are also less obvious explanations. Or at least, explanations that are less obvious until you start to recognise them (after which, you start to see them everywhere). One such explanation is what economists refer to as compensating differentials.

Consider the same job in two different locations. If the job in the first location has attractive non-monetary characteristics (e.g. it is in an area that has high amenity value, where people like to live), then more people will be willing to do that job. This leads to a higher supply of labour for that job, which leads to lower equilibrium wages. In contrast, if the job in the second location has negative non-monetary characteristics (e.g. it is in an area with lower amenity value, where fewer people like to live), then fewer people will be willing to do that job. This leads to a lower supply of labour for that job, which leads to higher equilibrium wages. The difference in wages between the attractive job that lots of people want to do and the unattractive job that fewer people want to do is the compensating differential.

So, to summarise: jobs with highly positive characteristics will have lower wages than otherwise identical jobs with less positive characteristics. Now let's consider a specific example. Academics in many fields could probably earn substantially more in the private sector than they earn in academia. For some fields (like finance, economics, accounting, computer science, or engineering), the differential is much higher than for others (like education or anthropology). So why don't academics simply shift en masse into the private sector? Is that partially a result of compensating differentials?

An NBER Working Paper from earlier this year (ungated version here) by Daniel Hamermesh (Barnard College, and author of the excellent book Beauty Pays: Why Attractive People are More Successful, which I reviewed here) goes some way towards addressing the latter question. Using data from the U.S. Current Population Survey for 2012-2016, Hamermesh finds that:
...the adjusted pay difference between professors and other advanced-degree-holders shows a disadvantage of about 13 percent.
In other words, holding other factors (like demographic and economic characteristics) constant, academics earn 13 percent less than holders of a PhD (or EdD) degree in other jobs. At the median (the middle of the wage distribution), the difference is 16 percent.

Many people argue that academics like their jobs because they have more flexible use of their time (or, as I have heard it fallaciously claimed, that we have a sweet job because we only have to work when students are on campus). Using data from the American Time Use Survey 2003-2015, Hamermesh finds that:
...professors do much more of their work on weekends than do other advanced-degree-holders, and they do less during weekdays. They put in nearly 50 percent more worktime on weekends than other highly-educated workers (and 50 percent more than less-educated workers too). Professors spread their work effort more evenly over the week than other advanced-degree-holders...
...the spreading of professors’ work time across the week can account for nearly five percentage points of the wage differential, i.e., almost one-third of the earnings difference at the median...
So, more flexibility in work schedules accounts for part of the difference, but not all. Hamermesh also supports this with the results from a survey of 289 academics who specialise in the study of labour markets (I guess this was Hamermesh surveying people he knew would respond to a survey from him!). The survey shows that:
[f]reedom and novelty of research, and the satisfaction of working with young minds, are by far the most important attractions into academe. Only 41 percent of respondents listed time flexibility as a top-three attraction, slightly fewer than listed enjoying intellectual and social interactions with colleagues.
Clearly there are a number of characteristics that make academia an attractive job proposition for well-educated people who could get a higher-paying job in the private sector. Academics are willing to give up some income (and in some cases substantial income) for those characteristics - what they are giving up is largely a compensating differential.

[HT: Marginal Revolution, back in January]

Wednesday, 12 December 2018

Book review: How Not to Be Wrong

As signalled in a post a few days ago, I've been reading How Not to Be Wrong - The Power of Mathematical Thinking by Jordan Ellenberg, which I just finished. Writing an accessible book about mathematics for a general audience is a pretty challenging ask. Mostly, Ellenberg is up to the task. He takes a very broad view of what constitutes mathematics and mathematical thinking, but then again I can't complain, as I take a pretty broad view of what constitutes economics and economic thinking. The similarities don't end there. Ellenberg explains on the second page the importance of understanding mathematics:
You may not be aiming for a mathematically oriented career. That's fine - most people aren't. But you can still do math. You probably already are doing math, even if you don't call it that. Math is woven into the way we reason. And math makes you better at things. Knowing mathematics is like wearing a pair of X-ray specs that reveal hidden structures underneath the messy and chaotic surface of the world... With the tools of mathematics in hand, you can understand the world in a deeper, sounder, and more meaningful way.
I think I could replace every instance of 'math' or 'mathematics' in that paragraph with 'economics' and it would be equally applicable. The book has lots of interesting historical (and recent) anecdotes, as well as applications of mathematics to topics as broad as astronomy and social science (as noted in my post earlier in the week). I do feel that the book is mostly valuable for readers who have some reasonable background in mathematics. There are some excellent explanations, and I especially appreciated what is easily the clearest explanation of orthogonality I have ever read (on page 339 - probably a little too long to repeat here). Just after that is an explanation of the non-transitivity of correlation that provides an intuitive explanation for how instrumental variables regression works (although Ellenberg doesn't frame it that way at all, that was what I took away from it).

There are also some genuinely funny parts of the book, such as this:
The Pythagoreans, you have to remember, were extremely weird. Their philosophy was a chunky stew of things we'd now call mathematics, things we'd now call religion, and things we'd now call mental illness.
However, there are some parts of the book that I don't think quite work. For instance, there is a whole section on geometry in the middle of the book that I found pretty heavy going. Despite that, if you remember a little bit of mathematics from school, there is a lot of value in this book. It doesn't quite live up to the promise in the title, of teaching the reader how not to be wrong, but you probably wouldn't be wrong to read it.

Monday, 10 December 2018

When the biggest changes in alcohol legislation aren't implemented, you can't expect much to change

Back in October, the New Zealand Herald pointed to a new study published in the New Zealand Medical Journal:
New laws introduced to curb alcohol harm have failed to make a dent on ED admissions, new research has found.
The study, released today, showed that around one in 14 ED attendances presented immediately after alcohol consumption or as a short-term effect of drinking and that rate had remained the same over a four-year period.
Here's the relevant study (sorry I don't see an ungated version, but there is a presentation on the key results available here), by Kate Ford (University of Otago) and co-authors. They looked at emergency department admissions at Christchurch Hospital over three-week periods in November/December 2013 and in November/December 2017, and found that:
...[t]he proportion of ED attendances that occurred immediately after alcohol consumption or as a direct short-term result of alcohol did not change significantly from 2013 to 2017, and was about 1 in 14 ED attendances overall.
The key reason for doing this research was that the bulk of the changes resulting from the Sale and Supply of Alcohol Act 2012 had been implemented between the two data collection periods. The authors note that:
...[a] key part of the Act was a provision to allow territorial authorities to develop their own Local Alcohol Policies (LAPs). The Act was implemented in stages from December 2012 onwards and subsequently, many local authorities attempted to introduce LAPs in their jurisdictions.
However, here's the kicker:
In many cases these efforts met legal obstacles, particularly from the owners of supermarket chains and liquor stores... For example, a provisional LAP in Christchurch was developed in 2013 but by late 2017 it still had not been introduced. This provisional LAP was finally put on hold in 2018... Similar problems have been encountered in other regions...
If you are trying to test whether local alcohol policies have had any effect on alcohol-related harm, it's pretty difficult to do so if you're looking at a place where a local alcohol policy hasn't been implemented. Quite aside from the fact that there is no control group in this evaluation, and that the impact of the earthquakes makes Christchurch a special case over the time period in question, it would have been better to look at ED admissions in an area where a local alcohol policy has actually been implemented (although few local authorities have been successful in implementing one). To be fair though, the authors are well aware of these issues, and make note of the latter two in the paper.

However, coming back to the point at hand, whether legislation is implemented as intended is a big issue in evaluating the impact of the legislation. A 2010 article by David Humphreys and Manuel Eisner (both Cambridge), published in the journal Criminology and Public Policy (sorry I don't see an ungated version), makes the case that:
Policy interventions, such as increased sanctions for drunk-driving offenses... often are fixed in the nature in which they are applied and in the coverage of their application... To use epidemiological terminology, these interventions are equivalent to the entire population receiving the same treatment in the same dose...
However, in other areas of prevention research... the onset of a policy does not necessarily equate to the effective implementation of evidence-based prevention initiatives...
This variation underlines a critical issue in the evaluation of effects of the [U.K.'s Licensing Act 2003]...
In other words, if you want to know the effect of legislative change on some outcome (e.g. the effect of alcohol licensing law changes on emergency department visits), you need to take account of whether the legislation was fully implemented in all places.


Sunday, 9 December 2018

Slime molds and the independence of irrelevant alternatives

Many people believe that rational decision making is the sole preserve of human beings. Others recognise that isn't the case, as many studies of animals as varied as dolphins (e.g. see here), monkeys (e.g. see here) or crows show. How far does that extend, though?

I've been reading How Not to Be Wrong - The Power of Mathematical Thinking by Jordan Ellenberg (book review to come in a few days). Ellenberg pointed me to this 2011 article (open access) by Tanya Latty and Madeleine Beekman (both University of Sydney), published in the Proceedings of the Royal Society B: Biological Sciences. You're probably thinking that's a weird source for me to be referring to on an economics blog, but Ellenberg explains:
You wouldn't think there'd be much to say about the psychology of the plasmodial slime mold, which has no brain or anything that could be called a nervous system, let alone feelings or thoughts. But a slime mold, like every living creature, makes decisions. And the interesting thing about the slime mold is that it makes pretty good decisions. In the slime mold world, these decisions more or less come down to "move toward things I like" (oats) and "move away from things I don't like" (bright light)...
A tough choice for a slime mold looks something like this: On one side of the petri dish is three grams of oats. On the other side is five grams of oats, but with an ultraviolet light trained on it. You put a slime mold in the center of the dish. What does it do?
Under those conditions, they found, the slime mold chooses each option about half the time; the extra food just about balances out the unpleasantness of the UV light.
All good so far. But this isn't a post about the rationality of slime mold decision-making. It's actually about the theory of public choice. And specifically, about the independence of irrelevant alternatives. Say that you give a person the choice between chocolate and vanilla ice cream, and they choose chocolate. Before you hand over the ice cream though, you realise you also have some strawberry, so you offer that as an option too. The person thinks for a moment, and says that in that case they would like vanilla instead. They have violated the independence of irrelevant alternatives. Whether strawberry is available or not should not affect the person's preference between chocolate and vanilla - strawberry is irrelevant to that choice. And yet, in the example above, it made a difference.

Ok, back to slime molds. Ellenberg writes:
But then something strange happened. The experimenters tried putting the slime mold in a petri dish with three options: the three grams of oats in the dark (3-dark), the five grams of oats in the light (5-light), and a single gram of oats in the dark (1-dark). You might predict that the slime mold would almost never go for 1-dark; the 3-dark pile has more oats in it and is just as dark, so it's clearly superior. And indeed, the slime mold just about never picks 1-dark.
You might also guess that, since the slime mold found 3-dark and 5-light equally attractive before, it would continue to do so in the new context. In the economist's terms, the presence of the new option shouldn't change the fact that 3-dark and 5-light have equal utility. But no: when 1-dark is available, the slime mold actually changes its preferences, choosing 3-dark more than three times as often as it does 5-light!
What's going on here? The slime mold is essentially making collective decisions (which is why I said this was a post about the theory of public choice). And with collective decisions, violations of the independence of irrelevant alternatives can come into play. As Ellenberg notes, in the 2000 U.S. presidential election, the availability of Ralph Nader as a candidate has been credited with George W. Bush's victory over Al Gore. Nader took just enough votes from Gore supporters (who would probably have voted for Gore if Nader had not been on the ballot) to ensure that Bush won the critical state of Florida, and ultimately, the election. Something similar is going on with the slime molds, as Ellenberg explains:
...the slime mold likes the small, unlit pile of oats about as much as it likes the big, brightly lit one. But if you introduce a really small unlit pile of oats, the small dark pile looks better by comparison; so much so that the slime mold decides to choose it over the big bright pile almost all the time.
This phenomenon is called the "asymmetric domination effect," and slime molds are not the only creatures subject to it. Biologists have found jays, honeybees, and hummingbirds acting in the same seemingly irrational way.
Except, it's not irrational. In the case of the slime molds at least, it's a straightforward consequence of collective decision-making.
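
To see how a dominated option can flip a comparative choice, consider a toy model (my own illustration, not anything from the Latty and Beekman paper): each attribute of an option is valued relative to the range of alternatives currently on offer, and the option with the highest total value wins. The numbers below mirror Ellenberg's oat-pile example:

    # Toy model of the asymmetric domination effect (my own illustration,
    # not Latty and Beekman's analysis): each attribute is valued relative
    # to the range of options currently on offer, then the values are summed.
    def choose(options):
        # options: dict of name -> (grams_of_oats, darkness); darkness 1 = dark, 0 = lit
        scores = {name: 0.0 for name in options}
        for i in (0, 1):  # attribute 0 = oats, attribute 1 = darkness
            vals = [attrs[i] for attrs in options.values()]
            lo, hi = min(vals), max(vals)
            for name, attrs in options.items():
                # Normalise the attribute to [0, 1] within the current choice set
                scores[name] += (attrs[i] - lo) / (hi - lo) if hi > lo else 0.5
        return scores

    print(choose({"3-dark": (3, 1), "5-light": (5, 0)}))
    # {'3-dark': 1.0, '5-light': 1.0} - a tie, so each is chosen about half the time
    print(choose({"3-dark": (3, 1), "5-light": (5, 0), "1-dark": (1, 1)}))
    # {'3-dark': 1.5, '5-light': 1.0, '1-dark': 1.0} - now 3-dark wins

With only two piles the model reproduces the 50:50 split. Adding the dominated 1-dark pile makes 3-dark look better on the oats dimension without helping 5-light at all, so 3-dark pulls ahead. That is the same reversal the slime molds display, and the same logic as the Nader example: the new entrant changes the relative standing of the existing options without ever being chosen itself.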

Friday, 7 December 2018

Arnold Kling on public choice theory and lobbying

In my ECONS102 class, we discuss the lobbying activities of firms with market power. The motivation for that discussion is that firms with market power make a large profit (how large the profit is depends in part on how much market power they have), so they have an incentive to use some of the profits (their economic rent) to maintain their market power. They can do this by lobbying government to avoid excess regulation. However, that simple exposition doesn't explain the full range of lobbying activities that firms engage in, and it doesn't explain why consumers don't engage in lobbying (e.g. for lower prices) to the same extent that producers do.

On Medium last week, Arnold Kling wrote an interesting article on why costs increase faster in some industries than in others. On the above point, it was this bit that caught my attention:
In reality, you do not produce everything in the economy. You are much more specialized in production than in consumption. This makes you much more motivated to affect public policy in the sector where you produce than in the sector where you consume.
In theory, government policy is supposed to promote the general welfare. But as a producer, your goal for government policy is to increase demand and restrict supply in your industry. If you are in the field of education, you want to see more government spending devoted to education, tuition grants and subsidies for student loans, in order to increase demand. You want to make it difficult to launch new schools and colleges, in order to restrict supply. If you run a hospital, you want the government to subsidize demand by providing and/or mandating health insurance coverage. But you want to restrict supply by, for example, requiring prospective hospitals to obtain a “certificate of need.” If you are a yoga therapist, you want the government to mandate coverage for yoga therapy, but only if it is provided by someone with the proper license.
Think about an average consumer (and worker), considering how much effort to put into lobbying the government for a policy change. They might be quite motivated to lobby the government in relation to their employment or wages (e.g. for wage subsidies, or the introduction of occupational licensing), because there they are a producer, and can capture a lot of the gains from a policy change. In that case, the policy change will affect them a lot, and the benefit may offset the cost of the effort of lobbying. However, they will be much less motivated to lobby the government in relation to consumption goods. In the latter case, the benefit to them is much lower than for lobbying in relation to their employment or wages, while the cost of lobbying is likely to be about the same.

This explanation also relates to the idea of rational ignorance. Consumers individually face only a small cost from a policy (like a subsidy for sugar farmers) that provides a large benefit to producers. A subsidy worth millions of dollars to a handful of farmers might cost each individual consumer only a few dollars a year. So, the producers have a strong incentive to lobby against losing the policy (or in favour of gaining it), but each consumer has only a small incentive to lobby in favour of eliminating the policy (or against it being implemented in the first place).

There's a lot more of interest in Kling's article. Market power and lobbying is just one of many reasons why costs increase faster in some industries or sectors than others.

Tuesday, 4 December 2018

Summer heat and student learning

Many of us would appreciate the idea that it is difficult to concentrate on hot days, even without the distraction that we could be hanging out at the beach. So, that raises legitimate questions. Does students' learning suffer when it is hotter? Should students avoid summer school, if they are trying to maximise their grades? These questions aren't of purely academic interest. Climate change means that we will likely have increasingly hotter summers over time, which makes these questions increasingly relevant for the future.

A 2017 article by Hyunkuk Cho (Yeungnam University School of Economics and Finance in South Korea), published in the Journal of Environmental Economics and Management (sorry I don't see an ungated version), provides some evidence. Cho used data from 1.3 million Korean students, from 1729 high schools in 164 cities, and looked at the relationship between the number of hot summer days and the students' scores in the Korean college entrance exam, which is held in November (at the end of the northern autumn). Cho found that:
...an additional day with a maximum daily temperature equal to or greater than 34°C during the summer, relative to a day with a maximum daily temperature in the 28–30°C range, reduced the math and English test scores by 0.0042 and 0.0064 standard deviations. No significant effects were found on the reading test scores. When an additional day with a maximum daily temperature equal to or greater than 34°C reduces the math and English test scores by 0.0042 and 0.0064 standard deviations, ten such days reduce the test scores by 0.042 and 0.064 standard deviations, respectively. The effect size is equivalent to increasing class size by 2–3 students during grades 4–6...
If you're in a temperate climate like New Zealand, you might read that and think, 'we don't have that many days greater than 34°C, so probably there is no effect here'. But, Cho also found that:
...hot summers had greater effects on the test scores of students who lived in relatively cool cities. If cities with an average maximum daily temperature below 28.5°C have one more day with a maximum daily temperature equal to or greater than 34°C, relative to a day with a maximum daily temperature in the 28–30°C range, the reading, math, and English test scores decreased by 0.0073, 0.0124, and 0.0105 standard deviations, respectively...
Interestingly, the analysis was based on the summer temperatures, while the test was taken in the autumn. Cho found no effect of the temperature on the day of the test itself. The results aren't causal, so some more confirmatory work is still needed.

However, if we extrapolate from the main results a little, students exposed to temperatures higher than the norm for their area may well be made worse off in terms of their learning. If you want to maximise your grades, the beach should be looking like an even more attractive option now.

Monday, 3 December 2018

Public health vs. competition in Australian alcohol regulation

In New Zealand, the Sale and Supply of Alcohol Act has a harm minimisation objective. Specifically, Section 4(1) of the Act specifies that:
The object of this Act is that - 
(a) the sale, supply, and consumption of alcohol should be undertaken safely and responsibly; and
(b) the harm caused by the excessive or inappropriate consumption of alcohol should be minimised.
Section 4(1)(b) clearly puts a public health objective within the legislation. In contrast, the corresponding Australian legislation has no such public health objective. So, I was interested to read this 2017 article (open access) by Janani Muhunthan (University of Sydney) and co-authors, published in the Australian and New Zealand Journal of Public Health. In the article, Muhunthan et al. look at cases in the Australian courts between 2010 and 2015 involving appeals of liquor licensing or planning decisions related to alcohol outlets. In total, they looked at 44 such cases, and found that:
Most decisions (n=34, 77%) resulted in an outcome favourable to the industry actor in the case (n=24 development applications and n=10 liquor licensing decisions). The majority of decisions involving liquor outlets were brought by liquor establishments owned by Australia’s two major grocery chains (n=11/19) and had a success rate of greater than 70% (n=8/11) in disputes. Governments and their agencies were successful in having appeals dismissed in less than a quarter (n=10) of the cases studied.
In the discussion, they note that:
Competition principles underpinned by legislation were highly influential in decisions and it is the presence of such legislation that enabled pro-competition decisions to be the default outcome. A consequence of the lack of explicit legislative support for preventive health arguments is that public health impact is relegated in practice below other considerations including market freedoms, amenity and the compatibility of industry’s proposal with existing planning controls.
The goal of increasing competition in Australia essentially overrides the public health considerations. One of the key problems they identified is that public health evidence was general, rather than specifically related to a given alcohol outlet, whereas industry data supporting an appeal was much more specific. They conclude that:
[t]he ability of government and community groups to better execute a public health case in the area of alcohol regulation can thus be addressed from the top down through the inclusion of explicit public health objectives in existing planning and liquor licensing controls.
As noted at the start of this post, New Zealand legislation already includes the public health objective. However, in spite of the explicit public health objective of the legislation, the problem appears to me to be essentially the same - it is difficult for the public health case to be made when the data do not speak specifically to a given application for a liquor licence. Now that the Sale and Supply of Alcohol Act has been in operation for a few years, it would be really interesting to do some comparative work on a similar basis to the Muhunthan et al. article, but based on New Zealand cases.

Saturday, 1 December 2018

Meta loss aversion

Back in August, this article in The Conversation pointed me to this article by David Gal (University of Illinois at Chicago) and Derek Rucker (Northwestern), published in the Journal of Consumer Psychology (sorry I don't see an ungated version anywhere). Loss aversion is the idea that people value losses much more than equivalent gains (in other words, we like to avoid losses much more than we like to capture equivalent gains). It is a central and defining idea in behavioural economics. In their article, Gal and Rucker present evidence that loss aversion is not real:
In sum, an evaluation of the literature suggests little evidence to support loss aversion as a general principle. This appears true regardless of whether one represents loss aversion in the strong or weak forms presented here. That is, a strong form suggests that, for loss aversion to be taken as a general principle, one should observe losses to generally outweigh gains and for gains to never outweigh losses. The weak form, as we have represented it, might simply be that, on balance, it is more common for losses to loom larger than gains than vice versa. This is not to say that losses never loom larger than gains. Yes, contexts exist for which losses might have more psychological impact than gains. But, so do contexts and stimuli exist where gains have more impact than losses, and where losses and gains have similar impact.
From what I can see, Gal and Rucker's argument rests on a fairly selective review of the literature, and in some cases I'm not convinced. A key aspect of loss aversion is that people make decisions relative to a reference point, and it is the comparison with that reference point that determines whether they are facing a loss or a gain. Gal and Rucker even recognise this:
For example, one individual who obtained $5 might view the $5 as a gain, whereas another individual that expected to obtain $10, but only obtained $5, might view the $5 obtained as a loss of $5 relative to his expectation... 
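For reference, the standard formalisation (this is Tversky and Kahneman's prospect theory value function, not anything from Gal and Rucker's paper) builds the reference point r in directly:

    v(x) = \begin{cases} (x - r)^{\alpha} & \text{if } x \geq r \\ -\lambda (r - x)^{\beta} & \text{if } x < r \end{cases}

where \lambda > 1 captures loss aversion (Tversky and Kahneman's 1992 estimates were \alpha = \beta = 0.88 and \lambda = 2.25). The individual who expected $10 has r = $10, so the $5 obtained is coded as a loss and weighted by \lambda; the individual who expected nothing has r = $0, and codes the same $5 as a gain.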
It isn't clear to me that the following evidence actually takes account of the reference point:
The retention paradigm involves a comparison of participants’ [willingness-to-pay] to obtain an object (WTP-Obtain) to a condition where participants are asked their maximum willingness-to-pay to retain an object they own (WTP-Retain). The retention paradigm and its core feature—the WTP-retain condition— proposes to offer a less confounded test of the relative impact of losses versus gains...
...in the discrete variant of the retention paradigm, both the decision to retain one’s endowed option and the decision to exchange the endowed option for an alternative are framed as action alternatives. In particular, participants in one condition are informed that they own one good and are asked which of two options they would prefer, either (a) receiving $0 [i.e., the retention option], or (b) exchanging their endowed good for an alternative good. Participants in a second condition are offered the same choice except the endowed good and the alternative good are swapped.
It seems to me that in both the endowed option (where the research participant is paying to retain something they have been given) and the alternative option (where the research participant is paying to obtain that same item), the reference point is not having the item. So, it isn't a surprise when Gal and Rucker report that, in some of their earlier research:
Gal and Rucker (2017a) find across multiple experiments with a wide range of objects (e.g., mugs and notebooks; mobile phones; high-speed Internet, and train services) that WTP-Retain did not typically exceed WTP-Obtain. In fact, in most cases, little difference between WTP-Retain and WTP-Obtain was observed, and for mundane goods, WTP-Obtain exceeded WTP-Retain more often than not.
An interesting aspect of Gal and Rucker's paper is that they try to explain why, in the face of the competing evidence they have accumulated, loss aversion is so widely accepted across many disciplines (including economics, psychology, law, marketing, finance, etc.). They adopt a Kuhnian argument:
Kuhn (1962) argues that as a paradigm becomes entrenched, it increasingly resists change. When an anomaly is ultimately identified that cannot easily be ignored, scientists will try to tweak their models rather than upend the paradigm. They “will devise numerous articulations and ad hoc modifications of their theory in order to eliminate any apparent conflict” (Kuhn, 1970, p. 78).
I want to suggest an alternative (similar to an earlier post I wrote about ideology). Perhaps academics are willing to retain their belief in loss aversion because, if they gave up on it, they would keenly feel its loss? Are academics so loss averse that they are unwilling to give up on loss aversion? If that's the case, is that evidence in favour of loss aversion? It's all getting rather circular, isn't it?