Thursday, 31 October 2019

You won't find meth labs in places where you're not looking for them

I just read this 2018 article by Ashley Howell, David Newcombe, and Daniel Exeter (all University of Auckland), published in the journal Policing (gated, but there is a summary of some of it here). The authors report on the locations of clandestine methamphetamine labs in New Zealand, based on data from police seizures between 2004 and 2009.

It's an interesting dataset and paper, and they report that:
In the unadjusted spatial scan, there were five locations in the study area with significantly high clandestine methamphetamine laboratory rates (Fig. 2). The ‘most likely’ cluster, centred in Helensville (north-west of Auckland), had a RR of 4.14 with 59 observed CLRT incidents compared to 15.1 expected incidents. A similarly high cluster (RR = 4.09, P = 0.000) was found in the Far North TA.
In other words, there were four times as many lab seizures in Helensville and the Far North as would be expected if lab seizures were randomly distributed everywhere. The other locations were Hamilton, West Auckland, and Central Auckland, and there was a sixth cluster centred on Papakura in some of their analyses. This bit also caught my eye (emphasis mine):
In addition, 26 laboratories (2%) were found at storage units, 21 (2%) discovered in motel or hotel rooms, and another 27 were abandoned in public areas, including cemeteries, parks, roadsides to school yard dumpsters and even the parking lot of a police station.
I wonder how much effort it took for police to find that last one? The paper gives some insights into where the most meth labs have been seized by police. However, we should be cautious about over-interpreting the results, because by definition, you can only seize labs in locations where you are looking for them. So, if police are more diligent or exert more effort in searching for meth labs in Hamilton or the Far North, we would expect to see more lab seizures there, even if there were actually fewer labs than in other locations.
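
For the curious, here's a minimal sketch (mine, not the authors') of the relative risk calculation behind a spatial scan cluster like the Helensville one: the rate of incidents inside the candidate cluster relative to the rate outside it. The national total below is a made-up placeholder, which is why the result only roughly matches the reported RR of 4.14.

```python
# A minimal sketch of a spatial scan cluster's relative risk (RR), using the
# Helensville figures quoted above. The national total is a made-up placeholder.

def cluster_relative_risk(obs_in, exp_in, obs_total, exp_total):
    """RR = incident rate inside the candidate cluster / rate outside it."""
    obs_out = obs_total - obs_in
    exp_out = exp_total - exp_in
    return (obs_in / exp_in) / (obs_out / exp_out)

obs_in, exp_in = 59, 15.1   # observed and expected incidents in the cluster (from the paper)
obs_total = 1300            # hypothetical national total of seized labs
exp_total = obs_total       # expected counts are scaled to sum to the observed total

print(round(cluster_relative_risk(obs_in, exp_in, obs_total, exp_total), 2))
```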

To be fair, the authors are aware of this detection problem, and in the Discussion section they note that:
Reports of clandestine laboratory seizures may also be prone to subjectivity. There is no way to be certain that CLRT incident density is not a symptom of a greater police presence or different policing priorities.
However, that doesn't stop them from noting in the abstract that:
Identifying territorial authorities with more clandestine laboratories than expected may facilitate community policing and public health interventions.
It is true that identifying areas with more meth labs than expected would give information about resource allocation. The problem is that this paper doesn't tell us where meth labs are, it only tells us where police have found them.

[HT: The inimitable Bill Cochrane]

Wednesday, 30 October 2019

Book review: Economic Fables

One of the things I tell my first-year economics students, in the very first week of class, is that economics is about telling stories. And when there is a diagram involved, then it is an illustrated story. That idea comes through strongly in Ariel Rubinstein's 2012 book, Economic Fables. Here's what Rubinstein has to say in the Introduction:
Economic theory formulates thoughts via what we call "models". The word model sounds more scientific than the word fable or tale, but I think we are talking about the same thing.
The author of a tale seeks to impart a lesson about life to his readers. He does this by creating a story that hovers between fantasy and reality. It is possible to dismiss any tale on the grounds that it is unrealistic, or that it is too simplistic. But this is also its advantage. The fact that it hovers between fantasy and reality means that it can be free from irrelevant details and unnecessary diversions. This freedom can enable us to broaden our outlook, make us aware of a repressed emotion and help us learn a lesson from the story. We will take the tale's message with us when we return from the world of fantasy to the real world, and apply it judiciously when we encounter situations similar to those portrayed in the tale.
Rubinstein would have us treat economic models in this way, which I think is a fair goal to have. The book itself is partly a memoir, partly an exposition of some economic fables that are clearly favourites of Rubinstein's, and partly a discussion of some interesting interdisciplinary research that Rubinstein has been involved in. At times, the fables become more mathematical than is probably necessary, making them into more abstract models. The real highlights of the book are Rubinstein's linking of the fables to his own experiences, and his discussion of interdisciplinary research in, surprisingly, linguistics.

Rubinstein starts this latter part of the book by describing interdisciplinary work, and especially the 'colonisation' of other disciplines by economists and the tools of economics. I particularly appreciated this bit:
But, in general, it seems to me that the spread of economics to other areas derives from the view expressed by the economist Steven Levitt: "Economics is a science with excellent tools for gaining answers, but a serious shortage of interesting questions."
The interdisciplinary research (in linguistics) that Rubinstein describes relates to persuasion - how one person persuades another as to the truth of something. It makes for interesting reading, but is difficult to excerpt here. Let's just say that it involves a lot of applied game theory, but thankfully is not too math-centric.

The book has lots of interesting asides, and I made a number of notes that will come in handy in both of my first-year papers next year. One bit got me thinking about the difference between income taxes and inheritance taxes (emphasis mine):
Nonetheless, and despite the fact that inheritance tax is imposed in nearly all of the countries that we envy, there is enormous opposition to instituting this tax in Israel. The tax is perceived by most people, including those who are not affluent, as crueler than income tax. This is because income tax takes something that is not yet owned, while inheritance tax takes a bite out of something that has already found a home among a person's assets.
I had never thought about inheritance taxes in this way, but it makes sense. The opposition to an inheritance tax is an endowment effect - we are much more willing to give up something we don't yet own (some of our income, as income tax), than to give up something we already have (some of our wealth). Unlike income tax, an inheritance tax is a loss to us, and we are loss averse - we are very motivated to avoid losses, and that would be expressed in an unwillingness to have an inheritance tax (and, like Israel, New Zealand currently has no inheritance tax).

This is an interesting book, and well worth reading. I can see why Diane Coyle recommended it as "a great book for economics students", and I would share that recommendation.

Tuesday, 29 October 2019

Hamilton won't be our second largest city any time soon

I was interviewed for the Waikato Times last week, and the story appeared on the Stuff website yesterday:
Hamilton city might be growing, but not as explosively as some people might dream. 
Commentators have recently said Waikato is 'ready to go pop' with development, citing the hundreds of millions of dollars spent on community facilities and transport infrastructure. 
Labour list MP Jamie Strange said he believes Hamilton's growth is so significant it could become New Zealand's second biggest city in the next 30 years. 
But a Waikato University professor said it would take drastic change for Hamilton to surpass Wellington and Christchurch in three decades...
"If you are talking about is Hamilton going to be a city of 200,000 - sure it's going to be that.
"But Wellington is certainly going to be big and bigger and Christchurch is also going to be big and bigger."
Read the full story for more from me. The overall point is that the population of Hamilton is growing. But the populations of Wellington and Christchurch are growing as well, and they have a large head start. How long will it take Hamilton to catch up? I gave the reporter (Ellen O'Dwyer) some quick back-of-the-envelope calculations.

Based on Census Usually Resident Population counts (available here), Hamilton City grew from 129,588 in 2006 to 141,612 in 2013 and 160,911 in 2018. Wellington City grew from 179,466 in 2006 to 190,956 in 2013 and 202,737 in 2018. Christchurch City declined from 348,456 in 2006 to 341,469 in 2013, then grew to 369,006 in 2018. Hamilton grew faster than Wellington and Christchurch (in both absolute and relative terms) between each of the last two Censuses.

If the absolute rates of growth of both Hamilton and Wellington (from the previous paragraph) between 2013 and 2018 continued, Hamilton would catch up to Wellington in the early 2040s. Based on the absolute rates of growth between 2006 and 2018, this wouldn't happen until the 2080s. As for Christchurch, forget it - the comparable catch-up time is measured in centuries.
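
For the curious, here's a rough Python sketch of that back-of-the-envelope extrapolation, using the census figures above and assuming each city keeps adding people at a constant absolute rate (a strong assumption, which is the whole point of calling it back-of-the-envelope).

```python
# A rough sketch of the back-of-the-envelope extrapolation described above,
# assuming each city keeps adding the same number of people every year.

def catch_up_year(pop_small, pop_big, growth_small, growth_big, base_year=2018):
    """Year the smaller city overtakes the larger one, under constant absolute growth."""
    closing_rate = growth_small - growth_big
    if closing_rate <= 0:
        return None  # never catches up under these assumptions
    return base_year + (pop_big - pop_small) / closing_rate

hamilton_2013, hamilton_2018 = 141_612, 160_911
wellington_2013, wellington_2018 = 190_956, 202_737

# annual absolute growth over the 2013-2018 inter-censal period
ham_growth = (hamilton_2018 - hamilton_2013) / 5
wel_growth = (wellington_2018 - wellington_2013) / 5

print(catch_up_year(hamilton_2018, wellington_2018, ham_growth, wel_growth))
# somewhere in the 2040s under this simple linear extrapolation
```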

None of those calculations take into account the fact that Wellington City is only one part of a larger urban conglomeration that includes Porirua City, Lower Hutt City, and Upper Hutt City. Once you factor those areas in as well, the Wellington urban area is far larger than Hamilton and it would take something spectacular for the Hamilton urban area (even if you include fast-growing Te Awamutu, Cambridge, and Ngaruawahia) to catch up.

Sorry Jamie. Hamilton isn't going to catch Wellington (and definitely not Christchurch) any time soon.

Monday, 28 October 2019

The latest NZ research on social media and mental health is nothing special

If there are two lessons that I wish journalists could learn, they are:

  1. That the latest published research is not necessarily the best research, and just because it is newer, it doesn't overturn higher quality older research; and
  2. That correlation is not the same as causation.
This article from Jamie Morton of the New Zealand Herald last week fails on both counts:

We blame friends' posts about weddings, babies and holidays for driving "digital depression" - but is social media really that bad for mental health?
A new study that dug deep into how platforms like Facebook, Twitter and Instagram influence our psychological wellbeing suggests not.
In fact, the weak link the Kiwi researchers found was comparable to that of playing computer games, watching TV or just minding kids.
The study is by Samantha Stronge (University of Auckland) and co-authors, and was published in the journal Cyberpsychology, Behavior, and Social Networking (sorry, I don't see an ungated version, but it appears to be open access). The authors used data from one wave of the New Zealand Attitudes and Values Survey, which is a large panel study that is representative of the New Zealand population. Using a sample of over 18,000 survey participants, they found that:
After adjusting for the effects of relevant demographic variables, hours of social media use correlated positively with psychological distress... every extra hour spent using social media in a given week was associated with an extra .005 units on the psychological distress scale. Notably, social media use was the second strongest predictor of psychological distress out of the other habitual activities, at approximately half the effect size of sleeping...
The coefficient is tiny. However, there are a couple of problems with this study. The first is pretty non-technical - this study is pure correlation. It tells you nothing about causation at all. That might not be a problem if the true effect really were zero, but we can't know that from this analysis.

However, the second problem with this study is that the authors simply dump a whole bunch of variables into their regression, without considering that many of these variables are correlated with each other. That creates a high risk of multicollinearity, which makes the individual coefficient estimates imprecise and unstable. In other words, a tiny estimated effect may be partly an artefact of the way they have analysed their data.
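
To see what correlated predictors actually do to individual coefficient estimates, here's a small simulation (mine, not from the paper): the estimates stay centred on the true value, but they bounce around far more, so tiny or even wrong-signed estimates become much more likely.

```python
# A minimal simulation (not from the paper) of what happens to individual
# coefficient estimates when two predictors are highly correlated.

import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 500
true_beta = 0.2  # the same true effect for both predictors

for corr in (0.0, 0.95):
    estimates = []
    for _ in range(reps):
        x1 = rng.normal(size=n)
        x2 = corr * x1 + np.sqrt(1 - corr**2) * rng.normal(size=n)
        y = true_beta * x1 + true_beta * x2 + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x1, x2])
        beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        estimates.append(beta_hat[1])  # coefficient on x1
    estimates = np.asarray(estimates)
    print(f"corr={corr}: mean={estimates.mean():.3f}, sd={estimates.std():.3f}")

# The mean estimate stays near 0.2 in both cases (no bias), but the spread of
# estimates is several times larger when the predictors are highly correlated.
```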

This research paper basically adds nothing to our understanding of whether social media is good or bad or otherwise for mental health. There are much higher quality studies available already (such as this one and this one).

This seems to me to be a real problem with the NZAVS research. They have a very large panel of data, and a huge number of measures covering lots of different domains. However, their approach to using this data appears to be 'throw everything at the wall and see what sticks'. That approach does not lead to high-quality research, and even if it does lead to a large number of publications, they are mostly of dubious value, like this one.

Journalists could do with a bit more understanding of what constitutes a genuine contribution to the research literature.


Saturday, 26 October 2019

A sobering report on the culture in the economics profession

Long-time readers of this blog will know that I have written many times on the gender gap in economics (see the bottom of this post for a list of links). However, I haven't written anything since this post in February. That's not because there wasn't anything to say - the news was all bad up to that point. The latest news doesn't get any better, with the report from the Committee on Equity, Diversity and Professional Conduct of the American Economic Association on the professional climate in economics released last month. It was picked up by the New York Times:
This month the American Economic Association published a survey finding that black women, compared to all other groups, had to take the most measures to avoid possible harassment, discrimination and unfair or disrespectful treatment. Sixty-two percent of black women reported experiencing racial or gender discrimination or both, compared to 50 percent of white women, 44 percent of Asian women and 58 percent of Latinas. Twenty-nine percent and 38 percent of black women reported experiencing discrimination in promotion and pay, respectively, compared to 26 percent and 36 percent for whites, 28 percent and 36 percent for Asians and 32 percent and 40 percent for Latinas.
“I would not recommend my own (black) child to go into this field,” said one of the black female respondents. “It was a mistake for me to choose this field. Had I known that it would be so toxic, I would not have.”
The report is available here, and it makes for sobering reading. It was based on a survey sent to all current and recent (within nine years) members of the American Economic Association, and received over 10,000 responses (a response rate of 22.9%). It collected responses to a mixture of closed-ended and open-ended questions about the general climate in economics, experiences of discrimination, avoidance behaviour, exclusion and harassment. Here are some highlights (actually, they're more like lowlights):
Women very clearly have a different perception of the climate in the economics profession... It is particularly notable that, when asked about satisfaction with the overall climate within the field of economics, men were twice as likely as women to agree or strongly agree with the statement “I am satisfied with the overall climate within the field of economics” (40% of men vs. 20% of women). This large gender disparity is consistent across a variety of related statements about the field broadly: women are much less likely to feel valued within the field, much less likely to feel included socially, and much more likely to have experienced discrimination in the field of economics...
Female respondents are also much more likely to report having experienced discrimination or unfair treatment as students with regard to access to research assistantships, access to advisors, access to quality advising, and on the job market...
When we examine experiences of discrimination in academia... we see that, again, women face significantly more discrimination or unfair treatment than men along all dimensions (again, this gap is larger than the gap in discrimination faced by non-whites relative to whites). Most notably, women are much more likely to report personal experiences of discrimination or unfair treatment in promotion decisions and compensation, 27% and 37% respectively, compared to only 11% and 12% for men. Women are also significantly more likely to report personal experiences of discrimination or unfair treatment in teaching assignments and service obligations, course evaluations, publishing decisions and funding decisions...
Personal experiences of discrimination are also quite common among women working outside of academia...
...close to a quarter of female respondents report not having applied for or taken a particular employment position to avoid unfair, discriminatory or disrespectful treatment, compared to 12% of male respondents...
And that's just from the bits relating to women. This bit also caught my attention, as it is both negative and affects everyone:
Experiences of exclusion are strikingly common in economics, both among male and female respondents... For example, 65% of female respondents report feeling of social exclusion at meetings or events in the field; 63% report having felt disrespected by economist colleagues; 69% report feeling that their work was not taken as seriously as that of their economist colleagues; 59% report feeling that the subject or methodology of their research was not taken seriously. The corresponding shares among men are smaller but still strikingly large: 40%, 38%, 43% and 40%, respectively.
It's also not entirely bad (depending on how you look at it):
More than 80% of female respondents and 60% of male respondents agree that economics would be a more vibrant discipline if it were more inclusive...
As the New York Times article suggests, there are also a lot of intersectional issues, with minority ethnic groups and people of non-heterosexual orientations or non-binary genders facing similar and overlapping issues of discrimination and exclusion.

The problems seem larger than in other disciplines, and the report provides some comparisons. However, it is worth noting that the issues have been made very visible of late, and that might affect people's responses to questions such as those in this survey. It was interesting to note, for example, that both self-identified liberals and self-identified conservatives reported discrimination on the basis of their political beliefs.

Notwithstanding the issues with the survey though, it does highlight that there is a problem (as if we didn't know), and provides a baseline against which we can compare future progress as we try to improve the culture within the profession. Read the report though, and you'll get a sense of just how much work needs to be done.


Friday, 25 October 2019

This isn't what the 'year of delivery' was supposed to be delivering

It had to happen. When the government is willing to give Amazon (market capitalisation ~US$870 billion) a subsidy of up to $300 million to make a Lord of the Rings TV series, that's a clear signal to other big corporates that there is free money on offer, if they can just find a way to put the squeeze on the government. It doesn't even appear to take much in the way of lobbying. If a big corporate threatens some lost jobs, especially in the regions, the government will let them feed at the trough.

And so, up next we have Rio Tinto (market capitalisation ~US$87 billion), as the New Zealand Herald reported yesterday:
The global mining giant Rio Tinto coughs and this country's power companies catch a $2 billion cold. That's the amount of money wiped off their balance sheets following yet another threat hanging over the future of the Tiwai Pt aluminium smelter, Southland's biggest employer by a country mile.
Why have the companies' shareholders got the shivers? The smelter uses 13 per cent of the country's electricity and if they close down their pot lines, cheap electricity will flood the market meaning profits will be lower.
That's just one of the reasons why power generators want the smelter to continue. But the biggest reason should be for the people of Southland where 1000 jobs would be directly at stake with a flow on to 3500 people dependent on the flow-on opportunities of the business.
Tiwai Pt gets a cut price power deal from Meridian, from the nearby Manapouri hydro plant that's been supplying it for almost 50 years.
There's now yet another threat from Rio Tinto to close the plant. The last one was six years ago when it said it'd be shutting up shop at the end of 2016, but decided against it after the Key Government came to the table with $30 million and with a warning from Bill English that it'd be the last bite at the cherry.
If you thought the bulk of the government's higher-than-expected surplus was going to be spent on social housing (or any housing, for that matter), reducing child poverty, or improving mental health, you might want to think again. According to the government, this was the "year of delivery" - it's becoming clear that the year of delivery meant stacks of cash being delivered by the government to big corporates.

Yes, I'm angry. You should be too.

Wednesday, 23 October 2019

The spread of Christianity in Austronesian societies

What explains the success of Christianity as a religion, especially in (relatively) modern times? For example, modern Pacific Island cultures are predominantly Christian, and none of them would have begun converting until the first missionaries arrived in the 17th Century. An article last year by Joseph Watts (Max Planck Institute for the Science of Human History and University of Oxford) and co-authors, published in the journal Nature Human Behaviour (sorry I don't see an ungated version), looks into this question.

Watts et al. used data on Austronesian cultures (which are spread from Tahiti in the east to Sumatra in the west, plus Madagascar) and the timing of the conversion of half of each culture to Christianity over the period from 1668 to 1950, to test three hypotheses:

  1. "whether cultures with greater political organization are faster to convert to Christianity, as predicted by top-down theories of conversion" - more politically complex societies are also more inter-connected, and the theory is that an innovation such as Christianity will spread faster;
  2. "whether cultures with higher levels of social inequality are faster to convert" - more unequal societies have a social stratification, and Christianity brings a more egalitarian ideal that might appeal to those at the 'bottom'; and
  3. "whether larger populations are slower to convert" - with larger populations, it simply takes longer for innovations to be adopted, particular when an innovation (such as Christianity) requires interaction between people (conversion).
They found that:
...population size was found to be significantly positively correlated with conversion times, indicating that larger populations took longer to convert to Christianity. Consistent with the top-down theory, political complexity was negatively associated with conversion times... Counter to the bottom-up theories, there was no reliable support for an association between conversion time and social inequality...
You might be wondering why I've posted about this particular research paper. It isn't because of the theory, or the results. Instead, I found the methods quite interesting. Health warning: the following description might be a little too technical for some readers.

A particular problem emerges in regression models when observations are not independent. For instance, regions that are close together spatially (e.g. neighbouring regions) are likely to be similar, and demonstrate similar relationships between variables. Treating the regions as independent observations (when they aren't, because they are neighbouring and therefore similar) leads the estimated standard errors from a regression model to be too small. As a result, the model is more likely to tell us that coefficients are statistically significant than it would be if we correctly treated the observations as dependent. To deal with this in the case of regions, there are spatial econometric models.
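
Here's a small simulation (not from the Watts et al. paper) of that problem: when observations within a region share a common component, naive OLS standard errors are too small, and a non-existent effect shows up as 'significant' far more often than the nominal 5% of the time.

```python
# A small simulation (not from the paper): when observations share a common
# regional component, treating them as independent makes naive OLS standard
# errors too small, so a non-existent effect is "significant" far too often.

import numpy as np

rng = np.random.default_rng(1)
n_regions, per_region, reps = 40, 25, 1000
false_positives = 0

for _ in range(reps):
    region = np.repeat(np.arange(n_regions), per_region)
    # x and the error share a region-level component, but x has no true effect on y
    x = rng.normal(size=n_regions)[region] + rng.normal(size=n_regions * per_region)
    e = rng.normal(size=n_regions)[region] + rng.normal(size=n_regions * per_region)
    y = e
    X = np.column_stack([np.ones(len(x)), x])
    beta, res = np.linalg.lstsq(X, y, rcond=None)[:2]
    sigma2 = res[0] / (len(y) - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    false_positives += abs(beta[1] / se) > 1.96

print(false_positives / reps)  # well above the nominal 0.05
```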

Watts et al. don't deal with the problem of spatial dependence, but they do deal with a closely related problem that I have been thinking about for the last year or more - cultural dependence. We often run cross-country regression models (e.g. for happiness studies), treating the country-level observations as independent. However, countries that have similar cultures are not independent observations, and we should be accounting for that. Watts et al. do this in their model:
Standard regression methods assume that cultures are independent from one another, despite them being related through common descent and patterns of borrowing... This non-independence can lead to spurious correlations, and the difficulty of distinguishing such spurious correlations from correlations that result from actual causal relationships between variables has come to be known as Galton’s problem. The PGLS-spatial method developed by Freckleton and Jetz makes it possible to address Galton’s problem using a phylogeny to control for non-independence due to common ancestry, and geographic proximity to control for non-independence due to diffusion between cultures.

Their phylogenetic approach uses a language-based family tree to define how close (or far away) each Austronesian culture is from others. This is a nice approach to dealing with the issue of cultural dependence between the observations, and something we should make more use of in regression models. The irony is that in the case of this paper, the phylogenetic and spatial dependencies turned out to be statistically insignificant. I guess that, sometimes, a lack of independence between observations isn't as big a problem as we may worry it is.
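
For anyone wanting the flavour of the approach, here's a toy sketch of generalised least squares with a covariance matrix built from cultural relatedness. It is not the PGLS-spatial method from the paper - the distance matrix below is entirely made up - but it shows the basic idea of downweighting observations from closely related cultures.

```python
# Not the PGLS-spatial method from the paper, just a toy illustration of the
# idea: generalised least squares with a covariance matrix built from how
# closely related the cultures are (using an arbitrary made-up distance matrix).

import numpy as np

rng = np.random.default_rng(2)
n = 5
# hypothetical pairwise cultural distances (0 = identical, larger = less related)
D = np.array([[0, 1, 2, 4, 4],
              [1, 0, 2, 4, 4],
              [2, 2, 0, 4, 4],
              [4, 4, 4, 0, 1],
              [4, 4, 4, 1, 0]], dtype=float)
V = np.exp(-D)            # closer cultures -> more strongly correlated errors
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.normal(size=n)

V_inv = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ y)
print(beta_gls)  # intercept and slope, downweighting near-duplicate cultures
```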

[HT: New Zealand Herald, last year]

Tuesday, 22 October 2019

Skirt length and the state of the economy

When economists have too much spare time on their hands, you end up with research papers like this one from 2010, by Marjolein van Baardwijk and Philip Hans Franses (Erasmus University Rotterdam), on the relationship between hemlines and the state of the economy:
In 1926 an economist called George Taylor introduced a “theory” that is called the hemline index. This theory says that hemlines on women’s dresses fluctuate with the economy, measured by stock prices or gross domestic product. When the economy is flourishing, hemlines increase, meaning one would see more miniskirts, and when the economic situation is deteriorating the hemlines drop, perhaps even to the floor...
The main findings are that the hemline increases over the years, with a non-linear trending pattern, and that the economy leads the hemline with about 3 to 4 years.
So, when the economy is in an expansionary phase (or rather, when the economy was expansionary three or four years previously), hemlines move upwards and there are shorter skirts. And, when the economy is in a recession (three or four years previously), hemlines move downwards and there are longer skirts.
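
If you wanted to check a lead-lag claim like that yourself, the simplest approach is to correlate hemline height against economic growth shifted back by one to several years, and see which lag fits best. Here's a toy sketch using entirely made-up data (constructed so that the correlation peaks at a four-year lag).

```python
# A toy illustration (entirely made-up data) of the kind of lead-lag check
# behind the "economy leads the hemline by 3-4 years" claim: correlate hemline
# height with GDP growth shifted back by k years and see which lag fits best.

import numpy as np

rng = np.random.default_rng(3)
years = 60
gdp_growth = rng.normal(2, 1, size=years)
hemline = np.concatenate([rng.normal(size=4), gdp_growth[:-4]]) + rng.normal(0, 0.5, size=years)

for lag in range(1, 7):
    corr = np.corrcoef(gdp_growth[:-lag], hemline[lag:])[0, 1]
    print(lag, round(corr, 2))

# by construction, the correlation peaks at a lag of 4 years here
```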

The authors obviously didn't have a lot of time on their hands, because the paper is only a few pages long (excluding the figures and tables). Moreover, their choices of analysis techniques, and the way they manipulate the data in some quite idiosyncratic ways, suggest that this is an exercise in 'torturing the data until it confesses' (paraphrasing a famous quote from Ronald Coase).

It raises an important question though: how is skirt length responding to the current period of secular stagnation?

Monday, 21 October 2019

Epic rap battle: Mises vs. Marx

Some readers will remember the epic rap battles between Keynes and Hayek (here is the original video, and here is round two). Now, Emergent Order has been at it again, with a rap battle between Capitalism (represented by Ludwig von Mises) and Socialism (represented by Karl Marx). Enjoy!


[HT: Marginal Revolution]

Sunday, 20 October 2019

Book review: A Short History of Drunkenness

Every now and again, I get surprised by a book that wasn't quite what I was expecting. This was definitely the case with A Short History of Drunkenness, by Mark Forsyth. Actually, now that I'm writing this review, I'm not quite sure what I was expecting, given the title of the book. It is definitely a fun read and not to be taken too seriously, as this bit from the introduction nicely illustrates:
Anyway, some of alcohol's effects are not caused by alcohol. It's terribly easy to hand out nonalcoholic beer without telling people that it contains no alcohol. You then watch them drink and take notes. Sociologists do this all the time, and the results are consistent and conclusive. First, you can't trust a sociologist at the bar; they must be watched like hawks.
As if we needed another reason not to trust sociologists, in a bar or anywhere else for that matter. Or this bit:
...if you turned up in eighth-century Baghdad you could easily obtain wine, so long as you went to the Jewish Quarter, or the Armenian Quarter, or the Greek Quarter. There were enough quarters to make a strict mathematician blush.
Anyway, Forsyth takes us on a journey through humanity's history of drinking and drunkenness, from prehistory to prohibition, and across a wide geographical scope from Britain to Australia. The bibliography suggests that there is a research foundation to the book, but I suspect there is more than a little artistic licence being taken. For instance, I doubt that Australians would appreciate their country being accused of being "filled with grapes and Foster's". I don't know any Australians who actually drink Foster's - it's made in Manchester.

Despite the odd hiccup, the content is genuinely interesting and Forsyth's lively writing style keeps it entertaining. I even learned a few things, like:
New Zealand held a referendum on prohibition in 1919 and the Drys won, until the votes were counted from the army who were overseas at the time. Still, it was a damned close-run thing.
It turns out that story is true. Forsyth's background is in etymology, so it is no surprise that he is able to tell us the difference in Medieval times between an inn (a rather expensive hotel), a tavern (the equivalent of a modern-day cocktail bar), and an alehouse (a woman's house, open to the common folk, to whom the 'alewife' would sell her excess ale). We use those terms mostly interchangeably now, but in Medieval Britain they meant quite different things.

If you want to read a summary of serious research on alcohol, this book is not for you. However, if you're looking for an excuse to buy something frivolous, this might be a good choice. After all, it's coming close to the time of year where many people over-indulge, and if you need an excuse for your over-indulgence you could do worse than appealing to a long human tradition of drunkenness.

Thursday, 17 October 2019

Nobel Prize for Abhijit Banerjee, Esther Duflo, and Michael Kremer

It was great news this week that the 2019 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (aka Nobel Prize in Economics) was awarded to Abhijit Banerjee (MIT), Esther Duflo (MIT), and Michael Kremer (Harvard), "for their experimental approach to alleviating global poverty". The prize announcement is here, and here is a longer summary of their work, from the awarding committee.

The coverage online has been overwhelmingly positive, and for good reason. Their work in using rigorous randomised controlled trials (RCTs) has become the gold standard method in much of development economics. David McKenzie provides an excellent summary of what is special about this award. There are also a couple of great articles on The Conversation this week, by Arnab Bhattacharjee and Mark Schaffer (Heriot-Watt University), and by Gabriela D'Souza (Monash University).

I have to admit that I was a little surprised by this award. I know they have been among many people's picks for a Nobel Prize for the last few years, but I honestly thought it was too soon. Esther Duflo becomes the youngest person (at 46 years) to win the prize for economics. Duflo also becomes only the second woman (after Elinor Ostrom in 2009) to win the award, and that is only one tiny step towards redressing the gender imbalance in economics.

I haven't used much of the awardees' work in my current teaching, but when I was teaching graduate development economics a few years ago, I included RCTs and impact evaluations as part of the topic coverage. In my current ECONS102 class, I also refer to Michael Kremer's alternative view on intellectual property, that following a successful invention, the government purchases the patent and places it in the public domain, thereby reducing the problems associated with creating monopolies for patented products that have high social benefits.

Finally, the book Poor Economics, by Banerjee and Duflo, has been sitting on my to-be-read pile for far too long. It will be accelerated closer to the top of the pile, and you can expect a review from me before too much longer.

A welcome award, and much deserved!

Monday, 14 October 2019

Why study economics? Improve your analytical skills edition...

Consider the following three questions:

  1. A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
  2. If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?
  3. In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?
Those three questions make up the Cognitive Reflection Test (CRT), an increasingly used measure of analytical skills and problem-solving ability (and in case you are wondering, the correct answers are at the end of this post). Measuring analytical skills is difficult but important. It allows us to evaluate wild claims like: "Taking economics courses will improve your analytical skills more than taking other courses".

Interestingly, that's exactly the claim that this new article by Seife Dendir (Radford University), Alexei Orlov (US Securities and Exchange Commission), and John Roufagalas (Troy University), published last month in the Journal of Economic Behavior and Organization, puts to the test (sorry, I don't see an ungated version). They use data from the beginning and end of a semester at "a comprehensive, medium-sized university", where the CRT was completed by students at the beginning of the semester (the pre-test), and a functionally equivalent three-question test was completed by students at the end of the semester (the post-test). They then compare the differences in the before-and-after semester results between students who were, and were not, taking at least one economics class (this is what is referred to as a difference-in-differences research design).
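
In its simplest form, the difference-in-differences comparison is just the change in CRT scores for economics students minus the change for non-economics students. Here's a minimal sketch with hypothetical group means (not the paper's actual cell means).

```python
# A minimal sketch of the difference-in-differences comparison (the group
# means below are hypothetical, not the paper's actual cell means).

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Change for the treated group minus change for the control group."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# hypothetical average CRT scores (out of 3)
econ_pre, econ_post = 0.45, 0.90
non_econ_pre, non_econ_post = 0.60, 0.83

print(diff_in_diff(econ_pre, econ_post, non_econ_pre, non_econ_post))
# 0.22 extra points of improvement attributable to taking economics
```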

Using data from 620 students (including 529 pre-tests, and 381 post-tests), they first note that:
While the majority of students did not answer any questions correctly on either the pre-test or the post-test (as indicated by the medians of 0), there was an overall improvement in student scores between the pre-test and the post-test (as reflected in the means of 0.54 and 0.90, respectively).
You read that right. More than half of the students scored zero out of three, on both the pre-test and the post-test. You're not feeling so confident about your answers to the questions at the top of the post now, are you? The important results compare economics and non-economics students, and find that:
...the typical student in our sample scores 0.225 points better in the post-test as compared to the pre-test, almost a 42% improvement [ = 0.225/0.54]. The DiD coefficient equals 0.226, which means that the typical student enrolled in an economics class improves his or her post-test score by an additional 0.226 points (over and above the improvement made by non-economics students). This improvement is about 42% of the average pre-test score [ = 0.226/0.54], or 25% of the average post-test score [ = 0.226/0.90], or 7.5% of the maximum possible score of three [ = 0.226/3].
The students taking economics start from a lower pre-test score (on average) than non-economics students, and then improve their score by significantly more over the semester than non-economics students. The results are robust to the inclusion of various control variables, and robust to alternative specifications of the econometric model. Based on these results, economics improves students' analytical skills.

But wait! What about students enrolled in other courses that might plausibly improve analytical skills, like STEM? It turns out that the results are similar for STEM students and for mathematics students, but not for statistics, business, social science, or arts students. That makes sense.

It seems that the improvement in analytical skills from economics derives entirely from economics principles classes, because Dendir et al. find that the results for other economics classes are statistically insignificant. There are two interpretations to this result. First, economics principles is where students learn the basic building blocks of 'thinking like an economist', which plausibly provide the greatest gains in analytical skills. Alternatively, perhaps there is something unusual about the way that economics principles is taught at this one university, such as a focus on the types of analytical skills that are picked up in the CRT?

This is an important first result demonstrating the impact of economics on analytical skills in a plausibly causal research design. However, some effort at replication of this result is clearly warranted. In the meantime though, there is support for the claim that "Taking economics courses will improve your analytical skills more than taking other courses (except for mathematics)".

Oh, and the intuitive (but wrong) and correct answers to the CRT questions:

  1. Intuitive answer: 10 cents. Correct answer: 5 cents.
  2. Intuitive answer: 100 minutes. Correct answer: 5 minutes.
  3. Intuitive answer: 24 days. Correct answer: 47 days.

Saturday, 12 October 2019

What is Facebook worth to you?

Last year, I wrote a post about a research paper by Erik Brynjolfsson (MIT) et al., which estimated the consumer surplus generated by Facebook. The authors used non-market valuation techniques - essentially, they asked people how much money they would accept to give up Facebook for a month. The average was $48.49 in 2016, which decreased to $37.76 in 2017.

That was just one estimate though, so before we accept it, it should be replicable. I just read this 2018 paper, by Jay Corrigan (Kenyon College), Saleem Alhabash (Michigan State University), Matthew Rousu (Susquehanna University), and Sean Cash (Tufts University), published in the open access journal PLoS ONE. They use four different samples to estimate what people would be willing to accept to give up Facebook for differing time periods, between one hour and one year. For three of the samples, they are able to enforce the choice (otherwise, the participant wouldn't be paid). Since consumer surplus is the difference between what a person would be willing to pay for a good or service (or what they would be willing to accept to give it up) and what they actually pay (which in this case is zero), willingness-to-accept is a measure of consumer surplus.
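
The logic is simple enough to sketch in a few lines: because the price of Facebook is zero, a user's willingness-to-accept to give it up is their consumer surplus, and WTA elicited over a short horizon can be (naively) scaled up to an annual figure and then to an aggregate. The dollar amounts below are placeholders; only the 214 million US user count comes from the paper.

```python
# A toy sketch of the logic, not the paper's estimation: because Facebook's
# price is zero, a user's willingness-to-accept (WTA) to give it up *is* their
# consumer surplus, and short-horizon WTA can be scaled up to an annual figure.
# The dollar amount below is a placeholder, not an estimate from the paper.

weekly_wta = 20.0                      # hypothetical WTA to give up Facebook for one week
annual_wta = weekly_wta * 52           # naive linear scaling to a year
price_paid = 0.0                       # Facebook is free to users
consumer_surplus_per_user = annual_wta - price_paid

us_users = 214_000_000                 # rough US user count cited in the paper
aggregate_surplus = consumer_surplus_per_user * us_users
print(consumer_surplus_per_user, f"{aggregate_surplus:,.0f}")
```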

Converting the estimates from the various samples, with different timeframes, to the equivalent value for a year, Corrigan et al. find the following:

[Table from Corrigan et al. (not reproduced here): estimated annual willingness-to-accept to give up Facebook, by sample and elicitation timeframe]

The results are reasonably consistent across the different samples. The first three columns are scaled up from the willingness-to-accept to give up Facebook for one day, three days, and one week respectively, while the last three columns are willingness-to-accept to give up Facebook for a year, for three different samples (a student sample, a community sample, and an online sample). The results are much larger than those of Brynjolfsson et al., whose estimates imply an annual value of less than US$600. Clearly, there is more to do in this space.

The estimate of the value provided by Facebook isn't just a fanciful exercise. Goods and services that are provided to consumers for 'free', such as Facebook and other social networks, search engines, video sites like YouTube, and blogs, as well as the increasing phenomenon of 'free' online games like Fortnite, present a bit of a problem for estimating aggregate economic output. The traditional measure of aggregate output, GDP, measures the total value of all goods and services produced, based on their market prices. If the market price is zero, then there is zero contribution to GDP. And yet, these services obviously generate substantial value for society (otherwise, people wouldn't be willing to accept such a high value before they would give them up). Corrigan et al. estimate that:
...across all three samples, the mean bid to deactivate Facebook for a year exceeded $1,000. Even the most conservative of these mean [Willingness-to-Accept] estimates, if applied to Facebook’s 214 million U.S. users, suggests an annual value of over $240 billion to users.
We already know that GDP is not a good measure of societal wellbeing. However, the increasing prevalence of free goods and services is making GDP even worse as a measure over time. We need to find an alternative measure of wellbeing, but it isn't clear at this point what such a measure would be.

[HT: Marginal Revolution, late last year]



Friday, 11 October 2019

Online classes lower student grades and completion

I've written several posts about research papers that compare online education with more traditional classroom learning (see here and here and here). However, those studies were distinctly small scale compared to the study reported in this 2017 article, by Eric Bettinger (Stanford), Lindsay Fox (Mathematica Policy Research), Susanna Loeb (Stanford), and Eric Taylor (Harvard), published in the journal American Economic Review (seems to be open access, but just in case there is an ungated earlier version here). They used data from a large for-profit university in the U.S., including over 230,000 students doing 750 different courses.

Importantly, this paper was able to establish causal estimates of the impact of online delivery on student performance in the course and in subsequent courses, using an instrumental variable strategy. First though, it is worth noting the particular setting:
Each course is offered both online and in-person, and each student enrolls in either an online section or an in-person section. Online and in-person sections are identical in most ways: both follow the same syllabus and use the same textbook; class sizes are approximately the same; both use the same assignments, quizzes, tests, and grading rubrics. The contrast between online and in-person sections is primarily the mode of communication. In online sections, all interaction—lecturing, class discussion, group projects—occurs in online discussion boards, and much of the professor’s lecturing role is replaced with standardized videos. In online sections, participation is often asynchronous while in-person sections meet on campus at scheduled times. In short, the university’s online classes attempt to replicate its traditional in-person classes, except that student-student and student-professor interactions are virtual and asynchronous.
So, the only difference between the online and traditional in-person classes is that the online course is run online (with necessary differences in communication between students and the lecturer). To get at the causal estimates though, Bettinger et al. use the interaction between the distance to a student's nearest campus (the university has over 100 campuses) and whether the course was offered both online and in-person in that semester. As they explain:
...in the interaction instrument design, the reduced-form coefficient only measures how the slope, between distance and grade, changes when students are offered an in-person class option. The main effect of distance (included in both the first and second stages) nets out any other plausible mechanisms which are constant across terms with and without an in-person option. Parallel reasoning can be constructed for the Offered component of the instrument.
I know that's quite technical, but essentially Bettinger et al. are comparing the outcomes for students who did, and did not, take the online course when it was available to them, knowing that distance is a determinant of whether the students would take the in-person course. I hadn't seen this type of interaction instrument used before, and it's something that might come in handy in some of my other work.
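
Here's a rough sketch of how such an interaction instrument works, using simulated data rather than the authors' dataset. The instrument is distance-to-campus interacted with whether an in-person section was offered; it shifts online take-up but (by assumption) affects grades only through take-up. Note that the standard errors from this manual two-step version would need the usual 2SLS correction.

```python
# A rough sketch (simulated data, not the authors' dataset) of the interaction
# instrument: distance-to-campus interacted with whether an in-person section
# was offered instruments for taking the course online.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 10_000
distance = rng.exponential(20, size=n)                  # km to nearest campus
offered = rng.integers(0, 2, size=n)                    # in-person section offered this term
ability = rng.normal(size=n)                            # unobserved confounder

# Students far from campus are more likely to take the online section,
# but only when an in-person option actually exists to choose between.
p_online = 1 / (1 + np.exp(-(-1 + 0.05 * distance * offered - 0.3 * ability)))
online = rng.binomial(1, p_online)
grade = 2.8 - 0.44 * online + 0.4 * ability + rng.normal(0, 0.5, size=n)

# First stage: online take-up on the instrument (distance x offered) plus main effects
Z = sm.add_constant(np.column_stack([distance, offered, distance * offered]))
online_hat = sm.OLS(online, Z).fit().fittedvalues

# Second stage: grade on predicted online take-up, controlling for the main effects
X2 = sm.add_constant(np.column_stack([distance, offered, online_hat]))
print(sm.OLS(grade, X2).fit().params[-1])  # close to the -0.44 effect built into the simulation
```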

The results that Bettinger et al. find were not surprising to me:
The estimated effect of taking a course online is a 0.44 grade point drop in course grade, approximately a 0.33 standard deviation decline. Put differently, students taking the course in-person earned roughly a B− grade (2.8) on average while their peers in online classes earned a C (2.4). Additionally, taking a course online reduces a student’s GPA the following term by 0.15 points. The negative effect of online course-taking occurs across the distribution of course grades. Taking a course online reduces the probability of earning an A or higher by 12.2 percentage points, a B or higher by 13.5 points, a C or higher by 10.1 points, and a D or higher (passing the course) by 8.5 points...
Basically, it's all bad news for online courses. They also find that the impacts are greatest for those at the bottom of the grade distribution, with statistically insignificant effects on the top three deciles of students. That isn't too much different from the results on flipped classrooms that I have discussed before (see here and here). Students also do significantly worse in future courses (after the online course), and are significantly less likely to be enrolled in the future (they are more likely to drop out).

There are some important take-away messages from this research. First, many universities are progressing towards online delivery, either alongside traditional in-person delivery or in place of it. There may be cost savings to online delivery, but those cost savings need to be carefully weighed up against the worse student outcomes that can be expected from online courses. Second, many students seem to be very keen on signing up for online courses, in preference to in-person classes, probably because of the flexibility that an online course provides. Again, the benefit of flexibility needs to be carefully weighed up against the worse outcomes (not just in that course, but in future courses) that would result from taking the online course.

Of course, as Bettinger et al. note in their paper, for some students there is no alternative to online education. However, for those who do have a choice, taking an in-person class may be preferable.



Tuesday, 8 October 2019

More progressive taxation may increase economic welfare in New Zealand

This week in my ECONS102 class, we've been covering poverty and inequality, as well as redistribution and the economics of social security. It's a lot to cover, but one of the key points is that economics can't directly answer the question of how much redistribution (from rich to poor) is the right amount of redistribution, because that depends on the normative preferences of society (or the normative preferences of the government of the day, if you prefer).

So, I was interested to read this article in The Conversation last month, by Nicolas Herault (University of Melbourne), John Creedy, and Norman Gemmell (both Victoria University of Wellington). The article discusses how tax rates can be used to increase total welfare in New Zealand:
If we asked people in New Zealand what they think the best income tax reform would be, we would expect a range of responses. People will no doubt have different views about which of the four income tax rates and corresponding income thresholds should be lowered or increased.
In our new study, we examine how tax rates should be changed to improve social welfare in New Zealand.
At 33%, the current highest marginal income tax rate in New Zealand is relatively low compared to other major advanced economies. For instance, it’s 45% in both the UK and Australia.
We find that, under a range of assumptions, lifting the highest income tax rate and using the proceeds to lower one of the two lowest tax rates achieves the greatest improvement to welfare.
The underlying research is available here, published in the journal International Tax and Public Finance (sorry, I don't see an ungated version online). Herault et al. used microsimulation, which involves taking a population at some point in time, making some changes (in this case, changes to the tax system), and simulating how the population would respond (in this case, in terms of their welfare). Microsimulation is a very cool method for investigating how the population will respond, especially if you can account for the fact that not everyone will respond in the same way (one of my PhD students is using microsimulation to look at small-area ethnic population projections for Auckland city).
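
To give a flavour of the method, here's a toy microsimulation sketch (mine, not the authors' model): apply a marginal tax schedule to a synthetic sample of incomes and compare disposable incomes under the status quo and under a hypothetical reform. The 2019 New Zealand rates below are approximate, and a real microsimulation would also model the labour supply and leisure responses discussed below, which this sketch ignores.

```python
# A toy microsimulation sketch in the spirit of the study (not the authors'
# model): apply a tax schedule to a synthetic sample of incomes and compare
# disposable income under the status quo and under a hypothetical reform.

import numpy as np

def income_tax(income, schedule):
    """Tax owed under a piecewise marginal-rate schedule: list of (upper threshold, rate)."""
    tax, lower = 0.0, 0.0
    for upper, rate in schedule:
        taxed = min(income, upper) - lower
        if taxed <= 0:
            break
        tax += taxed * rate
        lower = upper
    return tax

# approximate 2019 NZ rates, and a purely hypothetical reform for illustration
baseline = [(14_000, 0.105), (48_000, 0.175), (70_000, 0.30), (float("inf"), 0.33)]
reform   = [(14_000, 0.105), (48_000, 0.155), (70_000, 0.30), (float("inf"), 0.39)]

rng = np.random.default_rng(5)
incomes = rng.lognormal(mean=10.8, sigma=0.6, size=100_000)  # synthetic income distribution

for name, schedule in [("baseline", baseline), ("reform", reform)]:
    disposable = incomes - np.array([income_tax(y, schedule) for y in incomes])
    print(name, round(disposable.mean()), round(np.median(disposable)))
```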

Herault et al. measure 'social welfare' as a combination of household income and leisure time (I prefer to avoid using the term 'social welfare', since it can easily be confused with social security by the uninitiated, and prefer 'economic welfare' instead). In the sort of model that Herault et al. run, if tax rates go up, people will earn less disposable income, but will work less and therefore have more leisure time. And the reverse occurs if tax rates go down. So, it is ambiguous whether their measure of welfare goes up or down when taxes change. They find that:
...the tax reform that would increase social welfare the most consists of a reduction in one of the two lowest tax rates, funded by an increase in the highest tax rate. Such a reform would lead to more rate progression in the tax system, come at no revenue loss to the government, and increase social welfare. This conclusion applies whether one gives a high or low priority to reduction in inequality.
Many people would argue that we need a tax system that is more progressive, in order to reduce income inequality. If these results are to be believed, then it seems that having a more progressive income tax system would increase economic welfare as well.

Monday, 7 October 2019

Book review: Mostly Harmless Econometrics

Every now and again, it is worthwhile to revisit some things we learned long ago. So it was when I picked up Mostly Harmless Econometrics, by Joshua Angrist and Jorn-Steffen Pischke. The book is not new, and has been sitting on my shelf for several years. I have read bits of it previously, but only with particular topics in mind. A few years ago, it was the required textbook in one of the Masters-level econometrics papers at the University of Waikato. Someone once described it to me as a 'good refresher' for economics graduates who had been out in the workforce for a few years and were looking to get back up to speed on empirical economics work.

Having read the book, I have a new appreciation for just how difficult our econometrics students were finding the paper when this was the text. While Angrist and Pischke note that "[t]he main prerequisite for understanding the material here is basic training in probability and statistics", what they think of as 'basic training' is well beyond what I would consider it to be. This book is not an easy read, unless you have a very thorough grounding in the basics of econometrics, as well as a good understanding of probability theory and matrix algebra. And to be fair, Angrist and Pischke aren't hiding the level of the text, because on the first page of the preface it says:
Our view of what's important has been shaped by our experience as empirical researchers, and especially by our work teaching and advising economics Ph.D. students. This book was written with these students in mind.
I would say that some economics Ph.D. students would struggle with the more technical aspects of the book. And certainly, this is a book that is not for the fainthearted undergraduate economics student, although some may find value in it. This book definitely doesn't strike me as a 'refresher for graduates', who in my experience (along with undergraduates and many graduate students) are looking for the cookbook approach that is to be found in many of the more traditional econometrics textbooks.

Aside from the level though, there is a lot of value in the book. I jotted down a bunch of notes to myself, and the book is especially helpful in its coverage of instrumental variables regression, regression discontinuity, and quantile regression. The book is well sourced, with pointers to the appropriate primary literature (in case you are a glutton for punishment), but Angrist and Pischke do a great job of summarising the state of knowledge on the techniques they cover (or at least, the state of knowledge as of 2009, when the book was published).

If all you're looking for is a cookbook econometrics textbook, there are many better offerings than this one. But if you're looking for something deeper, then this book might be a good place to start.

Wednesday, 2 October 2019

The premium to study at a Go8 university in Australia

Back in 2014, I blogged about the signalling value of enrolling at a top university:
Every lawyer has to have a law degree, and every doctor has to have a medical degree. So there is no signalling benefit from the degree itself - a student can't signal their quality as an employee with the degree, because all other applicants will have a degree too. The quality of the student then has to be signalled by the quality of the institution they studied at, rather than the degree itself. An effective signal has to be costly (degrees at top-ranked institutions are costly) and more costly to lower quality students (which seems likely in this case, because lower quality students would find it much more difficult to get into a top-ranked institution).
A new article by David Carroll (Monash University), Chris Heaton (Macquarie University), and Massimiliano Tani (UNSW Canberra), published in the journal The Economic Record (possibly ungated, but just in case there is an ungated earlier version here), looks at how big the premium is for studying at a top university (compared with lower-ranked universities). Specifically, they used data on the earnings of Australian graduates collected in the Graduate Destination Survey between 2013 and 2015. They compared earnings for different fields of study across universities, focusing on the Group of Eight (Go8) research-intensive universities (Australian National University, Monash University, the University of Adelaide, the University of Melbourne, the University of New South Wales, the University of Queensland, the University of Sydney and the University of Western Australia). They grouped the universities into the Go8, ATN (five former institutes of technology), NGU (New Generation Universities - all former colleges of advanced education) and 'Other'.

They expected to find that graduates of the Go8 earned significantly more, for two reasons:
If institutional factors such as student-to-staff ratios and faculty qualifications are important in the human capital production function, then graduates of ‘better’ institutions (i.e. those with more favourable ratios) should be paid a premium due to their enhanced productivity relative to their peers. Under a signalling interpretation (Spence, 1973), employers, believing that attending a prestigious university is correlated with productivity, will pay a premium to graduates from these institutions, especially when institutional quality is more visible to employers than individual productivity, as is the case for recent graduates with limited work histories.
Unfortunately, they can't easily disentangle the pure human capital premium from the signalling value. A further issue with this type of study is selection bias - the types of students that go to top universities are meaningfully different from the types of students who go to other universities. The students going to the top universities tend to be better motivated, more conscientious, and harder working, and so you'd expect them to earn more after graduation regardless of where they graduated from. Carroll et al. deal with this issue by controlling for the average university admission scores (Australian Tertiary Admission Ranks or ATARs) of each field of study. Because they don't have individual-level ATAR scores, they run their analysis at the field-of-study level. They find:
...statistically significant evidence of Go8 premia for graduate starting salaries, once selection by ability (as measured by the ATAR of accepted students) is taken into account. The magnitudes of the unconditional premia are fairly small, ranging from 4.3 to 5.5 per cent, and we estimate that between 45 and 65 per cent of these premia are due to variations in fields of study and gender balance, regional wage differences, and the recruitment of better-quality students by the Go8.
In other words, there is a premium for studying at a Go8 university. However, the premium is small, and once you account for differences in the types of students who go to Go8 universities and those that go to other universities, the premium falls to around 2 percent. That's certainly much lower than I expected. It made me wonder whether there is a lot of heterogeneity in the premium by field of study (e.g. is the Go8 premium larger for economics and business, or science, or something else?), but we don't find out from the paper.

There are some clear limitations. They drop the data from the University of Melbourne, which is (by some measures) the top ranked of the Go8 universities. That would suggest the results may be downward biased. They also drop data from students who went on to further (postgraduate) study, presumably because those students were not in the workforce at the time of the Graduate Destinations Survey. To the extent that graduates with postgraduate study earn more than graduates with bachelor's degrees, that might also downward bias the results, if Go8 students are more likely to go on to postgraduate study.

What we can take away from this study is clear. University ranking matters, but maybe it doesn't matter quite as much as we previously thought?
