Monday, 30 January 2023

Income inequality and life expectancy in Asia and Africa

Many claims have been made that higher inequality causes worse health outcomes. In fact, that is one of the central claims in the Wilkinson and Pickett book The Spirit Level (which I reviewed here). Of course, as I noted in my review, the relationships that Wilkinson and Pickett establish are correlations, not causal relationships. However, if we put that aside and accept that inequality worsens health outcomes, then we should expect there to be a robust association between inequality and life expectancy. In places with higher inequality, life expectancy should be lower ceteris paribus (holding all else equal). And looking at a single place, when inequality is lower, life expectancy should be higher ceteris paribus.

However, the literature is divided on whether such relationships exist. So, I was interested to read this recent article by Lisa Martin (University of Oxford) and Joerg Baten (University of Tübingen), published in the Journal of Economic Behavior and Organization (sorry, I don't see an ungated version online). Establishing a relationship between inequality and life expectancy requires variation in both variables. That is difficult to establish when income and life expectancy data are only available at the country level for a small number of years for many countries, or over a longer time period for only a few countries (all of which are developed countries).

Martin and Baten avoid this problem by using data on height and height inequality to fill in gaps in the data. These variables are available for a broader range of countries, including developing countries. Their data come from the Clio Infra project, which has data on a range of economic indicators for many countries going back to the early 1800s (and in some cases going back to the 1500s). The data that Martin and Baten use covers the period from 1820 to 2000, and is limited to countries in Asia and Africa.

Martin and Baten use data on height to estimate life expectancy, relying on a fairly simple regression model that includes height and regional dummy variables. They then use data on the coefficient of variation in height to estimate the Gini coefficient measure of income inequality. Both estimates appear to do a reasonable job. Finally, Martin and Baten take the estimated life expectancy and income inequality variables, and look at the relationship between them, controlling for:

...(1) existence of a health insurance system, (2) wars, (3) the pandemic decade of the 1910s (flu), (4) malaria-intensive countries...
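To make the two-step procedure concrete, here is a minimal sketch of how such an estimation might look, assuming a country-by-decade panel in which life expectancy and Gini coefficients are observed for only a subset of rows but height data are available throughout (the file and column names are all hypothetical, not from the paper):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per country-decade, with 'decade', 'region',
# 'height' (mean height), 'cv_height' (its coefficient of variation), and
# 'life_exp'/'gini' observed only for some rows.
panel = pd.read_csv("asia_africa_panel.csv")

# Step 1: regress observed life expectancy on height plus regional
# dummies, then predict it for country-decades where it is missing.
le_model = smf.ols("life_exp ~ height + C(region)",
                   data=panel.dropna(subset=["life_exp"])).fit()
panel["life_exp_hat"] = le_model.predict(panel)

# Step 2: do the same for inequality, proxying the Gini with height CV.
gini_model = smf.ols("gini ~ cv_height + C(region)",
                     data=panel.dropna(subset=["gini"])).fit()
panel["gini_hat"] = gini_model.predict(panel)

# Step 3: the relationship of interest, with decade fixed effects (the
# simplest specification; the controls listed above get added to this).
main = smf.ols("life_exp_hat ~ gini_hat + C(decade)", data=panel).fit()
print(main.summary())
```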

In their simplest model, which doesn't include the control variables (only time fixed effects), they find that:

The coefficient of inequality is estimated at approximately -0.23 and statistically significant. It implies that an increase in the Gini coefficient by one index point translates approximately into a two and a half month decrease in estimated life expectancy.

And when the controls are included:

...we again observe a statistically significant negative coefficient for income inequality, though it is smaller than the significant estimates in [the simplest specification]...

Martin and Baten recognise that these results don't establish a causal relationship (which is a problem across this literature), so they then employ an instrumental variables approach. Their instrument of choice is:

...the ratio of the share of the land suitable for the cultivation of the “inequality crop” (sugar) to the share of the land suitable for the cultivation of the “equality crop” (wheat).

They justify this as follows:

A sugar plantation is a clear example of an agricultural production type of large-scale economies... On the other hand, wheat production is already highly productive on much smaller farm units, as has been amply demonstrated in the agricultural economics literature. The specialization of a country on the cultivation of large-scale cash crops is positively associated with inequality, whereas food crops such as wheat are not scale-intensive and were historically planted in smallholdings.

That passes the smell test that this ratio could be a useful instrument for inequality, although whether the instrument affects life expectancy only through its effect on inequality is arguable (this exclusion restriction is a requirement for a valid instrument, but it cannot be tested for). Anyway, in this IV analysis, Martin and Baten find that:

...the significant impact of inequality remains a consistent determinant of life expectancy.
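Mechanically, the IV step might look something like the following, continuing the hypothetical sketch above (with a made-up 'sugar_wheat_ratio' column standing in for the instrument). Note that running two-stage least squares by hand like this gives the right point estimate but the wrong standard errors, which dedicated IV routines correct:

```python
import statsmodels.formula.api as smf

# First stage: the part of estimated inequality driven by crop
# suitability ('panel' carries the estimated variables from the earlier
# sketch; 'sugar_wheat_ratio' is the hypothetical instrument).
first = smf.ols("gini_hat ~ sugar_wheat_ratio + C(decade)",
                data=panel).fit()
panel["gini_instrumented"] = first.predict(panel)

# Second stage: regress estimated life expectancy on the instrumented
# inequality measure.
second = smf.ols("life_exp_hat ~ gini_instrumented + C(decade)",
                 data=panel).fit()
print(second.params["gini_instrumented"])
```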

Overall, this study seems to support a causal relationship between income inequality and life expectancy. In other words, lower inequality causes higher life expectancy. Martin and Baten aren't able to test the mechanisms that might explain this causal relationship explicitly, but nevertheless they conclude that:

...all these factors were at work for our sample of Africa and Asia in the last two centuries - a public goods effect that we can separate out with the health insurance system variable, a correlation with income (or poverty) and a psychosocial effect of less healthy behavior in more unequal societies.

However, before we simply accept these results at face value, we need to recognise that all of this is based on estimates of life expectancy and income inequality that are themselves derived from other regression models. We should be somewhat cautious about results from models that use derived variables or, as (I think) is the case here, where only some of the data are derived values (while the rest of the data are 'real'). This sort of approach imposes an additional structure on the data used in the model that does not exist in the real data, and could lead to biased results.

So, the results are interesting, and consistent with the negative correlation between income inequality and life expectancy established in other studies. However, these results are not definitive, and the question of whether the relationship is truly causal remains somewhat open.

Sunday, 29 January 2023

Ageing and inequality in China

Two major ongoing trends for China over the last decade or more have been increasing income inequality, and an ageing population. Could they be related? That is the research question addressed in this 2018 article by Xudong Chen (Baldwin Wallace University), Bihong Huang (Asian Development Bank Institute), and Shaoshuai Li (University of Macau), published in the journal The World Economy (ungated earlier version here). They use data from the China Health and Nutrition Survey (CHNS), which includes longitudinal data on 4400 households from 36 suburban neighbourhoods and 108 towns, collected over nine waves between 1989 and 2011. They look at how within-cohort inequality varies over the life cycle within their data, and find that:

An increasing age effect on income inequality is observed for most cohorts, although not linear...

The coefficients on age, our main variable of interest, are significantly positive, indicating that ageing population enlarges inequality in both income and durable consumption.

The implication is that, as the population ages in aggregate, overall inequality will increase. That is because as birth cohorts age, the within-cohort component of inequality increases. It is also because more of the population will be in older age groups, where within-cohort inequality is higher.

However, there is an important piece of the puzzle missing in the Chen et al. paper. That is the between-cohort component of inequality. Chen et al. include cohort fixed effects in their models, but those don't tell us anything about whether the inequality between birth cohorts is increasing, decreasing, or remaining steady over time. If the income gap between successive cohorts is narrowing, that could offset the increasing within-cohort inequality. On the other hand, if the income gap between successive cohorts is increasing, that will make inequality even worse. We just don't know, and yet there is evidence that inequality in China may have started to decrease (see here).
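One way to see why both components matter: for a decomposable inequality measure like the Theil index, total inequality splits exactly into within-group and between-group parts (here the groups are birth cohorts):

$$ T = \sum_{c} s_c T_c + \sum_{c} s_c \ln\left(\frac{\bar{y}_c}{\bar{y}}\right) $$

where $s_c$ is cohort $c$'s share of total income, $T_c$ is inequality within cohort $c$, $\bar{y}_c$ is the cohort's mean income, and $\bar{y}$ is the overall mean. The Chen et al. results speak to the first (within-cohort) term, but not to the second.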

The Chen et al. paper therefore gives us some insight into only part of the question about how an ageing population may overall contribute to increasing inequality over time. Interestingly, that is one potential contributor to global inequality that could have used more thorough exposition in Branko Milanovic's book Global Inequality (which I reviewed yesterday). After all, China is a large contributor to global inequality (see here).

Now, there are good theoretical reasons to believe that ageing populations increase inequality (and those reasons, starting with Modigliani's lifecycle theory, are briefly explained in the Chen et al. paper). How much extra inequality we may have as a result of population ageing, and the consequences (if any) of increasing inequality that arises from population ageing, are interesting questions that thoughtful researchers are hopefully considering. I just hope that they are considering both the within-birth-cohort and between-birth-cohort components of inequality.

Saturday, 28 January 2023

Book review: Global Inequality (Branko Milanovic)

Inequality has become a major topic of conversation in recent years, both in the media and in casual conversation. When most people think about inequality, they are thinking about inequality within their country - the difference between the lives of the rich and the lives of the poor within the same country. However, if we broaden the scope beyond simply considering a single country, and consider global inequality, it quickly becomes clear that within-country inequality is dwarfed by the inequality between countries. To see this, consider the difference in living standards between a poor family in a rich country like New Zealand, and a poor family in a poor country like Chad or Congo. Those families really are worlds apart.

Both components (within-country and between-countries) are covered in Branko Milanovic's 2016 book Global Inequality, which I just finished reading. Milanovic is one of the world's greatest authorities on global inequality, having written many seminal research articles on the topic (many of which I have discussed on this blog, including here and here and here and here). The book has essentially four parts. In the first part, Milanovic looks at how global inequality has changed over the last 25 years. This sets the scene for what follows, because the last 25 years has seen a decline in global inequality, driven in large part by the rapid growth of China. This idea is well established in Milanovic's research, which I have discussed earlier. Chinese growth, which is lowering global inequality, sits alongside rising inequality within China, as well as within the US and other developed countries. The juxtaposition of these trends sets up the potential for an interesting exploration of global inequality overall.

The second part of the book looks at the deficiencies of the Kuznets Curve. The classical Kuznets Curve suggests that from low levels of income per capita, inequality initially rises, but then at higher levels of income per capita, inequality begins to fall. However, the classical curve cannot account for the recent rise in inequality in western developed countries. Milanovic offers an alternative explanation, which he terms Kuznets Waves. He suggests that western countries have been through a first wave (increasing inequality up to the World Wars, and then decreasing inequality from then until the 1970s), and have begun a second wave (with inequality increasing in recent years). In contrast, developing countries remain in their first wave.

The third part looks at the evolution of inequality over a much longer timeframe of the past two centuries. For those like me who are familiar with Milanovic's other work, this section has little new to offer. However, the fourth part of the book looks to the future, using the concept of Kuznets Waves, as well as increasing income convergence between countries, to consider general trends in future global inequality. This last section was the highlight of the book for me, even though the fraught nature of prediction means that some of the specific details are already out of date in places. In particular, Milanovic's views on inequality within the US are interesting, and include that:

  • Higher elasticity of substitution between capital and labor, in the face of increased capital intensity of production, will keep the share of national income that accrues to capital owners high.
  • Capital incomes will remain highly concentrated, thus leading to high interpersonal inequality of incomes.
  • High labor and capital income earners may increasingly be the same people, thus further exacerbating overall income inequality.
  • Highly skilled individuals who are both labor- and capital-rich will tend to marry each other.
  • Concentration of income will reinforce the political power of the rich and make pro-poor policy changes in taxation, funding for public education, and infrastructure spending even less likely than before.

Some of these trends and predictions are already unfolding, and were underway at the time Milanovic was writing the book, so this is not extensive futurism. Nevertheless, the underlying explanations for these trends are helpful for understanding why Milanovic highlights them as particularly important for future global inequality. Along the way, Milanovic refers to a 'new capitalism', where:

...rich capitalists and rich workers are the same people. The social acceptability of the arrangement is enhanced by the fact that rich people work. It is moreover difficult or impossible for the outsider to tell what part of their income comes from ownership and what part from labor. While in the past, rentiers were commonly ridiculed and disliked for doing work that involved nothing more demanding than coupon-clipping, under the new capitalism, criticism of the top 1 percent is blunted by the fact that many of them are highly educated, hardworking, and successful in their careers. Inequality thus appears in a meritocratic garb...

The book isn't all good, however. Regular readers of this blog will know that I truly dislike people who compare stocks with flows (see here, for example), and Milanovic at one point does compare wealth (a stock) to GDP (a flow). However, that is a minor gripe in a book that is filled with excellent explanations that avoid unnecessary technicality. Some readers will be looking for a prescription of how to tackle inequality in the future. On that point, Milanovic is surprisingly unclear. He does seem to strongly favour a move towards the equalisation of endowments (which many economists now refer to as pre-distribution), rather than using the tax-and-transfer system that most governments currently rely on (re-distribution). However, Milanovic carefully steers clear of specifics on how such equalisation could be achieved in this book.

If you are looking for an overall primer on global inequality, then I recommend this book as a good place to start. It's not a page-turner, but Milanovic has done a good job of making this topic come alive. Inequality is an important topic for us to understand, and we need to broaden our understanding of it beyond considering only the inequality in our own country.

Friday, 27 January 2023

Bounded rationality, and international egg smuggling from Mexico

New Zealand has an egg shortage, but we are not alone. The US is also suffering an egg shortage, but for a different reason. In New Zealand, the shortage arose because of a ban on battery caged eggs, as well as supermarkets choosing to no longer sell colony caged eggs (see here). In the US, it's because of avian influenza killing egg-laying chickens.

The egg shortage has led to an interesting side effect in the US, as NPR reported last week:

As the price of eggs continues to rise, U.S. Customs and Border Protection officials are reporting a spike in people attempting to bring eggs into the country illegally from Mexico, where prices are lower...

A 30-count carton of eggs in Juárez, Mexico, according to Border Report, sells for $3.40. In some parts of the U.S., such as California, just a dozen eggs are now priced as high as $7.37.

Shoppers from El Paso, Texas, are buying eggs in Juárez because they are "significantly less expensive," CBP spokesperson Gerrelaine Alcordo told NPR in a statement.

In New Zealand, rational egg consumers have switched to buying their own chickens. In the US, they are becoming international egg smugglers. To see why, consider how a rational egg consumer would respond to the egg shortage.

Mexican eggs and US eggs are substitutes. When the price of one substitute increases, or one substitute becomes unavailable, some consumers will switch to the other. In this case, some rational consumers want to switch from US eggs to Mexican eggs. However, because Mexican eggs are banned from import to the US (more on that in a moment), the only way for egg consumers to get Mexican eggs is to smuggle them into the country. That comes with costs in the form of the risk of fines, if the smuggler is caught. However, if consumers aren't aware of the fines (and it appears from the story that many are not), then these boundedly rational consumers would not take those costs into account (we call these smugglers boundedly rational, because they lack full information so their rationality is bounded). These boundedly rational consumers would engage in more egg smuggling than they would if they took the costs of getting caught into account. That explains the huge increase in egg smuggling.
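To put some numbers on this, using the prices from the NPR story (the fine and detection probability below are purely hypothetical, since the story doesn't report them):

$$ \text{saving} \approx 30 \times \left(\frac{\$7.37}{12} - \frac{\$3.40}{30}\right) \approx 30 \times (\$0.61 - \$0.11) \approx \$15 $$

A fully rational smuggler would weigh that roughly $15 saving on a 30-count carton against the expected penalty (the fine multiplied by the probability of being caught). With, say, a $300 fine and a one-in-ten chance of being caught, the expected penalty of $30 outweighs the saving, and smuggling doesn't pay. A boundedly rational consumer who doesn't know about the fine sees only the $15 gain.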

Now, the ironic thing about this whole situation, from the same NPR story:

Eggs from Mexico have been prohibited by USDA since 2012, "based on the diagnosis of highly pathogenic avian influenza in commercial poultry."

So, an egg shortage, caused by avian influenza in the US, cannot be alleviated by importing eggs from Mexico, because of a 2012 rule designed to keep avian influenza out of the US. Sometimes, the craziness of international trade rules is just too much to bear.

[HT: Marginal Revolution]

Tuesday, 24 January 2023

Does economics have a bigger publication bias problem than other fields?

Publication bias is the tendency for studies that show statistically significant effects, often in a particular direction predicted by theory, to be much more likely to be published than studies that show statistically insignificant effects (or statistically significant effects in the direction opposite to that predicted by theory). Meta-analyses, which collate the results of many published studies, often show evidence for publication bias.

Publication bias could arise because of the 'file drawer' problem: since studies with statistically significant effects are more likely to be published, researchers put studies that find statistically insignificant effects into a file drawer, and never try to get them published. Or, publication bias could result from p-hacking: researchers make a number of choices about how the analysis is conducted in order to increase the chances of finding a statistically significant effect, which can then be published.

Is publication bias worse in some fields than others? That is the question addressed by this new working paper by František Bartoš (University of Amsterdam) and co-authors. Specifically, they compare publication bias in the fields of medicine, economics, and psychology by undertaking a 'meta-analysis of meta-analyses', combining about 800,000 effect sizes from 26,000 meta-analyses. However, the sample is not balanced across the three fields, with 25,447 meta-analyses in medicine, 327 in economics, and 605 in psychology. Using this sample though, Bartoš et al. find that:

...meta-analyses in economics and psychology predominantly show evidence for an effect before adjusting for PSB [publication selection bias] (unadjusted); whereas meta-analyses in medicine often display evidence against an effect. This disparity between the fields remains even when comparing meta-analyses with equal numbers of effect size estimates. When correcting for PSB, the posterior probability of an effect drops much more in economics and psychology (medians drop from 99.9% to 29.7% and from 98.9% to 55.7%, respectively) compared to medicine (38.0% to 27.5%).

In other words, we should be much more cautious about claims of statistically significant effects arising from meta-analyses in economics than about equivalent claims from meta-analyses in psychology or medicine (over and above any general caution we should have about meta-analysis - see here). That is, these results suggest that publication bias is a much greater problem in economics than in psychology or medicine.

However, there are a few points to note here. The number of meta-analyses included in this study is much lower for economics than for psychology or medicine. Although Bartoš et al. appear to account for this, I think it suggests the potential for another issue.

Perhaps there is publication bias in meta-analyses (a meta-publication bias?), which Bartoš et al. don't test for. If meta-analyses that show statistically significant effects were more likely to be published in economics than meta-analyses that show statistically insignificant effects, and this meta-publication bias was larger in economics than in psychology or medicine, then that would explain the results of Bartoš et al. However, it would not necessarily demonstrate that there was publication bias in the underlying economic studies. Bartoš et al. need to test for publication bias in the meta-analysis sample.
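In principle, the standard toolkit could be turned on the meta-analyses themselves. Here is a minimal sketch of a funnel-asymmetry (FAT-PET) style test, assuming a dataset with one row per published meta-analysis (the file and column names are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per published meta-analysis, with its pooled
# effect size and the standard error of that pooled effect.
meta = pd.read_csv("economics_meta_analyses.csv")

# FAT-PET: regress effect sizes on their standard errors, weighting by
# inverse variance. A significant coefficient on 'se' signals funnel
# asymmetry (selective publication); the intercept estimates the
# selection-corrected effect.
fat_pet = smf.wls("effect ~ se", data=meta,
                  weights=1.0 / meta["se"] ** 2).fit()
print(fat_pet.summary())
```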

That is a reasonably technical point. However, it does seem likely that there is publication bias, and it would not surprise me if it is larger in economics than in medicine, but I wouldn't necessarily expect it to be any worse than in psychology. As noted in the book The Cult of Statistical Significance by Stephen Ziliak and Deirdre McCloskey (which I reviewed here), there remains a dedication among social scientists in general, and economists in particular, to finding statistically significant results, and that is a key driver of publication bias (see here).

Maybe economists are dodgy researchers. Or maybe we just need to be better at reporting and publishing statistically insignificant results, and at adjusting meta-analyses to account for this bias.

[HT: Marginal Revolution]

Sunday, 22 January 2023

Home crowds and home advantage

It is well known that, in most if not all sports, there is a sizeable advantage to playing at home. However, it isn't clear exactly what mechanism causes this home advantage to arise. Is it because when a team (or individual sportsperson) is playing at home, they don't have to travel as far, and are refreshed and comfortable at game time? Or, is it because the team (or individual sportsperson) is more familiar with the home venue than their competitors are? Or, is it because of home fan support?

Previous research has found it very difficult to disentangle these different mechanisms as the underlying cause of home advantage. Cue the coronavirus pandemic, which created an excellent natural experiment that allows us to test a range of hypotheses, including about home advantage in sports. Since fans were excluded from stadiums in many sports, if home advantage were no longer apparent in games without fans, we could at least rule out home fan support as a contributor to home advantage.

And that is essentially what the research reported in this new article by Jeffrey Cross (Hamilton College) and Richard Uhrig (University of California, Santa Barbara), published in the Journal of Sports Economics (open access), tries to do. Specifically, they look at four of the top five European football leagues (Bundesliga, La Liga, Premier League, and Serie A), all of which faced a disrupted 2019-20 season, and after the disruption resumed play with restrictions that prevented fan attendance at games. They compare home team performance before and after the introduction of the no-fans policy. Their preferred outcome variable is 'expected goals' rather than actual goals scored. Cross and Uhrig justify the choice as follows:

Due to randomness, human error, and occasional moments of athletic brilliance, the realized score of a match is a noisy signal for which team actually played better over the course of 90 minutes. In order to mitigate this noise, we focus on expected goals, or xG, which measure the quantity and quality of each team’s chances to score; they have been shown to better predict future performance and more closely track team actual performance than realized goals... Expected goals are calculated by summing the ex ante probabilities that each shot, based on its specific characteristics and historical data, is converted into a goal... For example, if a team has four shots in a game, each with a scoring probability of 0.25, then their expected goals for the match would sum to 1. However, their realized goals could take any integer value from 0 to 4...
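The four-shot example in that quote is easy to verify directly. Here is a tiny sketch (the per-shot probabilities come from the quote, not from real match data):

```python
from scipy.stats import binom

# Four shots, each converted with probability 0.25 (the quote's example).
shot_probs = [0.25, 0.25, 0.25, 0.25]

# Expected goals (xG) is the sum of the per-shot scoring probabilities.
xg = sum(shot_probs)
print(f"xG = {xg}")  # 1.0

# Realized goals are noisy: with identical independent shots the goal
# count is binomial, so any tally from 0 to 4 can occur.
for goals in range(5):
    print(goals, round(binom.pmf(goals, len(shot_probs), 0.25), 3))
```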

Their data goes back to the 2009-10 season, and includes some 15,906 games in total. However, they only have data on expected goals from the 2017-18 season onwards, which includes 4,336 games. Because the games with no fans were played later than usual, the temperature was higher (as the season extended closer to summer), so they make sure to control for weather, as well as for the number of coronavirus cases.

Looking at realised goals, Cross and Uhrig find that:

...raw home field advantage decreased by 0.213 goals per game from a baseline of a 0.387 goals per game advantage for the home team... This represents a decrease of 55%.

But, as they argue, this is quite a noisy measure of home advantage. So, they turn to their measure of expected goals, and find that:

...raw home field advantage, as measured by expected goals instead of realized goals, decreased by 64% from a 0.307 expected goal advantage for the home team to just 0.110 expected goals. Although the magnitude of the decrease is smaller than realized goals in absolute terms (0.197 xG as opposed to 0.213 G), it represents a larger fraction of the initial home field advantage (64% as opposed to 55%) because the initial home field advantage is smaller as measured by expected goals than realized goals.

Finally, looking at game outcomes, they find that:

...the lack of fans led to fewer home wins and more home losses, but the probability of a draw is unaffected, suggesting that fans are symmetrically pivotal: fans are approximately as likely to shift a result from a draw to a home win as they are from a home loss to a draw... Approximately 5.4 percentage points are shifted from the probability of winning to the probability of losing.

So, coming back to the question we started with, at least some of the home advantage that football teams experience is due to home crowd support. Given that home advantage decreased by somewhere between 55 percent and 64 percent, the share of home advantage that home crowd support is responsible for is sizeable. Of course, this doesn't necessarily extend to all sports. But it does show that home crowd support is important.

Friday, 20 January 2023

Grading bias, classroom behaviour, and assessing student knowledge

There is a large literature that documents teachers' biases in the grading of student assessments. For example, studies have used comparisons of subjectively graded (by teachers) assessments and objectively graded (or blind graded) assessments, to demonstrate gender bias and racial bias. However, grading bias may not just arise from demographics. Teachers may also show favouritism towards well-behaved students (relative to badly-behaved students). The challenge with demonstrating that bias is that researchers often lack detailed measures of student behaviour.

That is not the case for the research reported in this recent article by Bruno Ferman (Sao Paulo School of Economics) and Luiz Felipe Fontes (Insper Institute of Education and Research), published in the Journal of Public Economics (sorry, I don't see an ungated version online). They used data from a Brazilian private education company that manages schools across the country; the data covered:

...about 23,000 students from grades 6-11 in 738 classrooms and 80 schools.

Importantly, the data includes student assessment results that were graded by their teacher, standardised test results that were machine-graded, and measures of student behaviour, which the company collected in order to "better predict their dropout and retention rates". Ferman and Fontes collate the behavioural data, and:

...classify a student being assessed in subject s and cycle t as well-behaved (GB_its = 1) if she is in the top quartile within class in terms of good behavior notifications received until t by all teachers except the subject one. We classify bad-behaved students (BB_its = 1) analogously.
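As a rough sketch of what that classification might look like in practice (all column names are hypothetical, with one row per student-subject-cycle):

```python
import pandas as pd

# Hypothetical data: one row per student-subject-cycle, with running
# counts of good- and bad-behaviour notifications received from all
# teachers other than the subject teacher.
df = pd.read_csv("student_behaviour.csv")

# GB = 1 if the student is in the top quartile of good-behaviour
# notifications within their class at that cycle; BB is analogous.
for col, flag in [("good_notes", "GB"), ("bad_notes", "BB")]:
    q75 = df.groupby(["class_id", "cycle"])[col].transform(
        lambda x: x.quantile(0.75))
    df[flag] = (df[col] >= q75).astype(int)
```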

They then compare maths test scores between well-behaved and badly-behaved students, and show that:

...the math test scores of ill-behaved students (BB = 1) are on average 0.31 SD below those such that BB = 0. The unconditional grade gap between students with GB = 1 and GB = 0 is even greater: 0.54 SD in favor of the better-behaved pupils.

So far, so unsurprising. Perhaps better-behaved students also study harder. However, when Ferman and Fontes control for blindly graded math scores, they find that:

...the behavior effects are significantly reduced, indicating that a share of the competence differences seen by teachers is captured by performance in the blindly-scored tests... Nevertheless, the behavior effects remain significant and are high in magnitude, indicating that teachers confound scholastic and behavioral skills when grading proficiency exams. Our results suggest that the better(worse)-behaved students have their scores inflated (deducted) by 0.14 SD...

This is quite a sizeable effect, amounting to "approximately 60% of the black-white achievement gap". And that is simply arising from teacher grading bias. Ferman and Fontes then go on to show that their results are robust to some alternative specifications, and that there is also apparent teacher bias in decisions of which students are allowed to move up to the next grade.

However, should we care about grading bias? Ferman and Fontes point out that their results:

...characterize an evaluation scheme that is condemned by educators and classroom assessment specialists, which explicitly warn against the adjustment of test scores to reflect students’ behavior... and consider this practice as unethical... Their argument is that achievement grades are the main source of feedback teachers send about the students’ proficiency levels. Therefore, test scores help pupils form perceptions about their own aptitudes and assist them in the process of self-regulation of learning; additionally, they help parents to understand how to allocate effort to improve their children’s academic achievement...

Still, one could argue that biasing test scores may be socially desirable if it induces a student to behave better, generating private benefits to the pupil and positive externalities to peers...

Let me suggest another counterpoint. If grades are a signal to universities or to employers about the relative ranking of students in terms of performance, then maybe you want those grades to reflect students' behaviour as well as students' attainment of learning outcomes. You might disagree, but I'd argue that there are already elements of this in the way that we grade students (in high schools and universities). If teachers (and educational institutions) were purists about grades reflecting student learning alone, then we would never estimate grades for students who miss a piece of assessment, and we would never scale grades (up or down). The fact that we do those things (and did so especially during the pandemic) suggests that student grades already can't be interpreted solely as reflecting students' attainment of learning outcomes.

Employers (and universities) want grades that will be predictive of how a student will perform in the future. However, academic achievement is an imperfect measure of future performance of students. This is demonstrated clearly in this recent article by Georg Graetz and Arizo Karimi (both Uppsala University), published in the journal Economics of Education Review (open access). They used administrative data from Sweden, focusing mainly on the cohort of students born in 1992. Graetz and Karimi are most interested in explaining a gender gap that exists between high school grades (where female students do better) and the standardised Swedish SAT tests (where male students do better). Specifically:

...female students, on average, outperform male students on both compulsory school and high school GPAs by about a third of a standard deviation. At the same time, the reverse is true for the Swedish SAT, where female test takers underperform relative to male test takers by a third of a standard deviation...

Graetz and Karimi find that differences in cognitive skills, motivation, and effort explain more than half of the difference in GPAs between female and male students, and that female students have higher motivation and exert greater effort. In contrast, there is selection bias in the SAT scores. This arises in part because Swedish students can qualify for university based on grades, or based on SAT scores. So, students that already have high grades are less likely to sit the SATs. Since more of those students are females with high cognitive skills, the remaining students who sit the SAT test disproportionately include high-cognitive-skill males, which is why males on average perform better in the Swedish SATs.

However, aside from being kind of interesting, that is not the important aspect of the Graetz and Karimi paper that I want to highlight. They then go on to look at the post-high-school outcomes for students born in 1982, and look at how those outcomes relate to grades and SAT scores. In this analysis, they find that:

Grades and SAT scores are strong predictors of college graduation, but grades appear about twice as important as SAT scores, with standardized coefficients around 0.25 compared to just over 0.1...

A one-standard-deviation increase in CSGPA and HSGPA is associated with an increase in annual earnings of SEK15,500 and 25,200, respectively (SEK1,000 is equal to about USD100). But for the SAT score, the increase is only SEK8,000.

In other words, high school grades are a better predictor of both university outcomes (graduation) and employment outcomes (earnings) than standardised tests. This should not be surprising, given that, when compared with standardised tests, grades may better capture student effort and motivation, which will be predictive of student success in university and in employment. And to the extent that good student behaviour is also associated with higher motivation and greater effort, perhaps we want grades to reflect that too. [*]

None of this is to say that we shouldn't be assessing student knowledge. It's more that grades that represent a more holistic measure of student success will be more useful in predicting future student performance. That is more helpful for employers, and as a result it may be more helpful for encouraging students to study harder as well.

*****

[*] Of course, selection bias matters here too. In the case of the Swedish SATs, the most motivated and hardest working students may have opted out of the SAT test entirely. However, the analysis that Graetz and Karimi undertook is (I think) limited to students who had both grades and SAT scores recorded.

Thursday, 19 January 2023

Tea drinking vs. beer drinking, and mortality in pre-industrial England

When I introduce the difference between causation and correlation in my ECONS101 class, I talk about how, even when there is a good story to tell about why a change in one variable causes a change in the other, that doesn't necessarily mean that an observed relationship is causal. It appears that I am just as susceptible to a good story as anyone else. When a research paper has a good story, and the data and methods seem credible, I'm willing to update my priors by a lot (unless the results also contradict a lot of the prior research). I guess that's a form of confirmation bias.

So, I was willing to accept at face value the results of the article on tea drinking and mortality in England that I blogged about earlier this week. To recap, that research found that the increase in tea drinking in 18th Century England, by promoting the boiling of water, reduced mortality. However, now I'm not so sure. What has caused me to re-evaluate my position is this other paper by Francisca Antman and James Flynn (both University of Colorado, Boulder), on the effect of beer drinking on mortality in pre-industrial England.

Antman is the author of the tea-drinking article, so it should be no surprise that the methods and data sources are similar across the two papers, given their similar research questions and settings. However, there are some key differences between the two papers (which I will come to in a minute). First, why study beer? Antman and Flynn explain that:

Although beer in the present day is regarded primarily as a beverage that would be worse for health than water, several features of both beer and water available during this historical period suggest the opposite was likely to be true. First, brewing beer would have required boiling the water, which would kill many of the dangerous pathogens that could be found in contaminated drinking water. As Bamforth (2004) puts it, ‘the boiling and the hopping were inadvertently water purification techniques’ which made beer safer than water in 17th century Great Britain. Second, the fermentation process which resulted in alcohol may have added antiseptic qualities to the beverage as well...

Notice that the first mechanism here is basically the same as for tea. Boiling water makes water safer to drink, even when it is being used in brewing. Also:

...beer in this period, which is sometimes referred to as “small beer,” was generally much weaker than it is today, and thus would have been closer to purified water. Accum (1820) found that small beer in late 18th and early 19th century England averaged just 0.75% alcohol by volume, a tiny fraction of the content of even the ‘light’ beers of today.

The data sources are very similar to those used for the tea drinking paper, and the methods are substantially similar as well. Antman and Flynn compare parish-level summer deaths (which are more likely to be associated with water-borne disease than winter deaths) between areas with high water quality and low water quality, before and after a substantial increase in the malt tax in 1780. Using this difference-in-differences approach, they find that:

...the summer death rate in low water quality parishes increases by 22.2% relative to high water quality parishes, with a p-value on the equality of the two coefficients of .001.
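For anyone unfamiliar with the method, here is a minimal sketch of what such a difference-in-differences specification might look like. This is my own stylised version with hypothetical column names, not the paper's exact model:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per parish-year, with (log) summer deaths,
# an indicator for low water quality, and the observation year.
df = pd.read_csv("parish_summer_deaths.csv")
df["post"] = (df["year"] >= 1780).astype(int)  # after the malt tax rise

# Two-way fixed effects DiD: parish and year dummies absorb the main
# effects, and the interaction captures the differential change in summer
# deaths in low-water-quality parishes after the malt tax.
did = smf.ols("log_summer_deaths ~ low_wq:post + C(parish_id) + C(year)",
              data=df).fit(cov_type="cluster",
                           cov_kwds={"groups": df["parish_id"]})
print(did.params["low_wq:post"])
```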

Antman and Flynn then use a second identification strategy, which is to compare summer deaths between parishes that have gley soil (suitable for growing barley, which is then malted and used to make beer) and parishes without gley soil, before and after the change in the malt tax. In this analysis, they find that:

...parishes with gley soil had summer death rates which increased by approximately 18% after the malt tax was implemented relative to parishes without gley soil.

Not satisfied with only two identification strategies, Antman and Flynn then use a third, which is rainfall. This analysis is limited to the counties around London (because that is where the rainfall data come from). In this analysis, they find that:

...the effect of rainier barley growing seasons on parishes with few nearby water sources is positive and significant, indicating that summer deaths rise following particularly rainy barley growing seasons... [and] ...rainy barley-growing seasons lead to more summer deaths in areas where beer is most abundant, even controlling for the number of deaths occurring in the winter months.

So, the evidence seems consistent with beer drinking being associated with lower mortality: in areas where beer drinking decreased by a greater amount (because of the increase in the malt tax), mortality increased by more.

But not so fast. There are two problems here, when you compare across the tea drinking and beer drinking research. First, the data that they use is not consistent. The tea drinking paper uses all deaths in each parish. The beer drinking paper uses only summer deaths, arguing that summer deaths are more likely to be from water-borne causes. If that is the case, why use all deaths in the tea drinking paper? What happens to the results from each paper when you use the same mortality data specification?

Second, the increase in the Malt Tax was in 1780. The decrease in the tea tax (which the tea drinking paper relies on) was in 1784. The two tax changes are awfully close together timewise, and disentangling their effects would be difficult. However, neither paper seems to account for the other properly. The beer drinking paper includes tea imports as a control variable, but in the tea drinking paper the key explanatory variable wasn't tea imports alone, but tea imports interacted with water quality (and the timing of the tea tax change interacted with the water quality variable). The tea drinking paper doesn't really control for changes in beer drinking at all.

That second problem is the bigger issue, and creates a potentially problematic omitted variable problem in both papers. If you don't include changes in tea drinking in the beer drinking paper, and the two tax changes happened around the same time, how can you be sure that the change in mortality was due to tea drinking, and not beer drinking? And vice versa for failing to include changes in beer drinking in the tea drinking paper.

However, maybe things are not all bad here. Remember that the two effects are going in opposite directions. It is possible that the decrease in the tea tax increased tea drinking, and mortality reduced, while the increase in the malt tax decreased beer drinking, and mortality increased. However, then we come back to the first problem. Why use a measure of overall mortality in the tea drinking paper, and a measure of only summer mortality in the beer drinking paper, when both papers are supposed to be looking at changes in mortality stemming from water-borne diseases?

Hopefully now you can see why I have my doubts about the tea drinking paper, as well as the beer drinking paper. Both are telling an interesting story, but the inconsistencies in data and approach across the two papers should make us extra cautious about the results, and leave us pondering the question of whether the results are causal or simply correlation.

[HT for the beer paper: The Dangerous Economist]

Wednesday, 18 January 2023

NFL owners' rational response to the anthem protests

One of the most memorable aspects of the 2016 NFL season was the national anthem protests. Starting with Colin Kaepernick in a preseason game for the 49ers, players chose to remain seated, to kneel, to raise their fists, or to stay in the locker room, during the playing of the national anthem, in order to protest racial inequality. The anthem protests sent President Trump into something of an apoplectic frenzy. However, ultimately Kaepernick paid a high price for his protests, as he wasn't offered a contract by any other team after opting out of his 49ers contract at the end of the season. But did teams also pay a price for the protest actions of their players?

That is the question addressed in this recent article by Noah Sperling and Donald Vandegrift (both College of New Jersey), published in the Journal of Sports Economics (sorry, I don't see an ungated version online). Specifically, Sperling and Vandegrift look at the effect of protests on TV viewership for the following game. The reason for looking at TV viewership, rather than game attendance, is:

Though attendance captures actions rather than attitudes, attendance as an outcome measure is still flawed. Stadium capacities impose an inherent upper bound on attendance and tickets are often purchased months in advance. Thus, attendance is unable to track short-term, weekly changes in demand.

They focus on the following game to overcome two timing issues with the TV viewership data:

Given that Nielsen ratings are calculated based on average ratings over an entire game, it is difficult to determine if the observed rating is capturing the full effect of the protest behavior. It is possible that this averaging could be capturing disgruntled viewers who were unable to change the channel in time following a protest and thus they are counted as a viewer for purposes of the rating. Other situations could include an anti-protest viewer who failed to notice the protest during the game and only became aware of the action from media reporting following the game’s conclusion. The same concerns apply to the viewership-in-millions measure which is also averaged across the span of the entire game.

Sperling and Vandegrift also distinguish between two 'levels' of protests:

Unambiguous protests include any protests in which a player kneels or sits during the national anthem, stays in the locker room during the national anthem, or raises a fist during the national anthem. By contrast, ambiguous protests include all other player protests (e.g., locking arms with teammates during the national anthem)...

They find that:

...(1) the unambiguous protests reduce viewership in the week following the protests by about 15% while ambiguous protests do not generally produce statistically significant reductions in viewership; (2) the negative effect of unambiguous protests on viewership is particularly strong in metro locations that voted more heavily for Donald Trump in 2016; and (3) following Donald Trump’s statements in week 3 of the 2017 season, both ambiguous and unambiguous protests increased and the increase in ambiguous protests was particularly large.

That put the profit-maximising NFL team owners in a difficult position. The protests negatively affected TV viewership, which (if the protests continued) would be sure to negatively impact future revenues that the NFL (and team owners) would receive from TV broadcast contracts. Sperling and Vandegrift note in the conclusion that the increase in protests following President Trump's statements in 2017:

...taken together with: (1) subsequent negotiations between players and owners over the anthem protests; (2) the willingness of some owners to join players in less objectionable forms of protest; and (3) the May 2018 agreement to “stand and show respect for the flag and the anthem” (Haislop, 2020), suggests that the owners advanced or supported the ambiguous protests to rebut arguments that they sought to suppress the players’ expressive rights while they pursued actions to curtail unambiguous protests that threatened their income derived from TV broadcasts.

The owners responded in a very carefully constructed way that would ensure that their profits were maintained. They supported ambiguous protests, which ensured that the protests had virtually no effect on TV viewership (and future team revenue). At the same time, letting the players continue to protest (albeit in an ambiguous way) kept the players happy and willing to continue to play for the team. Such rational owners!

Tuesday, 17 January 2023

Tea drinking and mortality in pre-industrial England

[Update: I now have some doubts about this paper - see this follow-up post]

The importance of clean water for public health is thoroughly uncontroversial in modern times. In the temporary absence of a safe water source, one recommendation is to boil water for drinking, which will kill off most bacteria and other pathogens. However, prior to the acceptance of the germ theory of disease, the importance of clean water was relatively unknown. Water-borne diseases such as dysentery and cholera were relatively common (at least, compared with modern times).

In the late 17th Century, the English began drinking tea in large numbers (more on that in a moment). One of the important aspects of tea drinking is that it requires boiling of water. Did that lead to a reduction in mortality, especially from water-borne disease? That is the research question addressed in this forthcoming article by Francisca Antman (University of Colorado, Boulder), to be published in the journal Review of Economics and Statistics (ungated earlier version here, and relatively non-technical summary by the author here). Why investigate this? Antman notes that:

...several historians have suggested that the custom of tea drinking was instrumental in curbing deaths from water-borne diseases and thus sowing the seeds for economic growth.

Antman's research is the first to quantitatively attempt to assess these claims. She uses data on mortality rates at the parish level for 404 parishes from the mid-16th Century to the mid-19th Century, and a couple of different proxies for water quality:

The primary water quality measure used in the analysis is the number of water sources within 3 km of the parish, as calculated using data from the United Kingdom Environment Agency Statutory Main River Map of England overlaid on a map of historical parish boundaries... It is expected that parishes with a higher number of rivers proximate to the parish would have benefited from greater availability of running water, and thus would have benefited from relatively cleaner water compared with those parishes which were limited to only a few sources and thus suffered from a greater likelihood of contamination...

An alternative water quality proxy, the average elevation within a parish, is also offered to show that the relationship between tea and mortality is robust to alternative measures of water quality... Elevation is believed to be positively correlated with water quality because parishes at higher elevation would have been less likely to be subjected to water contamination from surrounding areas.

Antman applies two different strategies to identify the effect of tea drinking on mortality. In the first, she compares the decline in mortality between high (above the median) water quality parishes and low (below the median) water quality parishes over time. She particularly compares the difference between before and after the widespread adoption of tea drinking, which she dates as:

...the Tea and Windows Act of 1784 which reduced the tea tax from 119 to 12.5 percent at one stroke...
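In stylised form (my notation, not necessarily the paper's exact specification), this first strategy amounts to estimating something like:

$$ \ln(m_{pt}) = \beta\,(\text{LowWQ}_p \times \text{Post1784}_t) + \gamma_p + \delta_t + \varepsilon_{pt} $$

where $m_{pt}$ is the mortality rate in parish $p$ in year $t$, $\gamma_p$ and $\delta_t$ are parish and year fixed effects, and a negative $\beta$ would indicate that low-water-quality parishes saw the larger mortality decline once tea became cheap.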

The second strategy uses lagged national-level tea imports as an indicator of when tea drinking increased in prevalence, but is otherwise similar. The results from the first strategy are nicely summarised in panel A1 of Figure 1 from the paper:

Notice that mortality declines in both high-water-quality parishes (dashed line with diamonds) and low-water-quality parishes (solid line with circles) after 1784, but that the decline was larger in low-water-quality parishes. And even though there is an up-tick in mortality towards the end of the time period (probably due to urbanisation, as urban areas had higher mortality than rural areas), the difference between high-water-quality and low-water-quality parishes continued to increase. Antman notes that:

With regard to the magnitudes of the impact of tea drinking on mortality, the estimates suggest that areas with worse water quality saw yearly mortality rates drops by about 18% by the end of the period, relative to parishes with better water quality...

The results are similar using the second strategy, although the size of the effect was smaller. Antman also shows that her results are robust to controlling for smallpox mortality, and to controlling for wages, as well as robust to including different proxies for parish-level population. So, it does seem that tea drinking, by promoting the boiling of water, did reduce mortality in pre-industrial England.

[HT: The Dangerous Economist, last year]

Saturday, 14 January 2023

An unlikely argument that UV radiation and cataract prevalence explain differences in institutional quality

Economists define institutions as the basic rules, customs and practices of society that determine and constrain the way that people interact. Institutions are the foundations of the economic and political systems of each country, and have been implicated in explaining the differences in economic development between countries (see this article by Dietrich Vollrath, and my post on that topic as well). Indeed, a range of excellent work by Daron Acemoglu and James Robinson and their collaborators focuses on the role of institutions in explaining development, as detailed in their book Why Nations Fail (which I will review sometime this year - it is near the top of my large pile of books-to-be-read).

But what causes countries to have different political and economic institutions? Acemoglu and Robinson have focused attention on colonial relationships, as well as the roles of health and disease. So, I was interested to read this 2018 article by James Ang (Nanyang Technological University), Per Fredriksson (University of Louisville), Aqil bin Nurhakim, and Emerlyn Tay (both Nanyang Technological University), published in the journal Kyklos (ungated version here). Ang et al. focus on what, to me, seems a surprising angle - the role of sunlight (specifically, UV radiation) on institutional quality across countries. Their argument is that:

...populations facing a permanent threat of developing eye disease have historically had a lower incentive to invest in cooperation via institution building. Moreover, specialized activities such as institution building necessitate a food surplus, which requires prior investments in skills and technologies. Such activities and investments are hindered by a higher likelihood of blindness and hence fewer individuals specialize in law creation activities.

If that argument holds, then countries that (in the past) experienced high levels of UV radiation, where eye disease (specifically, cataracts) was more common, would have lower quality institutions in modern times. They use data on UV radiation that is derived from NASA, along with the World Bank Worldwide Governance Indicators (WGI) as their measure of institutional quality. The data do seem to support a relationship, as shown in Figure 1 from the paper:

And their regression analysis supports this. After controlling for a range of geographical variables (mean elevation, distance to nearest coast, terrain ruggedness, precipitation, whether the country is landlocked, a small island, and which continent the country is in):

...a one standard deviation (SD) increase in the intensity of UV-R (1 SD=0.778) leads to a 0.628 SD decrease in institutional quality (1 SD=0.262), all else equal.

The effect seems quite large:

For example, Papua New Guinea has a high UV-R intensity equal to 2.653 (max=3.285), and a poor institutional quality score of 0.272 (max=0.98). Based on the estimates... if Papua New Guinea instead experienced UV-R intensity similar to Estonia (0.514), the estimated institutional quality score would equal 0.725, a substantial rise. This is only slightly lower than Estonia’s current score (0.817), and is similar to those of neighboring Lithuania (0.736) and Latvia (0.710), with UV-R intensities similar to Estonia.
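To see where that counterfactual comes from, the standardised coefficient can be unwound as follows (my arithmetic, using only the figures quoted above):

$$ \frac{2.653 - 0.514}{0.778} \approx 2.75 \text{ SDs of UV-R}, \qquad 2.75 \times 0.628 \times 0.262 \approx 0.452 $$

so Papua New Guinea's predicted score is $0.272 + 0.452 \approx 0.72$, matching the paper's 0.725 up to rounding.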

Indeed, possibly implausibly large. Could UV radiation really explain 83 percent of the difference in institutional quality between Papua New Guinea and Estonia? [*] It seems unlikely to me. However, in their robustness checks, Ang et al. go on to rule out a range of other things, including distance from the equator (absolute latitude), land area, agricultural suitability, the proportion of the population living in the tropics, and the number of frost days. They then show that cataract prevalence, unlike other disease prevalence, is negatively correlated with institutional quality, even when UV radiation is also in the econometric model. They conclude that:

...UV-R and institutional quality should be negatively related. Our results provide considerable support for this notion.

However, there are two problems remaining with this study. First, I don't really like their approach to the measurement of institutional quality:

Percentile rank data ranging from 0 to 100 for each country are used, where the country with the lowest ranked institutions is assigned a value of 0. The ranking scores for six abovementioned indicators are first averaged over the period 2006 to 2015 and are then combined into a composite index by taking their average. The data of this variable are then divided by 100 in order to give a measure that varies between 0 and 1, where a larger value signifies greater institutional quality.

I'm not sure what the average percentile ranking actually tells us. Country X can improve its ranking by improving its own institutional quality, or simply because the institutional quality of other countries worsens (or improves more slowly than Country X's). Although Ang et al. show that their results are robust to using other measures of institutional quality, they never show that the results are robust to using the underlying WGI scores rather than the rankings. That in itself should make us a little skeptical.
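To see the problem concretely, here is a minimal Python sketch (with made-up scores, not actual WGI data) in which Country X's percentile rank jumps even though its own institutional quality never changes:

```python
# Hypothetical illustration: a country's percentile rank can rise even
# when its own institutional quality is unchanged, simply because other
# countries' scores deteriorate. (Made-up scores, not actual WGI data.)

def percentile_ranks(scores):
    # Rank countries from 0 (lowest score) to 1 (highest score),
    # following the construction described by Ang et al.
    ordered = sorted(scores.values())
    n = len(scores)
    return {c: ordered.index(s) / (n - 1) for c, s in scores.items()}

year_1 = {"X": 0.50, "A": 0.60, "B": 0.70, "C": 0.40}
year_2 = {"X": 0.50, "A": 0.45, "B": 0.48, "C": 0.40}  # A and B decline

print(percentile_ranks(year_1))  # X ranks about 0.33
print(percentile_ranks(year_2))  # X ranks 1.00, with an unchanged score
```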

Second, Ang et al. never show that their results are robust to the inclusion of controls for colonial ties. Institutional quality has been shown to be associated with which colonial power (if any) historically controlled a country, along with the type of control (extractive or inclusive) that the colonial power exercised. This is the big omission, and it is the basis of some of Acemoglu and Robinson's prior work. I suspect that the reason that UV radiation looks like it explains such a large proportion of the differences in institutional quality is that UV radiation is correlated with colonial ties variables, which are themselves highly explanatory of those differences (economists refer to this as omitted variable bias). Ang et al. try to argue that omitted variable bias is not an issue, but it is clearly a live concern when the existing literature points to important variables that have not been included.

So, I think we can file this study under interesting, but not convincing.

*****

[*] The difference in institutional quality between Estonia (0.817) and Papua New Guinea (0.272) is 0.545. If Papua New Guinea's institutional quality were raised to 0.725, the difference with Estonia would only be 0.092, an 83 percent decrease.
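For the curious, the footnote's arithmetic (and the counterfactual score of 0.725) can be reproduced from the figures quoted above. A minimal Python sketch, where the implied per-unit coefficient is my own derivation from the reported standard deviations:

```python
# Back-of-the-envelope check of the Ang et al. numbers quoted above.
# The per-unit coefficient is derived from the reported standard
# deviations; it is not reported directly in the paper.

sd_uvr = 0.778         # 1 SD of UV-R intensity
sd_inst = 0.262        # 1 SD of institutional quality
effect_sd = 0.628      # SD fall in quality per SD rise in UV-R

coef = effect_sd * sd_inst / sd_uvr  # ~0.211 per unit of UV-R

png_uvr, png_quality = 2.653, 0.272  # Papua New Guinea
est_uvr, est_quality = 0.514, 0.817  # Estonia

# Counterfactual: Papua New Guinea with Estonia's UV-R intensity
predicted = png_quality + coef * (png_uvr - est_uvr)
print(round(predicted, 3))  # ~0.725, matching the paper

# Share of the PNG-Estonia gap that UV radiation would 'explain'
gap_before = est_quality - png_quality  # 0.545
gap_after = est_quality - predicted     # ~0.092
print(round(1 - gap_after / gap_before, 2))  # ~0.83, i.e. 83 percent
```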

Thursday, 12 January 2023

Book review: Bowling Alone

When it comes to social capital, the one book that almost everyone refers to is Robert Putnam's 2000 book Bowling Alone, which I finally read this month. I read the first edition, which had been sitting on my bookshelf for some time, but now I notice that there is an updated second edition from 2020 - I might have to follow up this review later.

Anyway, the book's reputation as the go-to source for details on American social capital over the years is well deserved. The introductory chapter provides a good summary of many of the key concepts in social capital, which:

...refers to connections among individuals - social networks and the norms of reciprocity and trustworthiness that arise from them.

There are good explanations of bridging social capital (inclusive, between groups) and bonding social capital (exclusive, within groups), as well as a lot of background on the key literature on social capital. Given the datedness of the book now (and my not knowing there was a newer edition), I expect this would be among the most valuable parts of the book to readers new to the concept.

Putnam then goes into great detail on the trends in civic engagement and social capital over time. He draws on a variety of survey-based evidence, as well as organisational records. Clearly, a monumental effort has gone into collating the data, and this is confirmed by the large number of collaborators and research assistants noted in the acknowledgements section. Putnam documents declines in, or important changes in the nature of, political participation ("Participation in politics is increasingly based on the checkbook, as money replaces time..."), civic participation ("...active involvement in face-to-face organizations has plummeted, whether we consider organizational records, survey reports, time diaries, or consumer expenditures..."), and religious participation ("...over the last three to four decades Americans have become about 10 percent less likely to claim church memberships, while our actual attendance and involvement in religious activities has fallen by roughly 25 to 50 percent..."), as well as connections in the workplace, and more informal social connections. Clearly, something important changed over the course of the second half of the 20th Century.

The third section of the book was both the most interesting to me, and the most frustrating. It explores the potential causes of the decline in social capital over time, looking at pressures of time and money, mobility and urban sprawl, technology and mass media (mostly the spread of television), and generational change. There is again a good range of evidence in this section, and a rough conclusion on the relative contributions of these to the changes is presented at the end of the section:

First, pressures of time and money, including the special pressures on two-career families, contributed measurably to the diminution of our social and community involvement during these years. My best guess is that no more than 10 percent of the total decline is attributable to that set of factors.

Second, suburbanization, commuting, and sprawl also played a supporting role. Again, a reasonable estimate is that these factors together might account for perhaps an additional 10 percent of the problem.

Third, the effect of electronic entertainment - above all, television - in privatizing our leisure time has been substantial. My rough estimate is that this factor might account for perhaps 25 percent of the decline.

Fourth and most important, generational change - the slow, steady, and ineluctable replacement of the long civic generation by their less involved children and grandchildren - has been a very powerful factor... this factor might account for perhaps half of the overall decline.

The analysis in this section is supported by evidence, as I noted above, but the evidence is all correlational. That points to the difficulty of establishing causation in the absence of natural or other experiments, and Putnam is suitably modest. His rough estimates of the contributions of the various factors to the changes in social capital may well be true, but they are impossible to test in a robust and credible way. However, my biggest gripe with this section (and the book overall) is that the analyses in this section use an inconsistent set of control variables. Different analyses control for different factors, but the choice of controls isn't justified. That is somewhat mitigated by the data having been made available at the Bowling Alone website, so interested readers can explore the robustness to a varied set of control variables. However, most readers would be unlikely to do so.

The last two sections of the book look at why we should care (i.e. what are the negative consequences of low social capital, such as its effects on education, health, crime, democracy, and so on), and what society should do to arrest the change. These were the least interesting sections to me, as I am already familiar with the more recent literature on the consequences of social capital. However, the connection between social capital and people's belief that they would win a fistfight is both surprising and an example of the breadth of evidence that Putnam applies in the book. As for the final section, the dated nature of the first edition makes the proposed solutions interesting, but the world has moved on (and I look forward to reading the newer edition to see what Putnam thinks now).

This is a book that could easily degenerate into a boring progression of tables of statistics. However, Putnam does a good job of keeping it lively. For example, in discussing social capital as a collection of different forms of capital, Putnam starts with a quirky example of different forms of physical capital:

Physical capital is not a single "thing", and different forms of physical capital are not interchangeable. An eggbeater and an aircraft carrier both appear as physical capital in our national accounts, but the eggbeater is not much use for national defense, and the carrier would not be much help with your morning omelet.

Overall, and in spite of my few gripes on the analysis, I really enjoyed this book, which is an in-depth treatment of an important topic. While I suspect that many people who cite the book have never read it, they really should. And, so should you.

Wednesday, 11 January 2023

Legalised marijuana sales reduce the birth rate

The legalisation of marijuana in the US, where different states legalised medical marijuana sales, and then retail marijuana sales, at different times, has provided a wealth of possibilities for studying the effects of marijuana legalisation. From these studies, we have learned that legalised medical marijuana sales may decrease harm from opiates and decrease violent crime, while legalised retail marijuana may increase house values (see also here) and lower crime rates, but may displace drug dealers into selling harder drugs. There is also evidence (from Europe) that legal access to marijuana reduces students' academic performance.

So, I was interested to read this recent working paper by Sarah Papich (University of California, Santa Barbara), which focuses on the effect of legalisation of marijuana sales on a completely different outcome - birth rates. The effect of marijuana use on the birth rate is theoretically ambiguous, because:

The medical literature suggests that marijuana use has two competing effects on fertility... First, marijuana use could lower the likelihood of pregnancy through effects on both men’s and women’s reproductive systems. Marijuana use is associated with lower sperm counts (Gundersen et al. 2015) and delayed ovulation (Bari et al. 2011), both of which make conception less likely. Second, marijuana use could lead to sexual behaviors that raise the likelihood of pregnancy. Using marijuana heightens the hedonic effect of sexual activity and diminishes the ability to think about long-term consequences of failing to use contraception. Marijuana use is associated with an increase in the amount of sexual activity (Sun and Eisenberg 2017) and a decrease in the likelihood of using contraception (Guo et al. 2002).

So, an increase in marijuana use could increase the birth rate, because people have more sex, and riskier sex, or it could decrease the birth rate, because marijuana users are less fertile. Papich uses a difference-in-differences research design, which compares the difference in birth rates between legalising and non-legalising states before marijuana is legalised, with the same difference after marijuana is legalised. Her key data come from the US National Vital Statistics System. Papich also distinguishes between the effects of legalising medical marijuana sales and legalising retail marijuana sales, while controlling for:

...shares of the total population by race, ethnicity, age, and education; unemployment rate; median household income; state cigarette tax; state beer tax; an indicator for whether the state has expanded Medicaid; indicators for abortion restrictions in the form of ambulatory surgical center laws, admitting privilege laws, and transfer agreement laws; an indicator for whether same-sex marriage is legal; the Medicaid eligibility threshold for pregnant women as a percentage of the federal poverty level; a WIC EBT indicator; and an indicator for whether marijuana has been decriminalised.
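For readers unfamiliar with the method, here is a minimal sketch of a generic two-way fixed effects difference-in-differences regression (an illustration of the approach only, not Papich's actual specification; the data file and variable names are hypothetical):

```python
# Minimal sketch of a two-way fixed effects difference-in-differences
# regression (generic illustration, not Papich's actual code or data;
# the file and variable names are hypothetical).
import pandas as pd
import statsmodels.formula.api as smf

# df: state-year panel with columns 'birth_rate', 'rml' (1 once a state
# has legalised retail marijuana sales, 0 before), 'state', and 'year'
df = pd.read_csv("state_year_panel.csv")  # hypothetical file

model = smf.ols(
    "birth_rate ~ rml + C(state) + C(year)",  # state and year fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

print(model.params["rml"])  # the difference-in-differences estimate
```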

Papich finds that:

...days of marijuana use per month increase by 41% in response to RMLs and 23% in response to MMLs.

Ok, so people use more marijuana in response to medical marijuana laws (MMLs), and even more in response to recreational marijuana laws (RMLs). But what about birth rates? Papich then finds that:

...RMLs lead to a 2.78% decline in the average birth rate. This result provides evidence that marijuana’s physical effects, which suppress the likelihood of pregnancy conditional on sexual activity, have the dominant effect on fertility. Age heterogeneity analysis shows the largest decrease in the birth rate occurs among women 30-34, closely followed by women 35-39 and then by women 40-44. The birth rate in all three of these age groups declines by over 6%. This heterogeneity analysis suggests that women are having fewer total children in response to RMLs rather than delaying births...

I find that MMLs lead to a statistically insignificant decrease in birth rates.

Papich also looks at the effects on sexual activity, but those results are not as consistent, and not as convincing. However, she does provide some alternative evidence that sexual activity increases, or at least risky sexual activity increases:

RMLs are estimated to increase a state’s male gonorrhea cases by 6.1 cases per 100,000 population, a 5% increase from the mean. The effect of MMLs is statistically insignificant, with a positive point estimate.

The combination of those results suggests that legalising marijuana sales reduces birth rates, and the mechanism is likely to be through reduced fertility. In her conclusion, Papich notes some open questions that remain:

Data on contraceptive use before and after RMLs would provide insight into another mechanism through which marijuana legalization could affect fertility. Additional mechanisms, such as changes in the seriousness of romantic relationships when marijuana use increases and the effect of fewer people being imprisoned for marijuana possession, are promising areas for future research.

To those suggestions, I would add assessing whether the decrease in birth rates is primarily a result of reduced male fertility (since it appears that male marijuana use increased more than female marijuana use when recreational marijuana was legalised) or reduced female fertility. However, the headline result still holds - legalising marijuana sales reduces the birth rate.

[HT: Marginal Revolution, last year]

Tuesday, 10 January 2023

The egg shortage turns consumers to chickens

A couple of weeks ago, I blogged about the current egg shortage. When the quantity of eggs demanded by consumers is greater than the quantity of eggs supplied by sellers at the current market price, there is a shortage. Some consumers, who would be willing and able to pay the market price, will miss out on eggs. How do consumers respond? In my earlier post, I argued that consumers would bid the price upwards, and the market price would rise. However, that is not the only consumer response.
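To make the definition concrete, here is a minimal sketch with a made-up linear demand curve and supply curve (the numbers are purely illustrative):

```python
# Stylised shortage example (made-up linear demand and supply curves):
# at a price below equilibrium, quantity demanded exceeds quantity supplied.

def quantity_demanded(price):
    return 100 - 10 * price  # demand falls as the price rises

def quantity_supplied(price):
    return 20 + 10 * price   # supply rises with the price

current_price = 3.0
shortage = quantity_demanded(current_price) - quantity_supplied(current_price)
print(shortage)  # 70 - 50 = 20: a shortage of 20 units at the current price

# Bidding the price up to the equilibrium (p = 4) eliminates the shortage,
# since quantity_demanded(4) == quantity_supplied(4) == 60.
```

As the New Zealand Herald reported earlier this week: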

Interest in online auctions for chickens has more than doubled amid a nationwide egg shortage.

Trade Me spokesperson Ruby Topzand said searches for chickens, coops and feed had risen to more than 21,400 in the past week - up from 9300, a 129 per cent increase...

Store-purchased eggs and home-laid eggs are substitutes. When the price of one substitute increases, some consumers will switch to the other. Switching to substitutes can also occur when one good has a shortage (which is the case for eggs at the moment). So, some consumers are switching from store-purchased eggs to home-laid eggs, by buying their own chickens.

Are home-laid eggs cheaper? Not necessarily. The monetary cost per egg may be lower, but the consumer needs to factor in both up-front costs (not just the cost of the chickens, but the cost of the coop for the chickens to live in) and ongoing costs of feed for the chickens. And then, there is also a surprising amount of labour involved in raising chickens (not least the time it takes to find eggs laid by the devious fowl). The time and effort, as well as the monetary cost, may make home-laid eggs a more expensive option for most consumers (as well as many consumers not having a property that is suitable for keeping chickens). That's why few consumers own their own chickens already.
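As a rough illustration of that point, here is a back-of-the-envelope calculation (every number here is made up; actual costs will vary widely):

```python
# Hypothetical cost per home-laid egg, ignoring the value of the
# owner's time (all numbers are made up, purely for illustration).

coop_cost = 500.0       # up-front: the coop
chickens_cost = 60.0    # up-front: two laying hens
years = 3               # period over which to spread the up-front costs
feed_per_year = 200.0   # ongoing: feed
eggs_per_year = 500     # two hens at roughly 250 eggs each per year

upfront_per_year = (coop_cost + chickens_cost) / years
cost_per_egg = (upfront_per_year + feed_per_year) / eggs_per_year
print(round(cost_per_egg, 2))  # ~0.77 per egg, before counting any labour
```

Once the value of time spent feeding, cleaning, and hunting for eggs is added, home-laid eggs could easily cost more than even shortage-inflated store prices.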

However, even with the costs of home-laid eggs, the shortage has induced at least some consumers to make the switch. What will they do when the egg shortage is over?

Saturday, 7 January 2023

The evolutionary roots of folk economic beliefs?

'Folk economic beliefs' are the widespread beliefs about economic and policy issues that are held by members of the public who are untrained in economics. These include beliefs about trade, unemployment, the operation of markets, the effects of monetary policy, and so on. Many of these beliefs are incorrect, at least when compared with the views and models of the majority of economists.

What leads people to adopt incorrect folk economic beliefs? That is the topic of this 2018 article by Pascal Boyer (Washington University in St. Louis) and Michael Bang Petersen (Aarhus University), published in the journal Behavioral and Brain Sciences (ungated version here). Boyer and Petersen focus on eight particular examples of folk economic beliefs, and then link those beliefs to evolutionary psychology. They argue that:

...many folk-views on the economy are strongly influenced by the operation of non-conscious inference systems that were shaped by natural selection during our unique evolutionary history, to provide intuitive solutions to such recurrent adaptive problems as maintaining fairness in exchange, cultivating reiterated social interaction, building efficient and stable coalitions, or adjudicating issues of ownership, all within small-scale groups of foragers.

The eight folk economic beliefs (FEBs) that Boyer and Petersen focus on are:

  1. FEB 1: International trade is zero-sum, has negative effects;
  2. FEB 2: Immigrants “steal” jobs;
  3. FEB 3: Immigrants abuse the welfare system;
  4. FEB 4: Necessary social welfare programs are abused by scroungers;
  5. FEB 5: Markets have a negative social impact (“emporiophobia”);
  6. FEB 6: The profit motive is detrimental to general welfare;
  7. FEB 7: Labor is the source of value; and
  8. FEB 8: Price-regulation has the intended effects.

The particular aspects of evolutionary psychology that Boyer and Petersen invoke are: (1) detecting free riders in collective action; (2) partner choice for exchange; (3) exchange and assurance by communal sharing; (4) coalitional affiliation; and (5) ownership psychology. As they explain:

In any exchange, it is crucial to monitor whether the implicit or explicit terms of the exchange are being followed. For example, if two individuals take turns helping each other forage, does one person provide less help than he receives? To solve this problem, human exchange psychology needs to contain specific mechanisms for detecting and responding to free-riders...

To engage in exchange, one needs to choose among available social partners. Given the possibility of choice, human exchange and cooperation from ancestral times have taken place in the context of competition for cooperation... as each agent could advertise a willingness to cooperate (and signal how advantageous cooperation would be), and could choose or reject partners depending on their past and potential future behavior...

Competition for cooperation has specific consequences on fairness intuitions in the context of collective action. Given that two (or more) partners contribute equal effort to a joint endeavor, and receive benefits from it, an offer to split the benefits equally is likely to emerge as the most frequent strategy – anyone faced with a meaner division of spoils will be motivated to seek a more advantageous offer from other partners. So, to the extent that people have partner options, the constraints of partner-choice explain the spontaneous intuition that benefits from collective action must be proportional to each agent’s contribution...

One important form of social relations is founded on communal sharing, where resources are pooled...

Humans are special in that they build and maintain highly stable associations bounded by reciprocal and mutual duties and expectations. Such groups – called alliances or coalitions – may be found at many different levels of organization...

The psychology underlying coalitional strategies include the following assumptions: (a) relevant payoffs to other members of the coalition are considered as gains for self (and obviously, negative payoffs as losses to self); (b) payoffs for rival coalitions are assumed to be zero-sum – the rival coalition’s success is our loss, and vice-versa; and (c) the other members’ commitment to the common goal is crucial to one’s own welfare...

These assumptions reflect two crucial selection pressures operating on human groups: First, that alliances are competitive and exclusive, because social support is a rival good. Second, that resources, status, and many other goods are zero-sum and, hence, the object for rivalry between alliances...

For exchange to happen over human evolutionary history, our ancestors needed an elaborate psychology of ownership. Who is entitled to enjoy possession of a good, and to exchange it?...

Adults and even very young children have definite intuitions about who owns what particular good, in a specific situation. For instance, they generally assume that ownership applies to rival resources (that is, such that one person’s enjoyment of the resource diminishes another person’s); that prior possession implies ownership; that extracting a resource from the environment makes one the owner; that transforming an existing resource confers ownership rights; and that ownership can be transferred, but only through codified interactions...

Then, taking each FEB in turn, Boyer and Petersen first link FEB 1 to coalitional affiliation. On that point, I found this most interesting:

...we should expect the view that trade is bad to be particularly attractive when the trading crosses perceived coalitional boundaries. It is predicted to invariably occur in the context of, precisely, debates about trade between countries. American consumers may find it intuitive that the United States might suffer from Chinese prosperity, but, on this theory, they would find it less compelling that development in Vermont damages the economy of Texas.

That explains why my usual counter-point to non-economists' negative views of international trade, which is to note that perhaps Hamilton should close its borders to trade from the rest of New Zealand, often fails to hit the mark.

Boyer and Petersen then link FEB 2 and FEB 3 (which seem on the surface to be contradictory, as immigrants can't both steal jobs and abuse the social security system), to coalitional affiliation and detection of cheaters, reasoning that:

Immigrants are by definition newcomers to the community. Psychological research has shown that newcomers to groups activate this connection between coalitional cognition and cheater-detection, in particular, in situations where group membership is construed as conferring particular benefits. In such situations, newcomers are typically regarded with great suspicion...

The tight relationship between the concepts of nation and coalition... may explain the attractiveness of the statement that immigrants must be free-riders, scrounging on the past efforts of the host community. But, at the same time, the involved psychological systems leave open whether it is on job creation or on the welfare system that immigrants free-ride. 

FEB 4 is related to free-rider detection and notions of communal sharing, while FEB 5 and FEB 6 are linked to partner choice and the impersonal nature of markets:

In small-scale interactions, the balancing of costs and benefits occurs over reiterated exchanges, and, in order to predict these long-term outcomes, information about the partner’s reputation and past exchanges are key. Impersonal transactions, in contrast, are often anonymous, and therefore make it more difficult to track the reputation of one’s partners. To a psychology designed for partner-choice, this is likely to trigger an alarm signal, indicating that such a situation should be avoided. Second, strictly impersonal exchange goes against motivations to generate bonds of cooperation with particular individuals, as a form of social insurance. This may reinforce the intuition that impersonal transactions involve, if not danger, at least a missed opportunity. Finally, systems for partner-choice are set up to avoid engaging in exchange relationships with individuals who are much more powerful, in order to avoid exploitation... In modern markets, however, many exchanges take place with corporations or business that seem exceptionally powerful from the perspective of the individual.

FEB 7 on the labour theory of value is linked to ownership psychology, where Boyer and Petersen note that:

Ancestrally, most valued and owned goods were previously unclaimed natural resources that time and effort turned into something useable (whether food, tools, or shelter). In such situations, labor is indeed the exclusive generator of both “value” and ownership.

Finally, FEB 8 is not linked to any of the previous aspects, but instead:

To explain this FEB, we need to take into account the fact that unintended consequences of this kind are second-order effects that occur in large-scale social systems. They reflect aggregate market responses to changes in costs and benefits (e.g., if the price of the good is regulated downwards, the market responds by decreasing quantities supplied). But our psychology of social exchange is designed for small-scale social systems, for personal exchanges between oneself and one or more identified others. The intuitive inference systems that evolved to deal with such situations do not, because of the small-scale nature of the situations, include any conceptual slots for aggregate dynamics such as origins of supply. In this way, FEBs about regulation do not emerge from a single set of intuitive inference systems. Rather, they emerge from the failure of particular pieces of information to be processed by any intuitive inference system.

Boyer and Petersen's arguments are interesting, but not all their explanations are equally convincing, especially the last one. There is an excellent debate (called 'open peer review') over the subsequent pages of the journal version of the article (not the ungated one, sadly), which is well worth reading. However, the whole exercise smacks of exactly the problems that Jason Collins noted about behavioural economics in this article (which I discussed here) - the explanations are very ad hoc, and there is no real unifying framework that demonstrates which aspects of evolutionary psychology should apply to which folk economic beliefs. Without something more systematic, we are simply left with some interesting explanations that may or may not hold in a wider context.

Friday, 6 January 2023

Rational Danish bank robbers

Gary Becker's theory of rational crime suggests that criminals weigh up the costs and benefits of the crimes they commit. If the costs of crime go down, or the benefits go up, they will commit more crimes. On the other hand, if the costs of crime go up, or the benefits go down, they will commit fewer crimes. If the benefits of crime fall to zero (or below the costs of even the lowest-cost criminal), perhaps the number of crimes falls to zero?
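The core decision rule is simple enough to write down. A minimal sketch (with entirely made-up numbers): a rational criminal commits a crime only when its benefit exceeds its expected cost:

```python
# Stylised version of Becker's rational crime condition (the numbers
# are made up, purely for illustration).

def commits_crime(benefit, p_caught, punishment_cost):
    # Commit the crime only if the benefit exceeds the expected
    # punishment (probability of capture times the cost if caught)
    return benefit > p_caught * punishment_cost

# A branch holding plenty of cash, with modest surveillance
print(commits_crime(benefit=50_000, p_caught=0.3, punishment_cost=100_000))  # True

# A near-cashless branch with better surveillance
print(commits_crime(benefit=2_000, p_caught=0.6, punishment_cost=100_000))   # False
```

The New Zealand Herald reported earlier this week: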

For the first time in years, Denmark hasn’t recorded a single bank robbery. There wouldn’t have been much point.

Cash transactions in the Nordic country have become virtually obsolete, with Danes increasingly opting to use cards and smart phones for payments...

Finance Denmark, the banking sector’s association, said only about 20 bank branches across the country have cash holdings. But then the number of bank branches has fallen from 219 in 1991 to 56 in 2021, it said.

News reports noted that cash withdrawals in Denmark have dropped by about three-quarters over the past six years.

In 2000, 221 bank robberies were recorded, Finance Denmark said. In 2021, there was just one.

If there are fewer banks, holding less cash on the premises, then the benefits of committing a bank robbery are lower, and there are fewer bank robberies (in this case none at all). However, the criminals appear to be switching to close substitutes instead:

Initially, robbers switched their attentions from bank branches to Automatic Teller Machines, with such attacks peaking at 18 in 2016. But those too have come down to zero amid better surveillance and technical protection, the industry association said.

Better surveillance and technical protection raises the costs of crime, which again reduces the number of crimes (in this case, attacks against ATMs). Finally:

Finance Denmark said criminals in recent years have turned to defrauding people online.

Watch out for those online scams, people!