Wednesday, 31 January 2024

Cheating in the lab and in the field

A commonly used test for dishonesty or cheating in lab experiments is to ask research participants to privately (unobserved by the researcher) roll a number of dice and report the number of sixes they roll, or to privately toss a number of coins and report the number of heads. If research participants are paid for each six they roll, or each head they toss, then they have an incentive to cheat, by reporting more sixes or more heads than they actually obtained. Now, this doesn't give an individual measure of dishonesty, since a research participant might really have tossed six heads in a row (on average, one in every 64 research participants should have that outcome), but it does provide a measure of the extent of cheating across the whole sample (for example, if half of research participants report tossing five or more heads out of six, you can be pretty sure there is a lot of cheating going on).
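To make the sample-level logic concrete, here's a quick sketch (in Python) of the binomial probabilities behind the coin-tossing example:

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    # Probability of at least k heads in n tosses of a fair coin
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# An honest participant tosses six heads out of six with probability 1/64...
p_six_heads = prob_at_least(6, 6)   # 1/64, about 0.016

# ...and reports five or more heads out of six with probability 7/64, about 0.109
p_five_plus = prob_at_least(5, 6)

# So if half the sample reports five or more heads, when only about 11% of
# honest participants should, that gap is strong evidence of cheating
```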

The real question, though, is whether this measure of dishonesty replicates outside the lab environment. That is essentially the question addressed in this 2018 article by Alain Cohn (University of Michigan) and Michel André Maréchal (University of Zurich), published in the Economic Journal (ungated earlier version here). Using a sample of 162 public middle and high school students in Switzerland, Cohn and Maréchal started with the lab measure of dishonesty:

Subjects first opened an envelope containing 10 coins, each worth 0.5 Swiss francs (about US $0.55). Then, they were instructed to toss each coin in private and report their outcomes on paper. For every coin toss for which subjects reported the outcome ‘heads’ they were allowed to keep the coin; they had to put the coin back into the envelope otherwise. Participants thus faced a financial incentive to cheat by misreporting the outcomes of their coin flips without any risk of getting caught... The stakes were considerable as the maximum possible payoff in this task corresponds roughly to half the amount students of similar age receive in pocket money every week.

Cohn and Maréchal then look at the relationship between the number of heads reported and measures of school misconduct (reported by their teachers): (1) disruptiveness in class; (2) non-completion of homework; and (3) absenteeism. Since the three measures of misconduct were highly correlated, they created a single index from them (but their results also appear to hold for each measure individually). They found that:

On average, the students took 62.8% of the coins in the envelopes (95% confidence interval: 60.0%, 65.7%)...
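As a back-of-the-envelope check (my calculation, not the authors'), treating all 162 × 10 coin tosses as independent gives a sense of how far 62.8% is from the honest benchmark of 50%. A proper analysis would cluster by student, which widens the interval, but the conclusion is the same:

```python
from math import sqrt

n = 162 * 10      # 162 students × 10 coins, naively treated as independent tosses
p_hat = 0.628     # share of coins reported as heads (and so taken)
p0 = 0.5          # expected share under honest reporting

se = sqrt(p0 * (1 - p0) / n)   # standard error under the null hypothesis
z = (p_hat - p0) / se          # roughly 10.3: overwhelming evidence of over-reporting

# Implied average over-reporting per student:
excess_coins = (p_hat - p0) * 10   # roughly 1.28 extra 'heads' per student
```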

Given that, on average, honest students should have taken 50 percent of the coins, there was a substantial amount of cheating. Who cheated most? Cohn and Maréchal report that:

...female students behaved more honestly than male students as they took significantly less coins (p < 0.000, t-test)... Moreover, we found that high school students took significantly less coins than those from middle school after controlling for age (p = 0.011, t-test), which could be explained by less deviant students selecting into higher education. Earnings in the coin tossing task and the two measures of cognitive ability are negatively correlated. However, the correlations do not reach statistical significance, neither for crystallised nor for fluid intelligence (p = 0.599 and p = 0.744, t-tests).

What about school misconduct (cheating outside the lab)? Cohn and Maréchal report that:

...behaviour in the coin tossing task is significantly related to school misbehaviour when controlling for age, gender, nationality, education level and parental education. A higher number of coins taken is associated with increased behavioural problems in school (p = 0.015)... The coefficient estimate implies that the difference in school misbehaviour between students who took 10 coins (presumably cheaters) and those who took five coins (presumably honest individuals) is more than 0.7 points (or 0.53 standard deviations) on average. For comparison, it would require students to differ by 2.7 standard deviations in cognitive ability (i.e. crystallised intelligence) to produce the same difference in school misbehaviour.

So, the important takeaway from this paper is that there is support for the external validity of the lab measure of cheating or dishonesty. At least, to the extent that misbehaviour by school students is the same as dishonesty (which, of course, it isn't). So, while this should give experimental economists and others a little bit more comfort in using the lab measures of dishonesty, more studies of the external validity of these measures, in other contexts, are sorely needed.

Monday, 29 January 2024

An extraordinary (but weak) claim about China's poverty reduction performance

Over the last forty years, global poverty has reduced dramatically. To see how dramatically, run this animation from Our World in Data (source here):


That animation is based on a poverty line of US$30 per day, which is about the average poverty line in a developed country. A poverty line is a level of income that separates the poor (who have income below the poverty line) from the non-poor (who have income above the poverty line). The poverty rate is then the proportion of the population who are poor. For example, the World Bank uses a poverty line that is currently US$2.15 per day (the successor to the original 'dollar a day' line, adjusted over time for changes in prices) to determine rates of extreme poverty. By that measure, over the last 40 years, 800 million people have been lifted out of poverty, and China contributed close to three-quarters of that number.
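As a minimal illustration of those definitions (the incomes below are made up for the example), the poverty rate is just the share of the population below whatever line is chosen:

```python
# Hypothetical daily incomes (US$) for a tiny ten-person population
incomes = [1.10, 1.80, 2.00, 2.50, 3.20, 4.75, 6.00, 9.40, 15.00, 31.00]

def poverty_rate(incomes, poverty_line):
    # Share of the population with income below the poverty line
    return sum(income < poverty_line for income in incomes) / len(incomes)

extreme = poverty_rate(incomes, 2.15)       # extreme poverty line: 3 of 10 are poor
rich_country = poverty_rate(incomes, 30.0)  # rich-country line: 9 of 10 are poor
```

The same incomes produce very different poverty rates depending on where the line is drawn, which is exactly why the level of the line is so contested.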

The level of the extreme poverty line has been the subject of significant debate over the years (with many articles and several books devoted to the topic). Nevertheless, that global poverty has declined, and that China has contributed significantly to this success, is generally accepted. So, I was very surprised to read this article in The Conversation by Dylan Sullivan (Macquarie University), Jason Hickel (Autonomous University of Barcelona), and Michail Moatsos (Maastricht University), which presents a counter-view. They argue that:

In contrast to the World Bank, we find that from 1981 to 1990 – at the end of the socialist period – China’s rate of extreme poverty was one of the lowest in the developing world. It averaged only 5.6%, compared to 51% in India, 36.5% in Indonesia and 29.5% in Brazil.

We find extreme poverty increased dramatically during the market reforms of the 1990s. It reached a peak of 68% as price deregulation pushed up the cost of basic food and housing, cutting the buying power of low-income people.

Extreme poverty then slid during the 2000s, but has yet to fall to the levels calculated by the World Bank.

This is an extraordinary claim, so I read the underlying research article, which was just published in the journal New Political Economy (open access). The key difference between Sullivan et al.'s results and those of the World Bank lies in how the poverty line is calculated. As they explain:

In recent years, scholars have developed an alternative approach to measuring extreme poverty, which compares incomes against the cost of basic needs in different contexts (Moatsos 2016, Allen 2017, 2020). In 2021, the OECD published estimates of the share of the population below this ‘basic needs poverty line’ (BNPL), for all countries with available household survey based data from 1981 to 2008.

It is the use of this Basic Needs Poverty Line (BNPL) that explains the extraordinary results, and that is because when it comes to basic needs:

Socialist policies of public provisioning and price controls may keep the cost of meeting basic needs quite low compared to capitalist contexts characterised by high levels of commodification and privatisation. This means that any given level of broad-gauge PPP income would have a greater welfare purchasing power – in terms of basic needs – under socialism than under capitalism; people would have better access to the key goods, such as food and housing, that are necessary for escaping extreme poverty.

So, as China opened its economy up in the 1980s, privatising housing and other markets, the cost for a household to satisfy its basic needs increased, meaning that more households would be defined as extremely poor based on the BNPL measure. So, Sullivan et al. end up with this comparison between China and several other 'middle income' countries (Figure 4 from the article):

Sullivan et al. then go on to compare those countries' performance across a range of measures that should be correlated with living standards, including literacy, school enrolment, life expectancy, health access (physicians and hospital beds per 1000 people), and calorie availability. They compare China with those other countries in 1981 and in 1990, a period where extreme poverty was reducing based on the World Bank poverty line, but where poverty based on the BNPL was relatively flat (as shown above). With these comparisons, they show that:

In sum, the empirical data on social indicators raises significant questions about the validity of the World Bank’s estimates of the poverty rate in China during the 1980s. In 25 out of 29 comparisons, China achieved the 1st or 2nd best score of the countries reviewed here, as we would expect from China’s relative performance on the BNPL.

There is a problem with the comparisons that they make though. They look at how China ranked in 1981 and 1990 compared with these other countries, and use China's declining ranking compared to these other countries as evidence in support of the BNPL data. However, they don't make the very obvious comparison of China in 1990 with China in 1981 on those same measures. If extreme poverty was decreasing, as the World Bank data suggests, then we would expect an improvement in these other measures of living standards. And indeed, that is what we see. The literacy rate in China increased from 66% to 78%, life expectancy increased from 67 to 69, and calorie availability increased from 103% to 119% of basic needs. None of that is inconsistent with a reduction in extreme poverty. Of course, that is also consistent with the BNPL data, which shows an (albeit slight) decrease in extreme poverty between 1981 and 1990 as well. An even better comparison would have been to look at the 1990s, where poverty based on the World Bank data and poverty based on the BNPL dramatically diverged. However, Sullivan et al. don't make that comparison (presumably because it doesn't support their argument).

The setting of a global extreme poverty line is quite fraught, and there will always be disagreements about how it should be determined, and at what level it should be set. However, as the American astronomer Carl Sagan noted, extraordinary claims require extraordinary evidence. The evidence that Sullivan et al. have brought to support their extraordinary claim is not extraordinary. Before we overturn the consensus that China dramatically reduced extreme poverty during the past 40 years, we need something more.

Sunday, 28 January 2024

The post-COVID-19 housing boom as a cause of the US Great Resignation

The term 'Great Resignation' came into use last year to refer to a sustained decline in the labour force participation rate in the US. As I noted last year, there is little evidence that there has been a Great Resignation in New Zealand, despite media commentary suggesting that there has been. However, this working paper from last year, by Jack Favilukis and Gen Li (both University of British Columbia), is making me wonder why New Zealand didn't experience a Great Resignation.

Favilukis and Li look at whether the COVID-19 housing boom explains the US Great Resignation, using data from the American Community Survey. They start by showing that the Great Resignation was concentrated among older workers. Specifically:

First, up to 2019, the labor force participation rate was rising for all age groups, but especially for older Americans. Second, in 2020, labor force participation fell for all groups, but most dramatically for the youngest and oldest Americans. Third, with the exception of the oldest Americans, all groups returned to the labor force in 2021 and were near or even above 2019 rates by 2022. On the other hand, the oldest Americans further reduced labor force participation in 2021 and continued to stay out of the labor force in 2022. By 2022, nearly the entire reduction in the labor force participation rate was due to the 65+ group. This is especially striking given the pre-2020 trend.

Favilukis and Li then look at whether employment, or weekly hours worked, are related to the housing market return over the past 4.5 years. They estimate this relationship separately for each year, age group, and homeowners/renters, allowing them to look at whether these various groups were affected in different ways by the post-COVID-19 housing boom. The results don't appear to be particularly sensitive to the choice of 4.5 years of housing market returns as the key variable, with other lengths of time showing similar results.

They find that, for 2021:

Renters tend to increase their labor force participation in response to house price increases; a 40 year old renter increases her probability of being in the labor force by approximately 0.07×0.10 =0.7% for every 10% increase in house prices (e.g. from 80% to 80.7% participation). This is statistically significant... Older renters are less reactive to house price changes, with slopes slightly positive but rarely statistically significant.

Younger owners also increase their labor force participation in response to house price increases, although their response is much lower than that of younger renters... Middle aged owners are relatively unresponsive to house price changes – their labor force participation falls slightly in response to higher prices but this is not statistically significant for one year age buckets.

However, individuals above 60 have a strong negative response; a 65 year old owner decreases her probability of being in the labor force by approximately 0.11 × 0.10 =1.1% for every 10% increase in house prices (e.g. from 15% to 13.9% participation). This is strongly statistically significant...
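The arithmetic behind those quoted magnitudes is just the estimated slope multiplied by the house price change, expressed in percentage points (coefficients as given in the quoted passage):

```python
def participation_change(slope, price_increase):
    # Change in labour force participation (in percentage points) for a given
    # proportional increase in house prices
    return slope * price_increase * 100

# A 40-year-old renter (slope of about +0.07) facing a 10% house price rise:
renter_pp = participation_change(0.07, 0.10)    # +0.7 pp (e.g. 80% -> 80.7%)

# A 65-year-old owner (slope of about -0.11) facing the same 10% rise:
owner_pp = participation_change(-0.11, 0.10)    # -1.1 pp (e.g. 15% -> 13.9%)
```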

So, it appears that the 'Great Resignation' was concentrated not only among older workers, but among older workers who are homeowners, rather than renters. And, the propensity to be out of work within that group is strongly related to housing returns. Moreover, in metropolitan areas where housing returns were higher, the Great Resignation among older homeowners was bigger.

Which brings me back to my comment from the start of this post. Post-COVID-19 housing returns have been relatively strong (according to the QV House Price Index), and yet we haven't experienced the same substantial decrease in the labour force participation rate as seen in the US. Here's the labour force participation rate among people aged 65 years and over in New Zealand over the period since 2016 (source here):

There's little evidence of a Great Resignation among older people in New Zealand. The trend remains generally upwards over time. Unfortunately, the freely available data don't allow us to look at the difference between homeowners and renters, but if there were effects similar to those reported in the Favilukis and Li paper, a Great Resignation among older homeowners should show up in the overall statistics, even without separating homeowners from renters.

Favilukis and Li conclude that:

High house prices allowed many older Americans to retire early; if not for the high house prices, their labor force participation in 2021 would have been similar to 2019.

However, clearly there is more to the story than simply a housing boom leading to Great Resignation, otherwise we would have likely seen the same effect in New Zealand. New Zealand house prices are high. Why didn't that induce older New Zealand homeowners to retire early? Hopefully, someone is investigating that question.

[HT: Marginal Revolution, last year]


Friday, 26 January 2024

This week in research #7

I've been a bit busy this week due to a heavy dose of research fieldwork, but nevertheless, here's what caught my eye in research over the past week:

  • Hoekman and Rake (open access) investigate the geography of academic authorship and find, perhaps unsurprisingly, that authorship is more likely for researchers who are located closer to project sponsors, and where there is less national competition
  • Jedwab, Johnson, and Koyama (with ungated earlier version here) find significant heterogeneity in recovery after the Black Death, with populations returning disproportionately to locations endowed with more rural and urban fixed factors of production
  • Pazzona (open access) conducts a meta-analysis on the relationship between income inequality and crime, and finds a statistically significant but economically insignificant relationship

And from my own research:

  • With a large number of co-authors led by John Oetzel, our new article (open access) in the journal Frontiers in Public Health reports on the levels and covariates of health-related quality of life, self-rated health, spirituality, life satisfaction, and loneliness, from a sample of 75 kaumātua (Māori elders)

Tuesday, 23 January 2024

Sex sells on Instagram

We've all heard the phrase 'sex sells', when applied to advertising. It turns out that it may be true in the social media space as well. A new article by Sophia Gaenssle (Erasmus University Rotterdam), published in the journal Kyklos (open access) provides some support for this.

Gaenssle collected data over a six-month period from Heepsy (a social media influencer marketing tool) on the top 500 influencers across five categories: (1) fashion and beauty, (2) fitness and sports, (3) music, (4) photo and arts, and (5) food and vegan. The data included the average price for a sponsored post on Instagram by each influencer, and the number of such posts each influencer posted. That allowed Gaenssle to estimate social media income for each influencer (or, at least, their income from sponsored content). The data also included the 12 most recent posts from each account, which Gaenssle used to create measures of the extent of body exposure (at the extreme, nudity) for each account. Specifically:

If a picture shows 50% or more naked skin (excluding portrait pictures), the body exposure is coded as 1. If there is less nudity in the picture, it is coded as 0. To account for the degree of suggestiveness, if 50% or more focus lie on “dressed” primary sexual characteristics (breasts, bottom, genitals), this is also coded as 1... The percentage of body exposure pictures is calculated for every account...

Basically, the body exposure measure is the proportion of the 12 posts that included some degree of nakedness. Based on the pictures in the appendix to the paper, it doesn't take a whole lot for a picture to be coded as body exposure. However, interestingly:

The average body exposure is 37%—so less than half of the pictures on the accounts are on average nude pictures. But there are accounts with 0 and accounts with 100% body exposure.

Given that two of the five categories are 'photo and arts' and 'food and vegan', I guess that shouldn't come as a surprise. Indeed, body exposure is lowest in those two categories, as shown in the final column of Table 7 in the paper:

Gaenssle also determines the gender distribution of content for each account (i.e. whether the account predominantly features males or females in its posts). As they explain:

I coded five different variables: (1) female (clearly female features, one or more women); (2) male (clearly male features, one or more men); (3) mixed (clearly male and female, if more than one person in the picture); (4) ambivalent (sexual characteristics recognizable, but not clearly attributable to a man or a woman, e.g., transvestite or similar); and (5) no identification (no characteristics visible)... it is possible to implement a proxy—the degree of female or male pictures of every account.

This distribution is also shown in Table 7 above. It is important to note that this isn't the gender of the account holder. It is really picking up the gender distribution of their posted pictures. Gaenssle doesn't really draw out this distinction in the paper, but it is important for interpretation. [*]

Gaenssle then analyses the relationship between gender distribution and four measures: (1) body exposure; (2) posting frequency; (3) price per picture; and (4) average advertising revenue (which is essentially posting frequency multiplied by the average price per picture). I'm only going to focus here on the first and last of those measures. In relation to body exposure, they find that:

...accounts with focus on female contents have significantly higher body exposure than accounts with focus on male contents (p-value = 0.0000). As such, women appear to show more nudity (mean men = 1.25 pics out of 12, mean women 4.3 pics out of 12).

If you've spent much time on Instagram, that finding probably wouldn't shock you. And neither would this:

Although female accounts achieve lower prices per picture, their revenue is significantly higher. The difference in posting frequency compensates for the price difference, so that women ultimately achieve higher ad revenues (p-value = 0.0238), (mean men = 22,654.02 USD per week, mean women 26,209.39 USD per week).

I guess I was surprised that the results were so close. However, bear in mind that this isn't comparing earnings for male influencers with female influencers. It is based on the gender shown in the pictures on each account.

The important question, though, is: does body exposure increase income for these influencers? On that point, Gaenssle finds that:

...body exposure has a significant positive effect on income in all four models... one increase in body exposure (one more picture coded as nude) increases the advertising revenue by 3.9%.

That result is certainly consistent with the idea that 'sex sells'. Gaenssle then digs a bit further, showing that across the five categories:

Except for Music, the trend is “accounts with higher degree of nudity can achieve higher levels of income.”... Only within the category Fitness, the effects are so strong that significant distinctions can be observed. Here, nudity is specifically beneficial.

Then looking at the effect of gender distribution of pictures by category, Gaenssle finds that the effects of body exposure on income are largest for fitness and photo accounts that show predominantly male images. However, those results are possibly stretching the data a little too far, and the confidence intervals on those results are quite wide (such that there aren't statistically significant differences by gender across categories).

So, overall the results suggest that sex sells on Instagram. Of course, Gaenssle isn't demonstrating a causal relationship here, simply that accounts that exhibit more body exposure have higher average earnings. It could be that there is some other factor that is driving both body exposure and earnings. For example, there is evidence that extroverted people earn more on average than introverted people (see here, for example). Extroverts may be more willing to have higher body exposure on their Instagram account. Gaenssle doesn't account for the confounding effect of extroversion (or other personality traits) on their results, so can't claim that body exposure causes higher earnings. Nevertheless, the results are interesting and worth further exploration.

*****

[*] Of course, Instagram accounts generally do feature pictures of the account holder, but this isn't universally the case.

Monday, 22 January 2024

More on legalised marijuana and student academic performance

Following last week's post about marijuana legalisation and student time use, I followed up by reading this 2020 article by Adam Wright and John Krieg (both Western Washington University), published in the journal Economic Inquiry (sorry, I don't see an ungated version online). They look at the case of Western Washington University, comparing the academic performance of students before and after turning 21, in the periods before and after 2012, when marijuana was legalised in Washington State (and became legally available to people aged 21 years and over). This difference-in-differences analysis is necessary in order to separate the effect of marijuana access from the effect of alcohol access (since alcohol becomes legally available at age 21 as well).

Wright and Krieg use student-level data from 2003 to 2017, which allows them to control for student and class (course-by-instructor-by-quarter) fixed effects, as well as dealing with the 'seasonality' of grading (more on that in a moment). Their sample contains over 1.1 million student-course grade observations, from over 29,000 students. The key change in average grades between the period before, and the period after, marijuana legalisation in 2012 is captured in Figure 1 from the paper:

There are a few things to note about this figure. First, you can see the seasonality in grades. Average grades are highest in the summer quarters (Western Washington University operates four quarters per year, rather than semesters or trimesters). Second, there is a distinct drop in average grades between just before, and just after, marijuana legalisation. Third, there is obvious grade inflation both before, and after, marijuana legalisation (notice the upward-sloping dashed trend lines).

Wright and Krieg control for the 'seasonality' and grade inflation through their use of fixed effects. Moreover, they:

...include various controls for experience designed to capture expected grade changes as a student makes progress toward degree completion. We include these controls to separate phenomena such as changes in motivation as students approach the end of their college career from the effect of turning 21, which also tends to happen near degree completion. These experience controls include the overall number of accumulated credits, the number of credits a student has accumulated within the course’s academic department, and student age at the beginning of the term (in months)...

They then find that:

Prior to legalization, students’ grades are estimated to fall by approximately 0.03 standard deviations after turning 21 relative to their earlier grades. This decline is nearly identical to Lindo et al.’s estimate of the effect of turning 21—an effect that they (and we) attribute to legal alcohol access. After marijuana legalization, our estimates indicate that the post-21 effect grows by about half to 0.046 standard deviations, suggesting that legalization further reduces student performance by 0.016 standard deviations.

That is quite a substantial effect. Incidentally, the Lindo et al. paper that they refer to is one that I discussed here. Wright and Krieg then look at specific grades and find that:

...legal access to alcohol decreases the likelihood that a student receives an A grade and increases the likelihood that a student receives a C, D, or F grade—exactly the same pattern attributed to legal alcohol access by Lindo et al. (2013). Legal access to marijuana appears to exacerbate these shifts in the grade distribution with a statistically significant (at the 5% level) increase in the probability that a student earns a D or F grade by 0.3 percentage points. In our sample, 4.4% of students receive a D or F grade so an increase of 0.3 percentage points equates to an increase in the probability of receiving a D or F by about 7%.
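The 7% figure in that passage is a relative change: the percentage-point increase divided by the baseline rate. Reproducing it with the numbers from the quote:

```python
baseline_df_rate = 0.044   # 4.4% of students receive a D or F grade
increase_pp = 0.003        # legal marijuana access adds 0.3 percentage points

relative_increase = increase_pp / baseline_df_rate   # about 0.068, i.e. roughly 7%
```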

So far, so unsurprising (at least, in light of the other literature on this topic). Wright and Krieg then go on to show that the effect is almost entirely concentrated among male students, among students in quantitative (rather than non-quantitative) classes, and that when the sample is separated into high-ability and low-ability students (based on students being above or below the median SAT or ACT scores) that:

...while the grades of both ability groups were negatively impacted by legal alcohol access, only low-ability students’ grades suffered after gaining legal marijuana access. The point estimates for low-ability students suggest that the marijuana effect was nearly as large as the alcohol effect (−0.024 vs. −0.028 standard deviations) for this group.

Finally, Wright and Krieg show that:

...after gaining legal access [to] marijuana, students attempt fewer course credits and enroll in courses that are expected to offer higher grades.

Now, the headline effect (a reduction in student performance of 0.016 standard deviations) is a bit smaller than that observed in this earlier study by Marie and Zölitz (ungated version here, and I blogged about it here). However, the difference may be entirely down to the nature of the analysis in this paper. Since Wright and Krieg don't know which students are marijuana users, their analysis is essentially an 'intent-to-treat' analysis. That is, it looks at the average treatment effect across both students who do, and students who do not, use marijuana. Once you take that into account, Wright and Krieg note that:

Using this as our estimate for the proportion of students who consumed marijuana in response to the change in policy, the treatment effect on the treated would be about 0.23 standard deviations (0.016/0.07).
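Scaling an intent-to-treat estimate up to a treatment-on-the-treated effect simply divides by the share of the sample assumed to be treated; with Wright and Krieg's numbers:

```python
itt_effect = 0.016     # average effect across all students (standard deviations)
share_treated = 0.07   # assumed share of students taking up marijuana post-legalisation

tot_effect = itt_effect / share_treated   # about 0.23 standard deviations
```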

That effect is very similar in size to the effect reported by Marie and Zölitz. Between all these studies then, we are getting a clear picture that legalised access to marijuana has an appreciable negative impact on student performance. It isn't irrational for students to use marijuana, but the debate on legalisation should take these negative impacts into account.


Saturday, 20 January 2024

Proud to pay, and yet they don't

This story in The Guardian caught my attention this week:

More than 250 billionaires and millionaires are demanding that the political elite meeting for the World Economic Forum in Davos introduce wealth taxes to help pay for better public services around the world.

“Our request is simple: we ask you to tax us, the very richest in society,” the wealthy people said in an open letter to world leaders. “This will not fundamentally alter our standard of living, nor deprive our children, nor harm our nations’ economic growth. But it will turn extreme and unproductive private wealth into an investment for our common democratic future.”

The rich signatories from 17 countries include Disney heir Abigail Disney; Brian Cox who played fictional billionaire Logan Roy in Succession; actor and screenwriter Simon Pegg; and Valerie Rockefeller, an heir to the US dynasty.

“We are also the people who benefit most from the status quo,” they said in a letter titled Proud to Pay, which they will attempt to deliver to world leaders gathered in Davos in Switzerland on Wednesday. “But inequality has reached a tipping point, and its cost to our economic, societal and ecological stability risk is severe – and growing every day. In short, we need action now.”

A new poll of the super-rich shows that 74% support higher taxes on wealth to help address the cost of living crisis and improve public services.

I'm confused. If these billionaires (let's call them the 'willing wealthy') want to give more of their wealth to the government, there is literally nothing stopping them from doing that right now. Governments don't need to do anything. The willing wealthy who are 'proud to pay' can each cut a cheque right now. It gets even better for the willing wealthy though. Since they are not legally obligated to make this extra payment, they can even choose which government to pay it to. 

So, what is stopping the willing wealthy from making extra payments to the government? I can offer you two cynical explanations for this behaviour.

First, if the willing wealthy give up some of their wealth through taxes, rather than donating it to the government, then they get to maintain their current status relative to other wealthy people. If everyone is made a bit poorer, at the same rate, then the willing wealthy's ranking among wealthy people remains unchanged. Moreover, they will still be much richer than the average person, so they get to keep their wealthy-person lifestyle. [*] Nothing much really changes for them. However, if the willing wealthy give up some of their wealth through donations to the government, they will lose status relative to the wealthy people who don't give up anything. [**]

Second, offering to give up some of your wealth is a great way to virtue signal: Look at all these selfless wealthy people, willing to make a great sacrifice. Why won't governments listen to them?

The willing wealthy know that governments aren't going to call their bluff. Governments are unlikely to implement a wealth tax, or raise the top marginal income tax rate. The willing wealthy are a minority among wealthy taxpayers and political donors. Governments aren't going to piss off their donors. But the willing wealthy can get a lot of good media coverage of their willingness to sacrifice their wealth. What better way to take a target off your back, than to move it from the wealthy (who are willing to be taxed) to the government (who are unwilling to tax them)?

To all this, I say: Put your money where your mouth is, Abigail Disney. Cut a cheque to the IRS today. If you really want the government to have more of your wealth, then give it to them. Put up, or shut up.

*****

[*] Also, if every wealthy person has a bit less wealth, then the demand for luxury goods and services that the wealthy buy will fall by a little, lowering the price of those goods and services. It becomes a little bit less expensive at the margin to maintain the wealthy-person lifestyle. I don't think this is much of a motivating factor though.

[**] Note that this is different from donations to other worthy causes. Philanthropy can increase status, but I don't think anyone (wealthy or otherwise) is going to characterise making donations to the government as philanthropy.

Friday, 19 January 2024

This week in research #6

Here's what caught my eye in research over the past week (which, to be honest, was fairly quiet):

  • Vij et al. (open access) explore Australian employees' preferences for working from home, and find that the average worker would be willing to forego roughly 4-8 per cent of their annual wages to have the ability to work remotely some workdays and/or workhours
  • McKenzie looks at the question of if migration is so beneficial, why don’t more people do it?
  • Navon and de Silva (open access) use pedestrian count data to derive measures of local economic activity for Melbourne

New from the Waikato working papers series:

  • Tucker and Xu follow up on their previous working paper from a couple of weeks ago, showing experimentally that speculation plays a critical role in bubble formation, and thus does matter

Thursday, 18 January 2024

The case for more deception in experimental economics

Generally, in research, it is best to avoid deception. However, there genuinely are some cases where research would be difficult or impossible without some aspect of deception. For instance, how could you research whether people can tell the difference between dog food and pâté (ungated earlier version here), or between bottled water and tap water, if you couldn't do a blind taste test (which necessarily involves some deception)? Similarly, in some research I conducted last year (which I will blog about in the future), we had alcohol delivered from a number of different delivery firms, to see which ones (if any) would ask for ID when the alcohol was delivered. [*]

Clearly, the extent of deception can vary from the relatively benign to the seriously problematic. However, in order to avoid seriously problematic cases of deception, is it necessary to have an effective prohibition on deception in research? That is essentially the question discussed in this 2019 article by David Just (Cornell University), published in the journal Food Policy (sorry, I don't see an ungated version online). Just focuses on experimental economics, where:

The earliest published books on experimental economic methodology each give strong warnings to those entering the field that they must never use deception in a laboratory experiment...

Why is this effective prohibition in place? Just notes that:

The rationale for the prohibition on deception generally invokes both reputation effects and public goods. If subjects have participated in experiments in which they are deceived, they may be likely to anticipate deception in the next economic experiment they participate in, potentially changing their behavior. Moreover, if economic experimentalists generally gain a reputation for engaging in deception (perhaps if some economic experiments that engage in deception become well known) then the general pool of subjects may fail to respond to incentives as they normally would because of the potential for deception. Thus, researchers engaging in deception for their own benefit from a novel publication potentially have a negative impact on all other researchers in the field by exhausting the finite resource of trust on the part of participants for the researcher.

However, Just also notes that there are problems with this argument. First, it is not evidence-based, which is kind of ironic:

Experimental economics was created specifically to bring internally valid evidence to bare on theories of economic behavior. It is perhaps the height of irony that the prohibition on deception at once is based primarily on untested theories of behavior...

Second, since psychologists already employ deception in experiments, and generally draw from the same pool of (typically undergraduate student) subjects, then the pool of subjects is already tainted by deception. On this point, Just says that:

...I worry that the distinction between economic and psychological experiments (where deception is permitted) is only salient in the minds of economists and not experimental subjects. Notably experimental laboratories in both fields draw subjects from both the student population and the general population. Unless these participants distinguish between behavioral experiments conducted by psychologists and those conducted by economists our prohibition on deception is useless.

In other words, neither the reputation effects nor the public goods effects are strong enough arguments against deception in experimental economics. That isn't to say that it should be open season for deceptive experiments. However, the effective prohibition on deception is something that should be seriously reconsidered.

*****

[*] In the case of the alcohol delivery research, I wasn't fully convinced that we were being deceptive, as we were making genuine purchases in exactly the same way that other customers did. The only difference was that we were going to drink the alcohol after the purchase. Anyway, the University's Ethics Committee disagreed, so I guess it was deceptive at some level.

Wednesday, 17 January 2024

Legalised marijuana and rational student time use

In previous posts, I have written about how access to alcohol has very little effect on students' academic performance, but legalised marijuana appears to be associated with worse performance. Why the difference? It may come down to how access to these products affects students' time use. So, I was interested to read this 2018 article by Yu-Wei Luke Chu (Victoria University of Wellington) and Seth Gershenson (American University), published in the journal Economics of Education Review (ungated earlier version here). They use data from the American Time Use Survey, and compare the time use of high school and college students in states with and states without medical marijuana laws (MML), before and after the laws were implemented (in a difference-in-differences strategy). The time use data looks at the number of minutes spent on each activity on the particular day of the survey (the diary day).
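
The difference-in-differences logic is easy to illustrate with a toy two-by-two comparison. The minutes below are made up purely for illustration; they are not from the paper:

```python
# A toy 2x2 difference-in-differences. Average minutes of study time per
# day, by state group (MML vs. no MML) and period (pre vs. post).
# These numbers are hypothetical, not taken from Chu and Gershenson.
means = {
    ("mml", "pre"): 110, ("mml", "post"): 90,
    ("no_mml", "pre"): 105, ("no_mml", "post"): 100,
}

change_treated = means[("mml", "post")] - means[("mml", "pre")]        # -20
change_control = means[("no_mml", "post")] - means[("no_mml", "pre")]  # -5

# The control group's change nets out common time trends; the remainder
# is the effect attributed to the medical marijuana laws.
did_estimate = change_treated - change_control
print(did_estimate)  # -15
```

The comparison of changes (rather than levels) is what allows the laws' effect to be separated from nationwide trends in student time use.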

Chu and Gershenson find small and insignificant effects on high school students' time use (which should be no surprise, because that age group wouldn't have legal access to marijuana even after medical marijuana laws come into effect). However, for college students they find that there is:

...a decrease of 25 minutes in education-related activities such as attending class and doing research or homework after the passage of MMLs, or equivalently, a 23% (25/107) decrease relative to the pre-MML average. The Poisson estimates... suggest that college students spend 20%... less time on education activities.

In contrast, college students spend more time on leisure activities, where:

...the OLS estimates suggest an increase of approximately 30 min in leisure activities after the passage of MMLs.

So, after medical marijuana laws come into force, college students spend nearly half an hour less time on education activities, and an equivalent amount of time more on leisure activities. In other words, it appears that they substitute leisure time for education time.

When Chu and Gershenson look separately at the extensive margin (whether the student spent any time that day on the activity) and the intensive margin (how much time students who spent any time that day spent on the activity), they find that:

...behavioral changes on the extensive margin are an important channel through which total education time decreased... the OLS estimate indicates that MMLs decrease the likelihood of spending any time on education activities by 7.5 percentage points (19%)... In contrast... the increase in leisure time among college students... is primarily driven by behavioral changes on the intensive margin.

So, college students appear to respond to medical marijuana laws by engaging in education activities on fewer days (and having more days where they do no studying at all), and spend more time on leisure on all days. However, before we get too far ahead of ourselves, Chu and Gershenson also show that:

...behavioral responses during summer can account for most of the changes in education and leisure time. The reduction in education time is mostly driven by changes in summer time use... A similar pattern emerges for leisure time.

In other words, it may be that legalised marijuana induces college students to give up summer school. Finally, Chu and Gershenson show that the results are concentrated among part-time, rather than full-time, students.

All of this is consistent with rational behaviour by students. For part-time students, and for students during the summer, the opportunity cost of devoting more time to 'leisure activities' is lower than for full-time students or students during the regular semester. So, if the benefit of 'leisure activities' increases, part-time and summer students are more likely to shift their time towards leisure activities than full-time students or students during a regular semester.

Of course, this study only evaluated the introduction of medical marijuana laws. Full legalisation of marijuana use might have different, and potentially larger effects. With legalisation progressing across states, no doubt we will see analyses along those lines in the future.

Read more:

Tuesday, 16 January 2024

Are school uniforms, or student attendance, worth more to schools?

I've written before about school uniform monopolies and the pricing of school uniforms (see here and here). With the school year about to start, uniform costs have been back in the news this week, but not for the reason you think. As the New Zealand Herald reported today:

A Rotorua school that took a $40,000 swing and provided all students free uniforms and stationery says it has paid off in attendance – and it’s prepared to do the same again.

Kaitao Intermediate School students each receive one formal uniform, one sports uniform and all stationery.

The school, which had about 290 enrolments for 2024 and expects that to grow, uses its annual Ministry of Education bulk operational funding to cover the costs, approved by the Board of Trustees.

School principal Phil Palfrey said providing necessities free meant there was “no reason” students could delay starting school on day one of the term...

The school introduced what he described at the time as a “radical” change last year to ease financial pressures on parents amid the cost-of-living crisis. It had also hoped to improve attendance and engage students not yet enrolled in school.

Palfrey said it resulted in an increase in students starting school during the first three weeks of the term. 

It's interesting that a school would willingly give up $40,000 in order to increase student attendance at the start of the school year. Since the school would not be willing to give up its monopoly over selling uniforms unless the perceived benefits exceed the costs, we can use this information to infer how much the school values student attendance.

The school roll is 290, so that $40,000 in foregone uniform and stationery income equates to $137 per student. If we take the (very) conservative assumption that no student would attend school in the first three weeks of the year without a uniform, then that $137 'buys' three weeks of school attendance. That equates to about $46 per student-week. Given the assumption that no student would attend if they had to pay anything, this represents a lower-bound of the value of student attendance to the school.
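
That back-of-the-envelope calculation takes only a few lines to reproduce, using the figures reported above:

```python
# Back-of-the-envelope value of attendance, using the figures above
cost = 40_000   # foregone uniform and stationery income (NZ$)
roll = 290      # school roll for 2024
weeks = 3       # weeks of attendance 'bought'

cost_per_student = cost / roll                    # ~137.9
cost_per_student_week = cost_per_student / weeks  # ~46.0
print(f"${cost_per_student:.0f} per student, "
      f"${cost_per_student_week:.0f} per student-week")
```
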

Interestingly, it also demonstrates the meaningful credit constraints that households face at the start of the year. The school has discovered that eliminating these up-front costs (presumably $137 per student on average) has increased school attendance at the start of the year. For schools in lower socio-economic areas, a cost of $46 per student-week to increase attendance at the start of a school year seems like a bargain to me. In comparison, the free school lunch program saves families $31 per week per child (see here - $62 per week for a two-child family), and is generally regarded as successful (although, despite much rhetoric like this, I haven't seen a credible cost-benefit analysis on that policy).

School attendance at all levels has taken a hit since the pandemic and hasn't recovered to pre-pandemic levels. We should be exploring ways to improve it, and especially getting kids off to a good start at the beginning of the school year. If ending school uniform monopolies is one way to achieve this, then that simply adds more weight to my earlier arguments against those monopolies.

Read more:

Monday, 15 January 2024

The reports of the death of life satisfaction may have been greatly exaggerated

The measurement of subjective wellbeing (or life satisfaction, or happiness) has attracted a lot of criticism over the last few years (for example, see here and here). The problems arise mostly because we cannot observe people's true happiness, and so instead we use a survey proxy that is typically measured using ordinal categories (for example, very happy, somewhat happy, somewhat unhappy, very unhappy, etc.). Because the way that the proxy measure of happiness maps to 'true happiness' is unknown, researchers who make different distributional assumptions can conclude almost anything. At least, that's the short version of one of the arguments against the current measurement of subjective wellbeing.

However, we may now have a solution of sorts to this problem. As Shuo Liu (Peking University) and Nick Netzer (University of Zurich) explain in this recent article published in the journal American Economic Review (ungated earlier version here), it may be possible to use the length of time a respondent takes to answer the life satisfaction question, to get a measure of the intensity of their happiness (or otherwise). As they explain:

In this paper, we argue that the use of survey response time data can help to solve the problem. Response time is the duration that a survey participant needs to answer a given question. To understand the logic of our argument, consider a happiness survey with just two response categories, “unhappy” and “happy.” Suppose you answer this survey at a moment when you feel very happy. Most likely, you will find it easy to respond “happy” and you will do so quickly. Now suppose you answer the survey at a moment when you feel only moderately satisfied. You may still end up responding “happy” but most likely it will take you longer to decide. The observable distribution of response times among the survey participants who respond to be happy then contains information about the unobservable distribution of happiness within that response category, and analogously for the “unhappy” category. Response time data can provide precisely the evidence that was missing for identification.

Liu and Netzer note that this 'chronometric effect' has been observed in many previous studies, but hasn't previously been applied to the measurement of happiness. They then demonstrate how the use of response times can improve measurement using data from a survey of 8000 MTurk research participants. Specifically, they:

...implemented two versions of the survey, one with two answer categories and one with three answer categories. In both versions of the survey, each substantive question was accompanied by a follow-up question in which participants were asked to refine their previous answer. For example, a subject giving the highest possible response “rather happy” in the initial question about overall life happiness subsequently had the choice between “very happy” and “moderately happy” in the follow-up question.

Conducting the survey online makes it easy to record response times, which we define as the time between the display of the question and the moment when the participant clicked on her answer. To account for individual heterogeneity in response speed, we follow our theoretical analysis and normalize the raw response times by subtracting (in logs) each subject’s response time in the sociodemographic question about marital status, where there are arguably no uncertainties or varying intensities about the correct answer, and which was also answered quickest on average.
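
A minimal sketch of that normalisation (the function name and example timings are hypothetical, purely for illustration):

```python
import math

# Subtract (in logs) the subject's response time on the marital status
# question from their response time on the happiness question, so that
# each subject's own baseline response speed is netted out.
def normalised_rt(happiness_rt_sec: float, baseline_rt_sec: float) -> float:
    """Log response time, net of the subject's baseline response speed."""
    return math.log(happiness_rt_sec) - math.log(baseline_rt_sec)

# A subject who answers the happiness question faster than their own
# baseline gets a negative value, which (via the chronometric effect)
# suggests a more intense feeling behind the answer.
print(normalised_rt(2.0, 4.0) < 0)  # True
```
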

Essentially, research participants who were happier should be more certain about being happy, and answer the first happiness question in less time than those who were less certain about being happy. Those happier participants should also be more likely to answer in the follow-up question that they are very happy. And indeed, that is what Liu and Netzer find:

We find that, among subjects who initially gave an identical answer, those who reveal a more extreme position in the follow-up question responded faster on average in the initial question. More specifically, we consider all subjects who responded in the same extreme category in an initial question (e.g., “rather happy”) and partition them into two subgroups based on their response in the follow-up question. Those who give a more extreme response in the follow-up (e.g., “very happy”) should have larger values of the latent variable than those who give a more moderate response (e.g., “moderately happy”). The chronometric effect then predicts that the former should have responded more quickly in the initial question than the latter. We find this prediction confirmed in our data, for both extreme response categories in all seven substantive questions and both versions of the survey.

Liu and Netzer then go on to show similar results when the first question has three levels rather than two levels, although they note that the statistical power is lower in that case.

Overall, these results should provide some comfort for users of subjective wellbeing data, as Liu and Netzer show that the previous concerns about distributional assumptions may be overstated. And, they have provided a way forward, although it is fair to note that this requires that the data be collected digitally (so that response times can be easily captured). Fortunately, it does not require that the data be collected online (which we should be wary of now, as I noted here). So, collection by surveys completed on a tablet or similar should be fit for purpose. Then, either the response time can be used as Liu and Netzer do, or researchers can at least test whether such an adjustment to the underlying subjective wellbeing assessment is necessary.

So, it appears that life satisfaction is not dead. At least, not yet.

Read more:

Sunday, 14 January 2024

Angola plays its dominant strategy in defecting against OPEC

The Financial Times reported last week (paywalled):

Angola, Africa’s second biggest oil producer, has said it is leaving Opec after disagreements over its production targets, delivering a blow to the oil cartel chaired by Saudi Arabia.

The decision comes after the producer group lowered Angola’s oil output target last month as part of a series of cuts led by Saudi Arabia to help prop up prices.

OPEC is an example of a cartel. Cartels can arise when a market is an oligopoly - a market where there are many buyers, but few sellers. A cartel essentially acts like a monopoly seller - it is able to use market power to extract greater economic rent from the market (in the form of higher profits, arising from higher prices), than the countries would be able to extract if they were competing with each other. The cartel can be maintained because there are few sellers, so it is relatively easy for them to coordinate their actions. In this case, OPEC coordinates to raise prices by restricting production.

However, there is always an incentive for each cartel member to cheat on the cartel agreement, or to leave the cartel entirely (as Angola has done). To see why, we can apply some game theory. Let's say that there are two players - Angola and 'the rest of OPEC'. Each player has two strategies - high production (which leads to lower prices and lower profits for oil producers), or low production (which leads to higher prices and higher profits). If one player has high production and the other low production, the high production player benefits more. However, if both players have high production, both are worse off. These outcomes and payoffs are illustrated in the diagram below (the payoff numbers represent profits, but are just made up to illustrate this example).

To find the Nash equilibrium in this game, we use the 'best response method'. To do this, we track: for each player, for each strategy, what is the best response of the other player. Where both players are selecting a best response, they are doing the best they can, given the choice of the other player (this is the definition of Nash equilibrium). In this game, the best responses are:

  1. If the rest of OPEC chooses high production, Angola's best response is to choose high production (since 2 is a better payoff than 0) [we track the best responses with ticks, and not-best-responses with crosses; Note: I'm also tracking which payoffs I am comparing with numbers corresponding to the numbers in this list];
  2. If the rest of OPEC chooses low production, Angola's best response is to choose high production (since 4 is a better payoff than 3);
  3. If Angola chooses high production, the rest of OPEC's best response is to choose high production (since 10 is a better payoff than 9); and
  4. If Angola chooses low production, the rest of OPEC's best response is to choose high production (since 15 is a better payoff than 12).
Note that Angola's best response is always to choose high production. This is their dominant strategy. Likewise, the rest of OPEC's best response is always to choose high production, which makes it their dominant strategy as well. The single Nash equilibrium occurs where both players are playing a best response (where there are two ticks), which is where all of OPEC (including Angola) chooses high production.
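
The best response method can also be checked mechanically. Here is a minimal sketch using the illustrative payoffs from the list above (Angola's payoff listed first, the rest of OPEC's second):

```python
# Payoffs (Angola, rest of OPEC) for each strategy pair, using the
# illustrative numbers from the best responses listed above.
payoffs = {
    ("high", "high"): (2, 10),
    ("high", "low"):  (4, 9),
    ("low", "high"):  (0, 15),
    ("low", "low"):   (3, 12),
}

strategies = ["high", "low"]

def best_response(player, other_strategy):
    """Best strategy for `player` (0 = Angola, 1 = rest of OPEC),
    given the other player's strategy."""
    def payoff(s):
        pair = (s, other_strategy) if player == 0 else (other_strategy, s)
        return payoffs[pair][player]
    return max(strategies, key=payoff)

# Nash equilibria: strategy pairs where both players play a best response
equilibria = [
    (a, o) for a in strategies for o in strategies
    if a == best_response(0, o) and o == best_response(1, a)
]
print(equilibria)  # [('high', 'high')]
```

The only strategy pair where both players are playing a best response is (high, high), which is the single Nash equilibrium described above.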

Notice that both players would be unambiguously better off if they chose low production. However, both will choose high production, which makes them both worse off. This is a prisoners' dilemma game (it's a dilemma because, when both players act in their own best interests, both are made worse off).

That's not the end of this story though, because the simple example above assumes that this is a non-repeated game. A non-repeated game is played once only, after which the two players go their separate ways, never to interact again. Most games in the real world are not like that - they are repeated games. In a repeated game, the outcome may differ from the equilibrium of the non-repeated game, because the players can learn to work together to obtain the best outcome.

And that is what happens when a cartel forms. If all of OPEC (including Angola) works together and agrees to choose low production, both players benefit. That is what they were doing, up until Angola chose to leave OPEC. The problem here is that both players choosing low production is not an equilibrium. If Angola knows that the rest of OPEC is choosing low production, it is better off defecting from the agreement and choosing high production. Angola profits more that way (at least, in the short term).

So essentially, by leaving OPEC (and thereby choosing high production), Angola is simply playing its dominant strategy.

Read more:

Friday, 12 January 2024

This week in research #5

It's been a busy week. Here's what caught my eye in research:

  • Anderson builds up a non-parametric gravity model (quite technical, but likely of interest to trade and migration researchers)
  • Samahita and Devereaux look at gender inequality in conference acceptance using data from the Irish Economic Association annual conference from 2016 to 2022, and find no gender gap in acceptances; however, male reviewers give female authors lower review scores (open access)
  • Ramalingam and Stoddard experimentally test whether experiencing inequality increases cooperation (in terms of contributions to public goods), and find that it doesn't (ungated earlier version here)
  • Mills discusses the economics of time travel (open access)
  • Ezcurra shows that exposure to ultraviolet radiation is a predictor of national-level state capacity (open access) - related to this post, where I was sceptical of some similar earlier research

The Journal of Economic Behavior and Organization released a special issue on the economics of beauty, which included the following:

  • Gründler, Potrafke, and Wochner find that attractive MPs are more likely to be absent from the German parliament and less active in labour-intensive background work than others (ungated earlier version here)
  • Adamopoulou and Kaya use Add Health data on US high school students to show that, somewhat surprisingly, for boys both physical and personality attractiveness positively affect performance and peer characteristics also matter (having more attractive peers lowers performance), but for girls only personality attractiveness matters (open access)
  • Babin, Chauhan, and Kistler show that professional e-sports competitors rated as more attractive are more likely to receive a contract in the following year, but there is no lifetime earnings premium
  • Baert, Herregods, and Sterkens use an experiment to show that job candidates with body art are perceived as less pleasant to work with, less honest, less emotionally stable, less agreeable, less conscientious and less manageable (open access) - this would have been more convincing as a field experiment (see here or here, for example)

Wednesday, 10 January 2024

Book review: The Next Fifty Things that Made the Modern Economy

What do bricks, mail-order catalogues, pensions, vulcanised rubber, the gyroscope, and slot machines have in common? They all feature in Tim Harford's book The Next Fifty Things that Made the Modern Economy. As you might expect, the book is a sequel to Fifty Inventions that Shaped the Modern Economy (which I reviewed here), and is written in the same engaging style as the first book.

Every chapter focuses on one 'thing' and explains its effects (positive or negative) on the modern economy. Unlike the first book though, Harford doesn't limit himself to 'inventions', although if defined sufficiently broadly, almost all of the things were invented at some point, except for fire and oil. However, as Harford explains in the introduction:

In selecting the fifty-one subjects of this book, my aim has been to tell stories that will surprise you, about ideas that have had fascinating consequences. There are plenty of other books about inventions that changed the world; this book is about inventions that might change the way you see that world.

It mostly succeeds in that aim. Each chapter also ranges a bit more widely than the chapter title might suggest. For instance, the chapter on tulips is really about asset bubbles more generally, but uses 'tulipmania' as a motivating example. The chapter on Santa Claus is more about the commercialisation of Christmas. Many of the chapters (like the opening chapter on the pencil) covered ground that I already knew relatively well (probably because I follow Harford's blog). Personally, I particularly enjoyed the chapters on canned food, cellophane, and the Langstroth hive.

This was a perfect book to read during a summer holiday. It is not too taxing, and the thought-provoking and surprising factoids can be a family conversation starter. Like most (if not all) of Harford's books, I highly recommend this one.

Tuesday, 9 January 2024

Family leave and the gender wage gap

The gender wage gap has been decreasing slowly and steadily over time. At least, that's what I thought until I read this 2023 NBER Working Paper by Peter Blair (Harvard University) and Benjamin Posmanick (St. Bonaventure University). They present the following graph of the gender wage gap in the US (for White women, compared with White men) between 1975 and 2015:

Note that the upward trend here represents a decrease in the gender wage gap (the y-axis is negative). What is apparent from the graph is that the gender wage gap in the US decreased relatively quickly from 1975 to 1993, and then the decrease suddenly and dramatically tailed off. What happened?

Blair and Posmanick single out the Family and Medical Leave Act, which was passed in 1993, and "which guarantees 12 weeks of unpaid, job-protected leave to qualified workers for covered family or medical circumstances". The birth of a child is one of the 'family or medical circumstances' that is covered. However, a lot of other changes to welfare and other things happened in 1993, so Blair and Posmanick smartly avoid looking purely at 1993. Instead, they conduct an event study, taking advantage of the fact that twelve states and the District of Columbia had implemented family leave policies prior to 1993. So, in their main analyses, they look at how the gender wage gap changed between the period before, and the period after, implementing family leave, for the states (and D.C.) that had these policies in place prior to 1993. They focus on the impact on White women (compared with White men), but the appendix presents wage gaps (compared with White men) for Black men and Black women as well. It is reassuring (in terms of the validity of their results) that the results look similar (albeit not as great) for Black women, and that there is no change in the trend of a declining wage gap for Black men (consistent with the results being specific for the gender wage gap, rather than picking up some change in welfare that might affect disadvantaged groups more generally).

They find that:

...prior to the leave policy the gender wage gap experienced by white women was falling at a rate of 0.70 percentage points per year (p-value <0.001). In the post period, the rate of gender wage convergence falls by 0.53 percentage points per year to 0.17 percentage points per year. The decline is statistically and economically significant, and the post-leave rate of gender wage convergence is marginally different from zero.

In other words, the rate of decline of the gender wage gap decreased by 75 percent (from 0.70 percentage points per year to 0.17 percentage points per year). An interesting implication of their results, which they don't address, is what they imply about the length of time required for the gender wage gap to be eliminated (based on an assumption of a linear decline). The gender wage gap for White women was 23.8 percent in 1993 (from Appendix Table A2). So, at the rate of decline from 1975 to 1992, the gender wage gap would be eliminated in a further 34 years (that is, in 2027). But, after the family leave policy was enacted, that extends out to 140 years (that is, in 2133).

The results that Blair and Posmanick obtain when looking at the full sample of states (where most states got the family leave policy in 1993) are similar, but paint a worse picture:

We find that the gender wage gap faced by white women declined by a statistically significant 0.70 percentage points per year prior to the policy change, which is identical to the pre-leave rate of gender wage convergence that we estimated using only the state variation... After the policy change, the rate of wage convergence for white women declines by 0.67 percentage points to 0.03 percentage points.

In those results, the rate of decline of the gender wage gap decreased by nearly 96 percent (from 0.70 percentage points per year to 0.03 percentage points per year). Don't even ask how long it would take to eliminate the gender wage gap at that rate. Ok, do ask. It's 793 years.
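Those elimination times are just back-of-the-envelope divisions of the 1993 gap by each estimated convergence rate. A minimal Python sketch of the arithmetic (assuming, as above, a linear decline from the 23.8 percentage point gap in Appendix Table A2):

```python
# Years to close a 23.8 pp gap (1993 level) at a constant rate of decline.
def years_to_close(gap_pp: float, rate_pp_per_year: float) -> float:
    """Years until the gap reaches zero, assuming a linear decline."""
    return gap_pp / rate_pp_per_year

pre_leave = years_to_close(23.8, 0.70)   # pre-1993 trend
post_state = years_to_close(23.8, 0.17)  # post-leave, state-variation estimate
post_full = years_to_close(23.8, 0.03)   # post-leave, full-sample estimate

print(round(pre_leave), round(post_state), round(post_full))  # 34 140 793
```

A linear extrapolation over centuries is obviously heroic, but it makes the contrast between the pre-leave and post-leave trends vivid.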

Blair and Posmanick then go on to show that the family leave policy can explain 94 percent of the unexplained change in the gender wage gap (the 'Gap Effect') between 1993 and 2015 (after accounting for the explained change due to changes in observable factors like age, education, and occupation). On this point they note that:

As articulated in Blau and Kahn (2006): “The Gap Effect measures the effect of changing differences in the relative positions of men and women in the male residual wage distribution, including the effect of an improvement in women’s unmeasured characteristics or a reduction in the extent of discrimination against women.” Given the differences in family-leave taking between men and women, family leave policies may simultaneously decrease the unmeasured characteristics of women in the labor market and increase the extent of discrimination women face.

Overall, it is clear from the results of this paper that family leave policies are a case of unintended consequences. They are designed to make the labour market more flexible for parents, but they stymie efforts to reduce or eliminate the gender wage gap. Given that the family leave provisions in the US are far less generous than those in other Western countries in Europe and Australasia, it makes me wonder how much of an effect family leave has on maintaining the gender wage gap in those countries.

[HT: Marginal Revolution, early last year]

Sunday, 7 January 2024

David Colander, 1947-2023

I was saddened this weekend to hear that David Colander, Distinguished Professor Emeritus of Economics at Middlebury College, passed away in the first week of December last year. I didn't know David well, but some ten years ago he and I, along with Mary Hedges, guest edited an issue of New Zealand Economic Papers on innovation in teaching undergraduate economics. We co-wrote the editorial, although David was generous in crediting Mary and me for carrying much of the workload towards the end (if I remember correctly, he was on safari at the time we were writing the editorial).

Aside from co-editing, it was within the broader context of economics teaching that I interacted with David, albeit infrequently. He was Associate Editor of Content for the Journal of Economic Education for a number of years, and alongside being a generous mentor to many authors and students, was the author of an excellent introductory economics text. Although I never assigned that textbook in my classes, I did make use of a number of bits from it. This included, on the first day of teaching my ECONS101 and ECONS102 classes, presenting David's definition of economics (alongside several others). David's definition immediately set his textbook apart from the rest of the field:

Economics is the study of how human beings coordinate their wants and desires, given the decision-making mechanisms, social customs, and political realities of the society.

The focus on embedding economics within a social system was emblematic of how David saw things, at least from my interactions with him. David was also well-known for his contributions to the history of economic thought (an area with which I admittedly don't have great familiarity).

Middlebury College has an excellent obituary that does much more than I ever could to capture how much David meant to the economics community and his colleagues and students. He will be missed.

Friday, 5 January 2024

This week in research #4

Here's what caught my eye in research over the past week:

  • Tillinghast, Mjelde, and Yeritsyan found that class GPAs at the College of Agriculture at Texas A&M University increased by 0.2 points in Spring 2020 because of COVID-19, and then by approximately 0.2 points in the subsequent two semesters (open access)
  • St Amour presents a life-cycle model of the value of a statistical life year, then derives estimates of various measures of the value of life using data from the US Panel Study of Income Dynamics (see also this post on a related point)
  • Gaenssle looks at whether the degree of nudity affects the income of Instagram stars, and finds that it does - for both male and female stars (open access)
  • Yamada et al. find that, among female sex workers in Myanmar, higher risk tolerance is associated with a lower transaction price (open access)

New from the Waikato working papers series:

  • Tucker and Xu revisit an important 2001 paper in experimental finance on the formation of bubbles in asset markets

Thursday, 4 January 2024

Christian missions and HIV in Africa

Spreading Christianity was seen by the colonial powers as a way of civilising the native populations in Africa. Indeed, in 1857 David Livingstone wrote that "neither civilization nor Christianity can be promoted alone. In fact, they are inseparable" (see here). Among the many effects of colonisation, the spread of Christianity is seen as one of the few positive aspects (or, at least, one of the least negative aspects). Christian missions were associated with increased availability of education and (Western) health care. However, this may not have meant that health improved along all dimensions. This 2020 article by Julia Cagé (Sciences Po) and Valeria Rueda (University of Nottingham), published in the Journal of Demographic Economics (ungated earlier version here), presents evidence that historical Christian missions were associated with higher prevalence of HIV.

Cagé and Rueda use data on the locations and characteristics of Protestant missions from 1903 (from the Geography and Atlas of Christian Missions) and data on the locations and characteristics of Catholic missions from 1929 (from the Atlas Hierarchicus), in each case distinguishing between missions with and missions without health facilities. That allows them to compare outcomes for people living closer or further away from historical Christian mission locations with and without health facilities. The key outcome variable is HIV infection status, as recorded in the Demographic and Health Surveys from 2003 to 2013 (which includes over 344,000 individuals across 17 African countries).

In their main analysis, they find that:

...a 10% increase in distance to any mission is associated with a 0.003 unit lower probability of an HIV-positive result... Ceteris paribus, at the median distance to a mission, a 15 km increase in distance decreases the average probability of HIV positivity by approximately 5%.

So, people living closer to historical Christian missions are more likely to be infected with HIV. However, the story doesn't end there, as:

...a 10% increase in the distance to a mission with a health investment is associated with a 0.0005-unit increase in the probability of HIV positivity... This result suggests that, ceteris paribus, at the median distance to a mission that invested in health, a 15 km increase in distance to the health investment increases the average probability of HIV by approximately 0.7% to 1.2%.

So, Christian missions are associated with higher HIV prevalence, but this is offset if the mission had a health facility. The results appear to be somewhat stronger for Protestant missions than for Catholic missions. Cagé and Rueda show that their results are robust to various alternatives, including limiting the sample to people living in more urban areas, and the results are similar when using a matching approach (although the sample size is much smaller when relying on the matched sample).

Here's where things get interesting though. The results are not generalisable across all health conditions, as when Cagé and Rueda look at anaemia or stunting, they find that:

...unlike for HIV, proximity to a mission does not statistically significantly correlate with these health outcomes. If anything, we observe improved outcomes (less positive results of anemia or stunted growth), but the relationship is not significant.

So, what is it about HIV that sets the results apart? Cagé and Rueda argue that there are:

...two possible countervailing effects of missions on HIV prevalence. On the one hand, their early investments in health facilities have a positive long-term impact on HIV prevalence, through the persistence of infrastructure and safer sexual behaviors. On the other hand, missionaries left a profound cultural imprint: conversion to Christianity increased the risk of contagion by changing family structures and increased exposure to religious institutions that have struggled to effectively address the epidemic.

Then, they find that sexual behaviours differ markedly for Christians and non-Christians in the DHS sample:

We observe that despite being more educated on average than non-Christians, Christians have riskier sexual behaviors. They have more sexual partners over their lifetime and are more likely to use the services of sex workers. Furthermore, they are also less likely to be abstinent before marriage. Despite being more likely to know that condoms lower the chances of transmitting HIV, they are less likely to know where to find them.

And then, comparing Catholics and Protestants (while noting that the categories are not perfectly separable in the survey), they find that:

Catholics exhibit certain riskier behaviors, like a larger age gap inside the household, or a larger number of sex partners. Protestants are nonetheless more likely to use the services of sex workers over their lifetime, which is a very strong determinant of HIV transmission, and less likely to know that condoms lower the chances of HIV contamination. Although it is statistically significant, the difference is quantitatively very small.

This, combined with the greater success of Protestant conversion in Africa than Catholic conversion, may explain why Protestant missions have a larger impact than Catholic missions in their initial results.

So, it appears that Christian missions have had a long-term impact on health in Africa, and not entirely in a positive way. What can we learn from this? Cagé and Rueda point out that:

...our results may help us reflect on contemporary HIV prevention policies. In the United States, religious conservatives strongly support abstinence-until-marriage (AUM) as the central element of HIV prevention efforts, and this policy periodically receives a large share of the Federal funding... Our long-term perspective suggests that a focus only on “Christianizing” marriage patterns and sexual behaviors is unlikely to be successful.

Indeed. Add this to the evidence base against an abstinence-only approach to the HIV pandemic.

Tuesday, 2 January 2024

Robert Solow, 1924-2023

I'm a little late to this sad news, but 1987 Nobel Prize winner Robert Solow passed away on 21 December last year. His name will be known to many students of introductory macroeconomics through the Solow-Swan neoclassical growth model, developed independently in the 1950s by Solow and by Trevor Swan. The model is a mainstay of teaching in macroeconomics, and although it has long since been superseded by endogenous growth models, it still provides useful insights for students. Foremost among those insights is that technology (the so-called 'Solow residual' in a growth accounting model) is the driver of economic growth, rather than growth of labour or capital. I also found in my own teaching that students appreciated the model as providing an explanation for why growth rates in low-income countries most often exceed those in high-income countries.
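For readers who haven't met the model since their introductory macroeconomics days, the growth accounting exercise behind the 'Solow residual' is worth a reminder. Starting from a standard Cobb-Douglas production function (a textbook sketch, not specific to any paper discussed here):

```latex
% Cobb-Douglas production with Hicks-neutral technology A
Y = A K^{\alpha} L^{1-\alpha}
% Taking logs and differentiating with respect to time gives growth rates:
\frac{\dot{Y}}{Y} = \frac{\dot{A}}{A} + \alpha \frac{\dot{K}}{K} + (1-\alpha)\frac{\dot{L}}{L}
% The Solow residual is the component of output growth left unexplained
% by the growth of capital and labour:
\frac{\dot{A}}{A} = \frac{\dot{Y}}{Y} - \alpha \frac{\dot{K}}{K} - (1-\alpha)\frac{\dot{L}}{L}
```

With the output elasticity of capital (alpha) typically calibrated to capital's share of income (around one third), whatever output growth remains after accounting for capital and labour is credited to technology - which is why it is a 'residual'.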

Although I'm a microeconomist at heart, I have a fondness for Robert Solow's work. When I was an honours student, I wrote a long essay for my graduate macroeconomics class on growth theory. That essay received high praise from my lecturer and was a contributor to my decision to pursue a PhD in economics (although ironically, my PhD studies were in applied microeconomics, and I haven't really been involved in macroeconomic research ever since).

On a side note, we remain in a situation where no Nobel Prize winner in economics has celebrated a 100th birthday. Solow passed away aged 99, slightly younger than Maurice Allais was when he died in 2010. The oldest living Nobel Prize winner now is Vernon Smith, who turned 97 on 1 January.

For such an eminent economist, I'm surprised at an apparent lack of memorials across the blogosphere (perhaps the time of year has something to do with it). However, there are good obituaries at the New York Times, Washington Post, MIT News, and Time. As a read-through of those obituaries will tell you, Solow's contributions went beyond his own work - he was the thesis advisor for four other Nobel Prize winners (Akerlof, Stiglitz, Diamond, and Nordhaus). Solow clearly made his mark on the discipline of economics. He will be missed.