Saturday, 31 October 2020

Professor gender and the gender gap in STEM

I've written a number of posts on the gender gap in economics, but also in STEM (science, technology, engineering, and maths) (see here for example). In some of those posts on economics, the research I've been discussing has highlighted the potential effect of role models on the gender gap.

If role models matter, then you'd expect the gender of professors to matter. The problem with evaluating that empirically is that students often have some choice over who their professors are, particularly in large classes early in their studies, where there may be many sections to choose from, or the same papers may be taught in multiple semesters. This 2010 article by Scott Carrell, Marianne Page (both University of California at Davis), and James West (US Air Force Academy), published in the Quarterly Journal of Economics (ungated earlier version here), is able to overcome that problem.

Carrell et al. use data from the US Air Force Academy, where:

...students are randomly assigned to professors for a wide variety of mandatory standardized courses.

That means that there will be no problems of selection bias, where students choose their own professors. In addition:

...course grades are not determined by an individual student’s professor. Instead, all faculty members teaching the same course use an identical syllabus and give the same exams during a common testing period.

That means that there is also no bias arising from differential grading between different professors. Carrell et al. have data from 9015 students from the USAFA graduating classes of 2001 through 2008. They focus their attention on mathematics, physics, chemistry, engineering, history, and English classes, and look at how the gender of the professor in each introductory class affects students' grades, the likelihood of taking further classes in the discipline, and the likelihood of graduating with a STEM degree. They find that:

...professor gender has only a limited impact on male students, it has a powerful effect on female students’ performance in math and science classes, their likelihood of taking future math and science courses, and their likelihood of graduating with a STEM degree. The estimates are robust to the inclusion of controls for students’ initial ability, and they are substantially largest for students with high SAT math scores.

Specifically, for female students:

...having a female professor reduces the gender gap in course grades by approximately two-thirds.

That is quite substantial, given that, after controlling for mathematical ability, female students on average perform 15 percent of a standard deviation worse than male students in STEM courses. However, it is notable that they also find that:

...at the top of the distribution... having a female professor completely closes the gender gap...

The effect on taking additional STEM classes and graduating with a STEM degree is also concentrated amongst female students with greater mathematical ability, and eliminates the gender gap in those measures as well.

However, the same effects of professor gender are not apparent in the humanities:

In contrast, the gender of professors teaching humanities courses has, at best, a limited impact on students’ outcomes.

All of this suggests that, if we want to narrow the gender gap in STEM, particularly amongst the most able female students, universities would need to employ more female teaching staff. However, Carrell et al. bury a very important caveat in a footnote on the last page of their article:

Note that the impact of female professors may reflect the high quality of faculty at the USAFA, and that substituting lower-quality female professors for high-quality male professors is not a policy that would be recommended by the authors.

Universities need to fully understand the trade-offs before they jump into a policy response. Also, the USAFA is a fairly unique environment, so it would be good to know whether similar results are obtained in other contexts. However, this research adds to the increasing evidence of the impact of professor gender on the performance and academic decision-making of female students.

Friday, 30 October 2020

Should fossils be treated like shipwrecks?

In a new article published in the journal Contemporary Economic Policy (ungated earlier version here), Paul Hallwood and Thomas Miceli (both University of Connecticut) discuss the law and economics of paleontological discovery - that is, fossil hunting. Hallwood and Miceli note that there are three groups of fossil collectors that have different incentives, in terms of the search for fossils and/or the recovery of scientific knowledge (the public value) from those fossils:

...professional paleontologists, commercial collectors who look to sell finds for a profit, and amateur collectors who do it out of interest rather than profit.

One important point is that recovering fossils leads to both public and private value:

Private good value is realized when the pecuniary value of a fossil is realized, or if it enters directly into a private collection; public good values are created when a fossil-type and the information associated with it are added to the stock of scientific knowledge in the field of paleontology.

In other words, there are positive externalities associated with recovering some (but not all) fossils. The problem with goods that have positive externalities is that, because the social benefit (what Hallwood and Miceli term the public good value) of recovery exceeds the private benefit (private value), private fossil hunters would engage in too little fossil hunting relative to the socially efficient (welfare maximising) amount of search. So, too few fossils would be discovered. On top of that, commercial collectors have little incentive to care about the scientific value of their finds, and so would not protect them in a way that preserves knowledge.

As a policy-maker, it may be tempting to try and capture the maximum public benefit by mandating that fossils must be given to museums or public academic institutions. However, that approach actually makes the problem of under-discovery worse, because then there is no chance of a private reward for recovery. On the other hand, having no restrictions on what happens to fossils after they are found would lead to more search and more fossils being found and excavated, but an inefficiently small amount of scientific knowledge being generated (because careless and low-cost excavation of fossils destroys much of the scientific value, which comes from their careful preservation in context). An example of this that I use in my ECONS102 class is when early paleontologists paid Chinese peasants for each fragment of fossil they found. Unsurprisingly, the peasants found large fossils and then broke them up into smaller fragments, in order to earn a greater reward.
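The perverse incentive in that per-fragment payment scheme is easy to see with a toy calculation (all numbers are invented for illustration, as is the stylised assumption that scientific value falls in proportion to fragmentation):

```python
# Toy illustration of the per-fragment reward problem: paying per
# fragment multiplies the private reward for breaking a fossil up,
# while destroying the scientific (public) value.

def private_reward(fragments, pay_per_fragment):
    """Payment received by the finder under a per-fragment bounty."""
    return fragments * pay_per_fragment

def scientific_value(fragments, intact_value):
    """Stylised assumption: scientific value falls as the fossil is
    broken up, since context and completeness are lost."""
    return intact_value / fragments

PAY = 5        # hypothetical bounty per fragment
INTACT = 100   # hypothetical scientific value of the intact fossil

intact = (private_reward(1, PAY), scientific_value(1, INTACT))
broken = (private_reward(20, PAY), scientific_value(20, INTACT))

print(intact)  # (5, 100.0): low private reward, full public value
print(broken)  # (100, 5.0): breaking up pays privately, destroys publicly
```

The finder's payoff rises twenty-fold from breaking the fossil up, which is exactly the behaviour the bounty scheme produced.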

The article works through the various incentives mathematically and graphically, then looks at the current state of the law in the U.S., which could do with some improvement. Hallwood and Miceli conclude that:

Our analysis has shown that laws protecting scientific value are warranted based on the public good nature of fossil values, but this factor potentially creates an offsetting disincentive for profit-motivated collectors to engage in search, which is an essential prerequisite to recovery. Thus, if the scientific community needs to rely at least to some extent on private collectors to locate important fossils, there must be some recognition of the incentives that those searchers face. Federal legislation in fact pays little regard to this factor.

Their solution is interesting, but fairly intuitive:

...we believe there are lessons to be learned from the common law and state legislation pertaining to recovery of historic shipwrecks, which present many of the same economic issues.

As they note in one of the footnotes to the paper, in relation to the recovery of shipwrecks:

...the admiralty courts aim to balance the recovery of private (treasure) values and public (scientific archaeological) values from sunken wrecks. Neither is prioritized over the other. Second, property rights to work over sunken wrecks are granted to private companies working on federal (submerged) lands. Third, the admiralty courts promote the collection of sound archaeological knowledge from a historic shipwreck through two devices: (1) there is variation in the percentage of treasure value retained by a salvage company depending on the quality of work performed; and, (2) in cases of poor or nonexistent archeological work, applications for permits in the future can be denied.

So it seems that, in theory at least, we should perhaps treat fossils like shipwrecks, if we want both the optimal amount of search for fossils and the recovery of the optimal amount of scientific knowledge.

Wednesday, 28 October 2020

In the battle between theories of sex, the red queen hypothesis dominates

I recently read this 2017 article by Motty Perry (University of Warwick), Philip Reny (University of Chicago), and Arthur Robson (Simon Fraser University), which seemed like a fairly random addition to an issue of The Economic Journal (open access). I say random because the title is "Why sex? And why only in pairs?", and the paper details a mostly theory-driven explanation (along with some simulation results) of why reproductive sex occurs between couples, and not between three or more parents. Given that there is a Journal of Mathematical Biology, The Economic Journal seemed like an odd fit to me.

Nevertheless, this was an interesting paper to read. Perry et al. attempt to unravel the following puzzle:

The breadth and variety of methods by which different species reproduce through sex is nothing short of remarkable. Nonetheless, sexual reproduction displays a stunning regularity. We can state that:

Each sexually produced offspring of any known species is produced from the genetic material of precisely two individuals. That is, sex is always biparental.

The obvious, but overlooked, question is, why? In particular, why are there no triparental species in which an offspring is composed of the genetic material of three individuals?

They first note that:

...a complete theory of sex must strike a delicate balance. On the one hand – as is well known – it must explain why genetic mixing is sufficiently beneficial so that biparental sex overcomes the twofold cost of males it suffers because an equally sized asexual population would grow twice as fast (Maynard Smith, 1978). On the other hand – and this point is central here – genetic mixing must not be so beneficial that a further increase in fitness would be obtained from even more of it through triparental sex.

They then outline the two main competing theories: (1) the mutational hypothesis; and (2) the 'red queen' hypothesis. The mutational hypothesis contends that sex is advantageous (compared with asexual reproduction) because "it halts the otherwise steady accumulation of harmful mutations". In contrast, the 'red queen' hypothesis contends that each species is in a constant battle with parasites (or other dangerous organisms), and asexual reproduction, where each offspring is genetically identical to the parent, is highly vulnerable to extinction at the hands (or tentacles, or whatever) of a new parasite evolved to attack the specific genetic makeup of the species.

The problem with sexual reproduction (when compared with asexual reproduction) is the so-called 'two-fold cost of sex':

...that a sexual population with a one to one ratio of (unproductive) males to females produces half as many offspring as an equally sized asexual population... The simple reason for this is that every individual in the asexual population can reproduce whereas only half of the individuals in the sexual population – the females – can do so.

Now, a successful theory of sexual reproduction must explain why biparental sex is advantageous over asexual reproduction. However, it must also explain why triparental sex is not even more advantageous.
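The twofold cost described in the quote above is simple arithmetic, and a minimal sketch makes it concrete (the population size and offspring count are invented for illustration):

```python
# Sketch of the 'two-fold cost of sex': with a 1:1 sex ratio, only the
# female half of a sexual population reproduces, so it grows half as
# fast as an equally sized asexual population.

def next_generation(pop, offspring_per_reproducer, share_reproducing):
    """Population size after one generation of reproduction."""
    return pop * share_reproducing * offspring_per_reproducer

pop = 1000
k = 2.0  # offspring per reproducing individual (assumed, for illustration)

asexual = next_generation(pop, k, share_reproducing=1.0)  # everyone reproduces
sexual = next_generation(pop, k, share_reproducing=0.5)   # only females do

print(asexual, sexual)  # 2000.0 1000.0: the asexual population grows twice as fast
```

Any successful theory of sex has to explain a benefit of genetic mixing large enough to overcome this factor-of-two handicap.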

Using a fairly simple narrative approach, supported by a minimum of necessary mathematical detail, Perry et al. go on to show that:

Under the mutational hypothesis, triparental sex always dominates biparental sex and high genomic mutation rates only serve to increase this advantage. With all three options available, either asexuality would be best or triparental sex would be best. Accordingly, biparental sex should not be observed.

In contrast, there is a ray of hope with the red queen hypothesis. Using a deliberately simplified red queen model, we have shown that biparental sex can have even an overwhelming advantage over asexuality, yet there is no further gain from more than two parents.

So, there you have it. The (coolly-named) red queen hypothesis seems to be the winner.

Tuesday, 27 October 2020

Sports, large gatherings, and the coronavirus pandemic

Watching the NFL this season has been a bit eerie. The piped-in artificial crowd noise is weird, and the shots of the empty (or near-empty, depending on the location) stadiums really bring home just how far from the norm we are this year.

Large gatherings, including sports events, were one of the first targets of public health authorities trying to contain the coronavirus pandemic. How effective was that approach? A recent working paper by Coady Wing, Daniel Simon, and Patrick Carlin (all Indiana University) provides an initial answer. They study the differences between counties (and metropolitan statistical areas [MSAs]) that hosted more, or fewer, NHL and NBA games between 1 January and 12 March (when both leagues were shut down), in terms of the number of COVID-19 cases and deaths. They also look at the relationship with the number of NCAA Division 1 basketball games.

They find that:

...hosting one additional NBA or NHL game results in an additional 428 COVID-19 cases and 45 COVID-19 deaths in the county where the game was played. This amounts to 783 COVID-19 cases and 52 COVID-19 deaths for the MSA as a whole. There were about 22 NHL and NBA games played in the average MSA. In total, then, these 22 games between January and mid-March resulted in more than 17,000 cases and nearly 1160 deaths per MSA. In contrast, we find that men's college basketball games only resulted in an additional 31 cases and 2.4 deaths per MSA. On average there were about 16 college games in an MSA, resulting in about 496 additional cases and 38 additional deaths per MSA.

Needless to say, those effects are quite substantial. Also:

Using a more conservative (age-adjusted) VSL of $2 million implies that each NLH/NBA game costs about $104 million in fatalities, and each NCAA game costs about $5 million in fatalities... Using this more conservative VSL, our results indicate that the roughly 22 NHL and NBA games played in January-mid March created about $2.3 billion worth of fatalities per MSA, while the roughly 16 NCAA games created about $80 million worth of fatalities per MSA...

...fatality costs per game ($104 million) are nearly 40 times greater than spending per game. Even assuming that total consumption benefits are four times greater than spending (i.e., $3 of consumer surplus for every dollar of spending), the fatality costs are nearly ten times greater.
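Working backward from the per-game, per-MSA figures in the quotes above, the aggregate numbers can be reproduced with back-of-the-envelope arithmetic (small differences from the quoted totals reflect rounding in the paper):

```python
# Back-of-the-envelope reproduction of the paper's aggregate figures,
# using the per-game, per-MSA numbers quoted above.

VSL = 2_000_000  # the paper's 'conservative' age-adjusted value of a statistical life

# NHL/NBA: 783 cases and 52 deaths per game per MSA, ~22 games per MSA
nhl_nba_cases = 783 * 22          # close to the 'more than 17,000 cases'
nhl_nba_deaths = 52 * 22          # close to the 'nearly 1160 deaths'
cost_per_game = 52 * VSL          # the '$104 million in fatalities' per game
total_cost = cost_per_game * 22   # close to the '$2.3 billion' per MSA

# NCAA: 31 cases and 2.4 deaths per game per MSA, ~16 games per MSA
ncaa_cases = 31 * 16              # the '496 additional cases'
ncaa_cost = 2.4 * VSL * 16        # close to the '$80 million' per MSA

print(nhl_nba_cases, nhl_nba_deaths)  # 17226 1144
print(cost_per_game, total_cost)      # 104000000 2288000000
print(ncaa_cases, ncaa_cost)          # 496 76800000.0
```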

These latter results are not fully explained in the working paper version, but the calculations seem relatively straightforward. A reasonable question is whether these results are simply correlations, or whether the sports games caused the increase in coronavirus cases and deaths. Wing et al. argue that their results are plausibly causal estimates:

Because NHL, NBA, and NCAA schedules were created prior to the outbreak, we can be confident that variation in home games due to scheduling was not influenced by local expectations about the state of the epidemic.

Since they limit their analysis to comparisons between counties (or MSAs) that have existing professional sports teams, which tend to be larger and more urban than counties without them, their claim about causality is fair, although I would have liked to have seen some more robustness checks. For instance, they could have run a placebo test, using the number of scheduled home games for the 2019 or 2018 season within the same period. If the results are the same or similar, then the relationship between home sports games and coronavirus is either spurious, or driven by some other factor that their analysis fails to control for.

I'd expect some pushback from reviewers when this paper gets sent for peer review prior to journal publication, especially on the claims about causality. In terms of telling us about current sports events and their effects on coronavirus cases, the paper might be of little help. As the authors note in their conclusion:

...our estimates are based on data generated from games played before social distancing policies and other adaptive behaviors were implemented. It is possible that sporting events would lead to less transmission if people were wearing masks and were seated in a socially distanced manner.

To the extent that social distancing is observed, the effects of more recent sporting events can be expected to be much lower.

[HT: Marginal Revolution]

Monday, 26 October 2020

Book review: The Winner-Take-All Society

As I mentioned earlier in the week, I've been reading The Winner-Take-All Society by Robert Frank and Philip Cook. The book focuses on what Frank and Cook refer to as 'positional arms races' - the tendency for winner-take-all markets to lead to a prisoners' dilemma, where people engage in costly, but ultimately fruitless, investment in order to win the arms race. These winner-take-all markets (and the positional arms races they incentivise) have led to a number of negative consequences, as they summarise in the introduction:

Winner-take-all markets have increased the disparity between rich and poor. They have lured some of our most talented citizens into socially unproductive, sometimes even destructive, tasks. In an economy that already invests too little for the future, they have fostered wasteful patterns of investment and consumption. They have led indirectly to greater concentration of our most talented college students in a small set of elite institutions. They have made it more difficult for "late bloomers" to find a productive niche in life. And winner-take-all markets have molded our culture and discourse in ways many of us find deeply troubling.

That paragraph neatly summarises several chapters of the book, which look at the effects of winner-take-all markets in all of those areas. And far from simply looking at the 'superstar and tournament effects' that exist at the very top of the income distribution, Frank and Cook devote an entire chapter to 'minor-league superstars' such as doctors, administrators, and accountants (no mention of economists, though!).

For someone like myself, who has read a lot of Robert Frank books, there is little that is new. That isn't surprising, because this book was published in 1995, and I have read a lot of more recent work by Frank. The examples, while dated, still do a good job of illustrating the points that are being made (although I wonder how many younger readers would know about the Lorena Bobbitt story, without looking it up). The book has some very quotable bits, such as this:

Social critics have long identified advertising as perhaps the largest and most conspicuous example of pure social waste in a market economy.

And this critique of the free market:

...although Adam Smith's invisible hand assures that markets do a speedy and efficient job of delivering the goods and services people desire, it tells us nothing about where people's desires come from in the first place. If tastes were fixed at birth, this would pose no problem. But if culture shapes tastes, and if market forces shape culture, then the invisible hand is untethered. Free marketeers have little to cheer about if all they can claim is that the market is efficient at filling desires that the market itself creates.

The latter sections of the book discuss ways that society has adapted to curtail positional arms races, including through rules, social norms, contracts, and public policy. However, clearly those attempts have not been entirely successful, or there would be no need for the book!

The most disappointing aspect of the book may be the proposed solutions to the problems, where Frank and Cook focus on progressive consumption taxes. To me, they ignore the obvious problems of such a tax. It isn't as simple as taking a person's (or household's) income and subtracting savings in order to work out their consumption, because people could consume from past accumulated savings (in which case, the tax authority has to be able to determine how much savings they had at the start and the end of the year), or from capital gains (in which case, the tax authority needs to be able to assess all realised capital gains). While Frank and Cook make their proposal seem very simple in theory, it would turn out to be anything but simple in practice.
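To see why the tax base is hard to measure, consider what the tax authority would actually need to compute. This is a sketch of the logic (the variable names and numbers are mine, not Frank and Cook's):

```python
# Sketch of a consumption-tax base: measuring consumption requires
# opening and closing savings balances and realised capital gains,
# not just reported income.

def consumption_base(income, savings_start, savings_end, realised_gains):
    """Consumption = income + drawn-down savings + realised capital
    gains. Dissaving (savings_end < savings_start) adds to consumption."""
    return income + (savings_start - savings_end) + realised_gains

# A household that looks low-income but consumes heavily from wealth
# (all numbers invented for illustration):
print(consumption_base(income=30_000, savings_start=500_000,
                       savings_end=420_000, realised_gains=25_000))
# 135000: far more consumption than income alone would suggest
```

Every one of those inputs beyond income is something the tax authority would have to observe and verify, which is where the practical difficulty lies.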

Nevertheless, this is a good book that covers one of the sources of higher inequality over the last couple of decades, and is important source material for understanding more recent books by Robert Frank, such as the excellent Falling Behind: How Rising Inequality Harms the Middle Class.

Saturday, 24 October 2020

What Zespri could learn from slave redemption

The National Business Review reported on Thursday (paywalled):

Zespri’s board has given the green flag to trial buying fruit grown illegally in China, alongside continuing legal action against one Chinese nursery.

The kiwifruit exporter cooperative has estimated that as much as 4000 hectares of its star cultivar SunGold has been planted illegally in China, mostly in the Sichuan and Shaanxi provinces. The most mature orchards (four-plus years old) are producing about 80,000 trays per hectare and some are producing fruit that is comparable with or exceeds the Zespri standard...

In its September Canopy newsletter, Zespri said that in discussions with the governments of the two countries, it had received “strong advice” to investigate a win-win solution to help mitigate further plantings and maintain the value, and one option was partnering with local Chinese growers.

Chair Bruce Cameron said that the limited trial would involve buying up to 200,000 trays of fruit from a small number of growers. The aim, he said, was to understand “the potential for future commercial arrangements in China, the extent to which cooperation may facilitate enforcement activities for plant variety rights, and the [associated] opportunities and challenges”.

This reminded me of the story of slave redemption in Sudan in the 1990s (which I blogged about here). When well-meaning charity organisations began buying slaves in order to free them, that had a number of unintended consequences. As shown in the diagram below, the addition of charities as a new buyer in the market increased the demand for slaves from D0 to D1, increasing the price of slaves (from P0 to P1), and importantly it increased the quantity of slaves traded (from Q0 to Q1). So slave redemption actually increased the quantity of slaves traded - the opposite of what was intended.
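The diagram's logic can also be reproduced with a minimal linear supply-and-demand sketch (the functional forms and all numbers are invented for illustration):

```python
# Minimal linear market sketch: adding charities as a new buyer shifts
# demand right (D0 -> D1), raising both the equilibrium price and the
# quantity traded (P0 -> P1, Q0 -> Q1).

def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    """Solve Qd = a - b*P against Qs = c + d*P for (P, Q)."""
    p = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    q = demand_intercept - demand_slope * p
    return p, q

# Original market (D0): Qd = 100 - 2P, against supply Qs = 10 + P
p0, q0 = equilibrium(100, 2, 10, 1)
# Charities add 30 units of demand at every price (D1): Qd = 130 - 2P
p1, q1 = equilibrium(130, 2, 10, 1)

print((p0, q0), (p1, q1))  # (30.0, 40.0) (40.0, 50.0): both P and Q rise
```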

Now, the case of Zespri is similar. If they start buying illegally-grown Chinese kiwifruit, this increases the demand for kiwifruit in China. That increases the profit opportunities for farmers from growing illegal kiwifruit, and so more illegal kiwifruit will be grown.

Admittedly, Zespri is in a tough position, because China isn't known for strong protection of intellectual property rights (which includes 'plant variety rights'). However, further stimulating the market for illegal kiwifruit seems to me like a losing proposition for Zespri.


Friday, 23 October 2020

The labour market returns to apprenticeships in England

One of the Labour Party's policies in the lead up to the New Zealand election was a shift to free apprenticeships. A reasonable question is whether apprenticeships pay off for students - are they better off doing an apprenticeship than not? A recent article by Chiara Cavaglia, Sandra McNally, and Guglielmo Ventura (all London School of Economics), published in the journal Oxford Bulletin of Economics and Statistics (open access), looked at the labour market returns to apprenticeships in England.

One of the challenges of this type of research is determining the counterfactual - what would the labour market outcomes have been for young people who completed an apprenticeship, if they hadn't completed the apprenticeship? Of course, we can't know for sure, so the empirical strategy is really important. Cavaglia et al. compare:

...people with the same highest level of vocational qualification (Level 2 or Level 3), some of whom started an apprenticeship (the treatment group) and others who did not (i.e. the comparison group achieved their vocational qualification only within a classroom setting).

Even with the right comparison group, a basic comparison between the groups in (for example) labour earnings, would not tell you whether it was the apprenticeship that caused the difference in earnings. To identify the causal impact of apprenticeships, Cavaglia et al. use an instrumental variables approach, using as an instrument the extent to which each young person's school peers undertook an apprenticeship. This approach also has the advantage of overcoming bias due to unobserved variables like each young person's non-cognitive ability or motivation. As they describe it:

One way to address causality is to make use of variation in the probability of starting an apprenticeship that is not otherwise correlated with earnings. A plausible source of variation is (within school) cohort-to-cohort variation in the extent to which peers take up apprenticeships between the age of 16 and 18. This variation might exist because of increased exposure to information about apprenticeships via peers in the same grade... In this context, having a friend who starts an apprenticeship might provide additional information on opportunities, the application process and on future prospects.
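The strategy described in the passage above is an instrumental variables (two-stage least squares) design. A minimal simulated sketch, with all parameters invented, shows how a peer-based instrument can remove the bias from unobserved ability:

```python
# Two-stage least squares sketch: unobserved ability drives both
# apprenticeship take-up and earnings, so OLS is biased upward; an
# instrument (peers' take-up) that shifts apprenticeships but does not
# affect earnings directly recovers the true effect.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 2.0

ability = rng.normal(size=n)       # unobserved confounder
peer_share = rng.normal(size=n)    # instrument: peers' apprenticeship take-up
apprentice = 0.5 * peer_share + 0.8 * ability + rng.normal(size=n)
earnings = true_effect * apprentice + 1.5 * ability + rng.normal(size=n)

def slope(x, y):
    """OLS slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

beta_ols = slope(apprentice, earnings)  # biased upward by ability
first_stage = slope(peer_share, apprentice) * peer_share  # fitted take-up
beta_iv = slope(first_stage, earnings)  # second stage: close to 2.0

print(round(beta_ols, 2), round(beta_iv, 2))  # OLS ≈ 2.6, IV ≈ 2.0
```

The instrument works here for exactly the reason given in the quote: it shifts take-up without entering the earnings equation directly.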

So, young people are more likely to undertake an apprenticeship if their peers do so, but their peers undertaking an apprenticeship shouldn't affect the young person's labour market outcomes directly. That makes it a suitable instrument. Using this strategy, and linked education and tax records data for all cohorts of students in England who left schooling (at age 16) between 2002/03 and 2007/08 (about 17 million observations), Cavaglia et al. find:

...a very strong relationship between starting an apprenticeship and log earnings at age 23. For men, apprenticeships raise earnings by 30% and 46% for those educated up to Levels 2 and 3 respectively. For women, they raise earnings by 10% to 20% for the respective groups.

However, they find that the earnings premium for an apprenticeship is much smaller (but still significant, especially for men) by age 28. Much of the gender difference in returns is due to differences in the industries in which men and women undertake apprenticeships. Specifically:

There is relatively little overlap in the most popular ten sectors for men and women. Where there is overlap, the earnings premium to having started an apprenticeship tends to be higher for men: at Level 2 - Administration (20% v 6%); Retailing and Wholesaling (12% v 9%), Sports, Leisure and Recreation (11% v 8%); at Level 3 – Administration (21% v 5%), Business Management (25% v 14%), Sport, Leisure and Recreation (24% v 11%). Since we observe a gender earnings gap within sectors that are popular both among men and women, the sector of vocational education cannot entirely explain why the earnings premium to having started an apprenticeship is higher for men than for women.

The gender gap in the returns to an apprenticeship persists after controlling for the industry that the apprenticeship was in, as well as employer characteristics. More research is clearly needed if we want to understand this gender gap.

Overall though, it is clear that the benefits of apprenticeships, in terms of labour market returns, are high, for both men and women. The paper doesn't evaluate whether those benefits outweigh the costs, but if the costs to students are zero (as they soon may be in New Zealand), you can be sure that the private benefits exceed the private costs.

Wednesday, 21 October 2020

Adam Smith on positivity bias

There's an inside joke among economists that there has been nothing new in economics since Adam Smith. Personally, I'm continually surprised at how many 'modern' economic concepts can be traced back to Smith.

Recently, I've been reading The Winner-Take-All Society by Robert Frank and Philip Cook (book review to come soon). They pointed me to this passage from Smith's 1776 book The Wealth of Nations, where Smith seems to anticipate the behavioural economics idea of positivity bias:

The over-weening conceit which the greater part of men have of their own abilities, is an ancient evil remarked by the philosophers and moralists of all ages. Their absurd presumption in their own good fortune, has been less taken notice of. It is, however, if possible, still more universal. There is no man living who when in tolerable health and spirits, has not some share of it. The chance of gain is by every man more or less over-valued, and the chance of loss is by most men under-valued, and by scarce any man, who is in tolerable health and spirits, valued more than it is worth.

Positivity bias (or optimism bias) occurs when we overestimate our own abilities. It occurs when we either overestimate the benefits of things that we do, or underestimate the costs (including opportunity costs) of things that we do, simply because we are the ones doing them. After all, we are awesome! Positivity bias leads us to invest in things we shouldn't invest in (i.e. by 'over-valuing the chance of gain'), and to take risks that we shouldn't take (i.e. by 'under-valuing the chance of loss'). Although the ideas of behavioural economics are relatively recent, this is at least one idea that can be traced all the way back to Adam Smith.
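A toy expected-value calculation (all numbers invented) shows how the two distortions Smith describes can flip a bad bet into an apparently good one:

```python
# Toy illustration of positivity bias: over-valuing the chance of gain
# and under-valuing the size of the loss turns a negative expected-value
# gamble into one that looks worth taking.

def expected_value(p_gain, gain, loss):
    return p_gain * gain - (1 - p_gain) * loss

GAIN, LOSS = 100, 60

true_ev = expected_value(0.25, GAIN, LOSS)   # 0.25*100 - 0.75*60 = -20.0
biased_ev = expected_value(0.5, GAIN, LOSS)  # over-valued chance of gain
biased_ev_2 = expected_value(0.5, GAIN, 30)  # ...and a mentally shrunken loss

print(true_ev, biased_ev, biased_ev_2)  # -20.0 20.0 35.0
```

The gamble is genuinely a losing one, but to the biased decision-maker it looks comfortably profitable.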


Tuesday, 20 October 2020

Economists don't believe in civic honesty, but they should

This article by Alain Cohn (University of Michigan), Michel André Maréchal (University of Zurich), David Tannenbaum (University of Utah), and Christian Lukas Zünd (University of Zurich), published in the journal Science (open access), caused a bit of a stir last year. I've only just had the chance to have a proper read of it.

Cohn et al. conducted a field experiment to test the level of civic honesty in 355 cities across 40 countries. Specifically, they:

...turned in “lost” wallets and experimentally varied the amount of money left in them, which allowed us to determine how monetary stakes affect return rates across a broad sample of societies and institutions...

Wallets were turned in to one of five types of societal institutions: (i) banks; (ii) theaters, museums, or other cultural establishments; (iii) post offices; (iv) hotels; and (v) police stations, courts of law, or other public offices...

Our key independent variable was whether the wallet contained money, which we randomly varied to hold either no money or US$13.45 (“NoMoney” and “Money” conditions, respectively)... Each wallet also contained three identical business cards, a grocery list, and a key.

They found that:

...our cross-country experiments return a remarkably consistent result: citizens were overwhelmingly more likely to report lost wallets containing money than those without. We observed this pattern for 38 of our 40 countries, and in no country did we find a statistically significant decrease in reporting rates when the wallet contained money. On average, adding money to the wallet increased the likelihood of being reported from 40% in the NoMoney condition to 51% in the Money condition (P < 0.0001).

Here's the key result graphically (note that New Zealand is among the countries with the highest level of civic honesty):


Cohn et al. also tried leaving an even larger amount (US$94.15) in wallets in three countries (the U.S., the U.K., and Poland), and that increased civic honesty even more. They then conducted a survey in those three countries, asking people what they expected to happen. They found that:

Respondents predicted that rates of civic honesty would be highest when the wallet contained no money (mean predicted reporting rate M = 73%, SD = 29), lower when the wallet contained a modest amount of money (M = 65%, SD = 24), and lower still when the wallet contained a substantial amount of money (M = 55%, SD = 29).

So, clearly the average person on the street doesn't think that people are as honest as they actually are. And then comes the real 'gotcha' moment in this paper. Cohn et al. ran a similar survey with a sample of "279 top-performing economists", and found that:

...respondents on average predicted that rates of civic honesty would be higher in the NoMoney and Money conditions (M = 69%, SD = 25 and M = 69%, SD = 21, respectively) than in the BigMoney condition (M = 66%, SD = 23). These predictions were again significantly different from the actual changes we observe across conditions (P < 0.001 for all pairwise comparisons).

So, apparently economists don't believe in civic honesty. What explains this result, which was unanticipated by the general public and by academic economists alike? Cohn et al. propose a simple model to explain it, in which:

...civic honesty is determined by the interplay between four components: (i) the economic payoff of keeping the wallet, (ii) the fixed effort cost of contacting the wallet’s owner, (iii) an altruistic concern for the owner’s welfare, and (iv) the costs associated with negatively updating one’s self-image as a thief (what we call theft aversion).

It is likely to be (iii) and (iv) that explain the desire to return the wallet. In relation to (iv), Cohn et al. rely on their three-country survey, and find that:

Respondents reported that failing to return a wallet would feel more like stealing when the wallet contained a modest amount of money than when it contained no money and that such behavior would feel even more like stealing when the wallet contained a substantial amount of money (P ≤ 0.007 for all pairwise comparisons)...

It's not the most persuasive evidence, particularly since it doesn't preclude (iii) from having an even larger effect. Some further research will be needed to unpack which of those two effects is larger.

However, this paper has highlighted an important issue. As I noted when I reviewed George Akerlof and Rachel Kranton's excellent book Identity Economics, the omission of identity from economic models is potentially important. If the surveyed economists had recognised that the way that people view themselves is an important component of their utility function, and that self-perception as a 'thief' reduces people's utility, then perhaps these results would have been better anticipated, and the surveyed economists wouldn't have looked quite as foolish. Would it be too much to hope that economists have learned from this?

[HT: Marginal Revolution, last year]

Monday, 19 October 2020

The 'Heckman Curve' takes some heat

The 'Heckman Curve' is the idea that investments in educational interventions have lower net value among older target populations than among younger populations. At least, that is the implication of this curve, taken from the Heckman paper published in the journal Science in 2006:


I refer to the Heckman curve in passing in the topic on the economics of education in my ECONS102 class. But, I hadn't realised until recently that I had the curve wrong (more on that later in the post). It was actually reading this recent article by David Rea (Victoria University of Wellington) and Tony Burton (Auckland University of Technology), published in the Journal of Economic Surveys (open access), that drew my attention to my error. Rea and Burton present a strong critique of the Heckman Curve, based on cost-benefit data on 339 interventions collated by the Washington State Institute for Public Policy. Looking at how benefit-cost ratios vary across the age of the target population of interventions, Rea and Burton find that:
...there does not appear to be any clear relationship between the age of the treatment group and program cost effectiveness.
Indeed, here's Figure 3 from the Rea and Burton paper, showing the lack of a relationship bearing any resemblance to the figure at the top of this post:


To be fair to Heckman, most of the data that Rea and Burton use doesn't actually pertain to educational interventions. However, when they limit their sample to the 110 educational programmes, the key results hold. There is no Heckman Curve in this data.

That wasn't to be the end of the story, though. As Andrew Gelman reported back in August (based on an email from David Rea), James Heckman had written a reply to Rea and Burton's critique, which was to be published in the Journal of Economic Surveys, and Rea and Burton had written a rejoinder, also to be published. Then, Heckman inexplicably withdrew his reply, but not before the rejoinder had appeared on the JES website as an early view. It's not there now, but you can read it in its entirety in Gelman's post. We are left to wonder what it was that Heckman said in his reply - I guess we may never know. It is disappointing that the debate will not be available in its entirety in the published record.

Anyway, coming back to my error. I've always interpreted the 'Heckman Curve' as having not age on the x-axis, but prior education. I've been interpreting it as a manifestation of diminishing marginal returns to education, regardless of the age at which that education occurs. So, basic literacy and numeracy programmes have large positive effects when enacted among young children, but also when enacted among adults. When you look at the types of programmes that are included in the dataset that Rea and Burton use, it is clear that the adult education programmes are targeted at low-education adults, or at least at adults whose first experience of education probably didn't lead to the best outcomes. So, it is no surprise to me that the observed benefit-cost ratios have no apparent relationship with age. The programmes at young ages are often much less targeted, and if you re-scaled the x-axis to measure prior education (or prior effective education, appropriately defined), you may well see the declining curve that Heckman envisaged. Of course, that doesn't allow you to tell quite as compelling a story as this, from the Heckman paper in Science:

Early interventions targeted toward disadvantaged children have much higher returns than later interventions such as reduced pupil-teacher ratios, public job training, convict rehabilitation programs, tuition subsidies, or expenditure on police. At current levels of resources, society overinvests in remedial skill investments at later ages and underinvests in the early years.

[HT for the Gelman post: Marginal Revolution]

Sunday, 18 October 2020

Ethnic segregation in Sao Paulo schools, and its relationship with employment and wages

Following Thursday's post about ethnic segregation and spatial inequality in Europe, I was interested to dig out this 2017 article from my to-be-read pile, by Gustavo Fernandes (Fundacao Getulio Vargas, Brazil), published in the journal World Development (sorry, I don't see an ungated version online). Fernandes used data from the 2005 School Census and the 2010 Population Census for the city of Sao Paulo, and looked at the association between segregation within public and private schools, and employment and wages for those aged 18-35. As motivation, he notes that:

The belief that Brazil has benefitted from an absence of racial and ethnic problems has been widely accepted over the last century. Brazil has often been described as a racial democracy.

Part of the motivation, then, is to debunk this 'myth'. I'm not quite sure that this counts as debunking though:

...our results show that Sao Paulo is not a city with a high degree of segregation, especially when compared to the U.S. In the city, approximately 21.29% of students would have to change schools to a new institution in order to achieve an equal composition of students by color among the entire student population of the city.
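The 21.29% figure quoted above is an index of dissimilarity: the share of one group that would have to change schools to equalise group composition across all schools. A minimal sketch of the calculation, with made-up school counts rather than Fernandes' data:

```python
def dissimilarity_index(schools):
    """Index of dissimilarity for two groups.

    schools: list of (group_a_count, group_b_count) tuples, one per school.
    Returns the share of either group that would need to move
    to equalise group composition across all schools.
    """
    total_a = sum(a for a, b in schools)
    total_b = sum(b for a, b in schools)
    return 0.5 * sum(abs(a / total_a - b / total_b) for a, b in schools)

# Two hypothetical schools of 100 students each, with opposite compositions
schools = [(80, 20), (20, 80)]
print(dissimilarity_index(schools))  # 0.6 -> 60% would have to move
```

An index of zero means every school mirrors the city-wide composition; an index of one means complete segregation. On that scale, Sao Paulo's 0.21 is indeed fairly low.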

That's a fairly low level of segregation, compared not just with the U.S., but with many other countries (for instance, there's a lot of concern about segregation in the New Zealand school system). But is segregation related to inequality? Fernandes finds that, for Sao Paulo:

...segregation is correlated with the level of development in the region, which positively affects the expected returns of brancos and amarelos and negatively affects those of pretos e pardos. This result appears to be explained by the predominance of brancos and amarelos in private schools, despite the fact that most of the population of white students attends public schools. However, the effect of segregation becomes negligible when analyzing only the outcomes of students within the public school system.

The predominance of whites in private schools may be the main reason for the deep economic inequality found in Sao Paulo among races. Those schools provide a higher quality of education in comparison to public schools. They may also offer access to social networks that lead to better jobs. Both factors can exponentially increase the average income of the entire white population, resulting in large disparities between the wages of whites and the wages of pardos and pretos.

The brancos and amarelos (whites and Asians, respectively) tend to make up the majority of students in private schools, and it is private school segregation (and not public school segregation) that is most associated with young adult employment and wages.

Ultimately, this paper demonstrates a result that is the opposite of the paper I discussed last Thursday, where greater segregation was associated with lower spatial inequality. It is impossible to reconcile the results given the wide difference in methods (not least the difference between cross-country analysis at the regional level, and small-area analysis of neighbourhoods within a single city in Brazil). However, this does demonstrate that more research on this topic is needed.


Thursday, 15 October 2020

Ethnic segregation and spatial inequality

For the last few years one of my PhD students, Mohana Mondal, has been looking into ethnic segregation in Auckland (see this earlier post on some of her work). I've also maintained an interest in income inequality. So, I was really interested to read this 2017 article by Roberto Ezcurra (Universidad Publica de Navarra) and Andres Rodriguez-Pose (London School of Economics), published in the Journal of Economic Geography (ungated earlier version here), which links those two ideas. Specifically, Ezcurra and Rodriguez-Pose look at whether ethnic segregation (the concentration of different ethnic groups within a country) matters for spatial inequality (income inequality between regions or areas of a country).

They use data on a cross-section of 71 countries for which they have regional-level data on ethnic groups and region-level GDP per capita. After controlling for various factors known to affect spatial inequality, such as the average size of regions, the degree of ethnic fractionalisation of the population (which is basically a measure of how many different ethnic groups there are in a country), the stage of economic development, trade openness, country size, and whether a country is a transition country, they find that:

The coefficient of the index of ethnic segregation... is in all cases positive and statistically significant at the 1% level. This implies that more ethnically segregated countries have on average higher levels of spatial inequality...

This holds not only for a basic regression specification, but also for an instrumental variables regression, where they attempt to demonstrate a causal effect of ethnic segregation on spatial inequality (as an instrument, they use segregation predicted using the ethnic composition of neighbouring countries). They also show that their results are robust to using alternative measures of segregation and inequality.

Ezcurra and Rodriguez-Pose then go on to investigate potential transmission channels that might explain this relationship. They find that:

...once political decentralisation and government quality are controlled for, the coefficient of the index of ethnic segregation still remains positive, but its effect on spatial inequality is statistically significant only at the 10% level... While not conclusive, these findings suggest the possibility that political decentralisation and government quality could be possible transmission channels linking ethnic segregation and spatial inequality.

The argument is that countries with more ethnic segregation are more likely to decentralise authority to their regions, which increases inequality between the regions.

This is a nice paper, but there are a couple of aspects of the research where some further work is needed. First, this research was based only on cross-sectional data. I would like to see some analysis that included a time dimension before I would conclude definitively that this relationship is causal. Second, the instrumental variables analysis seems fine on the surface, but only until you read this bit:

...the instrument used in the article predicts zero segregation for island countries...

It's pretty difficult to defend an instrument that results in such a wildly off-the-mark prediction. Certainly, you wouldn't want to predict zero ethnic segregation in New Zealand or Australia. I wouldn't expect an alternative conception of the instrument to change the results by a lot, but I think it is worth exploring. So, while this article is interesting, there is definitely more research required in this area.

 

Tuesday, 13 October 2020

Nobel Prize for Paul Milgrom and Robert Wilson

Last night, the 2020 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (aka Nobel Prize in Economics) was announced as being awarded to Paul Milgrom and Robert Wilson (both Stanford), "for improvements to auction theory and inventions of new auction formats". Based on some informal conversations today, I wasn't the only one in my corridor to raise an eyebrow and wonder whether Milgrom didn't already have a Nobel Prize. Needless to say then, the award is well deserved and probably overdue.

This prize is just the latest in a long run of awards for advances in game theory and related work, but this time at the intersection of theoretical and applied economics. Auction theory, as developed by Wilson and Milgrom, is the basis for allocating radio spectrum (for mobile phone signals, for example), for electricity spot markets, and for advertising on Google. It was interesting to note Wilson's original work on the "winner's curse", which is something I have blogged on before (see for example here and here).

Marginal Revolution has good coverage on their broader research, with separate posts for Milgrom and Wilson, pointing to their key contributions. A Fine Theorem also has an excellent post on the topic. The Nobel Committee's website also has a good overall summary.

Milgrom was Wilson's PhD student, but it is worth noting that Wilson was also the PhD advisor for two other earlier Nobel Prize winners, Bengt Holmstrom (2016 winner) and Alvin Roth (2012 winner). That is quite some achievement!

Monday, 12 October 2020

The impact of the world fertiliser market on the fertiliser market in New Zealand

The New Zealand Herald reported today:

Fertiliser prices - a key input cost for farmers - are on their way up.

The international price for DAP - the product that combines ammonium and phosphate - have been bouncing off a low point as major world producers slowly respond to increased demand.

"It's been low for a while and at those low prices, manufacturers around the world - the big players - probably were not making a lot of money," a spokesman for fertiliser co-op Ravensdown said.

"So we are seeing a controlled, disciplined rise, because demand tends to go up in places like Brazil and India at this time of year," he said.

In the world market for fertiliser, there has been an increase in demand. This is shown in the diagram below. The market started at equilibrium with the price PW0 and the quantity of fertiliser traded was Q0. The increase in demand from D0 to D1 leads to a new equilibrium, with a higher price (PW1), and a greater quantity traded (Q1).

Now consider the impact on the New Zealand market. New Zealand is an importing country. That means that the world price of fertiliser is below the domestic equilibrium price, as shown in the diagram below. The original world price, PW0, is below the equilibrium price PA. At that price, domestic producers of fertiliser will only supply QS0 fertiliser, and domestic consumers of fertiliser (i.e. farmers) will demand QD0 fertiliser. The difference between QD0 and QS0 is the quantity of fertiliser imported (the red line M0). When the world price increases from PW0 to PW1, farmers face a higher import price and will demand less (QD1), while domestic producers of fertiliser find it profitable to produce more, and increase production to QS1. The quantity of fertiliser imported declines (to the blue line M1).
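The mechanics of the importing-country story can be illustrated with simple linear demand and supply curves. The numbers below are invented purely for illustration; only the direction of the changes matters:

```python
def importing_market(world_price,
                     demand=lambda p: 100 - 2 * p,   # hypothetical domestic demand
                     supply=lambda p: -20 + 2 * p):  # hypothetical domestic supply
    """Domestic quantities and imports at a given world price.

    Assumes the world price is below the autarky equilibrium price
    (30 for these curves), so the country imports the shortfall.
    """
    qd = demand(world_price)   # quantity demanded by domestic consumers
    qs = supply(world_price)   # quantity supplied by domestic producers
    return qd, qs, qd - qs     # imports make up the difference

qd0, qs0, m0 = importing_market(20)  # original world price
qd1, qs1, m1 = importing_market(25)  # higher world price
print(m0, m1)  # imports shrink when the world price rises
```

With these made-up curves, the higher world price cuts quantity demanded from 60 to 50, raises domestic production from 20 to 30, and so halves imports from 40 to 20 - exactly the qualitative pattern in the diagram.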

The Herald article notes the price changes in both the international and domestic markets:

Over the month or so, DAP prices have lifted to US$350 a tonne from $300/tonne.

Ravensdown has lifted its price to around NZ$780 a tonne from around $750/tonne in August.

Sunday, 11 October 2020

The unintended consequence of child safety seat laws on fertility

Since the 1970s, child safety seats have been made mandatory in most western countries. And since then, the age at which children are allowed to 'graduate' from a safety seat to wearing a seat belt without the safety seat has only increased. That creates a problem if you have a large family, because mid-sized and smaller cars are simply not large enough to fit more than two safety seats in the back. That places an additional cost on families that want to grow beyond two children, since they would need a larger vehicle such as a people-mover or minivan. That significantly increases the cost of having a third small child, so a reasonable question is, have these child safety seat laws affected fertility rates?

That is the question that this new working paper by Jordan Nickerson (MIT) and David Solomon (Boston College) seeks to answer. They use data from the U.S. Census (1990 and 2000) and the American Community Survey (2000-2017), and construct a retrospective panel of women aged 18-35 based on their age and the ages of their children at the time of the survey. That results in nearly 70 million female-year observations. Using that dataset, they then look at the effect of child safety seat laws on the probability of a woman having a third child, when they already have two children that would need child safety seats. They find that:

...when a woman has two children below the car seat age, her chances of giving birth that year decline by 0.73 percentage points. This represents a large decline, as the probability of giving birth for a woman age 18-35 with two children already is 9.36% in our sample.
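To put that estimate in context, a 0.73 percentage point decline on a 9.36% baseline is a relative decline in the annual probability of a third birth of around 8%:

```python
baseline = 0.0936  # annual probability of a third birth (from the paper)
effect = 0.0073    # estimated decline when two children need car seats

relative_decline = effect / baseline
print(f"{relative_decline:.1%}")  # roughly an 8% relative decline
```

That is a large behavioural response for what looks, at first glance, like a minor regulatory inconvenience.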

Of course, this is correlation rather than causation, but Nickerson and Solomon then try to exclude other explanations, showing that:

...we do not find significant effects of car seat laws at other birth margins where car-seat-related crowding is unlikely to be an issue. For instance, there are no significant differences in birth rates when the woman has only one child total of car seat age, or two children total where only one child is required to be in a car seat.

And:

We find that the estimated effects are driven entirely by households with access to a car, consistent with car usage mattering directly. The effect is also concentrated in households where there is an adult male in the household, increasing the likelihood that both front seats are occupied by adults.

So, the results seem plausible. Fertility rates did decline as car safety seat laws expanded in scope. How many births were 'prevented' as a result? Using a simulation model based on their regressions, Nickerson and Solomon find that:

...switching from an eight-year-old mandate to a four-year-old mandate would result in the average woman having 0.0076 more children. In 2017, we estimate that car seat laws lead to a permanent reduction of approximately 8,000 births, and have prevented 145,000 births over our sample period, with 60% of this effect being since 2008. By contrast, if current laws had applied over the whole sample, we estimate there would have been a further 350,000 fewer births.

So, there may have been around 145,000 fewer children born as a result of these laws. How many lives did the car seat laws save? It appears the impact of the laws, if any, was slight, based on car crash data going back to 1975:

Using similar fixed effects specifications as our birth rate tests, we find that the estimated impact of car seat laws on deaths of children below age eight is miniscule. Our best estimates are that existing car seat mandates prevented 57 fatalities nationwide in 2017, with the most favorable estimates being 140 fatalities prevented. In the vast majority of specifications, we are unable to reject a null hypothesis of zero lives saved.

That leads to a really perverse comparison. Car seat laws may have saved thousands of children's lives, but at the same time prevented hundreds of thousands of children from being born in the first place. As Nickerson and Solomon note:

Ignoring the financial cost of purchasing safety seats, these estimates allow one to calculate the implied ratio of the value of a child’s life saved (conditional on them being born) versus the value of a child’s life prevented (children who might have been born, but were not). We estimate this ratio to be between 57 and 141.
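That 57-to-141 range can be roughly reproduced from the paper's own numbers: about 8,000 births prevented per year in 2017, set against between 57 and 140 fatalities prevented per year. This is my back-of-the-envelope reading, not the authors' exact calculation:

```python
births_prevented_per_year = 8_000  # estimated permanent reduction in births, 2017
fatalities_prevented_low = 57      # best estimate of fatalities prevented
fatalities_prevented_high = 140    # most favourable estimate

# Implied value ratio: a life saved versus a life prevented
ratio_low = births_prevented_per_year / fatalities_prevented_high
ratio_high = births_prevented_per_year / fatalities_prevented_low
print(round(ratio_low), round(ratio_high))  # roughly 57 and 140
```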

Why would society be much more willing to save a child's life, in comparison to having a child born in the first place? Nickerson and Solomon first suggest endowment effects:

People’s acceptable price to acquire a good they don’t yet own is generally lower than their price to part with a good already in their possession, a phenomenon known as the endowment effect.

In this context, once a family is endowed with a child, they would be willing to pay a lot to lower the risk to that child's life, much more than they would have been willing to pay to add a child to their previously smaller family. This arises because of loss aversion. People value losses much more than equivalent gains - in this case, the loss of a child is a much greater negative for the family than is the positive of gaining a child.

That might explain part of the effect, but I think this is more likely:

There is also a large difference in salience between the dramatic event of a small child dying in a car crash, versus the largely unseen effect of a family who wanted another child deciding that the cost is too high.

This relates to Thomas Schelling's observation about the difference between the value of an 'identified life' and the value of a 'statistical life'. Schelling noted the paradox that a community that was willing to pay hundreds of thousands of dollars to save the life of a child that fell down a mineshaft, might simultaneously be unwilling to pay tens of thousands of dollars on highway improvements that would save on average one life every year. Once a child is born, they are an 'identified life', and paying to reduce risks to their life is a valuable expense. However, before a child is born they are simply a 'statistical life', since they only exist in the future with some probability.

Nickerson and Solomon also note that:

...policymakers do not understand the magnitude of the tradeoffs involved, and either overestimate the importance of safety seats on car crash fatalities, and/or underestimate the effects on fertility.

No doubt about that. It isn't the sort of trade-off that would occur to your average policy-maker. However, as Nickerson and Solomon conclude:

The current tradeoff is particularly perverse, given the sheer magnitude by which the unintended consequences exceed the intended consequences.

[HT: Marginal Revolution]

Thursday, 8 October 2020

Recorded lectures and the 'laundry test'

Last month I wrote a post about new meta-analytic research that showed some positive effects of using video recordings as part of teaching, especially if they are supplementary to in-class learning. I noted towards the end of that post that:

The takeaway from this is that, at the minimum, once face-to-face teaching returns we should be routinely recording our existing lectures and making those recordings available to students. Teachers need to get over their fear that making recorded lectures available somehow makes students worse off, because it clearly is not the case.

However, some anxiety remains among teachers that recording lecture material would lead class attendance to fall. I know that some are even more worried about that now that students have had a taste of learning by video (although I'd be inclined to argue exactly the opposite case!). What can you do to make students want to come to class?

I had meant to follow up that earlier post, because I had recently read this pretty insightful article by Dan Levy. In the article, Levy talks about the 'laundry test':

Where I teach, online classes generally get recorded; students can watch the recorded videos if they cannot attend the live session. I recently asked a student how she decided whether to engage in the live class or watch the recording later. Her answer was revealing. She said, “When I am trying to decide, I ask myself, ‘Is this a class I could attend while folding my laundry?’ If the answer is yes, I watch the recording. If the answer is no, I attend the live session.”

While I think that, in general, we should design both synchronous and asynchronous experiences that students find so engaging that they cannot fold the laundry at the same time, I think the spirit of this question might help inform your decision of what to reserve for asynchronous learning.

While Levy is writing about teaching online, I believe the same principles apply to teaching face-to-face. If a lecture session is not interactive and the students could basically be sitting in class folding laundry, then it's probably time to reconsider your approach. I break my lectures up with exercises that make the students put into practice what they are learning immediately. I run short illustrative experiments or collect data from the class to illustrate points in my ECONS102 class. It would be difficult for students to participate in the exercises or experiments effectively and fold laundry at the same time. And it provides a clear value-added benefit over a static lecture recording (and that's why I was so dismayed at the decision not to have face-to-face lectures this trimester).

Anyway, Levy's article provides some great advice for those who are considering taking a blended learning approach, with lessons also for those who are not.


Monday, 5 October 2020

What's behind the decrease in support for free trade?

In my ECONS102 class, we cover international trade and globalisation, but we don't really go into the globalisation debate any more (a consequence of squeezing more cool content into the paper is that some things get squeezed out). However, we do still cover the arguments for and against free trade. And it would appear, based on recent experience, that the (increasingly populist) arguments against free trade are getting louder. A reasonable question, then, is: what is behind the decrease in support for free trade?

In a new article published in the European Journal of Political Economy (ungated earlier version here), Philipp Harms (Johannes Gutenberg University Mainz) and Jakob Schwab (German Development Institute) try to answer that question. They use data from the International Social Survey Programme (ISSP) waves in 2003 and 2013, i.e. before and after the Global Financial Crisis. The data they use covers 21 countries, and includes over 37,000 observations. The key variable is based on the answer to the following question:

“How much do you agree or disagree with the following statement? ‘[My country] should limit the import of foreign products in order to protect its national economy.’”

Respondents were asked to answer on a scale from “Agree strongly” (=1) to “Disagree strongly” (=5). We capture this answer in the variable IMP_PHIL, which takes a value of 1 if a respondent disagrees or strongly disagrees with the statement (i.e. if he or she gives the answer 4 or 5). Over the entire sample, this applies to roughly 40% of the population.
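The variable construction described above is easy to sketch: recode the 1-to-5 agreement scale into a binary indicator of support for trade. The responses below are invented for illustration:

```python
# 1 = "Agree strongly" (protectionist) ... 5 = "Disagree strongly" (pro-trade)
responses = [1, 2, 4, 5, 3, 4, 2, 5, 1, 4]

# IMP_PHIL = 1 if the respondent disagrees (4) or strongly disagrees (5)
# with limiting imports, i.e. supports international trade
imp_phil = [1 if r >= 4 else 0 for r in responses]
share_pro_trade = sum(imp_phil) / len(imp_phil)
print(share_pro_trade)  # 0.5 in this made-up sample
```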

So, given that 40% of people disagree or strongly disagree with that statement, there is substantial (but not majority) support for international trade in the sample. Harms and Schwab then use a regression model to find individual-level and country-level factors associated with support for international trade, and find that:

...a lower Age, higher education (Degree), a more successful career (WrkSup), as well as individual prosperity (RelIncome) induce respondents to support international trade, since all these features enable individuals to reap the benefits of globalization...

On top of these preconditions for economic success, a generally open attitude towards other countries (Cosmopol) is also positively correlated with the likelihood that an individual welcomes foreign goods imports... Moreover... in most economies, the average attitude towards international trade changed significantly between 2003 and 2013. More specifically, we observe that the average support for international trade decreased in twelve out of 21 countries, while it increased in six countries – interestingly, including the United Kingdom and the United States – and did not exhibit significant changes in three countries.

They then go on to tease out the factors associated with the change in support at the country level, and find that:

...a higher (lower) GDP growth rate significantly raised (reduced) support for international trade. The second variable we use to capture countries’ experience during the global financial crisis is the change in a country’s stock market index between its peak (usually June 2008) and its trough (usually March 2009). We expect larger collapses to drag down the support for trade, i.e. a positive sign of the variable StockMarket. The results... support this hypothesis. The third variable we used for Crisis-Experience... was the change in a country’s unemployment rate between 2008 and 2009... the coefficient of CrisisUnemp has the expected negative sign, but that the effect is not statistically significant. By contrast, the duration of the crisis (CrisisDuration) has a significantly negative effect... the change of a country’s Gini coefficient between 2003 and 2013 (ChangeGini, in percentage points) had a significantly negative effect on the support for international trade...

In other words, countries that generally had a worse experience of the Global Financial Crisis (lower GDP growth rate, larger falls in the stock market, and greater increases in inequality, but not changes in the unemployment rate) experienced greater reductions in support for international trade.

Finally, allowing the effects of various characteristics to change over time in their analysis, Harms and Schwab conclude that:

...our findings contradict the standard narrative that the increasing sentiment against international trade predominantly reflects the anger of those groups whose wages and jobs were negatively affected by international competition. By contrast, it is rather the eroding enthusiasm of the elites than the depression of the deprived, which contributed to the declining support for international trade: in 2013, youth, education and income were less likely to make individuals respond explicitly in favor of international trade than in 2003.

Those results are the most surprising aspect of the paper. As Harms and Schwab note, it contradicts the standard narrative.

It feels like there is more important work to be done in this space. In particular, I wouldn't be surprised if there was a common explanation for both the higher-inequality-lower-support relationship and the decline in elite support for trade. Hopefully, further research will help us understand this a bit more.