Sunday, 30 May 2021

Trump vs. the ruble

During his tenure as U.S. president, Donald Trump tweeted angrily about Russia on many occasions (to be fair, he directed angry tweets at lots of targets). It would be fair to wonder whether those tweets conveyed any useful information about the U.S.'s stance towards Russia, in relation to economic or other sanctions. In a recent article published in the Journal of Economic Behavior and Organization (ungated version here), Dmitriy Afanasyev (JSC Greenatom), Elena Fedorova (Financial University under the Government of the Russian Federation), and Svetlana Ledyaeva (Aalto University) look at the impact of President Trump's tweets on the Russian ruble exchange rate, over the period from October 2016 to August 2018.

Over that period, Trump tweeted 5548 times, of which 296 related to Russia. Afanasyev et al. used five different lexicons to code each tweet as positive, negative, or neutral, and tested seven different decay schemes (in terms of how the effect of a tweet fades over time). To overcome the problem of having lots of variables to test and determining which ones to include in their final model, they use an elastic net (which, for those of you who are pointy-headed, is a mix of LASSO regression and ridge regression - essentially a type of machine learning approach). Having established which variables to include in their model, they then move to a Markov regime-switching model, which is increasingly used in modelling time series where the researcher believes that there are multiple relationships between the variables (different regimes) that hold at different points in time.
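For the pointy-headed, the elastic net penalty that drives this variable selection is just a weighted mix of the LASSO (L1) and ridge (L2) penalties. Here is a minimal sketch in Python; the coefficients and weights below are hypothetical, for illustration only, and are not taken from the paper:

```python
def elastic_net_penalty(coefs, lam, alpha):
    """Elastic net penalty: a weighted mix of LASSO (L1) and ridge (L2).

    alpha = 1 gives pure LASSO, which can zero out weak variables entirely;
    alpha = 0 gives pure ridge, which only shrinks them towards zero.
    """
    l1 = sum(abs(b) for b in coefs)        # LASSO part
    l2 = sum(b * b for b in coefs) / 2     # ridge part
    return lam * (alpha * l1 + (1 - alpha) * l2)

# With hypothetical coefficients on, say, oil price and tweet sentiment:
penalty = elastic_net_penalty([0.8, -0.1], lam=1.0, alpha=0.5)
```

The L1 part of the penalty is what pushes some coefficients exactly to zero, which is why an elastic net can be used to discard most of the candidate sentiment measures and decay schemes rather than merely shrinking them.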

Overall, Afanasyev et al. find that:

...oil price (the only economic variable (fundamental) chosen by elastic net) remains the main long-term determinant of ruble exchange rate... The impact of Trump’s tweets’ sentiment tends to be episodic and short-term. Significant toughening of Trump’s Twitter Russia-related negative rhetoric can lead to short-term (around 3 days) "abruptions" in the process of ruble exchange rate’s formation based on oil price causing significant depreciation of the Russian ruble.

In other words, the Russian ruble exchange rate is mostly determined by the oil price (since oil is such a large component of the Russian economy), while Trump's negative tweets caused short-term deviations in the exchange rate. However, I wasn't entirely convinced by this paper. The throw-everything-at-the-wall-and-see-what-sticks approach is dodgy, regardless of whether you use a fancy machine learning approach to help with variable selection. The results were only statistically significant for the variables chosen in the final model, and we don't know whether the other measures of sentiment (as noted above, there were five) or different decay schemes (there were seven) would have led to similar results. Those sorts of robustness checks are important to include, if you want readers to find your results convincing.

Based on what we do know from the paper, Trump's negative tweets only affected the ruble some of the time, and there isn't a good explanation for why they affected the ruble at those times and not others (maybe the time of day mattered?). The research question is interesting and potentially important - I hope other researchers are looking at this as well.

Saturday, 29 May 2021

The impact of epidemics on generalised trust

A few weeks ago, the Waikato Economics Discussion Group looked at this article by Arnstein Aassve, Guido Alfani, Francesco Gandolfi (all Bocconi University), and Marco Le Moglie (Catholic University of the Sacred Heart), published in the journal Health Economics (ungated earlier version here). Aassve et al. looked at a particularly timely topic - the impact of the Spanish Flu (1918-1919) on generalised trust.

This is a difficult topic to investigate, because there are no measures of generalised trust available for the 1918-1919 period (or immediately before and after, which is what you'd really want). Instead, Aassve et al. use data from the U.S. General Social Survey from 1978-2018 to infer measures of social trust for earlier generations. Specifically:

Survey respondents were also asked about their country of ethnic origin and a series of questions regarding their migration history: whether they were born in the United States or not, whether their mother and father were born in the United States and the number of grandparents born outside the country. Using this information, we group respondents on the basis of their country of ethnic origin and categorize them in three waves of immigration: second-generation Americans (i.e., people born in the United States with at least one parent and all the grandparents born abroad), third‐generation Americans (i.e., people with at least two immigrant grandparents and both parents born in the United States) and fourth‐generation Americans (i.e., people with more than two grandparents born in the United States and both parents born in the United States). We exploit different waves of immigration to measure the intergenerational path of social capital transmission by people migrated before and after the spread of the Spanish flu (i.e., 1918)...

It's quite an ingenious method, although it relies on a fairly strong assumption of intergenerational transmission of social trust. They have measures of trust from the GSS for 18 origin countries (Austria, Canada, Denmark, Finland, France, Germany, Hungary, Ireland, Italy, Mexico, the Netherlands, Norway, Portugal, Russia, Spain, Sweden, Switzerland, and the United Kingdom). Comparing levels of trust between countries at different levels of flu mortality, they find:

...a negative and significant effect of the Spanish Flu on trust. An increase in influenza mortality of one death per thousand resulted in a 1.4 percentage points decrease in trust.
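A quick back-of-envelope check of what that effect size implies, using the range of mortality rates reported in their data (roughly 2 to 20 deaths per thousand):

```python
effect_per_death = 1.4   # pp decrease in trust per flu death per thousand
low, high = 2, 20        # approximate range of mortality rates in the data

trust_decline = (high - low) * effect_per_death
# (20 - 2) * 1.4 = 25.2 percentage points, bottom to top of the distribution
```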

Since mortality rates ranged between about 2 deaths per thousand and 20 deaths per thousand, moving from the bottom to the top of the distribution would decrease trust by around 25 percentage points, which is quite meaningful. Aassve et al. then go on to investigate potential mechanisms underlying their results:

A narrower resonance of the war within neutral countries, together with the specific lack of war censorship on media, might have led their respective citizens to internalize the extent and severity of the pandemic, and thus altered their social interactions accordingly... Consistently with this hypothesis, we do find a stronger reduction in social trust for the descendants of people migrating from countries heavily hit by the epidemic and that remained neutral during the war.

Of relevance to the ongoing coronavirus pandemic, they conclude that:

...if, during the Spanish Flu, the failure of government institutions and national health care services to contain the crisis led civil societies to experience a serious breakdown due to the climate of generalized suspicion (a situation further exacerbated by mistakes in communication, also due to war censorship) and this increased the persistent damage to social capital, then governments facing COVID‐19 today might have an additional reason to opt for strong policies of pandemic containment. While these are undoubtedly costly in the short run, it might be that they will contribute to minimize some economic costs to be paid in the long run.

However, as noted in the EDG meeting where we discussed this paper, you could also interpret the results as suggesting that countries that did not heavily censor their media during the Spanish Flu pandemic suffered greater losses in generalised trust. On that reading, censoring the media has a protective effect, and governments that want to maintain high levels of trust should censor their media. Before we conclude that Russia or China has a better approach to maintaining the generalised trust of their populations, though, I think we need to see whether these results replicate for the current crisis.

Tuesday, 25 May 2021

The relationship between education and earnings for sex workers

In the basic supply and demand model of the labour market (as I teach in my ECONS102 class), workers with more education (or human capital) are more productive for their employer, and so they get paid a higher wage. That's because more productive workers generate a higher value of the marginal product of labour (the value generated for the employer), which determines the wage that the employer is willing to pay. This would also hold (perhaps even more strongly) in the case of self-employed workers. Search models of the labour market (which I teach in my ECONS101 class) also suggest that workers with more education (or human capital) get paid a higher wage. In this case, it is because more educated (and productive) workers have a better outside option, which gives them slightly more bargaining power in negotiating their wage with the employer.

Is it generally the case that more educated workers have higher earnings? Are there occupations where this doesn't hold? You might cite the case of jobs where luck is a big determinant of earnings, such as hedge fund managers. But what about unskilled jobs, where it is plausible that education won't make much of a difference to earnings (at least not within the occupation)?

One occupation that might fit into the latter case is sex work (although, as I note a bit later, it may be incorrect to label it unskilled, and not for the reasons you may think). However, there is little analysis of sex workers' earnings and education, because of a paucity of good data. One exception is this 2017 article by Scott Cunningham (Baylor University) and Todd Kendall (Compass Lexecon), published in the journal Review of Economics of the Household (ungated earlier version here). Cunningham and Kendall use survey data on 685 sex workers from the U.S. in 2008-2009, and look at the relationship between education and earnings.

However, first they outline a theoretical model that shows that the relationship between education and earnings from sex work is ambiguous. This ambiguity stems from the marginal disutility of sex work. All work generates disutility (negative utility) to some extent, since working more involves giving up some leisure time, and workers would generally prefer to have more leisure (so, working more makes them worse off on one dimension, which they trade off for higher income). An important question is how much disutility is associated with sex work and, in this case, whether that disutility differs by education. Cunningham and Kendall note that:

Human capital may reduce the disutility associated with prostitution for several reasons. Better-educated women may be preferred by higher-quality clients who have lower disease and violence risks. In addition, better-educated women may be able to reduce arrest, violence, and disease risks by engaging in greater and more sophisticated screening of clients.

So, because sex work is lower risk for more educated sex workers, perhaps the disutility of sex work is lower for them than for less educated sex workers. Cunningham and Kendall then go on to show, theoretically, that:

...education has three separate potential effects on the propensity to engage in prostitution. First, if education is associated with higher legitimate market wages... and/or higher monogamous coupling returns... then education reduces prostitution participation, ceteris paribus. Second, if education is associated with lower marginal disutility from legitimate employment... and/or monogamous coupling... then education further reduces prostitution participation, ceteris paribus. Finally, if education is associated with lower marginal disutility from prostitution... then education increases prostitution entry, ceteris paribus.

And also:

...if education reduces the marginal disutility from prostitution work... and has no material effect on the marginal utility of consumption or leisure, then, conditional on participation, educated prostitutes will work more hours than those with less education.

So, based on the theoretical model, education either reduces engagement in sex work and the number of hours that sex workers will work (if education is not associated with lower marginal disutility from sex work), or education has an ambiguous effect on engagement in sex work but increases the number of hours that sex workers will work (if education is associated with lower marginal disutility from sex work).

Cunningham and Kendall then analyse their survey data, comparing college-educated and non-college educated sex workers, and find that:

...college-educated workers appear to work roughly 13.7% fewer weeks in the prostitution market... [and] conditional on working, college-educated workers see nearly 25% more clients.

...college-educated sex workers earned approximately 33% more in the last week than those with less education, conditional on working, but approximately the same amount unconditionally, accounting for the fact that they are less likely to work at all.

Those results seem to support a lower marginal disutility from sex work for sex workers with more education. Cunningham and Kendall then investigate why. Focusing on data from longer sessions with clients (which are more common among more educated sex workers), they find that:

...college completion is associated with a roughly 15% wage premium for these longer sessions. In other words, while college is not associated with statistically significant wage effects on average it is so for the longest sessions.

This leads them to conclude that:

These longer sessions... likely involve bundling of sexual services with non-sexual services such as companionship, for which college completion may be associated with higher productivity. Because sexual favors presumably form a smaller share of the total work time in these longer sessions, and because... college-educated workers are able to provide longer sessions, these results provide one means by which “job amenities” may be better, and therefore, the disutility of prostitution labor supply lower, for college-educated sex workers.

Cunningham and Kendall also note that:

...college-educated providers appear to be able to attract 33.5% more regulars... Regular clients generally involve lower violence and arrest risk since they are already known; moreover, sex workers may be able to form warmer, less “transactional,” relationships with regulars that may mitigate some of the disutility associated with prostitution labor supply. 

Taken altogether, these results suggest that the marginal disutility of sex work is lower for more educated sex workers. They also demonstrate that there are essentially (at least) two market segments here. Highly educated sex workers provide a meaningfully different bundle of services, of which sexual acts are only a part, than do less educated sex workers. This isn't an unskilled labour market for all sex workers.

Overall, this research demonstrates that education does increase earnings, even in one occupation where a priori you might think that it wouldn't.

Saturday, 22 May 2021

Incentivising coronavirus vaccination

A couple of weeks ago, my ECONS102 class covered externalities. An externality is the uncompensated impact of the actions of one person on a bystander. Externalities can be negative (and make the bystander worse off), or positive (and make the bystander better off). One of the examples I use for a positive externality is vaccines. A person who gets vaccinated makes themselves better off (by reducing their chance of getting sick), but also makes others better off (because there is at least one fewer person who they can get sick from) - that's a positive externality.

The problem with positive externalities is that the market, left on its own, will not ensure that enough is produced or consumed. That's because the market participants don't have an incentive to take into account the benefits that their actions confer on others. That market will produce too little, compared to the quantity that maximises societal welfare. In the case of vaccines, too few people would get vaccinated.

There needs to be some mechanism to encourage more people to purchase goods with positive externalities. One way is to subsidise them (for example, see this post about subsidising education). The subsidy effectively increases the benefits of selling the good or service (if it is paid to the sellers), or reduces the cost of the good or service (if it is paid to the buyers). Either way, it increases the amount that is produced and consumed, and can ensure the quantity is increased to the socially optimal quantity.
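To make the subsidy mechanism concrete, here is a small numerical sketch. The demand, supply, and external-benefit numbers below are invented for illustration: with a marginal external benefit of $10 per unit, a $10-per-unit Pigouvian subsidy moves the market to the socially optimal quantity.

```python
# Hypothetical linear market: demand MPB = 100 - Q, supply MC = 20 + Q,
# plus an external benefit of $10 per unit (so MSB = 110 - Q).
external_benefit = 10

# Market equilibrium ignores the externality: 100 - Q = 20 + Q
q_market = (100 - 20) / 2                        # Q = 40

# Social optimum counts it: (100 + 10) - Q = 20 + Q
q_social = (100 + external_benefit - 20) / 2     # Q = 45

# A subsidy equal to the external benefit effectively shifts demand
# (or supply) by $10 per unit, so the market lands on the social optimum:
q_subsidised = (100 + external_benefit - 20) / 2
```

The market under-produces by five units in this example, and the subsidy exactly closes the gap because it is set equal to the marginal external benefit.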

Alternatively, the government could find some other way to incentivise more production and consumption. Right now, we're in a situation where governments want to roll out coronavirus vaccines in the face of a substantial amount of vaccine hesitancy. Some governments have started to incentivise vaccines through more than just subsidising them and making them available for free. For example, the New York Times reported last month that:

West Virginia will give $100 savings bonds to 16- to 35-year-olds who get a Covid-19 vaccine, Gov. Jim Justice said on Monday.

There are roughly 380,000 West Virginians in that age group, many of whom have already gotten at least one shot, but Mr. Justice said he hoped the money would motivate the rest to get inoculated, as “they’re not taking the vaccines as fast as we’d like them to take them.”

Some people worry that giving monetary incentives reduces intrinsic motivation. Indeed, this famous research by Uri Gneezy and Aldo Rustichini (ungated version here) showed that fining parents for picking up their children late from a childcare centre encouraged more late pickups. When the moral incentive to pick up on time was replaced by a financial incentive, the incentive turned out to be weaker. The corollary for vaccines is that paying people to get vaccinated could encourage fewer of them to do so.

However, to counter that argument, UCLA has run some experiments showing that monetary incentives are effective, as the New York Times reported a couple of weeks ago:

In recent randomized survey experiments by the U.C.L.A. Covid-19 Health and Politics Project, two seemingly strong incentives have emerged.

Roughly a third of the unvaccinated population said a cash payment would make them more likely to get a shot...

Similarly large increases in willingness to take vaccines emerged for those who were asked about getting a vaccine if doing so meant they wouldn’t need to wear a mask or social-distance in public, compared with a group that was told it would still have to do those things.

So, perhaps we don't need to worry so much about whether the monetary incentive would be effective. And, perhaps we wouldn't have to pay it to everyone. CBS News reported yesterday:

Health officials in Ohio have reported a surge in the amount of people getting their first COVID-19 vaccination shots, a week after Ohio Governor Mike DeWine announced the $5 million "Vax-a-Million" lottery.

Just days after DeWine said the state would award five vaccinated residents $1 million each in order to raise vaccination percentages, the Ohio Department of Health reported more than 113,000 people received their first dose of the vaccine.

Based on preliminary data, the department said the recent period showed a 53% week-to-week increase (May 13 to 18) compared to the time period before the announcement, where 74,000 people received their first dose (May 6 to 11). 

"We are seeing increasing numbers in all age groups, except those 80 and older, who are highly vaccinated already," said Ohio Dept. of Health director Stephanie McCloud. "Although the rate among that group is decreasing, it is doing so at a less rapid pace, demonstrating some positive impact even in that group."

Ohio residents 18 and older who have received at least one dose of the vaccine can enter to win one of the five $1 million prizes. Ohioans between the age of 12 and 17 who have received at least one dose of the COVID-19 vaccine can enter to win one of five four-year, full-ride scholarships to any state college or university in the state. So far, approximately one million entries have been collected, according to Ohio Lottery and Ohio Department of Health.

Gamifying vaccination by attaching it to a lottery is kind of inspired. If the people who are least risk averse are also those least likely to get vaccinated, and those most likely to play the lottery, then this could be incredibly effective in increasing vaccination rates. People consistently overweight small probabilities (this is one of the key features of what is called prospect theory), like the chance of winning the lottery. So, the government wouldn't necessarily have to make the lottery prize pool large enough to capture all of the social benefits of vaccination, making this solution more cost effective than paying everyone who got vaccinated. For example, paying 100,000 people $100 each to get vaccinated costs $10 million. But the government could possibly offer five prizes of $1 million each and get the same outcome of 100,000 people getting vaccinated for half the total cost.
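The cost arithmetic behind that comparison, sketched out:

```python
# Paying a flat $100 to each of 100,000 newly vaccinated people:
flat_payment_cost = 100_000 * 100          # $10,000,000

# Five $1 million lottery prizes (the Vax-a-Million design):
lottery_cost = 5 * 1_000_000               # $5,000,000

saving = flat_payment_cost - lottery_cost  # $5,000,000, i.e. half the cost
```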

Overall, New Zealand's approach to vaccination is slow and steady. We're ahead of target (see the New Zealand Herald's Vaccine Tracker), but there is a fair amount of concern about whether we will achieve the overall target (e.g. see comments here or here). There seems to be plenty of demand for vaccines right now, but if things start to slow up later, perhaps we need our own vaccine lottery?

[HT: Marginal Revolution for the NY Times article on incentives; The Dangerous Economist for the article on Ohio's lottery]


Friday, 21 May 2021

Book review: Why Superman Doesn't Take Over the World

I just finished reading Brian O'Roark's book, Why Superman Doesn't Take Over the World, which I thought would be a neat intersection of two of my interests, economics and super heroes. Indeed, I had been looking forward to this book since it arrived just after we came out of the last lockdown last year. However, I was a little disappointed - not because I had set my expectations high, but because the book feels like it very quickly ran out of good ideas.

O'Roark uses examples from comic book super heroes (with a smattering of references to television and the movies) to illustrate economic concepts. Some parts of it are excellent. For example, the chapter that uses Batman and Robin to illustrate comparative advantage and the gains from trade was particularly good, as was the chapter that uses Avengers: Civil War to illustrate game theory and the prisoners' dilemma (also using Mick Rory and Leonard Snart as a second example, which is quite fitting). However, much of the rest of the book fell a bit flat. The comic book references were fine in themselves, as was the economics. However, the two didn't really gel together and the relationship felt quite strained. The final chapter, which attempts to use the concept of utility maximisation to identify who is the greatest super hero, is probably best forgotten (and not because O'Roark arrives at the 'wrong' conclusion, as he doesn't even end up answering the question).

As I noted in this 2017 post, I've often toyed with the idea of writing a book about the economics of the reality TV show Survivor. Aside from the time commitment required to write a book, the main thing that has stopped me from attempting it is the worry that, while there is certainly some excellent material linking Survivor to some fairly fundamental economic concepts and theories, there may not be enough for a book-length treatment. This book suffers from exactly that problem.

If you're into comic book super heroes and economics, there's something to like about a book that attempts to integrate the two together. However, you could probably file this one away with the Green Lantern movie.

Tuesday, 18 May 2021

The gender gap in academic economics in New Zealand

Last year, this article by Ann Brower and Alex James (both University of Canterbury), published in the journal PLoS ONE, got a lot of press (see here or here, for example). Brower and James used data from the Tertiary Education Commission from the Performance Based Research Fund (PBRF) assessment rounds from 2003-2012, along with details on researchers' gender and academic rank (lecturer, senior lecturer, associate professor, professor), to infer things about the gender gap in academia in New Zealand. They found that:

...women’s odds of being ranked, and paid, as Professor or Associate Professor, (i.e. in the professoriate) are lower than men’s... However, women have lower research scores... and the average woman is 1.78 years younger than the average man...

Neither controlling separately for recent research performance with the 2012 research score, nor age using logistic regression... diminishes the gender odds ratio of being in the professoriate...

Controlling for gender, age, 2012 research score, research field, and university together only decreases the gender odds ratio of being in the professoriate to 2.2...

In other words, controlling for age and research performance, a man's odds of being an associate professor or professor are 2.2 times those of a comparable woman. Brower and James also extended their analysis to differences in pay, and showed that, similarly, the pay difference wasn't explained by research performance. These are important results, although they tell us little about the underlying reasons why the gap exists.
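To see what an odds ratio of 2.2 means in practice, here is a small worked example. The 20 percent baseline below is invented for illustration, and is not a figure from the paper:

```python
def odds(p):
    """Convert a probability into odds: p / (1 - p)."""
    return p / (1 - p)

# Suppose 20% of women with a given age and research score are in the
# professoriate. An odds ratio of 2.2 means a comparable man's odds
# are 2.2 times higher:
p_women = 0.20
odds_men = 2.2 * odds(p_women)      # 2.2 * 0.25 = 0.55
p_men = odds_men / (1 + odds_men)   # about 0.355, i.e. roughly 35.5%
```

Note that an odds ratio is not a ratio of probabilities: here the probability gap (20% versus about 35.5%) is smaller than the 2.2 ratio might suggest at first glance.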

Anyway, I didn't read the article at the time, but Thomas Lumley at StatsChat made some useful comments:

The big limitation of any analysis of this sort is the quality of the performance data.  If performance is measured poorly, then even if it really does completely explain the outcomes, it will look as if there’s a unexplained gap.  The point of this paper is that PBRF is quite a good measurement of research performance: assessed by scientists in each field, by panels convened with at least some attention to gender representation, using individual, up-to-date information.  If you believed that PBRF was pretty random and unreliable, you wouldn’t be impressed by these analyses: if PBRF scores don’t describe research performance well, they can’t explain its effect on pay and promotion well.

There could be bias in the other direction, too.  Suppose PBRF were biased in favour of men, and promotions were biased in favour of men in exactly the same way.  Adjusting for PBRF would then completely reproduce the bias in promotion, and make it look as if pay was completely fair.

Aside from those criticisms, I wondered about the pay data. It is true that pay for Lecturers and Senior Lecturers is fairly easy to infer from the collective employment agreements, but that isn't the case for Professors, who, as far as I can tell, have an incredible diversity of pay rates. Measurement error on both sides of the equation creates some serious problems for the pay analysis, so let's put that aside, and focus on the analysis of academic rank.

Having finally read the article myself this week, this bit in particular caught my eye:

Breaking field into 42 subject areas shows variability amongst areas... When predicting the probability of being in the professoriate, most have a gender odds ratio above 2; in only 9 subject areas are women advantaged (i.e. have an odds ratio less than 1)...

Like me, you're probably now wondering what the nine subject areas are where Brower and James found an advantage for women. I dug into the supplementary materials (which are available at the journal article link), and I can reveal that the nine subject areas are: Nursing; Pharmacy; Economics; Pure and Applied Mathematics; Architecture, Design, Planning, and Surveying; Engineering and Technology; Anthropology and Archaeology; History, History of Art, Classics and Curatorial Studies; and Communications, Journalism and Media Studies.

That list should certainly give some pause for thought, particularly economics. Given the fairly-well-established gender gap in economics (see yesterday's post, and the long list of links at the end of it, for example), how plausible are these results? What about for mathematics and engineering?

Taking a step back, the results say that women are advantaged relative to men in those nine subject areas (and the odds ratios for all nine subjects are statistically significant), controlling for age and research performance. You could still observe an unconditional difference in the proportion of women who are part of the professoriate in those subjects, if women in those subjects are disproportionately younger and/or have lower research performance than men. I don't know about mathematics or engineering, but that could possibly be the case for economics. As I noted in this 2017 post:

The top 25% ranking for New Zealand economists can be found here. Of the 66 economists in that list, only five (that's 7.6% for those of you keeping score) are women (#26 Suzi Kerr; #44 Trinh Le; #58 Anna Strutt; #59 Rhema Vaithianathan; and #61 Hatice Ozer-Balli).

Updating that list as of today, there are 74 economists in the New Zealand top 25% ranking, of which only seven are women (#25 Suzi Kerr; #43 Trinh Le; #53 Hatice Ozer Balli; #66 Gail Pacheco; #68 Anna Strutt; #70 Susan Olivia; and #73 Asha Sundaram). That's still less than 10 percent women - economics is not exactly covering itself in gender-equality glory in terms of research performance (although I would be remiss if I didn't point out that two of the seven are at Waikato, and one of the others is a Waikato PhD graduate). At least there are a few new female economists' names appearing in the list.

So, research performance among New Zealand women economists is much lower than for men. However, they don't appear to be disproportionately high in academic rank. Of the seven women in that top 25% list, Suzi Kerr and Trinh Le are at Motu (so not in Brower and James' dataset), Gail Pacheco and Anna Strutt are Professors, and Hatice Ozer Balli is an Associate Professor. Susan Olivia and Asha Sundaram are both Senior Lecturers. The proportion of men in the top 25% list who are Professors or Associate Professors seems to be much higher than for women. It beats me how economics ends up such an outlier in Brower and James' analysis. If anything, I would have expected economics to be an outlier in the other direction.

Finally, Brower and James conclude with:

Taken singly, the internal logic of each hiring or promotion decision might cohere. But taken together, they reveal a strong pattern in which a man’s odds of being ranked associate or full professor are more than double those of a woman with equivalent recent research score and age.

Except for some pretty surprising outliers.

Monday, 17 May 2021

UWE and the gender gap in economics (results pending?)

I've written a number of posts on the gender gap in economics (see the list of links at the end of this post for examples). Most of those posts review or outline the latest research results on the gender gap. To me, the most interesting research papers are those that use some experiment or quasi-experiment to try and address the gender gap, such as here and here. I hadn't realised at the time, but those two pieces of research were linked as part of the same larger initiative, Undergraduate Women in Economics (UWE), run by Tatyana Avilova (Columbia University) and Claudia Goldin (Harvard), and funded by the Alfred P. Sloan Foundation.

UWE is described in more detail in this 2018 NBER Working Paper by Avilova and Goldin. The paper first outlines the current state of the gender gap (illustrated using data from an unidentified college):

The Undergraduate Women in Economics (UWE) project seeks to uncover why women do not major in economics to the same degree as men and what can be done to change that...

Women who take Principles have a much higher probability of majoring in the subject if they obtain a high grade. That is not true for males, who major in economics almost independent of their grade in Principles...

As in most other institutions, the courses that follow Principles for the major at Adams are the intermediate theory courses and econometrics. There is no differential fall off by sex after students take these courses. The prime moments where female students relative to male students decide to major in economics are at the very start of their undergraduate life and just after taking Principles.

I think that accords somewhat with our experience at Waikato. However, we do need to account for what students are intending to major in when they begin their studies. At Waikato, students now have to elect their majors at the start of their degree (but can change them later). In my experience, at the end of ECONS101 (which is business economics, rather than principles of economics), more female students are going on to major in accounting or marketing, rather than economics. There is a certain inertia when it comes to students' choice of major. If anything though, I think we gain additional female students at the end of ECONS101, relative to their initial choice of majors.

The paper then goes on to describe a randomised controlled trial (RCT) that UWE is undertaking, in order to determine what initiatives are most successful at reducing the gender gap. The initiatives that Avilova and Goldin identify to include in the RCT are:

1. Better Information: These interventions are to provide more accurate information about the application of economics and career paths open to economics majors. Interventions include informational sessions at the start of the academic year, having diverse speakers at events, and ensuring the presence of at least one female adviser.

2. Mentoring and Role Models: The intent is to create networks among students and to show support for their decision to major in the field. Potential interventions include mentoring freshmen and sophomores by upper-class students, providing more guidance to students in finding summer jobs and RA-ships in economics, organizing faculty-student lunches, and producing videos about the department and its students...

3. Instructional Content and Presentation Style: This category is meant to improve beginning economics courses and make them more relevant to a wider range of students. Examples include using more evidence-based material in gateway courses, and incorporating projects, such as those in the local community, into beginning and upper-level courses to allow students to apply their knowledge to current issues.

Note that the results described in the other two papers linked above relate to better information and mentoring. So, we already have some results. The UWE RCT was supposed to report back in 2020, when the first cohort of students (having started in 2015/16) completed their studies. There is no update on the UWE website on the status of the RCT. I suspect that the pandemic has created some havoc with the collection and interpretation of data. Hopefully, we'll hear something more from them soon!

Read more:

Saturday, 15 May 2021

The case for waiving patent protection for coronavirus vaccines is weak

The week before last, my ECONS102 class covered intellectual property rights. So, the unfolding story of the US announcing its support for waiving patent protection for coronavirus vaccines is timely. As the New Zealand Herald reported:

The United States is throwing its support behind efforts to waive intellectual property protections for Covid-19 vaccines in an effort to speed the end of the pandemic.

US trade representative Katherine Tai announced the Government's position amid World Trade Organisation talks over easing rules to enable more countries to produce more of the life-saving vaccines.

"The Administration believes strongly in intellectual property protections, but in service of ending this pandemic, supports the waiver of those protections for Covid-19 vaccines," Tai said.

But she cautioned that it would take time to reach the required global "consensus" to waive the protections under WTO rules, and US officials said it would not have an immediate effect on the global supply of Covid-19 shots.

The announcement has generated a lot of debate. Before we get to that though, let's review the arguments for and against strong protection of intellectual property rights. The 2018 Nobel Prize winner William Nordhaus outlined the trade-off inherent in intellectual property rights. Strong protection of intellectual property rights provides an incentive for investment in the creation or development of new intellectual property, but this also provides a limited monopoly to the holder of the intellectual property rights (patents are one example, but so are trademarks, which I discussed in this 2018 post). The monopoly that strong intellectual property rights create leads to a higher price for the goods or services derived from the intellectual property, and under-consumption (relative to the welfare-maximising quantity of consumption). Weak protection of intellectual property rights allows anyone to make use of the intellectual property, but reduces the incentive to create it in the first place. This is the trade-off: strong protection of intellectual property rights leads to under-consumption, but weak protection leads to under-investment.

By waiving the intellectual property rights on patented coronavirus vaccines, the US would move the needle from strong protection to weak protection. In theory, that would lower the price of vaccines and increase the quantity consumed. However, that assumes that the pharmaceutical firms are deriving monopoly profits from the vaccines. Pfizer is reportedly making billions of dollars from vaccine sales, and no doubt the other pharmaceutical firms are doing likewise, but are they really profit maximising here? Remember that coronavirus vaccine sales are being made in response to advance market commitments - governments lined up to guarantee future vaccine purchases at an agreed price before any vaccine had even been approved. Although the pharmaceutical firms had a lot of market power, it seems unlikely that they were really exploiting that power in the face of a high degree of public scrutiny (they're not all being run by Martin Shkreli, after all).

However, as most commentators on the announcement have noted, the assumption that pharmaceutical firms are restricting the supply of vaccines in order to raise the price (which is what a monopoly firm would do in order to maximise profits) doesn't stand up to scrutiny. As Alex Tabarrok noted:

Patents are not the problem. All of the vaccine manufacturers are trying to increase supply as quickly as possible. Billions of doses are being produced–more than ever before in the history of the world. Licenses are widely available. AstraZeneca have licensed their vaccine for production with manufactures around the world, including in India, Brazil, Mexico, Argentina, China and South Africa. J&J’s vaccine has been licensed for production by multiple firms in the United States as well as with firms in Spain, South Africa and France. Sputnik has been licensed for production by firms in India, China, South Korea, Brazil and pending EMA approval with firms in Germany and France. Sinopharm has been licensed in the UAE, Egypt and Bangladesh. Novavax has licensed its vaccine for production in South Korea, India, and Japan and it is desperate to find other licensees...

That doesn't sound like firms that are trying to restrict supply. At least, that's the outward impression one gets. The vaccine supply chain is complicated, and has lots of moving parts. Derek Lowe has written about where the bottlenecks in the vaccine supply chain really are. He is worth quoting at length:

I’ve gone over these other problems before, but here’s a brief summary of those – not in any order, because it’s difficult to rank them and those ranks change. An obvious first problem is hardware: you need specific sorts of cell culture tanks for the adenovirus vaccines, and the right kind of filtration apparatus for both the mRNA and adenovirus ones. You also need specialized mixing equipment for the formation of the mRNA lipid nanoparticles. A good proportion of the world’s supply of such hardware is already producing the vaccines, to the best of my knowledge. Second, you need some key consumable equipment to go along with the hardware. Cell culture bags have been a limiting step for the Novavax subunit vaccine, as have the actual filtration membranes needed for it and others. These are not in short supply because of patents, and waiving vaccine patents will not make them appear. Third, you need some key reagents. Among others, there’s an “end-capping” enzyme that has been a supply constraint, and there are the lipids needed for the mRNA nanoparticles, for those two vaccines. Those lipids are indeed proprietary, but their synthesis is also subject to physical constraints that have nothing to do with patent rights, such as the availability of the ultimate starting materials. Supplies have been increasing via the tried and true method of offering people money to make more, but switching over equipment and getting the synthesis to work within acceptable QC is not as fast a process as you might imagine. Fourth, for all these processes, there is a shortage of actual people to make the tech transfer work. For most reasonably complicated processes, it helps a great deal to have experienced people come out and troubleshoot, because the number of tiny things that can go wrong is not easy to quantify. Moderna, for one, has said that a limiting factor in their tech-transfer efforts is that they simply do not have enough trained people to go around. 
And keep in mind that these all have to do with producing a stream of liquid vaccine solution – but you need what the industry calls “fill-and-finish” capacity to deal with it after that. Filling and capping sterile vials for injection is a specialized business and the great majority of large-scale capacity is already being used for the existing vaccines. Time and money will fix that, and has been, but waiving vaccine patents won’t.

Eric Crampton made some related points here. I'm not a huge defender of patents. I talk at length in my ECONS102 class about the problems they generate. But, for once at least, it is likely that patents are not the problem here. Similarly, pharmaceutical firms have a lot to answer for. Unlike their early approach to AIDS drugs though, it doesn't seem like they are the problem either. In fact, for once the pharmaceutical firms may actually be primarily focusing on providing social good (no doubt with a selfish eye on the good publicity that comes from being a successful vaccine manufacturer that 'saved the world').

We should also be considering the long-term incentives here. If governments squash intellectual property rights early following this pandemic, then that reduces the incentive for pharmaceutical firms to generate vaccines in the event of future pandemics (remember the trade-off between strong protection and weak protection of intellectual property). If governments are intent on this path, then they really need to consider some alternative way of maintaining the incentive to innovate. One option would be a fund like the Health Impact Fund, but for contingencies such as future pandemics. The WHO (or some other body) could administer the fund, which would be built up by contributions from governments. In the event of a pandemic, pharmaceutical firms could be paid out of the fund, in exchange for making their vaccines available in the public domain. The challenge of course is determining the right amount to pay out of the fund. Perhaps the fund could be combined with a predetermined advance market commitment that would apply to any pandemic. It's a difficult question, with many aspects to be worked through, but definitely worth considering.

[Update:] The Economist also raises some good points (which I have also seen elsewhere):

We believe that Mr Biden is wrong. A waiver may signal that his administration cares about the world, but it is at best an empty gesture and at worst a cynical one.

A waiver will do nothing to fill the urgent shortfall of doses in 2021. The head of the World Trade Organisation, the forum where it will be thrashed out, warns there may be no vote until December. Technology transfer would take six months or so to complete even if it started today. With the new mRNA vaccines made by Pfizer and Moderna, it may take longer. Supposing the tech transfer was faster than that, experienced vaccine-makers would be unavailable for hire and makers could not obtain inputs from suppliers whose order books are already bursting. Pfizer’s vaccine requires 280 inputs from suppliers in 19 countries. No firm can recreate that in a hurry.

In any case, vaccine-makers do not appear to be hoarding their technology—otherwise output would not be increasing so fast. They have struck 214 technology-transfer agreements, an unprecedented number. They are not price-gouging: money is not the constraint on vaccination. Poor countries are not being priced out of the market: their vaccines are coming through covax, a global distribution scheme funded by donors.

Thursday, 13 May 2021

The effect of physical traits on the perception of scientists

A couple of months ago, I posted about the beauty premium for economists. One of my students then pointed me to this Miami Herald article on good-looking scientists. It refers to this 2017 article (open access) by Ana Gheorghiu, Mitchell Callan (both University of Essex), and William Skylark (University of Cambridge), published in the Proceedings of the National Academy of Sciences. They investigated whether appearances affect science communication for biological sciences and physics, across a number of studies.

In the first two studies, Gheorghiu et al. looked at how perceived social traits (e.g. how 'competent' a person looks, how 'moral' or trustworthy they look, and how 'sociable' or likeable they look) and attractiveness affected the general public's perceptions of the quality of the scientists' work and interest in the research. They found that:

People were more interested in learning about the work of scientists who were physically attractive and who appeared competent and moral, with only a weak positive effect of apparent sociability...

Judgments of whether a scientist does high-quality work were positively associated with his or her apparent competence and morality, but negatively related to both attractiveness and perceived sociability.

In sum, scientists who appear competent, moral, and attractive are more likely to garner interest in their work; those who appear competent and moral but who are relatively unattractive and apparently unsociable create a stronger impression of doing high-quality research.

The other studies built on these initial results, but I don't think they added much to the headline result, which is that the public is more interested in hearing about the research of attractive scientists, but has less confidence in the quality of that work. That suggests a couple of implications to me. First, to the extent that interest in research from the general public drives support for research, funding, or promotions, that would tend to lead to a positive beauty premium for scientists (as was found for economists). On the other hand, negative perceptions of research quality for more attractive scientists would suggest a negative beauty premium. Given that the beauty premium is a fairly robust result in labour economics, that would suggest that perhaps the general public's interest matters more than their perceptions of quality. However, that seems rather unlikely, as it is other scientists and academic administrators who ultimately determine the success of an individual scientist.

That brings me to my second implication. This research doesn't currently link well to the existing literature on beauty premiums (and, to be fair, that wasn't its primary purpose). The general public's perceptions of science and science communicators are important, but the career success of a scientist depends on the perceptions of other scientists, not necessarily the perceptions of the general public (which might be just as well, given the results of this study). So, it would be interesting to follow this study up by looking at how scientists (or academic administrators) perceive the competence (especially) of other scientists, having been shown their picture. Linking from this work to the beauty premium literature would be a really interesting extension of the work.

[HT: Isla, from my ECONS101 class]

Tuesday, 11 May 2021

Book review: You Look Like a Thing and I Love You

I just finished reading Janelle Shane's You Look Like a Thing and I Love You, which must be one of the best books I've ever read on the realities of artificial intelligence (AI). It is certainly one of the funniest I have read for a while, and it repeatedly had me laughing out loud. Shane blogs at AI Weirdness, where she writes about the myriad ways in which AI gets things wrong. And this book is a collection of those examples, as she explains in the introduction:

...the inner workings of AI algorithms are often so strange and tangled that looking at an AI's output can be one of the only tools we have for discovering what it understood and what it got terribly wrong.

Shane illustrates AI getting things wrong with examples of trained algorithms that do all sorts of things, from creating recipes, heavy metal band names (Inhuman Sand was particularly good, I thought), and ice cream flavours (carrot beer sounds intriguing, if weird), to My Little Pony names (Raspberry Turd is probably one that Hasbro already thought of, but dismissed). The title of the book itself is the output of an AI algorithm, trained to write pickup lines.

The hilarious examples are designed to illustrate a number of things, including the 'five principles of AI weirdness':

  1. The danger of AI is not that it's too smart, but that it's not smart enough;
  2. AI has the approximate brainpower of a worm;
  3. AI does not really understand the problem you want it to solve;
  4. But: AI will do exactly what you tell it to. Or at least it will try its best;
  5. And AI will take the path of least resistance.

If you want to anticipate an AI that is about to fail, you should pay attention to the 'four signs of AI doom':

  1. The problem is too hard;
  2. The problem is not what we thought it was;
  3. There are sneaky shortcuts; and
  4. The AI tried to learn from flawed data.

And Shane notes that AIs can fail because we:

  • "gave it a problem that was too broad;
  • didn't give it enough data for it to figure out what's going on;
  • accidentally gave it data that confused it or wasted its time;
  • trained it for a task that was much simpler than the one it encountered in the real world, or
  • trained it in a situation that didn't represent the real world".

As you can probably tell, Shane appears to like lists. Some readers will probably find it a little off-putting, but it is an effective way of signposting the various examples that Shane then uses to illustrate the points that each list summarises. Readers who have paid attention to the last ten years of AI development, even just in the mainstream media, will have heard many of the key stories that Shane uses. However, that's why the highlight of the book to me was the examples that Shane gives where she has trained her own AI algorithms and they do dumb things, like the Buzzfeed article titles that include '18 delicious bacon treats to make clowns amazingly happy', and '43 quotes guaranteed to make you a mermaid immediately'.

This book is more upbeat than Cathy O'Neil's Weapons of Math Destruction (which I reviewed here), although it does cover some of the same ground. Shane isn't so much sounding the alarm, as pointing out the key flaws that should lead us to a more realistic assessment of the current potential for algorithms. Think of this as an antidote to the breathless hype that often gets repeated online and in the media. AI might be able to beat the best humans at chess or Go, it may be able to create passable abstract art, but it can't give your cat a halfway sensible name, and it sees giraffes in way too many landscape photos (so much so that there is even a term for this: giraffing). In fact, right now the best AI might not be purely algorithmic. As Shane notes in Chapter 10:

...AI can't do much without humans. Left to its own devices, at best it will flail ineffectually, and at worst it will solve the wrong problem entirely... So it's unlikely that AI-powered automation will be the end of human labour as we know it. A far more likely vision for the future, even one with the widespread use of advanced AI technology, is one in which AI and humans collaborate to solve problems and speed up repetitive tasks.

This is a great book, which I highly recommend for anyone interested in AI. Clearly I'm not the only one who thinks this way. Diane Coyle rates it highly as well (and it was her recommendation that led me to it in the first place), and it got a special mention in her best of 2020 books list. And if you can't get the book, you can always chuckle along with Shane's blog.

Saturday, 8 May 2021

Splitting Kiwirail into two entities makes sense

Every year it seems, Kiwirail is in the news for the losses that it makes. However, that need not be a bad thing, as I wrote in this 2014 post:

Having the natural monopoly make a loss (and this is an economic loss, so it includes opportunity costs, and would be greater than any accounting loss) may be a good thing because it increases total welfare. However, relative to profit maximisation, it entails a transfer of welfare from taxpayers (who ultimately end up paying the loss) to consumers of rail services (and ultimately, to consumers of stuff that is transported by rail).

It was interesting to see Kiwirail in the news this week for something slightly different, as the NBR reported (gated):

A proposal to split KiwiRail into two is still on the table but is not a pressing priority for the government.

Treasury made the suggestion in late 2017 after the Auditor-General’s office had raised concerns about the state-owned rail company’s financial reporting...

Under the specific proposal put forward by the Treasury in late 2017 and early 2018, one entity would be responsible for the tracks and be publicly funded, while the other would have a commercial State-owned enterprise responsible for all the above rail services. The SOE might get some government money but largely would be expected to fund its capital investment itself.

In a paper dated February 2018, and obtained by NBR under the Official Information Act, the Treasury said KiwiRail had an uncomfortable mix of profit-oriented assets and services and public-benefit oriented assets and services.

“This mix distorts KiwiRail’s investment decisions, operating and financial performance, and the Crown’s funding model and level of influence and control,” it said.

The Treasury said the “above rail” aspects of KiwiRail – its locomotives, rolling stock, ferries and commercial work with customers – were profit oriented and properly belonged in an SOE. But the “below rail” network assets – the rails, rail formation, bridges, tunnels, signalling and power infrastructure – were public benefit infrastructure.

The key problem with Kiwirail is, as I noted in that earlier post, that Kiwirail is a natural monopoly. It has a very large up-front (fixed) cost of production, which is the cost of the rail infrastructure, and the marginal costs (the cost of transporting an additional unit of freight) are very low. A privately-owned natural monopoly would profit-maximise, decreasing the quantity of its services and raising the price. A government-owned natural monopoly could do that too, but if government wanted to increase economic welfare, it would prefer the monopoly to set a lower price. And if it sets the price at the welfare-maximising level, the natural monopoly makes a loss (see my 2014 post for details).
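The arithmetic of that loss is straightforward. Here is a quick sketch (with entirely made-up numbers): with a large fixed infrastructure cost and a low constant marginal cost, pricing at marginal cost covers the variable costs but none of the fixed cost, so the loss equals the fixed cost:

```python
# Illustrative natural monopoly: large fixed cost, low constant marginal cost.
# All numbers are hypothetical, chosen only to show why marginal-cost
# pricing implies a loss equal to the fixed cost.

fixed_cost = 1_000_000      # up-front infrastructure cost
marginal_cost = 5           # cost of transporting one more unit of freight

def profit(price, quantity):
    """Profit = revenue minus total (fixed plus variable) cost."""
    return price * quantity - (fixed_cost + marginal_cost * quantity)

# Welfare-maximising rule: set price equal to marginal cost (P = MC).
quantity_demanded = 200_000  # whatever quantity is demanded at P = MC
loss = profit(marginal_cost, quantity_demanded)

print(loss)  # -1000000: the loss is exactly the fixed cost
```

At a price equal to marginal cost, each unit sold contributes nothing towards the fixed cost, so the loss is the same no matter how much is sold.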

Treasury's proposal is to split Kiwirail into two entities. The natural monopoly would remain a government-controlled public-benefit entity, in charge of the infrastructure. The for-profit entity would run its services on the lines owned by the public-benefit entity. The infrastructure could then be government-funded using a pure subsidy, like roads. The for-profit entity would (presumably) no longer make a loss, and look a lot better for the government (controlling the loss-making Kiwirail is not a good look, which is why it is in the media every year).

The parallel with the funding of roads is important here. Separating the infrastructure funding from the operators using that infrastructure seems to me to be an improvement. It would make the funding applied to rail infrastructure wholly transparent to the taxpayer, and allow us to have a better sense of the relative government spending on roads and rail. With the infrastructure coordinated centrally and separately from the commercial rail services, rail services could be opened to competition, with other rail service companies allowed to use the same rail lines (with signalling and scheduling handled through some centralised process). That could lower the costs of rail services further, and allow allegedly 'uneconomic' lines like the rail line from Gisborne to Hawke's Bay to attract alternative providers if they can be made profitable. Competition could be good for rail passengers as well, with potentially lower fares and more on-time services. Although, care would need to be taken that we don't simply swap one set of problems for another, such as we are seeing with Wellington buses at the moment.

There are also parallels with how airports are run. They aren't owned by the airlines that use the airport infrastructure. Presumably the public-benefit entity would receive fees from Kiwirail (and any other rail service operators on the network), but it could also develop the stations and receive commercial rents from retail firms, hospitality, and so on. This could reduce the subsidy required from the government, and if successful enough, perhaps no subsidy would even be required (although I think this unlikely).

Finally, it would be interesting to know if Treasury has similar views about other state-owned or privately-owned natural monopolies. We have Transpower, which runs electricity transmission lines on a similar model. But what about telecommunications infrastructure? Or water supply infrastructure? In each case public or private firms could deliver services over those networks, paying a fee to a public-benefit entity for the use of the infrastructure under their control. Given the recently compounding issues with Wellington's water infrastructure, an alternative model is definitely worth considering. But first, Kiwirail.

Monday, 3 May 2021

The Commerce Commission, competition and total welfare

This week in my ECONS102 class, we will be covering monopoly. One aspect of that topic is a consideration of public policy that can be applied to reduce the deadweight loss of monopoly. A deadweight loss is a loss of total welfare, and this is important because total welfare is our measure of how much net benefit is generated by the operations of a market. In the simplest sense, total welfare is made up of the consumer surplus (the gains for buyers of participating in the market) and producer surplus (the gains for sellers of participating in the market - essentially, their profits). If a monopoly creates a deadweight loss, then that means that the market isn't generating as much total welfare as it could.

A monopoly creates a deadweight loss because in its efforts to maximise profits, it restricts the quantity that it sells below the welfare-maximising quantity. This is illustrated in the diagram below (which uses a constant-cost monopoly as a simplifying example). The monopoly operates at the profit-maximising quantity, which is where marginal revenue meets marginal cost, at the quantity QM. To sell that quantity, the monopoly would want to set the price at PM (since at PM, the quantity demanded is exactly equal to QM). At that price and quantity, the consumer surplus is equal to the area ABPM, the producer surplus is equal to the area PMBDP0, and total welfare is equal to the area ABDP0.

However, if this market was perfectly competitive, it would operate at the quantity where supply meets demand, which is Q0. The equilibrium price would be P0. Consumer surplus would be equal to the area ACP0, producer surplus would be zero (because every unit costs P0 to produce, and that is equal to the selling price), and total welfare would be ACP0. This total welfare is maximised (because the quantity Q0 is the quantity where marginal social benefit is equal to marginal social cost, which is the required condition for maximising total welfare). Total welfare is lower for the monopoly firm by the area BCD, which is the deadweight loss of the monopoly.
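For readers who like to check the areas, here is a quick numerical sketch of the same comparison, using a hypothetical linear demand curve P = a - bQ and a constant marginal cost c (all numbers made up for illustration):

```python
# A numeric version of the diagram, with a hypothetical linear demand
# P = a - b*Q and a constant marginal cost c (the constant-cost case).

a, b, c = 100.0, 1.0, 20.0   # hypothetical demand intercept, slope, and MC

# Monopoly: MR = a - 2bQ = c  =>  QM = (a - c) / (2b), priced off the demand curve.
QM = (a - c) / (2 * b)
PM = a - b * QM

# Perfect competition: price equals marginal cost, so P0 = c.
Q0 = (a - c) / b
P0 = c

cs_monopoly = 0.5 * (a - PM) * QM          # area ABPM
ps_monopoly = (PM - P0) * QM               # area PMBDP0
tw_monopoly = cs_monopoly + ps_monopoly    # area ABDP0

tw_competition = 0.5 * (a - P0) * Q0       # area ACP0 (producer surplus is zero)

dwl = tw_competition - tw_monopoly         # area BCD
print(QM, PM, dwl)  # 40.0 60.0 800.0
```

The monopoly sells half the competitive quantity at a higher price, and the deadweight loss is the welfare that goes missing as a result.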

Anyway, coming back to policy, one of the options available to the government is to restrict monopoly market power by preventing monopolies from forming in the first place. After all, one way that we get large firms like monopolies is when two or more moderately-sized firms merge together. So, government can use anti-trust legislation to prevent these mergers and reduce the deadweight loss of monopoly. No monopoly means no deadweight loss (or, more likely, more competition means something that is closer to perfect competition than to monopoly, and higher total welfare). So, increasing competition seems like a good thing.

Which brings me to this example, from the New Zealand Herald back in February:

Trade Me has applied to the Commerce Commission for clearance to buy property business homes.co.nz.

The Commission said it got a clearance application from Trade Me Limited to buy 100 per cent of the shares or assets of PropertyNZ, which owns and operates the homes.co.nz website...

A public version of the clearance application will be available shortly on the Commission's case register.

The commission says it will give clearance to a proposed merger if it's satisfied it won't substantially lessen competition.

The wording "substantially lessening competition" comes from Section 27 of the Commerce Act, which is the legislation that the Commerce Commission exists to enforce (among other things). However, I think that wording is problematic, because lessening competition is not necessarily the same as decreasing total welfare. In fact, it is entirely possible for a merger between two firms to increase total welfare, while at the same time substantially lessening competition. [*]

To see how, consider the diagram below. Initially, there are many firms in perfect competition, operating at the quantity where supply meets demand, which is Q0. Total welfare is equal to the area ACP0 (this is the same as the previous diagram above). However, when the firms merge to form a single monopoly producer, there are cost savings. Perhaps the single firm doesn't need as many back office functions like finance or HR, and can consolidate some functions. This is represented by the lower marginal cost line MSC'. The merged firm is going to profit maximise by operating at the quantity where marginal revenue equals marginal cost, which is Q1, with a price of P1. The price is higher than the many perfectly competitive firms were charging. Consumer surplus falls to the area AEP1, so consumers are worse off. Producer surplus increases to the area P1EFG, which is much larger than zero (which is why the firms want to merge, no doubt). Total welfare is now the area AEFG.

Has total welfare increased? That depends. The area ECH was part of total welfare under perfect competition, but has been lost in the merger. However, the merger leads to a gain of total welfare equal to the area P0HFG. The question is, which of those two areas (ECH or P0HFG) is larger? It is entirely possible that P0HFG could be larger, in which case the merger increases total welfare (and therefore makes society better off in total). Determining that would require the anti-trust authority to estimate the gains and losses of total welfare arising from the merger and compare them. But, if your test is whether the merger lessens competition (as is the rule applied by the Commerce Commission in New Zealand), the merger could still be declined even if total welfare increases.
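To make the comparison concrete, here is a minimal numerical sketch. The numbers are purely hypothetical (linear demand, a flat competitive supply curve at the pre-merger marginal cost, and an assumed cost saving from the merger), not taken from any real merger case:

```python
# Hypothetical numbers: linear demand P = 100 - Q, competitive supply flat
# at marginal cost 40, falling to 20 after the merger's cost savings.
a, b = 100.0, 1.0      # demand intercept and slope
c0, c1 = 40.0, 20.0    # pre- and post-merger marginal cost

# Perfect competition: price equals marginal cost
Q0 = (a - c0) / b                 # quantity = 60
CS0 = 0.5 * (a - c0) * Q0         # consumer surplus (area ACP0) = 1800
PS0 = 0.0                         # flat supply curve: no producer surplus
W0 = CS0 + PS0                    # total welfare = 1800

# Merged monopolist: set MR = MC, where MR = a - 2bQ
Q1 = (a - c1) / (2 * b)           # quantity = 40 (lower than Q0)
P1 = a - b * Q1                   # price = 60 (higher than P0 = 40)
CS1 = 0.5 * (a - P1) * Q1         # consumer surplus (area AEP1) = 800
PS1 = (P1 - c1) * Q1              # producer surplus (area P1EFG) = 1600
W1 = CS1 + PS1                    # total welfare = 2400

print(W0, W1)  # 1800.0 2400.0: total welfare rises despite less competition
```

With these (assumed) numbers, the cost-savings gain outweighs the deadweight loss from the higher price, so the merger raises total welfare even though consumers are worse off and competition is clearly lessened.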

Of course, in practice the word 'substantially' becomes key in the Commerce Commission's assessment of merger applications, because all mergers must lessen competition to some extent. It seems to me, though, that the test potentially excludes some welfare-increasing mergers. A more suitable test is whether the merger reduces total welfare.

[HT: One of our tutors from last year, Taylor, who alerted me to the difference between the theoretical welfare test, and the test that the Commerce Commission actually applies]

*****

[*] Note that I am not saying that this necessarily applies to the TradeMe and PropertyNZ application.

Sunday, 2 May 2021

The double-edged sword of a double degree

I've written before about the value of signalling for university students (see here). The issue is one of asymmetric information and adverse selection. Students have private information about their quality as a future employee - the student knows whether they are high quality or not, but potential employers do not know. This is asymmetric information. That asymmetric information potentially leads to an adverse selection problem. Since employers don't know which job applicants are high quality, and which are low quality, it makes sense for employers to assume that all job applicants are low quality (otherwise, they are taking a gamble). This is what we call a pooling equilibrium - employers would make no job offers, because they think that every applicant is low quality. That is the adverse selection problem - the market fails (especially for high quality job applicants).

Of course, job markets and application processes have evolved to reduce this adverse selection problem. As I noted in that earlier post:

One way this problem has been overcome is through job applicants credibly revealing their quality to prospective employers - that is, by job applicants providing a signal of their quality. In order for a signal to be effective, it must be costly (otherwise everyone, even those who are lower quality applicants, would provide the signal), and it must be costly in a way that makes it unattractive for the lower quality applicants to do so (such as being more costly for them to engage in).

Qualifications (degrees, diplomas, etc.) provide an effective signal (they are costly, and more costly for lower quality applicants who may have to attempt papers multiple times in order to pass, or work much harder in order to pass). So by engaging in university-level study, students are providing a signal of their quality to future employers. The qualification signals to the employer that the student is high quality, since a low-quality applicant wouldn't have put in the hard work required to get the qualification. Qualifications confer what we call a sheepskin effect - they have value to the graduate over and above the explicit learning and the skills that the student has developed during their study.

That brings me to this article in The Conversation from last week, by David Carroll, Kris Ryan, and Susan Elliott (all Monash University):

Our research shows double degrees (students study for two degrees at once) can greatly improve new graduates’ prospects of finding full-time work.

Some combinations increased the success rate by as much as 40% compared to students with a single “generalist” degree. The gains were biggest for students in the arts and sciences.

Carroll et al. conclude that:

So, why do double degrees give graduates such an advantage in the labour market? This is a much harder question to answer.

There is likely to be a human capital benefit, in that double-degree holders have gained a greater depth and breadth of skills than those with single degrees. The labour market recognises this through a greater likelihood of receiving a job offer.

There is likely also to be a signalling benefit as employers, faced with very little information on the productivity of the graduates sitting opposite them in job interviews, use the double degree as a sign of their productivity. It’s likely they make offers to graduates on this basis.

There is likely to be a signalling benefit of a double degree, over and above the benefit of completing a single degree. In order to be effective, a signal needs to meet two conditions:

  1. It must be costly; and
  2. It must be costly in such a way that those with low quality attributes would not be willing to attempt the signal.

A university degree meets these conditions. It is costly, and it is more costly for lower quality students, who have to spend more time studying, or may fail papers and have to re-sit them. Does a double degree provide a signal of higher quality than a single degree? It would need to be more costly than a single degree (which it obviously is, as it takes longer to complete). It would also need to be costly in a way that dissuades lower-quality students from attempting it.
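The two conditions above can be captured in a small sketch. The payoff numbers are hypothetical (not from the post or the Carroll et al. research): employers pay a wage premium to signal-holders, and the signal separates high- from low-quality applicants only if the premium exceeds the high type's cost of acquiring it but not the low type's:

```python
def signal_separates(premium, cost_high, cost_low):
    """True if only high-quality applicants find the signal worthwhile:
    the wage premium must exceed the high type's cost of the signal,
    but fall short of the low type's (higher) cost."""
    return cost_high < premium < cost_low

# A signal that is cheap for everyone fails: low types would mimic it
print(signal_separates(premium=30, cost_high=10, cost_low=25))   # False

# If the extra cost (e.g. a higher risk of non-completion) pushes the
# low type's cost above the premium, the signal separates the two types
print(signal_separates(premium=30, cost_high=15, cost_low=45))   # True
```

On this logic, a double degree works as a stronger signal precisely when its extra cost falls disproportionately on lower-quality students.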

A few years ago, I had a summer research student dig into the data on degree completions at Waikato Management School. They were trying to identify statistically the factors that were associated with students dropping out of their degree. There isn't a published research paper on this, but if you are interested and on campus, there is a copy of the student's research poster on the wall opposite my office. Anyway, one of the key findings was that students enrolled in a conjoint degree (our version of a double-degree) were twice as likely to not complete their degrees as students in a single degree, and the result was highly statistically significant. That suggests that enrolling in a double degree is costly in a way (lower chance of completion) that should dissuade lower quality students.

Taken together, the analyses of Carroll et al. and of my summer research student suggest that a double degree is a double-edged sword. It has signalling value for employers, but enrolling in one increases a student's risk of not completing.

Read more: