Sunday, 31 May 2020

If robots are coming for our jobs, someone forgot to tell the labour market data

It seems that every other week there is a new article written about how robots are coming to take all our jobs - the coronavirus pandemic is just the latest reason to be worried about this (for example, see here). However, this 2017 article by Jeff Borland and Michael Coelli (both University of Melbourne), published in the journal Australian Economic Review (sorry, I don't see an ungated version online), provides a reasonable and evidence-based contrast to the paranoia.

Borland and Coelli essentially argue that, if robots are taking our jobs, we should be able to observe this in the data. And we don't. This is analogous to Robert Solow's famous 1987 quip that "You can see the computer age everywhere but in the productivity statistics".

Borland and Coelli note that:
Evidence for the claimed effects of computer-based technologies on the labour market, however, is remarkably thin. Sometimes it consists simply of descriptions of the new technologies, perhaps with an assertion that these technologies are more transformative than what has come before. Sometimes it consists of forecasts of the proportion of jobs that will be destroyed by the new technologies. Sometimes measures are presented that, it is argued, establish that the new technologies are causing workers to lose their jobs or be forced to shift between jobs more frequently than in the past. Sometimes the evidence is an argument that categories of workers not previously displaced by technological change are now being affected.
They then present data from Australia that basically shows that the adoption of computer technology ('robots', broadly defined) is having little effect on the total number of jobs in the Australian labour market. That is:
From our analysis of employment outcomes in the Australian labour market we arrive at two main findings. First, there is no evidence that adoption of computer-based technologies has decreased the total amount of work available (adjusting for population size). Second, there is no evidence of an accelerating effect of technological change on the labour market following the introduction of computer-based technologies.
Interestingly, the common claim that the cohort entering the workforce now will work many more jobs over their working life than the cohorts before them comes in for a particularly rough time:
Not only is there no evidence that more workers are being forced to work in short duration jobs, but what is apparent is that the opposite has happened. The proportion of workers in very long duration jobs has increased from 19.3 per cent in 1982 to 26.7 per cent in 2016, and there has been a corresponding decrease in the proportion of workers in their jobs for less than a year.
Of course, none of this means that computer-based technology is having no effect on jobs. Like most pervasive technological changes, the rise of computer-based technology changes the type of jobs that are available, and the distribution of those jobs between groups of workers and between regions (or between urban and rural areas). However, based on the data we have so far, it is not correct to be making the apocalyptic claims that some pundits are making.

So, why is everyone so afraid of the robots? Maybe those making the claims are not really afraid. Maybe it's just incentives at work, leading them to make those claims. Borland and Coelli note that:
You are likely to sell a lot more books writing about the future of work if your title is ‘The end of work’ rather than ‘Everything is the same’. If you are a not-for-profit organisation wanting to attract funds to support programs for the unemployed, it helps to be able to argue that the problems you are facing are on a different scale to what has been experienced before. Or if you are a consulting firm, suggesting that there are new problems that businesses need to address, might be seen as a way to attract extra clients. For politicians as well, it makes good sense to inflate the difficulty of the task faced in policy making; or to be able to say that there are new problems that only you have identified and can solve.
That makes a lot of sense, and on the surface so do the claims of the robot apocalypse. At least, until you start to look at the data, which make those claims shaky at best.

Tuesday, 26 May 2020

Oliver Williamson, 1932-2020; and Alberto Alesina, 1957-2020

We had a double dose of sad news over the last few days, first with the passing of 2009 Nobel prize winner Oliver Williamson, and then with the passing of Harvard University political economist Alberto Alesina. Williamson was 87, while Alesina was just 63 years young.

Williamson is best known for his work on transaction cost economics, industrial organisation, and the theory of the firm. My ECONS102 students get the very briefest of introductions to his work on transaction costs, which is important to understanding societal and political organisation, as well as the structure of firms.

Alesina is best known for his work on political economy and economic systems, and I really enjoyed his 1997 article with Enrico Spolaore on the optimal number and size of countries, published in the Quarterly Journal of Economics (ungated version here). I've previously blogged about some of his research (see here and here).

You might not think that these two great economists had much in common, but A Fine Theorem has an excellent article that links the two of them:
While one is most famous for the microeconomics of the firm, and the other for political economy, there is in fact a tight link between their research agendas. They have attempted to open “black boxes” in economic modeling – about why firms organize the way they do, and the nature of political constraints on economic activity – to clarify otherwise strange differences in how firms and governments behave.
That article does an excellent job of explaining why the research of each of them matters. Haas News has more detail on the life and work of Williamson, while the Washington Post has an excellent obituary for Alesina. They will both be missed.

Monday, 25 May 2020

A few papers on grade inflation at universities

University lecturers who have been around for a while often lament the incentives that universities have to inflate students' grades, and many claim that grade inflation has been an ongoing phenomenon for decades. As I noted in this 2017 post, grade inflation has negative implications for students, because it makes it more difficult for the truly excellent students to set themselves apart from students who are merely very good. However, it isn't just universities that have incentives for grade inflation - the student evaluation system creates incentives for staff to inflate grades too.

However, there are reasons to doubt whether grade inflation is real. High school teaching has improved over time, so perhaps students are coming to university better prepared, and higher grades reflect that better preparation. Similarly, university teaching has also improved over time, so perhaps students are learning more in their university classes and higher grades genuinely reflect better performance as a result. Finally, as I have noted in relation to economics, over time cohorts of students have increasingly selected out of 'more difficult' courses and into 'easier' courses, so improving grades may simply reflect a shift towards courses that are more generous in awarding high grades. Untangling these various effects, and determining whether any grade inflation remains after you control for them, is an empirical research task.

I've just finished reading a few articles on the topic of grade inflation, so I thought I would share them here. In the first article, by Rey Hernández-Julián (Metropolitan State University of Denver) and Adam Looney (Brookings Institution), published in the journal Economics of Education Review in 2016 (ungated earlier version here), the authors use data from Clemson University consisting of:
...over 2.4 million individual grades earned by more than 86,000 students over the course of 40 academic semesters starting in the fall of 1982 and ending in the summer of 2002.
They note that:
Over the sample period, average grades increased 0.32 grade points (from 2.67 to 2.99), similar to increases recorded at other national universities... At the same time, average SAT scores increased by about 34 points (or roughly 9 percentile points on math and 5 percentile points on verbal sections)... 
So, while university grades improved over time, so did the SAT scores of the incoming cohorts of students. Moreover, they note that there has been a shift over time in course selection, so that students have increasingly selected into courses that have higher average grades (arguably, those courses that 'grade easier'). Once they decompose the change in grades into its various components, they find that:
...more than half of the increase in average grades from 1982 to 2001 at Clemson University arises because of changes in course choices and improvements in the quality of the student body. The shift to historically easier classes increased average grades by almost 0.1 grade point. Increases in SAT scores and changes in other student characteristics boosted grades by almost another 0.1 grade point. Nevertheless, almost half of the increase in grades is left unexplained by observable characteristics of students and enrollment — a figure that suggests the assignment of higher grades plays a large role in the increase.
In other words, even after controlling for the quality of incoming students and their course choices, grades increased over time, providing some evidence of 'residual' grade inflation. However, this article is silent as to why they observe this grade inflation, and of course it relates to the experience of just one university in the US.
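To see how that kind of decomposition works, here is a toy sketch in Python (my own illustration with invented numbers, not the authors' code or data). The idea is to ask how much average grades would have risen if grading within each course had changed as it did, but the course mix had stayed at its 1982 composition:

```python
import pandas as pd

# Invented numbers for two course groups in two years: within-course grades rise
# a little, but students also shift towards the higher-grading 'arts' group.
grades = pd.DataFrame({
    "course": ["math", "math", "arts", "arts"],
    "year": [1982, 2001, 1982, 2001],
    "avg_grade": [2.4, 2.5, 2.9, 3.0],
    "enrolment_share": [0.6, 0.4, 0.4, 0.6],
})

def overall_gpa(grade_year, share_year):
    g = grades[grades.year == grade_year].set_index("course")["avg_grade"]
    s = grades[grades.year == share_year].set_index("course")["enrolment_share"]
    return (g * s).sum()

actual_change = overall_gpa(2001, 2001) - overall_gpa(1982, 1982)       # about 0.20
course_mix_effect = overall_gpa(2001, 2001) - overall_gpa(2001, 1982)   # about 0.10
within_course_change = actual_change - course_mix_effect                # about 0.10
print(actual_change, course_mix_effect, within_course_change)
```

In this made-up example, half of the rise in average grades comes purely from the shift in course choices, and the rest comes from higher grades within courses, which (absent improvements in student quality) is what would look like grade inflation.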

The second article I read recently, by Sergey Popov (National Research University, Moscow) and Dan Bernhardt (University of Illinois, Urbana-Champaign), published in the journal Economic Inquiry in 2013 (ungated earlier version here), develops a theoretical argument for why we might observe grade inflation over time, and for why grade inflation would be greater at 'higher quality' universities. Their theoretical argument rests on the following:
Firms learn some aspects of a student’s non-academic skills via job interviews, and forecast academic abilities using the information contained in the ability distribution at a student’s university, the university’s grading standard, and the student’s grade...
Universities understand how firms determine job placement and wages, and set grading standards to maximize the total expected wages of their graduates.
The incentives this creates lead to a situation where:
...top universities set softer grading standards: the marginal “A” student at a top university is less able than the marginal “A” student at a lesser university. The intuition for this result devolves from the basic observation that a marginal student at a top school can free ride on the better upper tail of students because firms cannot distinguish “good A” students from “bad A” students. In contrast, lesser schools must compete for better job assignments by raising the average ability of students who receive “A” grades, setting excessively high grading standards.
So, top universities can benefit their students by giving more of them higher grades. It turns out this situation is exacerbated when the number of 'good jobs' is increasing over time. However, Popov and Bernhardt's paper is purely theoretical, so it doesn't offer any empirical evidence to support the theory.

The third article I read recently, by Geraint Johnes and Kwok Tong Soo (both Lancaster University), published in the journal The Manchester School (ungated earlier version here), actually provides evidence against Popov and Bernhardt's theoretical model. Johnes and Soo look at aggregate data from all UK universities over the period from 2003/04 to 2011/12, and specifically examine the proportion of 'good degrees' (first or upper second class honours degrees) awarded. They use a stochastic frontier model to control for the inefficiency of some universities in producing top graduates - the most efficient universities form the frontier in this analysis. They find little evidence of grade inflation:
The evidence to support the existence of grade inflation is, at best, patchy. The quality of the student intake to universities has typically been rising over this period, and there have been changes in other factors that might reasonably be supposed to affect degree performance too.
In relation to the Popov and Bernhardt theory, they find that:
...although better universities award more good degrees, we find little evidence that different groups of universities exhibit different degrees of grade inflation over time.
However, there is a real limitation of this study relative to the Hernández-Julián and Looney study I mentioned first, which identified grade inflation at Clemson University. The first paper controlled for student quality using SAT scores, while the Johnes and Soo paper controlled for student quality using students' A-level results. A-level grades may themselves be subject to inflation, so rather than finding no evidence of grade inflation, it would be more correct to say that Johnes and Soo found no evidence that grade inflation at university has outpaced any grade inflation at high school. Because Hernández-Julián and Looney use the results of a standardised test, their analysis isn't subject to the same limitation. So, grade inflation may be real, but in Johnes and Soo's results it is no worse at university than it is at high school.

Overall, these three articles present contrasting views on grade inflation. There is definitely more research required on the extent to which grade inflation is a real phenomenon, and how much of it can be explained by changes in student quality, teaching quality, or course selection by students.

Saturday, 23 May 2020

What happens to pawnbrokers when you ban payday loans?

Many people believe that payday loans are exploitative. They come with high fees and high effective interest rates (some can exceed 500% on an annualised basis). They tend to be targeted at low income people, who are excluded from traditional lending due to being perceived as high risk and/or lacking the collateral and credit history or rating necessary to obtain a loan from a more traditional lender.
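To see how the fees translate into those eye-watering annualised rates, here is a quick back-of-the-envelope calculation (my own stylised numbers, not figures from the paper discussed below):

```python
# A stylised payday loan: a $20 fee to borrow $100 for 14 days.
fee = 20
principal = 100
term_days = 14

# Annualising the per-period cost gives an effective interest rate above 500%.
apr = (fee / principal) * (365 / term_days) * 100
print(f"Effective APR: about {apr:.0f}%")  # about 521%
```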

A common policy solution to the perceived exploitation of low income borrowers is to restrict lending practices, such as setting a maximum effective interest rate (annual percentage rate, or APR). If the maximum rate is set too low though, it makes it uneconomic for any payday lender to lend to low income borrowers, effectively closing the market for payday loans, and further excluding low income borrowers from credit markets. However, that doesn't mean that these low income borrowers aren't going to look elsewhere for short-term loans.

If the government bans payday loans, or makes them uneconomic to offer, these borrowers might turn to other sources of loans, such as informal and unregulated loan sharks, or pawnshops. This increases the demand for those services, and makes them more profitable. If the barriers to entry into the loan shark or pawnshop market are low, then new firms may enter these markets to take advantage of the new profit opportunities.

That is the hypothesis that was tested in this article by Stefanie Ramirez (University of Idaho), published in the journal Empirical Economics (sorry, I don't see an ungated version). Ramirez looked at how the number of financial institutions (of various types) changed when Ohio set a maximum APR of 28% on payday loans in 2008, effectively making the industry uneconomic. Using monthly data at the county level from 2006 to 2010, she found that:
...the payday lending industry was demonstrably populated and active within the state prior to the ban with an average of 123.85 county-level operating branches per million. The effects of the ban can most definitely be seen as the average number of operating branches decreases to 10.14 branches per million in periods with the ban enacted.
Ok, so the ban was effective. Turning to other financial institutions, she finds that:
Pawnbrokers and precious-metals dealers are similarly concentrated to one another pre-ban, with an average of 16.65 branches per million and 18.51 branches per million, respectively. However, while there was an increase in concentration in both industries after the ban, growth in the pawnbroker industry was more pronounced than with precious-metals dealers, with the pawnbroker industry nearly doubling in size...
Small-loan lenders are the least populated industry but also show slight growth between pre- and post-ban periods. The average number of operating branches per million increased by approximately 21% between regulatory periods...
Finally, the average operating second-mortgage licensees per million shows no growth, however shows no decline between pre- and post-ban periods.
After controlling for other variables (like the price of gold, the real estate index, population, and other demographic characteristics) in a regression analysis, she found that pawnbrokers, small-loan lenders, and second-mortgage licensees all showed statistically significant increases in numbers after the law change came into effect.

Low income people want access to credit. If policymakers ban one source of credit, they simply shift those borrowers into other markets, and those markets become more active. As Ramirez concludes:
In an effort to eliminate payday lending and protect consumers, policymakers may have simply shifted operating firms from one industry to another, having no real effect on market conduct.
The sad thing is that these other markets (e.g. pawnbrokers) may be even more difficult to regulate than the payday loan industry was.

[HT: Marginal Revolution, last year]

Thursday, 21 May 2020

Hormonal contraceptives and economic behaviour

In the canonical model of neoclassical economics, decision-makers' preferences are assumed to be fixed. Now, anyone with a passing acquaintance with the real world would recognise that assumption as false - people change their minds when offered essentially the same choice at different points in time, and not just because their tastes change. That raises a valid question: what sorts of things determine (and therefore can change) our preferences? Are preferences socially constructed, or are there biological mechanisms at play? Or both?

I'm not going to attempt to answer those questions today, but I am going to briefly highlight this recent article (open access) by Eva Ranehill (University of Gothenburg) and co-authors, published in the journal Management Science. They studied the effects of hormonal contraceptives on economic decisions. The rationale is simple:
...OCs [oral contraceptives] have potent hormonal effects, and recent research suggests that OC intake may influence female economic decision making through their hormonal impact... Combined OCs suppress levels of endogenous testosterone, estrogen, and progesterone, and artificially mimic high estrogen and progesterone levels through varying doses of synthetic ethinylestradiol or estradiol and progestins... A growing body of economic studies has found these hormones, which are either present in OCs, or interact with OC intake, to be related to important economic preferences and behaviors such as financial risk taking, competitive bidding, willingness to compete, as well as social preferences...
So, here is a potential biological determinant of preferences - hormones. Ranehill et al. conducted a randomised controlled trial. Of the 340 women in the study, some were randomly selected to receive oral contraceptives (the treatment group), while others received a placebo (the control group). After three months, the women participated in some economics experiments. Specifically, the experiments were designed to measure the women's altruism, financial risk taking, and willingness to compete. They found that:
...no significant effects of OCs on economic decision making related to altruism, financial risk taking, and willingness to compete. Measured hormonal levels changed in the expected direction, which supports that participants were compliant to study treatment. However, these changes did not result in any significant effects on economic preferences in our three primary outcome measures.
They also find no effect of the phase of the menstrual cycle on decision-making. So, despite some earlier studies having found hormonal effects on decision-making, this randomised controlled trial finds none. Of course, we'd want to see some replication of this in other (and potentially larger) studies before we draw too strong a conclusion. And of course, the study was limited to those three measures of preferences (altruism, financial risk taking, and willingness to compete). However, file this study on the side of evidence in favour of social determinants of preferences.

[HT: Marginal Revolution, back in 2017]

Monday, 18 May 2020

A cynical take on The Body Shop's 'open hiring' policy

In the world before coronavirus lockdowns, this article on Fast Company caught my attention, and I've been meaning to blog about it for a while:
Almost all retailers run background checks on prospective employees—one of the many obstacles for people who were formerly incarcerated and are now trying to find a job. For other job seekers, a drug screening for marijuana might cost them a position even in states where recreational use is legal. This summer, the Body Shop will become the first large retailer to embrace a different approach, called “open hiring.” When there’s an opening, nearly anyone who applies and meets the most basic requirements will be able to get a job, on a first-come, first-served basis.
The company piloted the practice, which was pioneered by the New York social enterprise Greyston Bakery, in its North Carolina distribution center at the end of 2019...
The results were striking: Monthly turnover in the distribution center dropped by 60%. In 2018, the Body Shop’s distribution center saw turnover rates of 38% in November and 43% in December. In 2019, after they began using open hiring, that decreased to 14% in November and 16% in December. The company only had to work with one temp agency instead of three.
I can immediately see two reasons why this might be a good approach for The Body Shop, and neither reason has anything to do with the 'halo effect' of appearing to be a good employer. Both reasons have to do with wages (which, if you read the story, you will notice are not mentioned anywhere).

First, consider a search model of the labour market. Each time a job is filled, this creates a match between the employer and a worker, and that match creates a surplus (the employer receives some additional profits from employing the worker). The employer and the worker share the surplus. How it is shared depends on their relative bargaining power. If the employer has less bargaining power and the worker has more, then the worker can demand a bigger share of the surplus, and wages will be higher. On the other hand, if the employer has more bargaining power and the worker has less, then the employer can offer a lower wage, and wages will be lower.
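A minimal sketch of that surplus-splitting logic, with invented numbers (this is just the textbook bargaining intuition, not a claim about The Body Shop's actual wages):

```python
# The wage equals the worker's outside option plus a share (beta) of the match
# surplus, where beta reflects the worker's bargaining power.
def bargained_wage(outside_option, surplus, beta):
    return outside_option + beta * surplus

print(bargained_wage(outside_option=20.0, surplus=10.0, beta=0.5))  # 25.0 with balanced power
print(bargained_wage(outside_option=20.0, surplus=10.0, beta=0.2))  # 22.0 when the employer holds more power
```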

Now, think about The Body Shop's open hiring. Suddenly, they are willing to accept applications from almost anyone. Applications flood in. The Body Shop has lots of potential workers to choose from. If Worker A isn't willing to accept the wage that The Body Shop offers, then they can simply move on to Worker B, or Worker C, or any of the countless other applicants. Can you see that this 'open hiring' shifts the bargaining power in favour of The Body Shop? They can offer lower wages because the applicant pool is much larger than before.

Second, consider asymmetric information and adverse selection. Workers know how productive they are (more or less), but the employer doesn't - this is private information. The employer usually wants to hire the most productive workers, so they try to reveal the private information through screening. The background checks, job interviews, psychometric tests, and whatever else human resources people dream up, are all ways of screening job applicants in order to determine which of them are most likely to be highly productive workers for the employer.

Now, consider what happens if you remove the screening tools like background checks. Now the employer can't tell the more productive and less productive applicants apart. Basically, they would have to assume that all applicants are the same, and the safer assumption to make is that they are all of the less productive type - we refer to this as a pooling equilibrium. Normally, we'd consider a pooling equilibrium to be bad - employers want to tell the more productive and less productive workers apart. However, if an employer had to assume that all workers were of the less productive type, then their best option is to offer a lower wage (why pay a high wage to workers who are likely to have low productivity?).
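Here is the same idea in a tiny numerical sketch (invented numbers, purely to illustrate the pooling logic):

```python
# Two worker types with different productivity; without screening, the employer
# cannot tell them apart.
productivity = {"high": 30.0, "low": 18.0}
share_high = 0.5

# With effective screening, the employer could pitch a wage offer to each identified type.
# Under pooling, the cautious employer anchors its single offer on the low type,
# which is below even the average productivity of the applicant pool.
expected_productivity = share_high * productivity["high"] + (1 - share_high) * productivity["low"]
pooling_offer = productivity["low"]

print(expected_productivity)  # 24.0
print(pooling_offer)          # 18.0
```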

So, now you can see that there are two good (albeit cynical) reasons to see why The Body Shop's 'open hiring' philosophy makes good business sense. That's not the end of the story though. Background checks are a fairly imperfect screening tool, and even job interviews and psychometric tests can get things wrong. A better screening tool is to let the worker do some work, and observe their productivity. If the employer can then easily fire any less-productive workers, then this may be a more accurate way to identify the most productive job applicants.

In the U.S., many states have 'at-will employment' laws, which basically mean that the employer can fire a worker at any time for any reason (there are exceptions, and the actual laws vary by state). It is interesting to note that North Carolina is an at-will state, and North Carolina is where The Body Shop's distribution centre that trialled 'open hiring' is located.

The Body Shop may well have gotten a lot of positive press with this move, but it's likely that is not the only benefit they have enjoyed from 'open hiring'.

[HT: Marginal Revolution]

Friday, 15 May 2020

It's time to move to a literal research grant lottery

If you ask academic researchers about grant writing, many (if not most) of them will describe it as a lottery (for example, I made that point in a post last December). But, like any lottery, the only way to win is to buy a ticket. And so, many hours are spent on grant application writing, trying to create the 'perfect' grant application that will get funded. But maybe there is a better way?

The Health Research Council has been awarding its Explorer Grants using a modified lottery system since 2013. Is it a better system? It appears so, at least according to those who have been successfully funded. A new article (open access) by Mengyao Liu (Health Research Council) and co-authors, published in the journal Research Integrity and Peer Review, reports on a survey of Explorer Grant applicants. They find that:
There was agreement that randomisation is an acceptable method for allocating Explorer Grant funds with 63% (n = 79) positive.
However, this might be the most interesting bit from the article:
Respondents who had won funding were far more positive about the use of random funding allocation... Seventy-eight percent of respondents who had won Explorer Grant funding thought randomisation was acceptable, compared with 44% for those whose applications were declined by the panel. Similarly, far more applicants who had won funding supported an expansion of random funding into other grant types.
I guess that is a form of positivity bias? If you have been successful in the lottery, you are more likely to think that lotteries are a good way to allocate funding. But if you've been unsuccessful, you're more likely to disagree with them, probably thinking that your proposal would have been funded under a more 'equitable' funding allocation mechanism.

On the issue of equity though, I thought that this point from the article was important:
By reducing the role of people in decision making, lotteries also minimise the problems of sexism, racism and ageism influencing who receives funding...
I hadn't really considered that a benefit of randomisation before. Instead, I have long favoured randomisation on simple cost-benefit grounds - the time saving in peer review alone would be enormous. However, it appears that randomisation doesn't save the applicants time:
A surprising result was that most applicants did not reduce the amount of time they spent on the application... As one applicant commented, “I am pretty excited about this project and so, in the end, it [the random allocation] had no impact on the effort I put into preparing my bid.”
Overall, I'd say those results are pretty favourable to instituting a literal (rather than figurative) research funding lottery. Time savings, less focus on 'track record' (which privileges established names over early- and mid-career researchers), and less scope for bias, are all good reasons to support it. The time has come. We need research funding by lottery.

[HT: The New York Times, via Marginal Revolution]

Thursday, 14 May 2020

Donohue and Levitt revisit their famous paper on abortion and crime

As I mentioned in yesterday's book review, there was a vigorous debate back in the 2000s about this paper by John Donohue (Stanford University) and Steven Levitt (of Freakonomics fame, and the University of Chicago). So much so that the debate even has its own Wikipedia page. You can read the original paper, published in the Quarterly Journal of Economics, here (ungated version here). The short summary is that Donohue and Levitt found a link between legalising abortion in the U.S. in the 1970s, and subsequent reductions in crime in the 1990s. In fact, they concluded that legalised abortion may have accounted for 50 percent of the decrease in crime. One of the most robust arguments against their findings was this article by Christopher Foote and Christopher Goetz, also published in the Quarterly Journal of Economics (ungated version here), where they showed that, using arrest rates instead of arrest totals, the effect went away (and they had other criticisms of the original paper as well).

In a new volley in this debate, Donohue and Levitt followed up on their 2001 paper last year with a new NBER Working Paper. In this new paper, they repeat the same analysis, using an additional 17 years of data. They find that their original results still hold:
The estimated coefficient on legalized abortion is actually larger in the latter period than it was in the initial dataset in almost all specifications. We estimate that crime fell roughly 20% between 1997 and 2014 due to legalized abortion. The cumulative impact of legalized abortion on crime is roughly 45%, accounting for a very substantial portion of the roughly 50-55% overall decline from the peak of crime in the early 1990s.
Here's their Figure III, which I think nicely encapsulates the key result:

[Figure III from the Donohue and Levitt working paper]
The blue line tracks the difference in violent crime between 19 high-abortion states and 32 low-abortion states (District of Columbia counts as a state in their analysis, which is why there are 51), while the red line tracks the difference in effective abortion rates between high-abortion states and low-abortion states. As the difference in abortion rates increases, the difference in violent crimes decreases.

This is hardly going to be the last word in this debate though. Given that this analysis simply repeats the original analysis but with more data, expect the same objections to be raised this time around.

[HT: Marginal Revolution, last year]

Wednesday, 13 May 2020

Book review: Economics 2.0

I just finished reading Economics 2.0 by Norbert Häring and Olaf Storbeck. The subtitle is "What the best minds in economics can teach you about business and life", and the book takes a tour of some of the latest in economics research (as of 2009, when it was published). Häring and Storbeck are not researchers, so unlike books such as Freakonomics, this book doesn't contain a summary of the authors' own research. However, it does offer a grand tour of many areas of research in economics. As the authors explain in the preface:
...in recent years, economics and business studies have made huge strides. They have become more empirical, more realistic. It is this type of a contemporary economic science which we refer to as Economics Version 2.0.
That quote probably overstates the case. While the early chapters contain truly novel and pathbreaking work such as neuroeconomics, the book quickly devolves into more mainstream economics fare such as labour economics, trade, and financial economics. While the book is nicely written and engaging, I think it falls short of a vision of 'Economics 2.0'.

The highlight for me was actually the last chapter, where Häring and Storbeck present some of the most spirited debates in economics, such as the debate between Foote and Goetz, and Donohue and Levitt, over whether legalising abortion reduced crime (more on that in a post tomorrow), and the debate between Oberholzer-Gee and Strumpf, and Liebowitz, over whether online piracy reduced music sales (which I have previously blogged about here).

Even though I think the title overstates the content of the book, I really enjoyed it. Häring and Storbeck write in an engaging style that is easy to follow. If you want to see the breadth of economic research, this book will provide you with a good taster.

Tuesday, 12 May 2020

The long-run economic consequences of pandemics

The exemplar for the economic impact of a pandemic is the Black Death, which struck Western Europe in the 14th Century and killed somewhere between one third and two thirds of the population. As I discuss in my ECONS102 class, the Black Death had long-run consequences, and has been implicated in a lot of later social change in Europe (such as the Jacquerie rebellion in France [1358], the Peasants' Revolt in England [1381], and even the Renaissance). However, that's just the tale of a single pandemic - can we say something more general about the long-run consequences of pandemics?

That seems to be a question with particular relevance right now, and is the topic of this new NBER Working Paper (ungated) by Òscar Jordà (Federal Reserve Bank of San Francisco), Sanjay Singh, and Alan Taylor (both University of California - Davis). They looked at the impact of 15 major pandemic episodes over the period from the Black Death (1347-1352 C.E.) to H1N1 (2009 C.E.) on the real natural interest rate (essentially a smoothed, long-run measure of the real interest rate). They hypothesise that pandemics result in:
...transitory downward shocks to the natural rate over such horizons: investment demand is likely to wane, as labor scarcity in the economy suppresses the need for high investment. At the same time, savers may react to the shock with increased saving, either behaviorally as new precautionary motives kick in, or simply to replace lost wealth used up during the peak of the calamity.
And indeed, that is what they find:
Pandemics have effects that last for decades. Following a pandemic, the natural rate of interest declines for decades thereafter, reaching its nadir about 20 years later, with the natural rate about 150 bps [basis points] lower had the pandemic not taken place.
Looking at real wages, they find that:
The response of real wages is almost the mirror image of the response of the natural rate of interest, with its effects being felt over decades... real wages gradually increase until about three decades after the pandemic, where the cumulative deviation in the real wage peaks at about 5%. 
Both results are consistent with earlier work on the economics of the Black Death. Interestingly, the effects for major episodic wars are the opposite - the real natural interest rate increases after periods of war. Their results are also robust to excluding various pandemics, including the Spanish Flu (which, of course, was followed soon after by the Great Depression).

Jordà et al. conclude that:
...if the trends play out similarly in the wake of COVID-19—adjusted to the scale of this pandemic—the global economic trajectory will be very different than was expected only a few weeks ago. If low real interest rates are sustained for decades they will provide welcome fiscal space for governments to mitigate the consequences of the pandemic.
It's difficult to know whether things will play out the same way this time. Real (and nominal) interest rates are already at all-time lows (see, for example, the various charts in this paper) - they don't have far to go. Maybe this time really is different?

Monday, 11 May 2020

Are student evaluations of teaching even measuring teaching quality?

A few months ago, I wrote a post about gender biases in student evaluations of teaching, concluding:
All up, I think it is fairly safe to conclude that SETs are systematically biased, and those biases probably arise from stereotypes...
We need to re-think the measurement of teaching quality. Students are not consumers, and so we can't evaluate teaching the same way we would evaluate a transaction at a fast food restaurant, by surveying the 'customers'.
You might have taken away from that post that, within a gender or ethnic group, the ranking of teachers might still be a suitable measure of teaching quality, even if the relative ranking between those groups led to bias overall. After all, there are decades of research and meta-analyses (such as this heavily cited one by Peter Cohen) suggesting that students learn more from more highly rated teachers.

However, I just finished reading this 2017 article by Bob Uttl (Mount Royal University), Carmela White (University of British Columbia), and Daniela Wong Gonzalez (University of Windsor), published in the journal Studies in Educational Evaluation (appears to be open access, but just in case there is an ungated version here), which calls into question all of the previous meta-analyses (including, and especially, the Cohen meta-analysis). All of the papers in these meta-analyses are based on multi-section designs. Uttl et al. explain that:
An ideal multisection study design includes the following features: a course has many equivalent sections following the same outline and having the same assessments, students are randomly assigned to sections, each section is taught by a different instructor, all instructors are evaluated using SETs at the same time and before a final exam, and student learning is assessed using the same final exam. If students learn more from more highly rated professors, sections' average SET ratings and sections' average final exam scores should be positively correlated.
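In other words, the unit of analysis is the course section. A toy version of the calculation looks like this (made-up numbers, purely to illustrate the design):

```python
import numpy as np

# Average SET rating and average final exam score for eight sections of one course.
set_rating = np.array([3.8, 4.1, 4.5, 3.6, 4.9, 4.2, 3.9, 4.7])
exam_score = np.array([72.0, 70.5, 74.0, 71.0, 73.5, 69.0, 72.5, 75.0])

# The multisection design asks whether sections with better-rated instructors
# also score better on the common final exam.
correlation = np.corrcoef(set_rating, exam_score)[0, 1]
print(f"SET/learning correlation across sections: {correlation:.2f}")
```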
Uttl et al. first replicate the meta-analyses conducted in several past papers, correcting for small study bias - the idea that studies with only a small number of observations (in this case, a small number of sections) are more likely to report large effects, even if the 'true' effect is zero. In the case of re-analysing Cohen's data, they report that:
...Cohen’s (1981) conclusion that SET/learning correlations are substantial and that SET ratings explain 18–25% of variability in learning measures is not supported by our reanalyses of Cohen's own data. The re-analyses indicate that SET ratings explain at best 10% of variance in learning measures. The inflated SET/learning correlations reported by Cohen appear to be an artifact of small study effects, most likely arising from publication bias.
They find similar results when re-analysing other past meta-analyses, and then go on to conduct their own thorough meta-analysis of all studies up to January 2016, covering 51 articles that report the results of 97 studies. They find that:
...when the analyses include both multisection studies with and without prior learning/ability controls, the estimated SET/learning correlations are very weak with SET ratings accounting for up to 1% of variance in learning/achievement measures... when only those multisection studies that controlled for prior learning/achievement are included in the analyses, the SET/learning correlations are not significantly different from zero.
In other words, there is no observed correlation between student learning (as measured by final grades, final exam marks, or similar) and student evaluations of teaching. Teachers who receive better evaluation scores do not produce better student outcomes. Uttl et al. conclude that:
Despite more than 75 years of sustained effort, there is presently no evidence supporting the widespread belief that students learn more from professors who receive higher SET ratings.
That seems to simply strengthen my conclusion from the previous post, that we need to re-think the measurement of teaching quality. With our teaching methods thrown into chaos by the coronavirus lockdown, this might be an opportune time to rethink evaluation and build something more reliable.

Saturday, 9 May 2020

A few papers related to evaluating the optimal coronavirus lockdown

Earlier this week, I posted about the optimal length of the COVID-19 lockdown, and noted that:
...if the marginal benefit of lockdown is highly uncertain, and the marginal cost is also uncertain, then we really have no way of knowing for sure whether the lockdown has been too long, or too short.
Of course, I'm not the only one thinking about the issue of how long an optimal lockdown should last, although as I also noted in that post, the quality of work is highly variable. In the main, it is because the researchers aren't systematically thinking through both the benefits and the costs of lockdown (most are focused on one or the other). However, I have taken note of a number of papers that seem to address the question of the optimal lockdown length using a suitable framework of both costs and benefits.

For instance, this NBER Working Paper by Fernando Alvarez (University of Chicago), David Argente (Pennsylvania State University), and Francesco Lippi (Einaudi Institute for Economics and Finance), investigates:
...the problem of a planner who has access to a single instrument to deal with the epidemic: the lockdown of the citizens... The planner's problem features a tradeoff between the output cost of lockdown, which are increasing in the number of susceptible and infected agents, and the fatality cost of the epidemic.
Notice that this is pretty much the same trade-off I outlined in my post (although I didn't frame it in exactly their terms). Using a combination of a simple epidemiological model and a simple economic model, Alvarez et al. study (emphasis is theirs):
...how the optimal intensity and duration of the lockdown depend on the cost of fatalities, as measured by the value of a statistical life, on the effectiveness of the lockdown (the reduction in the number of contacts once the citizens are asked to stay home), and on the possibility of testing, i.e. to identify those who acquired immunity to the disease. We show that if the fatality rate (probability of dying conditional on being infected) is increasing in the number of infected people, as is likely the case once the hospital capacity is reached, the policy maker motive for lockdown is strengthened.
Their findings are not at all surprising, and depend on their parameter assumptions:
In our baseline parameterization, conditional on a 1% fraction of infected agents at the outbreak, the possibility of testing and no cure for the disease, the optimal policy prescribes a lockdown starting two weeks after the outbreak, covering 60% of the population after 1 month. The lockdown is kept tight for about a full month, and is subsequently gradually withdrawn, covering 20% of the population 3 months after the initial outbreak. The output cost of the lockdown is high, equivalent to losing 8% of one year's GDP (or, equivalently, a permanent reduction of 0.4% of output). The total welfare costs is almost three times bigger due to the cost of deaths... 
One other interesting point is this:
...the elasticity of the fatality rate to the number of infected is a key determinant of the optimal policy - we found that when the fatality rate is flat the optimal policy is to have no lockdown.
In other words, lockdowns make the most sense when the fatality rate is high and increasing in the number of infected - that is, when the fatality rate has a non-linear relationship with the number of infected.
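To get a feel for the trade-off the planner faces, here is a deliberately crude SIR-style sketch with a lockdown parameter that scales down the contact rate (my own toy model with invented parameters, not the Alvarez et al. model):

```python
# A toy SIR model: 'lockdown' is the fraction by which contacts are reduced,
# peak infections proxy the health cost, and foregone output proxies the economic cost.
def sir_with_lockdown(lockdown, days=180, beta=0.2, gamma=0.1, i0=0.01):
    s, i, r = 1.0 - i0, i0, 0.0
    peak_infected = i
    for _ in range(days):
        new_infections = beta * (1.0 - lockdown) * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        peak_infected = max(peak_infected, i)
    output_loss = lockdown  # crude: output lost in proportion to lockdown intensity
    return peak_infected, output_loss

for policy in (0.0, 0.3, 0.6):
    peak, loss = sir_with_lockdown(policy)
    print(f"lockdown {policy:.0%}: peak infected {peak:.1%}, output lost {loss:.0%}")
```

Tighter lockdowns flatten the infection peak but cost more output; the planner's problem is to pick the intensity and duration that best balances the two, and the answer hinges on how the fatality rate responds to the number of infected.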

Alvarez et al. used the value of a statistical life in their calculations, as a measure of the health benefits of a lockdown. Weighing up costs and benefits requires a common metric, and economists often use dollars (which is why the value of a statistical life is important - it puts a dollar value on lives saved or deaths averted, so they can be compared with economic costs).

An interesting alternative is proposed in this working paper by Richard Layard (London School of Economics) and co-authors. They use a measure of 'WELLBYs' - a wellbeing equivalent of the QALY (Quality-Adjusted Life Year), which is often used as an evaluation tool in health policy. Essentially, one year of perfect life satisfaction is worth one 'WELLBY', while one year with life satisfaction of 5/10 is half a 'WELLBY'.
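Under that normalisation, the arithmetic of converting an outcome into WELLBYs is straightforward. A quick invented example (my numbers, not Layard et al.'s):

```python
# Suppose a policy lowers average life satisfaction from 7.5/10 to 7.0/10
# for 5 million people for one year. Each person-year at satisfaction s
# counts as s/10 WELLBYs under the normalisation described above.
population = 5_000_000
wellby_change = population * (7.0 - 7.5) / 10
print(f"{wellby_change:,.0f} WELLBYs")  # -250,000
```

The same metric can, in principle, be applied to lives saved (the WELLBYs those people would have enjoyed) and to income losses (via their effect on life satisfaction), which is what would allow the benefits and costs of a lockdown to be compared on a single scale.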

Having converted the benefits and costs of the lockdown into the WELLBY metric, Layard et al. calculate that the UK lockdown should be lifted around 1 June. Of course, their analysis rests on a large number of pretty heroic assumptions - I wouldn't take their headline results as a strong endorsement of a date for lifting the UK lockdown. And of course, any analysis based on life satisfaction ignores the strong theoretical problems with life satisfaction measurement (as noted in this post). However, the overall framework of weighing the costs and benefits of the lockdown is important, even if you don't believe the measurement using WELLBYs.

Finally, both the Alvarez et al. and Layard et al. papers outline trade-offs between public health benefits and economic costs of the lockdown. However, not going into lockdown can have economic costs as well, as this working paper by Martin Bodenstein (Federal Reserve Board), Giancarlo Corsetti (University of Cambridge), and Luca Guerrieri (Federal Reserve Board) notes. They use a more sophisticated economic model and epidemiological model than the Alvarez et al. paper. In particular, their economic model distinguishes between a core economic sector and another sector, while their epidemiological model distinguishes three groups (one associated with each economic sector, and one non-working group). They are able to show that:
...by affecting workers in this core sector, the high peak of an infection not mitigated by social distancing may cause very large upfront economic costs in terms of output, consumption and investment.
So, the simple trade-off between economic output and lives saved may not be quite so simple. Of course, we are still early on in properly understanding the trade-offs associated with the coronavirus lockdown, and clearly we're not going to be able to evaluate the optimal lockdown length until well after the lockdowns have been lifted. However, the methods and measurement necessary to better understand this problem for future pandemic outbreaks are developing quickly.

[HT: Marginal Revolution for all three papers: here, here, and here]

Friday, 8 May 2020

The mallpocalypse is here - will mall operators respond?

The demise of the mall has been predicted for years in western countries (it's even been referred to as the 'mallpocalypse' - see here and here, for example). And despite that, in New Zealand we have seen lots of mall development (for example, see here). However, retail has been decimated by the COVID-19 lockdown, and as the New Zealand Herald reported this week:
...online retail sales are up 350 per cent under Alert Level 3, but overall sales are down by about 80 per cent on average.
As New Zealand comes out of lockdown, and some semblance of normality starts to resume, you might think that retail will bounce back fairly quickly. However, there is good reason to worry, especially for large malls. And that has to do with network externalities.

As I discuss in my ECONS101 paper, malls are an example of a platform market (or a two-sided market). The mall exists as a space where buyers and sellers can meet to exchange goods. The mall doesn't actually sell anything other than access to this space, for which it charges retailers an annual rent (in theory, malls could also charge consumers an entry fee, but in practice none do).

The reason that malls attract retailers, and the reason that malls attract consumers, is network externalities. A network externality exists when the value of the good (or service) depends on the number of users. To a retailer, the value of locating in a mall depends on the number of consumers who visit the mall. The more visitors, the more customers will see their products or services on display, and the more sales revenue they will generate. To a consumer, the value of going to the mall depends on the number (and quality) of retailers and other service providers located in the mall. The more retailers and service providers, the greater the value of going to the mall. Both sides of the market produce an externality that affects how much the other side of the market values the mall. The consumers visiting the mall (and spending their money) creates value for the retailers, and the retailers create value for the consumers. Everybody wins.

Now, consider our post-COVID-19 recovery. If consumers are reluctant to interact with other people for fear of infection risk, the mall is one of the last places they are going to want to go. If you're anxious about visiting the supermarket, then the mall is really going to freak you out. Even those who are not overly cautious might opt for online purchases, which have been forced on us in recent times, but which many people have become habituated to (if they weren't already).

Fewer consumers visiting malls means that the value to retailers of locating in a mall falls. If the malls don't reduce rents in response, then some retailers may opt to close down (if they haven't already). That simply adds to the number of closed retailers, on top of those that did not survive the lockdown.

Fewer retailers in the malls reduces the value of going to the malls to consumers, so on top of any anxiety, fewer consumers will see the value in going to the mall. Can you see that all this is creating a death spiral for malls?
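A crude way to see the death spiral is a simple feedback loop, where each side's participation depends on the other side's participation in the previous period (invented parameters, purely illustrative):

```python
# If the product of the cross-side multipliers is below one, both sides of the
# platform shrink a little more each period - the 'death spiral'.
def mall_dynamics(retailers=100.0, periods=6, visitors_per_retailer=12.0, retailers_per_1000_visitors=80.0):
    for t in range(1, periods + 1):
        visitors = visitors_per_retailer * retailers
        retailers = retailers_per_1000_visitors * visitors / 1000
        print(f"period {t}: visitors ≈ {visitors:,.0f}, retailers ≈ {retailers:.0f}")

mall_dynamics()  # here 12 × 0.08 = 0.96 < 1, so the mall slowly empties out
```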

The only way this can be avoided (or at least delayed) is by keeping the mall retailers open long enough for mall visitor numbers to recover, so that the network externalities start moving in the right direction again. Mall owners have a vested interest in this outcome, of course. Will they realise that one of the few things they can do is to lower rents for their retailers, to keep them operating through the lean times? Watch this space.

Thursday, 7 May 2020

The optimal toilet seat rule

Back in 2016, I wrote a post about the 'toilet seat game' - was it better to leave the toilet seat up, or down? Economists research the really important questions, you see.

A while back, someone (I forget who) drew my attention to this 2011 article by Jay Choi (Michigan State University), published in the journal Economic Inquiry (ungated version available here). Choi develops a theoretical model of toilet seat etiquette, and then solves the model for the optimal rule (that is, the most efficient rule for the toilet seat). He finds that:
...the down rule is inefficient unless there is large asymmetry in the inconvenience costs of shifting the position of the toilet seat across genders. I show that the “selfish” or the “status quo” rule that leaves the toilet seat in the position used dominates the down rule in a wide range of parameter spaces including the case where the inconvenience costs are the same. The intuition for this result is easy to understand. Imagine a situation in which the aggregate frequency of toilet usage is the same across genders, that is, the probability that any visitor will be male is 1/2. With the down rule, each male visit is associated with lifting the toilet seat up before use and lowering it down after use, with the inconvenience costs being incurred twice. With the selfish rule, in contrast, the inconvenience costs are incurred once and only when the previous visitor is a member of different gender. The worst case under the selfish rule would occur when the sex of the toilet visitor strictly alternates in each usage. Even in this case, the total inconvenience costs would be the same as those under the down rule if the costs are symmetric. If there is any possibility that consecutive users are from the same gender, the selfish rule strictly dominates the down rule because it keeps the option value of not incurring any inconvenience costs in such an event.
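To check that intuition, here is a quick Monte Carlo sketch (my own simulation of the simple symmetric case, not Choi's model):

```python
import random

# Compare expected inconvenience costs per visit under the 'down' rule (seat always
# returned to down) and the 'selfish' rule (seat left as last used), assuming equal
# usage by both sexes, symmetric costs, and that men always need the seat up.
def simulate(n_visits=100_000, p_male=0.5, cost=1.0, seed=1):
    rng = random.Random(seed)
    down_rule_cost = 0.0
    selfish_rule_cost = 0.0
    seat_up = False  # seat position under the selfish rule
    for _ in range(n_visits):
        male = rng.random() < p_male
        if male:
            down_rule_cost += 2 * cost   # lift before use, lower after use
        needs_up = male
        if needs_up != seat_up:          # selfish rule: move the seat only if needed
            selfish_rule_cost += cost
        seat_up = needs_up
    return down_rule_cost / n_visits, selfish_rule_cost / n_visits

print(simulate())  # roughly (1.0, 0.5): the selfish rule halves the expected cost here
```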
He also notes in the conclusion that:
...the selfish rule is incentive-compatible in that it can be self-enforcing without any outside sanctions for violating the rule.
So, there you go. Follow the 'selfish rule', and maximise efficiency by leaving the toilet seat as it is when you are done. Unless you're William, of course. Or if the costs are asymmetric (Choi estimates that if the inconvenience costs for females are three times or more higher than the inconvenience costs for males, then the 'down rule' becomes the efficient alternative). Or if men sometimes have to use the toilet with the seat down. Or if some of the costs of the selfish rule blow back onto you (I don't mean it in that way - ewww, gross!), which is a situation that Choi didn't consider.

That's a lot of exceptions. I guess more work is required on this important research topic.

Tuesday, 5 May 2020

Considering a universal basic income

With the economic carnage caused by the lockdown response to the coronavirus crisis, and most governments responding with a much more generous social safety net, many people are wondering about whether it is time to seriously consider a universal basic income (for example, see here and here). However, before we get ahead of ourselves, we need to consider what adopting a universal basic income (UBI) would mean, and what we already know from the limited experiments that have been undertaken so far in developed countries.

In a 2019 article (open access) published in the journal Annual Review of Economics, Hilary Hoynes and Jesse Rothstein (both University of California, Berkeley) do an excellent job of reviewing the academic literature on UBI. They start with the basics - what is a universal basic income? It seems like an obvious question, but it turns out that when people talk about UBI, they often mean different things. Hoynes and Rothstein note three features of a canonical UBI:
1. It provides a sufficiently generous cash benefit to live on, without other earnings.
2. It does not phase out or phases out only slowly as earnings rise.
3. It is available to a large proportion of the population, rather than being targeted to a particular subset (e.g., single mothers).
It turns out that a lot of UBI proposals depart from this ideal, especially in terms of the first feature (by keeping the payments small, in order to manage the total cost), or the third feature (by limiting who is eligible).

Hoynes and Rothstein then go on to outline the arguments in favour of a UBI, of which there are three main ones:
One motivation commonly offered for adopting a UBI is that the labor market is not delivering, or is not expected to deliver, adequate growth of wages and earnings for the lower portion of the income distribution. This is sometimes presented as the “robots are coming” argument...
A distinct argument for a UBI is that it could replace the current patchwork of transfer programs in the United States, thereby avoiding the high cumulative marginal tax rates implicit in many existing poverty programs, such as cash welfare... According to some, a UBI would radically simplify the transfer system, reducing perverse incentives while still ensuring a minimum level of income for those who are truly unable to work...
...a UBI represents a more comprehensive and politically defensible safety net [than the current patchwork system], one that reaches all of the needy and not just a demographically targeted subset... They argue that a more universalist approach would also reduce the stigma of program participation, simplify cumbersome application processes, and possibly move the conversation away from assessments of the deservingness of the poor...
In the current crisis, the first argument for a UBI becomes overwhelming, but not for the reasons originally proposed. If the labour market is unable to deliver wages at all due to a lockdown, then that makes the case for a UBI much stronger.

Hoynes and Rothstein then outline how a UBI compares with existing social security programmes in the U.S., and only some of that section applies to countries like New Zealand, where the existing social safety net is more comprehensive and generous. However, the takeaway message is important, since it would apply broadly to most social security systems:
In sum, a UBI would have quite substantial distributional and cost effects. A smaller proportion of UBI dollars would go to the bottom of the income distribution compared to the current system, though a generous UBI, with the needed revenue funded by a progressive tax, could increase the absolute size of transfers to the bottom and thus would represent a (potentially very large) downward redistribution of income. Similarly, a canonical UBI would give a larger share of transfers to the nonelderly and nondisabled than the existing programs, so any proposal to finance it through cuts in health and retirement programs — the largest sources of funds in the existing US transfer system — would need to address the large declines in living standards that the elderly and disabled would experience.
The article reviews the literature on the potential labour market effects of a UBI - a key consideration for some, who believe that the work disincentives would be large. They also review the existing literature on the effects of UBI pilot programmes (such as those I have previously discussed here and here), but in general that literature might be summarised as unhelpful, because:
UBIs meeting the definition we laid out above — large enough to live on, and without phaseout or other eligibility restrictions — have never been implemented in a rich country on a large scale or even in a pilot experiment. What we know about the likely effects of a UBI comes from analyses of policies that are similar in some ways to UBIs, though different in others, and from the broader labor supply literature.
Finally, it is impossible to adequately consider a UBI without considering its cost. As Hoynes and Rothstein note in their conclusion:
The source of the new funds is a first-order issue and will have substantial impacts on the distributional effects of the policy and its ability to target those most in need of assistance. In particular, replacing existing antipoverty programs with a UBI would be highly regressive, unless substantial additional funds were put in.
The idea that adopting a UBI could represent a regressive change to the social security system would come as a surprise to many people, I expect. Overall, would a UBI be a saviour for the economy? Unless we are somehow able to solve the issue of how to fund it, we may never find out.
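
To get a feel for why the funding question is first-order, here's a back-of-the-envelope sketch (in Python). The population, payment, and existing spending figures are rough, illustrative assumptions of my own, not figures from Hoynes and Rothstein or official statistics:

```python
# Back-of-the-envelope gross cost of a canonical UBI for a New Zealand-sized country.
# All of the figures below are rough, illustrative assumptions, not official statistics.
adult_population = 3_800_000   # approximate number of adults (assumption)
annual_payment = 20_000        # NZ$ per adult per year, roughly enough to live on (assumption)
replaceable_transfers = 30e9   # NZ$ per year of existing transfers that might be replaced (assumption)

gross_cost = adult_population * annual_payment
net_new_funding = gross_cost - replaceable_transfers

print(f"Gross annual cost of the UBI:  NZ${gross_cost / 1e9:.0f} billion")
print(f"New funding required per year: NZ${net_new_funding / 1e9:.0f} billion")
```

However you tweak those assumptions, the new funding required is enormous, which is why the source of the funds ends up driving the distributional effects of the whole policy.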

[HT: Marginal Revolution, last year]

Monday, 4 May 2020

How long should the coronavirus lockdown last?

Last week, New Zealand moved from pandemic Alert Level 4 to Alert Level 3. On 11 May, we'll find out how long we have to stay at Level 3 before moving to Level 2, which presumably lifts most of the remaining lockdown restrictions (while maintaining physical distancing and the ban on large gatherings). With only a handful of new cases each day (and zero yesterday), it appears that the worst has passed, at least in terms of the public health costs of the pandemic. However, people are now starting to tally up the economic costs of the lockdown and wondering if it was worth it (see here and here for two examples).

Following on from yesterday's post on the economics of COVID-19 policy, let's consider what the optimal period of lockdown would be. In doing so, I'll show why those who wanted a longer lockdown, and those who wanted the lockdown lifted earlier, are both unable to provide compelling evidence to support their arguments. Let's put aside the question of whether a lockdown should be used at all, and start from a position of a lockdown having been imposed - how long should the lockdown go on for?

In my ECONS102 class next semester, in the very first week we'll talk about a framework for determining the optimal quantity of something, using marginal analysis. Marginal analysis involves weighing up the marginal benefits and marginal costs of each additional unit - in this case, each additional day of lockdown.

The general model is outlined in the diagram below. Marginal benefit (MB) is the additional benefit of one more day of lockdown. The benefits of lockdown include primarily the reduction in public health costs, including morbidity and mortality as a result of COVID-19 infection. In the diagram, the marginal benefit of lockdown is downward sloping - the first day of lockdown provides the greatest benefit in terms of reduced infections (and associated costs). Extra days provide more benefits, but compared with the first day, the marginal benefit is less - that's because, once 'the curve starts to flatten', each day in lockdown prevents fewer additional infections than the previous day.

Marginal cost (MC) is the additional cost of one more day of lockdown. The costs are primarily economic - reduced output for the economy, associated with lower incomes (in total and distributed unevenly in the population). The marginal cost of lockdown is upward sloping - most businesses can survive a few days of lockdown, but the longer the lockdown continues, the more businesses close up permanently, and this process likely accelerates over time.

The 'optimal length' of lockdown occurs where MB meets MC, at Q* days. If the lockdown was longer than Q* days (e.g. at Q2), then the extra benefit (MB) of those additional lockdown days is less than the extra cost (MC), making us worse off overall. If the lockdown was shorter than Q* days (e.g. at Q1), then the extra benefit (MB) of an additional lockdown day is more than the extra cost (MC), so keeping the lockdown going for one more day would make us better off overall.


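To make the logic concrete, here's a minimal sketch (in Python) with purely hypothetical linear MB and MC curves. The intercepts and slopes are made up only to illustrate the mechanics of finding Q*, and are not estimates of the actual benefits or costs of New Zealand's lockdown:

```python
import numpy as np

# Purely hypothetical marginal benefit (MB) and marginal cost (MC) curves for days of
# lockdown. The intercepts and slopes are made up to illustrate the mechanics only -
# they are not estimates of the actual benefits or costs of New Zealand's lockdown.
def marginal_benefit(days):
    return 100 - 1.5 * days   # downward sloping: each extra day averts fewer infections

def marginal_cost(days):
    return 10 + 1.0 * days    # upward sloping: each extra day does more economic damage

days = np.arange(0, 61)
net_marginal = marginal_benefit(days) - marginal_cost(days)

# The optimal length Q* is the last day on which MB still exceeds (or equals) MC
q_star = int(days[net_marginal >= 0].max())
print(f"Optimal lockdown length under these assumptions: about {q_star} days")

# At Q1 (shorter than Q*) an extra day adds more benefit than cost; at Q2 (longer) the reverse
for q in (q_star - 10, q_star + 10):
    gain = marginal_benefit(q) - marginal_cost(q)
    print(f"Day {q}: extra benefit minus extra cost of one more day = {gain:+.1f}")
```

With these made-up curves the optimum lands at 36 days, but the whole point of what follows is that, in practice, we have nowhere near enough certainty about the real curves to do this calculation.
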
So, has New Zealand got it right? Are we getting out of lockdown too soon, or too late? The problem is that it is virtually impossible to tell (which is why I said that both sides lack compelling evidence). If you believe that the lockdown is too short, you must believe that we are at Q1. An extra day (or more) of lockdown would save more public health costs than the economic costs it would inflict. If you believe that the lockdown is too long, you must believe that we are at Q2. An extra day (or more) of lockdown would cost more economically than the public health costs it would save.

To come to either conclusion, you would have to be able to assess the economic costs and the public health costs of both alternatives - what would happen with, and without, the lockdown continuing. As I briefly mentioned yesterday, and as will be apparent to anyone who has been following the emerging and ever-updating research on the modelling of the pandemic, the current models of the pandemic largely fail to agree on anything. To get an estimate of the marginal benefit of an additional day of lockdown, you need to know not just how many infections would be averted tomorrow, but how many would be averted over the remainder of the pandemic as a whole (since an infection averted tomorrow means one fewer person who can infect others in the future, and so on). The uncertainty in the models renders that a fairly fruitless exercise. And that's before we even consider the uncertainty in the marginal costs (and the potential for interaction between the two, because an infection averted tomorrow could actually improve the economy overall - consider the infection of an essential worker, as one example).

If the marginal benefit of lockdown is highly uncertain, and the marginal cost is also uncertain, then we really have no way of knowing for sure whether the lockdown has been too long, or too short. So, anyone claiming they know the answer is really talking out of a lower orifice. While there is definitely an optimal lockdown length, and we can define what it would be in theory, in practice this isn't a question that can be answered with any certainty - we don't have the data, and we don't have the models, to do so.

And in situations of high uncertainty like this, it might be tempting to invoke the precautionary principle - to be cautious and avoid making a decision with the potential for long-term negative consequences (essentially, this was one approach that Mulligan et al. noted in the article I linked to yesterday - 'buying time'). However, it isn't clear whether it would be better to be precautionary in relation to public health costs (and err on the side of having a lockdown that lasts too long), or to be precautionary in relation to the economic costs (and err on the side of having a lockdown that is too short).

I definitely don't envy the decision-makers on this one. This is genuinely one of those times where, no matter what decision is made, there will be a large number of vocal critics, who can't prove their argument is correct, but equally can't be proven wrong either.

Read more:


Sunday, 3 May 2020

The economics of COVID-19 policy

I've spent dozens of hours reading up on various analyses of the COVID-19 pandemic, its economic impact, public health responses, and policy options. Most of what I've read is pretty interesting. However, a lot of it suffers from being very lightweight in its consideration of the trade-offs (economic cost vs. human health cost). I'll talk a little bit more about that in my next post. In the meantime, this article by Casey Mulligan, Kevin Murphy, and Robert Topel (all from the University of Chicago) is by far the best that I have read so far.

Mulligan et al. do a great job of laying out the policy trade-offs in a clear way, outlining what we know and what we don't know, and working through the implications of the main policy alternatives. The article is quite detailed and difficult to excerpt from, so here is their own summary from near the end of the article (emphasis is mine):
Our analysis indicates that the features of a cost-effective strategy will depend on both current circumstances and how we expect the pandemic to play out. Some elements are common, such as the desire to use STTQ [Screen, Test, Trace and Quarantine] rather than LSSD [Large-Scale Social Distancing] when infection rates are low, and shifting the incidence of disease away from the most vulnerable. These apply whether the objective is to buy time, manage the progression of the disease, or limit the long-run impact of a pandemic that will run its course. The key difference in terms of the optimal strategy is whether our focus is on keeping the disease contained. If the objective is to buy time, then our analysis favors early and aggressive intervention. This minimizes the overall impact and allows for strong but scalable measures via STTQ. In contrast, limiting the cumulative cost of a pandemic that will ultimately run its course argues for aggressive policies later, when they will have the biggest impact on the peak load problem for the health-care system and when they will have the greatest impact on the ultimate number infected. Given the desire to protect the most vulnerable, this objective can even argue for allowing faster transmission to those that are less vulnerable, which further limits the burden on the vulnerable and also reduces the burden on the health-care system... Finally, the objective of long-run containment calls for an effective STTQ strategy applied early to keep the overall infection level low. Starting early lowers overall costs and lowers cumulative infections under the long-term containment strategy.
The bolded bits demonstrate that the optimal approach depends on the policy-makers' objective: to buy time, or to limit the cumulative cost of the pandemic. It isn't at all clear which approach is better, and even after the pandemic is over we probably still won't know. One of the important parts of this article is the acknowledgement that buying time allows us to wait until we have better information on which to base decisions.

I encourage you to read the whole article, which as I said is by far the best that I have read so far. Having said that, I wouldn't take their modelling too seriously - I really don't think that any model yet does a good enough job of incorporating sufficient heterogeneity in the population in terms of infection risk and behavioural response. However, the way that Mulligan et al. frame the issues and work through them is important, and to my mind it provides a model for how we should be thinking about these issues.

[HT: Marginal Revolution]

Saturday, 2 May 2020

It's going to be a good time to buy a used car, if you're able to

This week in my (online) ECONS101 class, we covered supply and demand. So, it was interesting to see this opinion piece by David Linklater in the New Zealand Herald on Thursday:
What will happen to the New Zealand car industry once the country gets back to some kind of normality (also known as level 2, level 1… no level)? More to the point, will it suddenly be a buyers' market big-time for those still in a position to purchase?
The truth is that nobody in the Kiwi industry knows at this stage.
That last sentence is only correct in the sense that no one knows exactly what will happen. However, the article itself pretty much tells us what will happen. Let's just consider the market for used cars. The article says:
A month of virtually no sales has corrected any supply issues for new and used; that and the dire situation rental car firms are in, with the potential need to offload thousands of unwanted near-new cars.
So in the very short term, it's fair to say this is very much a buyers' market. Dealerships have stock and they're very motivated to move it as quickly as possible. There will certainly be fewer buyers thanks to the economic impact of Covid-19, but those who are left will be tempted with a lot of choice and great incentives. The ships didn't stop coming just because New Zealand went into lockdown; the cars still came.
So, we can expect a decrease in the demand for used cars: with unemployment looking likely to reach record levels, consumers aren't keen to take on debt or blow their savings on updating their vehicle (even if some may prefer to drive rather than take public transport). On the other hand, rental car companies suddenly have lots of cars that no one is renting, so they'll likely be looking to offload some of that stock, meaning that the supply of used cars is likely to increase.

The combined effects of those two market changes (a decrease in demand and an increase in supply) are illustrated in the diagram below. The market starts at equilibrium, where demand is D0 and supply is S0. The equilibrium price is P0, and the equilibrium quantity of used cars traded is Q0. Demand decreases from D0 to D1, and supply increases from S0 to S1. The price of used cars decreases to P1. The change in the quantity of used cars traded is ambiguous, though - it could increase, decrease, or (unlikely, although this is what the diagram shows) stay the same. What happens to quantity depends on the relative size of the shifts in demand and supply.


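For anyone who wants to see the mechanics behind the diagram, here's a minimal sketch (in Python) with hypothetical linear demand and supply curves; the intercepts, slopes, and shift sizes are made-up numbers for illustration only:

```python
# Hypothetical linear demand and supply curves for used cars. The intercepts and slopes
# are made up purely for illustration; the units are arbitrary.
def equilibrium(demand_intercept, supply_intercept, demand_slope=-2.0, supply_slope=1.5):
    """Solve for the price where quantity demanded equals quantity supplied."""
    price = (demand_intercept - supply_intercept) / (supply_slope - demand_slope)
    quantity = supply_intercept + supply_slope * price
    return price, quantity

# Initial equilibrium (D0 and S0)
p0, q0 = equilibrium(demand_intercept=100, supply_intercept=30)
print(f"Before: price {p0:.1f}, quantity {q0:.1f}")

# Demand decreases and supply increases, but the relative sizes of the shifts differ
scenarios = {
    "bigger supply shift": (95, 60),   # demand falls a little, supply rises a lot
    "bigger demand shift": (70, 32),   # demand falls a lot, supply rises a little
}
for name, (d_intercept, s_intercept) in scenarios.items():
    p1, q1 = equilibrium(d_intercept, s_intercept)
    print(f"After ({name}): price {p1:.1f}, quantity {q1:.1f}")
```

In both scenarios the price falls, but the quantity traded rises in one and falls in the other - exactly the ambiguity described above.
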
So, we can be fairly sure that the price of used cars is about to fall in New Zealand. If you're secure enough, it will soon be a good time to upgrade your car. As Linklater said, "this is very much a buyers' market".