Sunday 28 February 2021

Monkey shortages and the strategic monkey reserve

The New York Times reported earlier this week:

Mark Lewis was desperate to find monkeys. Millions of human lives, all over the world, were at stake...

The world needs monkeys, whose DNA closely resembles that of humans, to develop Covid-19 vaccines. But a global shortage, resulting from the unexpected demand caused by the pandemic, has been exacerbated by a recent ban on the sale of wildlife from China, the leading supplier of the lab animals.

The latest shortage has revived talk about creating a strategic monkey reserve in the United States, an emergency stockpile similar to those maintained by the government for oil and grain...

In the meantime, the price for a cynomolgus monkey has more than doubled from a year ago to well over $10,000, Mr. Lewis said. Scientists researching cures for other diseases, including Alzheimer’s and AIDS, say their work has been delayed as priority for the animals goes to coronavirus researchers.

The shortage has led a growing number of American scientists to call on the government to ensure a constant supply of the animals.

Let's put aside the idea of a 'strategic monkey reserve' (we'll come back to it later). A shortage arises when the quantity of some good or service demanded exceeds the quantity supplied at the current market price. This is illustrated in the diagram below. At the market price of P0, the quantity demanded is QD and the quantity supplied is QS. Since QD is greater than QS, there is not enough supply to satisfy the demand - there is a shortage.


Ordinarily, a shortage is a temporary situation. That's because the price will adjust to ensure the market ends up at an equilibrium. In this case, the price of monkeys for research should increase. As the price increases, the quantity of monkeys demanded decreases, and the quantity of monkeys supplied increases, [*] and eventually we would end up at the point where the quantity demanded is exactly equal to the quantity supplied, which is the quantity Q1 in the diagram above, where the price has increased to P1. Notice that this is consistent with what is noted in the New York Times article: "the price for a cynomolgus monkey has more than doubled".

How does the price rise? In my ECONS101 and ECONS102 classes, I describe it in the following way. There is a shortage, so some willing buyers (of monkeys) are missing out, and some of them will be willing to pay more than the current market price. Those buyers will find a seller, and ask for a guaranteed supply of monkeys, and in exchange they offer a slightly higher price. In other words, buyers bid the price (of monkeys) up. An alternative mechanism is that buyers who miss out could find a successful buyer, and offer to buy the monkeys from them for a higher price. Again, the result is that the market price of monkeys is bid upwards. The price of monkeys increases.
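That bidding process can be sketched as a simple price-adjustment loop: while a shortage persists, disappointed buyers offer a little more. This is a minimal illustration with made-up linear demand and supply curves (none of the numbers come from the article):

```python
# Hypothetical linear curves; the equilibrium here is p = 22 (where 110 - 2p = 3p)
def demand(p): return max(0.0, 110 - 2 * p)
def supply(p): return max(0.0, 3 * p)

p = 12.0  # start below equilibrium, so there is a shortage
for _ in range(200):
    shortage = demand(p) - supply(p)
    if abs(shortage) < 1e-6:
        break
    p += 0.01 * shortage  # buyers who miss out bid the price up a little

print(round(p, 2))  # → 22.0, the equilibrium price
```

The price ratchets upward (and would ratchet downward from a surplus), converging on the point where quantity demanded equals quantity supplied.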

Anyway, let's take a step back and see how we ended up in this situation of a shortage to begin with. This is illustrated in the diagram below. The market was initially in equilibrium, where the demand curve D0 meets the supply curve S0, with price P0 and the quantity of monkeys traded was Q0. Then, the demand for monkeys increased to D1 due to vaccine firms wanting monkeys for testing. The price initially stays at P0, where the quantity of monkeys supplied remains at Q0, but at that price the quantity of monkeys demanded increases to QD. There is a shortage (between Q0 monkeys supplied and QD monkeys demanded). Notice that this diagram is actually the same as the first diagram - the only differences are that we've added the initial demand curve D0, and labelled the quantity of monkeys supplied Q0 rather than QS.
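To make the demand-shift story concrete, here is a small sketch with hypothetical linear curves (the numbers are purely illustrative, not drawn from the article):

```python
# Hypothetical linear demand and supply for research monkeys
def demand(p, shift=0.0):
    """Quantity demanded at price p; 'shift' moves the curve right (D0 -> D1)."""
    return max(0.0, 100 - 2 * p + shift)

def supply(p):
    """Quantity supplied at price p (upward sloping)."""
    return max(0.0, 3 * p)

# Initial equilibrium (D0 meets S0): 100 - 2p = 3p  =>  P0 = 20, Q0 = 60
p0 = 20
assert demand(p0) == supply(p0) == 60

# Demand shifts out to D1 (vaccine testing): at the old price P0, a shortage
shift = 50
qd = demand(p0, shift)   # 110 demanded
qs = supply(p0)          # still only 60 supplied
print(f"At P0={p0}: demanded {qd}, supplied {qs}, shortage {qd - qs}")

# New equilibrium: 150 - 2p = 3p  =>  P1 = 30, Q1 = 90
p1 = 30
print(f"New equilibrium: P1={p1}, Q1={demand(p1, shift)}")
```

At the old price the quantity demanded exceeds the quantity supplied; only once the price has been bid up to P1 does the shortage disappear.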


How would a strategic monkey reserve change things? First, it is worth knowing a little bit about the U.S. strategic petroleum reserve. The U.S. holds millions of barrels of petroleum in reserve, equating to more than a month's total domestic demand. If there is a 'severe supply interruption', the government can temporarily increase the supply of petroleum by making some of the reserve available for sale. If the U.S. Government did something similar for macaques, and decided that the current shortage triggered the release of monkeys from the reserve, the result would be an immediate increase in the supply of monkeys. This is shown in the diagram below. Let's assume that the government releases enough monkeys to keep the price stable at its initial levels. The supply would increase from S0 to S1, and the price would remain at P0, with the new equilibrium quantity of monkeys (determined by the intersection of D1 and S1) being Q2. There is no shortage, because at the price P0, the quantity of monkeys demanded (Q2) is equal to the quantity of monkeys supplied (Q2), including the supply from the strategic monkey reserve.
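Using the same kind of illustrative linear curves as before (again, the numbers are made up), we can work out how large a release from the reserve would need to be to hold the price at P0:

```python
# D1, the demand curve after the increase from vaccine testing
def demand(p): return 150 - 2 * p

# S0, shifted right by whatever the reserve releases
def supply(p, reserve=0): return 3 * p + reserve

p0 = 20
release = demand(p0) - supply(p0)  # the shortage at P0 is exactly what must be released
print(release)  # → 50

# With 50 monkeys released, quantity demanded equals quantity supplied at P0
assert demand(p0) == supply(p0, reserve=release) == 110
```

The release needed to stabilise the price is exactly the size of the shortage at P0; any smaller release would still leave some excess demand, and the price would rise.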


Finally, it is worth noting that a supply increase is not a usual response of a market to a shortage. It only occurs in this case if the government has a strategic monkey reserve, and uses it to deal with the shortage. That is different from shortages of toys at Christmas, because the government doesn't maintain a 'strategic Christmas toy reserve'. In that case, the price of Christmas toys would probably increase (as described above).

Is a strategic monkey reserve necessary? The answer to that question should depend on a careful assessment of the costs and benefits of having the reserve, and the risks of going without it. However, given that the initial stages of the coronavirus pandemic revealed that there were not even enough reserves of personal protective equipment (and shortages were still ongoing nine months later), I suspect there are more important reserves that should be stood up first.

*****

[*] However, in the very short run, the supply of monkeys for research could be perfectly inelastic, meaning that it is totally unresponsive to price changes, and the supply curve is vertical rather than moderately upward sloping as shown in the diagram. That's because monkey breeders can't instantly increase 'production' of monkeys in response to the higher price. However, whether the supply curve is upward sloping or vertical isn't important for the rest of the explanation.

Thursday 25 February 2021

Teaching the economics of online dating

Some time ago, I reviewed Paul Oyer's book Everything I Ever Needed to Know about Economics I Learned from Online Dating, noting that it was "a delightful treatment of how economics can apply to a wide range of activities, not just online dating". Clearly, I'm not the only person who enjoyed the book. Andrew Monaco (University of Puget Sound) enjoyed it so much that he created an entire upper-level undergraduate course based on the book, as he detailed in this 2018 article published in the Journal of Economic Education (ungated earlier version here).

The course is really just an elaborate bait-and-switch, where students are taught economic theory and modelling. As Monaco explains:

Online dating serves as an alluring point of entry, but ultimately, the course is an advanced undergraduate microeconomic theory course with a focus on economic modeling...

One colleague has described it as “a Trojan Horse into the world of economic model-making,” and another has said that although it “may simply sound like a sexy course …it is theoretically rich and complex, wherein on-line dating is used as a vehicle to explain and develop other models of economic phenomena.”

Monaco doesn't present any thorough analysis of the success of the course, but he does offer this:

Student responses on course evaluations can also provide qualitative insight into the effectiveness of this course at achieving its objectives. On the surface, students overwhelmingly find the material engaging, as evidenced in a typical comment: “Online dating is a fun and intriguing way of presenting economic questions and analysis.”

Finding interesting and appealing ways of connecting students to the course material should be the goal of all good lecturers. I currently use a range of materials from many different sources (including Oyer's book and others), but building a course based around a single common theme (such as online dating) is an appealing alternative. It would be similar to Marina Adshade's course on the economics of sex and love, which formed the basis for her book Dollars and Sex (which I reviewed here). Or basically any course on sports economics.

While I won't be doing anything like that any time soon, Monaco's approach is certainly worthwhile sharing.

Tuesday 23 February 2021

Tyler Cowen on the four most important things to know about macroeconomics

ECONS101 teaching starts next week. While we won't get to macroeconomics until the last four weeks (and even then I'm not teaching that part of the paper), I thought this Bloomberg article by Tyler Cowen (of the Marginal Revolution blog) was interesting, because in it he highlights the four most important things to know about macroeconomics:

The first and most important thing to know about macroeconomics is that a strong negative shock to demand — a sudden decline, in other words — usually leads to a loss of output and employment. Nominal wages are sticky, for a complex mix of sociological reasons, and so employers do not always respond to lower demand with lower wages for workers. Instead they lay some people off, and that can lead to a recession.

That may sound pretty simple. But it is one of the most important discoveries in history. It was true in the Great Depression, in the disinflation of the 1970s and ‘80s, and in the financial crisis following 2008.

The second thing to know is that well-functioning central banks can offset such demand shocks to a considerable degree — or even prevent them from arising in the first place. The bank can engage in complex financial transactions or simply print more currency to stabilize nominal demand and restore some measure of order.

The third thing to know is that if central banks go crazy increasing the money supply, the result will be high price inflation. There is one exception to this, which was evident in 2008 and 2009, when the Fed paid interest on bank reserves: If central banks simultaneously act to decrease the velocity of money — that is, if they take measures to reduce borrowing and lending — then price inflation will be limited accordingly.

A fourth thing to know is that non-monetary shocks, if they are large enough, can also create recessions or depressions. Consider the oil price shock of 1973, the current pandemic, or bad harvests in earlier agrarian societies. Central banks can partially stabilize such shocks, but they cannot erase them.

I believe an overwhelming majority of macroeconomists would largely agree with these propositions, even if they might place the emphasis differently.

We only have four weeks of macroeconomics in the ECONS101 paper at Waikato. Les Oxley teaches the macroeconomics section, so how does he do relative to Cowen's four important things? All four are in there, plus more. The first and fourth points are covered in Topic 9 (Economic fluctuations and unemployment), the second point in Topic 12 (Monetary policy), and the third point in Topic 10 (Money and inflation). That we cover all of Cowen's four important things is a useful affirmation for our ECONS101 paper, even though we can only fit four weeks of macroeconomics into it.

For completeness, Topic 11 of ECONS101 covers government fiscal policy (taxing and government spending). It may be somewhat surprising that Cowen has no 'important thing' associated with fiscal policy, which is a staple of every introductory macroeconomics textbook. Although maybe it isn't that surprising, since having read the MR blog for many years, I get the distinct feeling that Cowen is a sceptic about the fiscal multiplier (the idea that an additional dollar of government spending leads to more than a dollar of total output for the economy). I haven't formed a particular view myself. The idea of the fiscal multiplier is sound in theory, while the empirical evidence as far as I am aware is that multipliers are generally quite small. However, it still seems like a big idea in macroeconomics, and I'm glad that we include it in ECONS101.
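The fiscal multiplier itself is easy to verify numerically: each dollar of government spending is re-spent round after round at the marginal propensity to consume (MPC), and the rounds sum to a geometric series. A quick sketch, with a purely illustrative MPC of 0.6:

```python
# Each dollar of government spending generates follow-on spending rounds:
# 1 + mpc + mpc^2 + ... which sums to 1 / (1 - mpc)
mpc = 0.6
rounds = sum(mpc ** n for n in range(1000))  # truncated geometric series
closed_form = 1 / (1 - mpc)                  # the limit of that series

print(round(rounds, 6), round(closed_form, 6))  # → 2.5 2.5
```

With an MPC of 0.6, a dollar of spending yields $2.50 of total output in theory; as noted above, the empirical estimates of multipliers tend to be much smaller than simple calculations like this suggest.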

[HT: Marginal Revolution]

Monday 22 February 2021

Bonus points for tries in rugby create incentives in both directions

In rugby competitions, it is now standard to award bonus competition points to teams that score a certain number of tries in a game. The last holdout competition was the Six Nations, which introduced bonus points only in 2017, over 30 years after they were first introduced in New Zealand's National Provincial Championship (in 1986). In 2016, Super Rugby modified the bonus point, moving from teams needing to score four tries to teams needing to score three more tries than their opposition. In all cases, the purpose of these bonus points is to increase try scoring, by creating incentives for teams to score more tries.

As I noted in yesterday's post on child allowances:

...'people respond to incentives'. When economists say that, they mean that when the costs of doing something increase, we tend to do less of it. And if the costs of doing that thing decrease, we tend to do more of it. The reverse is true of benefits - when the benefits of doing something increase, we tend to do more of it, and when the benefits decrease, we tend to do less of it.

In this case, bonus points for tries increase the benefits of scoring tries, so teams should try harder (pun intended). But, do these incentives work? In a 2019 article in The Conversation, Liam Lenten (La Trobe University) says yes, and no:

In research to be published in the Scottish Journal of Political Economy, we report that the introduction of the try bonus was effective in increasing the likelihood that teams would score four tries in a match (which is an above-average number).

The effect was concentrated on home teams, which given the advantages they already enjoy are more often in a position to go for the bonus. It would appear to lend support for the view that the rule (or policy, in economist-speak) had achieved what it was meant to...

But not so fast. We also found a significant reduction in teams scoring five or more tries.

That’s right, a reduction.

We believe it was driven by teams reducing their attacking effort once the bonus had been secured, as a large share of teams that score a fourth try already have a comfortable lead, and it is generally late in the game.

It means that, on balance, the evidence in favour of bonus points achieving their aims is mixed. At best they achieve something, at worst they are counterproductive.

The research itself was finally published last year (sorry, I don't see an ungated version), and is co-authored by Robert Butler (University College Cork) and Patrick Massey (Compecon). It was based on data from the European Rugby Cup (or Heineken Cup, if you prefer) over the period from the 1996/97 to 2013/14 seasons, with a bonus point for scoring four tries introduced in the 2003/04 season.

Butler et al. found that teams that scored at least three tries were more likely to score a fourth try than would be expected based on game and competition conditions. So, there was a positive incentive for try scoring for teams that had already scored three tries, and those teams were putting in more effort to score.

In contrast, teams that had already scored four tries were less likely to score a fifth try than would be expected. So, there was a negative incentive for try scoring for teams that had already secured a bonus point. Those teams were taking their foot off the gas, because there is less incentive to keep running up the score once they had scored four tries.

It would be interesting to see what the incentive effects are for Super Rugby, now that the bonus point is based on net try scoring - where teams need to score three more than the opposition. That rule change has only been around a few years, so it will take much more data collection before a quantitative analysis can be conducted. In the meantime, keep an eye on how the teams are playing when they are close to a three try lead. Super Rugby Aotearoa starts this coming weekend!

Sunday 21 February 2021

Child allowances and fertility incentives

In the first week of most economics classes (and indeed, throughout most of those classes), one of the first lessons is that 'people respond to incentives'. When economists say that, they mean that when the costs of doing something increase, we tend to do less of it. And if the costs of doing that thing decrease, we tend to do more of it. The reverse is true of benefits - when the benefits of doing something increase, we tend to do more of it, and when the benefits decrease, we tend to do less of it.

Incentives matter, even in decisions that most people wouldn't automatically think of as being 'economic' decisions. This week's example comes from the U.S. debate over the introduction of a child allowance. As the New York Times reports:

Governments worry about declining fertility for many reasons; for one, they count on the next generation to finance the safety net and provide the caregivers, inventors and public servants of the future.

The birthrate in the United States fell in part because of large decreases in births among two groups: teenage and Hispanic women. The Great Recession also contributed to the fertility decline — births have sunk below replacement level since then, and there are indications that the pandemic may decrease fertility further. American women are also waiting longer to have babies.

There are many reasons. Would-be parents face challenges like the rising cost of child care, record student debt, a lack of family-friendly policies, workplace discrimination against mothers and concerns about climate change and political unrest. At the same time, women have more options for their lives than ever and more control over their reproduction. As countries become wealthier, and as women have more opportunities, fertility rates decline, data shows.

The monetary costs of raising children have increased (e.g. due to the rising cost of childcare), and the opportunity costs have increased as well. In terms of opportunity costs, women are earning more, so the costs of time taken out of the workforce for childbirth are higher (even putting aside the extra time spent on caregiving and parental responsibilities after birth).

A child allowance paid to every family lowers the costs of raising children [*]. When the cost of something decreases, we tend to do more of it. So, we could expect (at the margin) more children, i.e. higher fertility. That doesn't mean that every family would have more children, only that the child allowance would provide enough incentive for some families to have another child (or a first child), so that the average number of children per family increases by a little.

How many extra children? It's difficult to say exactly, but the New York Times articles notes:

Research from other countries shows that direct payments lead to a slight increase in birthrates — at least at first. In Spain, for instance, a child allowance led to a 3 percent increase in birthrates; when it was canceled, birthrates dropped 6 percent. The benefit seems to encourage women to have children earlier, but not necessarily to have more of them — so even if it increases fertility in a given year, it doesn’t have large effects over a generation.

Incentives matter, but sometimes they only matter a little bit.

*****

[*] Alternatively, you could think of a child allowance as increasing the benefits of raising children. The incentive effect is the same either way.

Book review: Thinking Strategically

One of my favourite parts of teaching ECONS101 each year is teaching game theory. Probably the biggest disappointment is that we can only devote one week to it (in fact, when we made the switch from ECON100 to ECONS101, I toyed with the idea of trying to get two weeks of game theory into the paper, but it just wouldn't work). There are just so many cool applications of game theory that we simply do not have time to fit into the paper. That problem was reinforced as I was reading Thinking Strategically, by Avinash Dixit and Barry Nalebuff.

The book was written in 1991, and while the specific applications are dated, the underlying game theoretic concepts are timeless. Dixit and Nalebuff do an excellent job of explaining some relatively complex ideas in a way that is relatively easy to understand. And even better, they do it without going deep into the mathematics of game theory. And that's important - my feeling is that the best ideas in game theory can be understood intuitively and without disguising the intuition behind an impenetrable wall of equations.

The applications that Dixit and Nalebuff employ range widely across business, politics, and society, from serving strategy in tennis to incentivising soldiers in the military to organising a joint venture between firms. Not all of the applications have solutions or implications that are obvious at first glance. I particularly appreciated this application:

If you are going to fail, you might as well fail at a difficult task. Failure causes others to downgrade their expectations of you in the future. Failure to climb Mt. Everest is considerably less damning than failure to finish a 10K race. The point is that when other people's perception of your ability matters, it might be better for you to do things that increase your chance of failing in order to reduce its consequence. People who apply to Harvard instead of the local college, and ask the most popular student for a prom date instead of a more realistic prospect, are following such strategies...

Psychologists see this behavior in other contexts. Some individuals are afraid to recognize the limits of their own ability. In these cases they take actions that increase the chance of failure in order to avoid facing their ability. For example, a marginal student may not study for a test so that if he fails, the failure can be blamed on his lack of studying rather than intrinsic ability.

That is quite insightful, and not immediately obvious. I also liked an explanation for how retailers could use low price guarantees to raise prices.

I made a bunch of notes that I will use in teaching ECONS101 this trimester (which starts in a little over a week). I could easily have made a bunch more notes, if only there was more space in the paper for game theory! Dixit and Nalebuff have other follow-up books, which I am now looking forward to reading (including The Art of Strategy, which I have on my bookshelf already).

I'm not the only one who really enjoyed Thinking Strategically. Earlier this month, Tim Harford listed it in his list of the best books ever published in the history of the universe. I wouldn't go quite that far, but I am happy to recommend it to readers interested in game theory or strategy.

Friday 19 February 2021

Apparently, binge-watching Grey's Anatomy can count as research

I just saw this article, entitled "Accuracy of Urologic Conditions Portrayed on Grey’s Anatomy", published in the journal Health Education & Behavior. Let me translate the abstract:

We spent the coronavirus pandemic lockdown period binge-watching Grey's Anatomy. Our annual performance review is coming up, and we realised that our Dean isn't going to be too happy with our lack of research output, so we had to do something, and quickly.

I guess they made the most of a bad situation. I'm sure there are some people who wouldn't mind getting paid to watch 15 seasons of Grey's Anatomy, even if they had to watch it with a pen carefully poised to quickly take notes if a urological condition is mentioned.

File this with the research on getting crayfish drunk.


Wednesday 17 February 2021

Wage subsidies vs. income-contingent loans to reduce the economic impact of coronavirus

The economic disruptions caused by the coronavirus pandemic have put pressure on government social support systems. Many countries have responded with wage subsidies for workers who have faced job loss or a reduction in work hours. In a new article in The Conversation yesterday, Richard Meade (Auckland University of Technology) presents an alternative:

For many countries, targeted wage subsidies of some form have been the principal tool for maintaining employment and economic confidence, often complemented by small business loans. While these have clearly been useful, they also have clear limitations — not least their cost.

This raises questions about the ongoing viability of wage subsidies and small business loans as the economic response measures of choice.

My recently published policy paper proposes an alternative approach, modelled on student loans schemes such as those operating in New Zealand, Australia and the UK.

Rather than attempting to support firms and households to pay wages, rents and other expenses, this alternative enables firms and households whose incomes have fallen due to the pandemic to take out government-supported “COVID loans” to restore their pre-pandemic income levels — a form of “revenue insurance”.

I argue this alternative approach will be not just more affordable and sustainable, but will also be more effective and more equitable.

Meade summarises his proposal in The Conversation, with additional detail in this forthcoming article in the journal New Zealand Economic Papers. He argues that:

In terms of affordability, loans to make up drops in income would be repaid via tax surcharges on those taking out loans, as and when their future incomes allow. This means would-be borrowers need not be deterred by fixed repayment deadlines in times of ongoing economic uncertainty.

Furthermore, since any firms and households borrowing against their own future incomes will ultimately be repaying their debt, COVID loans represent an asset on government balance sheets.

This offsets the extra liabilities governments take on by borrowing to finance these loans — something wage subsidies do not do. This increases the affordability of a loans-based approach from a government perspective (even allowing for defaults and subsidies implicit in student loan schemes).

Using illustrative data for New Zealand, my paper shows COVID loans are 14% cheaper than wage subsidies (and small business loans) in terms of their impact on net government debt.

The problem with Meade's analysis is that it adopts a purely fiscal approach, and ignores the distributional consequences. With a wage subsidy, affected households receive an immediate income boost, and the costs are shared across all taxpayers, current and future. Current taxpayers face a burden if taxes increase, or if government services are reduced (in quantity or quality) to offset the cost of the subsidy. If instead the wage subsidy is funded by government borrowing, future taxpayers will face the burden of paying the debt back (through higher taxes or reduced government services in the future). If instead of a wage subsidy, the government provides income-contingent loans, affected households receive an immediate income boost (just like the wage subsidy), but it is those households, and not all taxpayers, that face the burden of paying for the loans (in the future).

We need to consider who is likely to receive the government assistance (whether wage subsidy or loan). According to this article by Michael Fletcher, Kate Prickett and Simon Chapple (all Victoria University of Wellington), also forthcoming in New Zealand Economic Papers, lockdown-related job losses were concentrated among the lowest income households:

Respondents with low annual household incomes (under $30,000pa) were substantially more likely to have lost jobs (24%) or not working due to the lockdown (36%), and less likely to be working from home (13%). Respondents with high household incomes (over $100,000) were less like to have lost jobs (3%) or to be unable to work (18%) and more likely to be working from home (45%).

Although, Fletcher et al. find that in terms of income losses:

Higher-income households were more likely to report at least one adult had experienced job or income loss, compared to households with lower incomes (52% among those earning over $100,000pa compared to 33% among homes living on less than $30,000pa). This result is driven by higher rates of income loss, not job loss, among the high income households and job loss more than income loss among lower-income households.

So, then it probably comes down to who would accept an income-contingent loan from the government. Higher-income households face fewer credit constraints, likely have higher net wealth, and may be more able to self-insure. In other words, while they would gladly accept a wage subsidy from the government (who doesn't love free money?), they would be less likely to accept an income-contingent loan. Lower-income households face credit constraints, and a loss of income is a more serious proposition for many lower-income households, so an income-contingent loan wouldn't so easily be passed up.

I think it's pretty clear, even in the absence of a thorough distributional analysis, that the wage subsidy scheme is progressive (to the extent that low-income households benefited to a greater extent in relative terms than high-income households). In contrast, Meade's income-contingent loan scheme is likely to be more regressive.

So, while the loan-based scheme may save the government 14 percent in terms of the total cost, that has to be weighed up against the potential for higher future poverty and inequality as the loan repayments impact low-income households relatively more than high-income households. Some governments may think that is an acceptable trade-off. I suspect that the current New Zealand government is not among them.

Tuesday 16 February 2021

Combating cheating in online tests

From my perspective, the most challenging aspect of teaching during the pandemic lockdowns last year wasn't the teaching itself, it was dealing with students cheating in the online assessment. To give you some idea, I sent more students to the Student Discipline Committee in B Trimester 2020 than I had in the previous 10 years of teaching combined. All but one of those students ended up failing their paper. And I was not alone. The Student Discipline Committee faced a huge increase in workload, especially related to students using contract cheating websites to answer assessment questions for them.

Anyway, as you may expect, my experiences (and those of my colleagues) are not isolated examples. In a new paper in the Journal of Economic Behavior and Organization (ungated earlier version here), Eren Bilen (University of South Carolina) and Alexander Matros (Lancaster University) looked at cheating in online assessments. They use two examples to illustrate the pervasiveness of cheating: (1) students in an intermediate level class in Spring Semester 2020 (when lockdowns were introduced partway through the semester); and (2) online chess tournaments. They motivate their analysis with a simple game theoretic model, as shown below (the first payoff is to the student, and the second payoff is to the professor).


They note that in the sequential game:

It is easy to find a unique subgame perfect equilibrium outcome, where the student is honest and the professor does not report the student. Note that this is the best outcome for the professor and the second best outcome for the student.

To see why that is the subgame perfect Nash equilibrium, we can use backward induction. Essentially, we work out what the second player (the professor) will do first, and then use that to work out what the first player (the student) will do. In this case, if the student cheats, then we are moving down the left branch of the tree. The best option for the professor in that case is to report the student (since a payoff of 3 is better than a payoff of 2). So, the student knows that if they cheat, the professor will report them. Now, if the student doesn't cheat, then we are moving down the right branch of the tree. The best option for the professor in that case is not to report the student (since a payoff of 4 is better than a payoff of 1). So, the student knows that if they don't cheat, the professor will not report them. So, the choice for the student is to cheat and get reported (and receive a payoff of 1) or not cheat and not get reported (and receive a payoff of 3). Of course, the student will choose not to cheat. The subgame perfect Nash equilibrium here is that the student doesn't cheat, and the professor doesn't report them.
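The backward induction argument above is mechanical enough to code up directly. A minimal sketch, using the payoffs as described in the text (first payoff to the student, second to the professor):

```python
# Payoffs for each (student action, professor action) pair, taken from the text
payoffs = {
    ('cheat', 'report'): (1, 3),
    ('cheat', 'no report'): (4, 2),
    ('no cheat', 'report'): (2, 1),
    ('no cheat', 'no report'): (3, 4),
}

def professor_best(student_move):
    """The professor moves last, so simply maximises their own payoff."""
    return max(['report', 'no report'],
               key=lambda a: payoffs[(student_move, a)][1])

# The student anticipates the professor's response to each possible move
student_move = max(['cheat', 'no cheat'],
                   key=lambda s: payoffs[(s, professor_best(s))][0])

print(student_move, professor_best(student_move))  # → no cheat no report
```

Working backwards from the last mover's choice is exactly what the code does: it resolves the professor's decision first, then lets the student optimise against it, recovering the subgame perfect Nash equilibrium.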

The problem with that analysis is that the professor doesn't know with certainty whether the student has cheated or not. So, Bilen and Matros move on to a second game, as shown below. Even though the players make their choices sequentially, because the student's choice about whether to cheat or not is not revealed to the professor, it is as if the professor is making their choice about whether to report or not at the same time as the student. That makes this a simultaneous game.



Bilen and Matros note that, in this game:

This game has a unique mixed-strategy equilibrium, which means that the student and the professor should randomize between their two actions in equilibrium. Thus cheating as well as reporting is a part of the equilibrium.

To see why, we need to try to find the Nash equilibriums in this game, and to do that we can use the 'best response method'. We track, for each player and each strategy, the best response of the other player. Where both players are selecting a best response, they are doing the best they can, given the choice of the other player (this is the textbook definition of Nash equilibrium). In this game, the best responses are:
  1. If the student chooses to cheat, the professor's best response is to report the student (since 3 is a better payoff than 2);
  2. If the student chooses not to cheat, the professor's best response is not to report the student (since 4 is a better payoff than 1);
  3. If the professor chooses to report the student, the student's best response is to not cheat (since 2 is a better payoff than 1); and
  4. If the professor chooses not to report the student, the student's best response is to cheat (since 4 is a better payoff than 3).
A Nash equilibrium occurs where both players' best responses coincide (normally I would track this with ticks and crosses, but since I didn't create the payoff table I haven't done so in this case). Notice that there isn't actually any case where both players are playing a best response. If the student cheats, the professor's best response is to report them. But if the professor is going to report the student, the student's best response is to not cheat. But if the student doesn't cheat, the professor's best response is not to report them. But if the professor doesn't report the student, the student's best response is to cheat. We are simply going around in a circle.

In cases such as this, we say that there is no Nash equilibrium in pure strategies. However, there will be a mixed strategy equilibrium, where the players randomise their choices of strategy. The student should cheat with some probability, and the professor should report the student with some probability. The optimal probabilities depend on the actual payoffs to the players (and we can work them out in this case from the players' indifference conditions).
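As a sketch of the working, the mixed-strategy probabilities can be computed from the players' indifference conditions. The payoffs below are those implied by the best responses listed above; check them against the paper's actual payoff table before relying on the numbers:

```python
from fractions import Fraction

# Payoffs (student, professor), as read off the best responses above.
payoffs = {
    ("cheat", "report"):      (Fraction(1), Fraction(3)),
    ("cheat", "not_report"):  (Fraction(4), Fraction(2)),
    ("honest", "report"):     (Fraction(2), Fraction(1)),
    ("honest", "not_report"): (Fraction(3), Fraction(4)),
}

def mixed_equilibrium(payoffs):
    """Solve the two indifference conditions of a 2x2 game.

    q = probability the professor reports, chosen so the student is
    indifferent between cheating and being honest;
    p = probability the student cheats, chosen so the professor is
    indifferent between reporting and not reporting.
    """
    s_cr, p_cr = payoffs[("cheat", "report")]
    s_cn, p_cn = payoffs[("cheat", "not_report")]
    s_hr, p_hr = payoffs[("honest", "report")]
    s_hn, p_hn = payoffs[("honest", "not_report")]
    # Student indifferent: q*s_cr + (1-q)*s_cn = q*s_hr + (1-q)*s_hn
    q = (s_hn - s_cn) / ((s_cr - s_cn) - (s_hr - s_hn))
    # Professor indifferent: p*p_cr + (1-p)*p_hr = p*p_cn + (1-p)*p_hn
    p = (p_hn - p_hr) / ((p_cr - p_hr) - (p_cn - p_hn))
    return p, q

p, q = mixed_equilibrium(payoffs)
print(f"student cheats with probability {p}")     # 3/4
print(f"professor reports with probability {q}")  # 1/2
```

With these payoffs, cheating and reporting both happen with positive probability in equilibrium, which is exactly the point Bilen and Matros make.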

Anyway, Bilen and Matros conclude that in-person tests and examinations most closely resemble the sequential game, where the equilibrium is for students not to cheat, but that online tests most closely resemble the simultaneous game, where cheating will be much more common. And, their analysis supports that theory:
Using a simple way to detect cheating - timestamps from the students’ Access Logs - we identify cases where students were able to type in their answers under thirty seconds per question. We found that the solution keys for the exam were distributed online, and these students typed in the correct as well as incorrect answers using the solution keys they had at hand.

Then they present their proposed solution to the problem of online cheating:

In order to address this issue based on our theoretical models, we suggest that instructors present their students with two options: (1) If a student voluntarily agrees to use a camera to record themselves while taking an exam, this record can be used as evidence of innocence if the student is accused of cheating; (2) If the student refuses to use a camera due to privacy concerns, the instructor should be allowed to make the final decision on whether or not the student is guilty of cheating, with evidence of cheating remaining private to the instructor.

I'm not sure that I agree. The optimal solution would be one that returns to conditions where cheating is easy to detect, as in the sequential game above. A voluntary webcam doesn't do this, since students who want to cheat, as well as students who have privacy concerns, would opt out. The game would revert to the simultaneous game for those students, and some of those students would cheat.

My solution, which will be implemented if we go into lockdown and can't have a final examination in the trimester due to start in a couple of weeks, is to move to individual oral examinations. It is a lot harder to hide when you are put on the spot in a Zoom call and asked to answer a question chosen at random by the lecturer. Certainly, you can't use an online cheating website to provide the answer for you in such a situation! I haven't fully worked out the mechanics of how it would work (and hopefully I never have to implement it!), but it would seem to me to return the assessment to a sequential game.

On the other hand, the theory of asymmetric information and signalling does suggest that the webcam solution may work. The student knows whether they intend to cheat or not. The professor doesn't know. Whether the student is planning to cheat is private information. The student can reveal this information by the choices they make. Say that the professor offers a voluntary webcam policy, where students who agree set up a webcam so that they can be observed while they complete the test. Students who agree to the webcam are clearly those who weren't intending to cheat, because otherwise they would surely be caught. In contrast, if a student was intending to cheat, they wouldn't agree to the policy. And so, the professor is able to identify the likely cheaters, simply by offering the policy. The students signal whether they are cheaters by agreeing to have the webcam, or not.

That seems way too simple, and you could argue that it is somewhat coercive, forcing honest students to compromise their privacy to signal their honesty. It's going to be imperfect too, because it would be difficult to separate students who choose not to use a webcam from those who can't afford one, or whose internet connection is too slow to support webcam use, and so on.

And invigilating the webcam solution is going to be extraordinarily expensive. You can't get AI to do the supervision (at least, not yet). And since Zoom can only support 25 faces on screen at a time, you would need at least one invigilator per 25 students. Probably you need more than that, because the invigilator needs to be able to see clearly what the student is doing (so unless they are using a 50" screen, you probably want a whole lot fewer than 25 images on-screen at a time). I think I'll stick to the oral examination solution.

Cheating in online assessments is clearly a serious problem. This is the main reason that I am highly sceptical about the move to online education. Until we can solve the serious academic integrity issues in a cost-effective way (and we currently don't have one), the quality of assessment (in terms of accurately ranking students or assessing their knowledge and skills) is so low that it makes a mockery of the whole idea of teaching online.

Sunday 14 February 2021

The effect of El Salvadoran gangs on development

In the 1980s, El Salvadoran youth in Southern California formed two street gangs that are now well known internationally: MS-13 and 18th Street. By the mid-1990s, these two gangs had moved on from petty crimes and become more serious criminal organisations, and the U.S. authorities cracked down on them. In 1997, the U.S. began deporting migrants with criminal records (including members of MS-13 and 18th Street) back to their countries of origin. El Salvador saw an influx of criminal gang members, who quickly gained control of neighbourhoods across the country, but especially in the capital, San Salvador. The government, and the police, lacked the capacity to prevent these gang territories from being established, in part because the country was still recovering from the civil war that ended in 1992.

What was the impact of these gangs on the neighbourhoods that they controlled, and the people living there? That is the research question that this recent working paper by Nikita Melnikov (Princeton University), Carlos Schmidt-Padilla (University of California, Berkeley), and María Micaela Sviatschi (Princeton University) sets out to answer. They use a variety of data sources, including the 1992 and 2007 Censuses and their own field survey, and implement a regression discontinuity design. Essentially, they test whether there are big jumps (up or down) in key variables related to socio-economic development, occurring at the boundary of gang-controlled neighbourhoods.
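As an aside, the basic logic of a regression discontinuity design can be sketched with simulated data. The sketch below is purely illustrative; the sample size, bandwidth, and effect size are made up, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data (not the authors' data): x is signed distance to the
# gang-territory boundary in metres (negative = inside gang territory),
# y is a development outcome such as household income.
n = 2000
x = rng.uniform(-200, 200, n)
true_jump = 350.0  # discontinuity at the boundary
y = 1000 + 0.5 * x + true_jump * (x > 0) + rng.normal(0, 100, n)

def rd_estimate(x, y, bandwidth=100.0):
    """Sharp RD: fit separate linear trends within `bandwidth` metres on
    each side of the cutoff and take the difference in intercepts."""
    def intercept_at_zero(mask):
        slope, const = np.polyfit(x[mask], y[mask], 1)
        return const  # fitted value of y at x = 0
    left = (x < 0) & (x > -bandwidth)
    right = (x >= 0) & (x < bandwidth)
    return intercept_at_zero(right) - intercept_at_zero(left)

print(f"estimated jump at the boundary: {rd_estimate(x, y):.1f}")
```

The estimated jump should recover something close to the true discontinuity of 350. Melnikov et al.'s actual estimation is far more sophisticated, but the intuition is the same: any smooth trend in the outcome is controlled for, and what remains at the boundary is attributed to gang control.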

They find that:

...residents of gang-controlled neighborhoods in San Salvador have worse dwelling conditions, less income, and lower probability of owning durable goods compared to individuals living just 50 meters away but outside of gang territory. They are also less likely to work in large firms. The magnitudes are very large. For instance, we find that residents of gang areas have $350 lower income compared to individuals living in neighboring non-gang locations and have a 12 percentage points lower probability of working in a firm with at least 100 employees.

The gangs had clear negative effects on development (and interestingly, it doesn't matter which of the two gangs - MS-13 or 18th Street - controls the neighbourhood, since the effects are statistically indistinguishable between them). You might wonder if there was something different about the gang-controlled neighbourhoods before the gang leaders returned to El Salvador. Not so:

These differences in living standards did not exist before the arrival of the gangs. In particular, we replicate the regression discontinuity design with data from the 1992 census, showing that, at that time, neighborhoods on either side of the boundary of gang territory had similar socioeconomic and geographic characteristics. The difference-in-differences analysis confirms this result: after the arrival of the gang members from the United States, areas with gang activity experienced lower growth in nighttime light density compared to places without gang presence, while before the deportations, both types of locations experienced similar rates of growth.

The effect on development is quite large. When Melnikov et al. look at the effect on night-time light intensity (as a measure of development), they conclude that:

The magnitude of the effect is quite large. By 2010, thirteen years after the deportations, areas with high gang presence had experienced nearly 120 percentage points lower growth in nighttime light density than places with low gang presence... in 1998-2010, areas with low gang activity had nearly 120×0.28 = 33.6 percentage points higher growth in GDP than areas with gang presence.

Melnikov et al. undertake a battery of robustness checks on their results (so much so that the working paper is over 90 pages long!). The results are quite robust to changes in the data, and the regression discontinuity results are backed up by the night-time light analysis, which uses a different method (difference-in-differences).

So, what is it that the gangs are doing that impedes development? Melnikov et al. investigate that as well, and note that:

A key mechanism through which gangs affect socioeconomic development in the neighborhoods they control is related to restrictions on individuals’ mobility. In order to maintain control over their territory and prevent the police and members of rival gangs from entering it, both MS-13 and 18th Street have instituted a system of checkpoints, not allowing individuals to freely enter or leave their neighborhoods... Our analysis suggests that, as a result of these restrictions, residents of gang-controlled areas often cannot work outside of gang territory, being forced to accept low-paying jobs in small firms in the neighborhoods where they live.

Restricting freedom of movement, and therefore the freedom of people living in the gang territory to take up higher paying job opportunities outside of the territory, seems to be driving the results. We already know that freer movement of people could substantially increase development globally (see this post, for example). This is an example operating at the micro-level.

[HT: Marginal Revolution, last year]

Wednesday 10 February 2021

Game theory and the search for extraterrestrial intelligence

In game theory, a coordination game is a game where there is more than one Nash equilibrium. If the game is non-repeated (played just once), then it is difficult for the players to coordinate their actions to ensure that the equilibrium outcome is obtained. One example of a coordination game is the game of chicken (see here or here). This difficulty in coordination can even be an issue if the game is repeated (although the players may eventually be able to learn, develop trust in each other, and coordinate).
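To make the 'more than one Nash equilibrium' point concrete, here is a short sketch that enumerates the pure-strategy equilibria of a chicken game. The payoff numbers are illustrative, not taken from the linked posts:

```python
from itertools import product

# Illustrative payoffs for chicken: each entry is
# (player 1 payoff, player 2 payoff). Both driving straight is a crash.
payoffs = {
    ("swerve", "swerve"):     (0, 0),
    ("swerve", "straight"):   (-1, 1),
    ("straight", "swerve"):   (1, -1),
    ("straight", "straight"): (-10, -10),
}
actions = ["swerve", "straight"]

def pure_nash_equilibria(payoffs, actions):
    """A profile is a Nash equilibrium if no player gains by deviating alone."""
    equilibria = []
    for a1, a2 in product(actions, repeat=2):
        u1, u2 = payoffs[(a1, a2)]
        best1 = all(u1 >= payoffs[(d, a2)][0] for d in actions)
        best2 = all(u2 >= payoffs[(a1, d)][1] for d in actions)
        if best1 and best2:
            equilibria.append((a1, a2))
    return equilibria

print(pure_nash_equilibria(payoffs, actions))
# [('swerve', 'straight'), ('straight', 'swerve')]
```

There are two equilibria, one where each player swerves, and without communication the players have no way of knowing which one they will end up in. That is the coordination problem in a nutshell.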

A famous example of a coordination problem (or game) was outlined by Thomas Schelling. If you needed to meet with a stranger in New York City, and you had no way of communicating with them, when and where would you try to meet them? This seems an impossible question to answer, as there are literally millions of possible location and time combinations you could choose. However, if you and the stranger are both thinking about the same problem, then some locations and times are more likely than others. You'd want to choose somewhere central, which is easy to find, and where many people would think of going. Similarly, you'd want to choose a time that is a pretty common time for meeting. Schelling argued that you should choose one of these 'focal points' (which we now refer to as Schelling Points), as being the most likely equilibrium. Schelling's solution to the New York City problem was that you should go to the information booth at Grand Central Station at 12 noon.

That brings me to this article from Phys.org last month:

New research from the University of Manchester suggests using a strategy linked to cooperative game playing known as 'game theory' in order to maximize the potential of finding intelligent alien life.

If advanced alien civilisations exist in our galaxy and are trying to communicate with us, what's the best way to find them? This is the grand challenge for astronomers engaged in the Search for Extraterrestrial Intelligence (SETI). A new paper published in The Astronomical Journal by Jodrell Bank astrophysicist, Dr. Eamonn Kerins, proposes a new strategy based on game theory that could tip the odds of finding them more in our favor.

Notice that this is similar to Schelling's New York City coordination problem. We want to look for extraterrestrial intelligence, but where should we look? To quote the movie Armageddon, "beg'n your pardon sir, but it's a big-ass sky". And if some extraterrestrial intelligence is looking to broadcast their location, where should they direct their broadcasts? And to take things a step further, with two separate civilizations on different planets, which one should be the sender of the signal, and which should be the receiver? If everyone is listening but no one is sending a signal, then that is a coordination failure too. Kerins' solution is excellent:

Dr. Kerins dubs his idea "Mutual Detectability." It states that the best places to look for signals are planets from which we would be capable of determining that Earth itself may be inhabited.

"If we have evidence of a potentially inhabited planet, and civilisations there have similar evidence about our planet, both sides should be strongly incentivised to engage in SETI towards each other because both will be aware that the evidence is mutual."...

The new theory suggests examining transiting planets, planets that are on orbits that pass directly across the face of their host star, briefly making it appear dimmer. This dimming effect has been previously used to discover planets. In fact, transiting planets make up most of the planets we currently know about. For some, astronomers can determine if they are rocky planets like Earth, or if they have atmospheres that show evidence of water vapor.

"What if these planets are located in line with the plane of the Earth's orbit? They'll be able to see Earth transit the Sun and they'll be able to access the same kind of information about us. Our planets will be mutually detectable." said Dr. Kerins.

And in relation to being a sender or a receiver of a signal, there is a game theoretical solution there as well:

"It turns out that civilisations on a planet located in the Earth Transit Zone can know whether the basic evidence of their transiting planet is clearer to us or if our signal is clearer to them. We'll know this too. It makes sense that the civilisation that has the clearest view of the other's planet will be most tempted to send a signal. The other party will know this and so should observe and listen for a signal."

In the research paper Dr. Kerins shows that the vast majority of habitable planets in the Earth Transit Zone are expected to be in orbits around low-mass stars that are dimmer than the Sun. He shows that these civilisations would have a clearer view of us. Using the Mutual Detectability theory suggests that targeted SETI programs should therefore concentrate on looking for signals from potentially habitable planets around dim stars.

It appears that there is a Schelling Point in the search for extraterrestrial intelligence. Now, we have to hope that, if such intelligence exists, they have also developed an understanding of game theory. And if SETI suddenly becomes successful, it may have game theory to thank.

[HT: Marginal Revolution]

Saturday 6 February 2021

Stocks vs. flows... Amazon edition

Regular readers of this blog will know that one of my pet hates is journalists invalidly comparing stocks and flows (for example, see this post), like wealth (a stock) and income (a flow), or total assets (a stock) and total revenue (a flow). However, journalists can be forgiven, because they often don't know any better. Researchers, on the other hand, should know better.

This week's example of not understanding the difference between a stock and a flow comes from this article in The Conversation by Louise Grimmer (University of Tasmania), Gary Mortimer (Queensland University of Technology), and Martin Grimmer (University of Tasmania):

Amazon is now one of the most valuable companies in the world, valued at more than US$1.7 trillion. That’s more than the GDP of all but 10 of the world’s countries. It’s also the largest employer among tech companies by a large margin.

Amazon is one of the most valuable companies in the world, but despite what Grimmer et al. imply, Amazon is not worth more than all but 10 of the world's countries. The size of an economy (as measured by GDP) is a flow of resources for a single year. Amazon's market capitalisation is a stock (a measure of its total value), not a flow for a single year.

A valid comparison would be between GDP and Amazon's profits (or, even better, its 'value added'). Amazon's value added is still a large number, and Amazon would still rank highly relative to some countries. But comparing GDP with market capitalisation is lazy, and simply reduces the credibility of the researchers making the comparison.

Wednesday 3 February 2021

Reviewing the gender wage gap

I've written a couple of times on the gender wage gap (see here and here). While it may be attractive to explain the gap as resulting from discrimination, the factors underlying the gap are actually many and varied. I just read this excellent (and long) review (open access) of the literature by Francine Blau and Lawrence Kahn (both Cornell University), that was published in the Journal of Economic Literature in 2017.

The paper itself contains a lot of detail, and much too much for me to excerpt easily here. However, they first demonstrate some key facts about the gender wage gap in the U.S.:

We have shown that the gender pay gap in the United States fell dramatically from 1980 to 1989, with slower convergence continuing through 2010. Using PSID microdata, we documented the improvements over the 1980– 2010 period in women’s education, experience, and occupational representation, as well as the elimination of the female shortfall in union coverage, and showed that they played an important role in the reduction in the gender pay gap. Particularly notable is that, by 2010, conventional human capital variables (education and labor-market experience) taken together explained little of the gender wage gap in the aggregate. This is due to the reversal of the gender difference in education, as well as the substantial reduction in the gender experience gap. On the other hand, gender differences in location in the labor market—distribution by occupation and industry—continued to be important in explaining the gap in 2010.

There is reason for both optimism and pessimism in those findings - optimism because the size of the gender wage gap has been decreasing, but pessimism because it hasn't fully closed and the remaining gap cannot be fully explained by worker or industry/occupation characteristics. Also on the pessimistic side:

We also found that both the raw and the unexplained gender pay gap declined much more slowly at the top of the wage distribution than at the middle or the bottom. By 2010, the raw and unexplained female shortfalls in wages, which had been fairly similar across the wage distribution in 1980, were larger for the highly skilled than for others, suggesting that developments in the labor market for executives and highly skilled workers especially favored men.

In terms of the factors that may explain the gender wage gap, Blau and Kahn conclude that:

One of our findings is that while convergence between men and women in traditional human-capital factors (education and experience) played an important role in the narrowing of the gender wage gap, these factors taken together explain relatively little of the wage gap in the aggregate now that, as noted above, women exceed men in educational attainment and have greatly reduced the gender experience gap...

...recent research suggests an especially important role for work force interruptions and shorter hours in explaining gender wage gaps in high-skilled occupations than for the workforce as a whole... the interpretation of these findings in a human capital framework has been challenged. Goldin (2014), for example, argues that they more likely represent the impact of compensating differentials, in this case wage penalties for temporal flexibility...

Although decreases in gender differences in occupational distributions contributed significantly to convergence in men’s and women’s wages, gender differences in occupations and industries are quantitatively the most important measurable factors explaining the gender wage gap (in an accounting sense). Thus, in contrast to human-capital factors, gender differences in location in the labor market, a factor long highlighted in research on the gender wage gap, remain exceedingly relevant...

Another factor emphasized in traditional analyses that remains important is differences in gender roles and the gender division of labor. Current research continues to find evidence of a motherhood penalty for women and a marriage premium for men. Moreover, the greater tendency of men to determine the geographic location of the family continues to be a factor even among highly educated couples...

And on discrimination specifically:

The persistence of an unexplained gender wage gap suggests, though it does not prove, that labor-market discrimination continues to contribute to the gender wage gap... We cited some recent research based on experimental evidence that strongly suggests that discrimination cannot be discounted as contributing to the persistent gender wage gap. Indeed, we noted some experimental evidence that discrimination against mothers may help to account for the motherhood wage penalty as well.

And on psychological factors:

While male advantages in some factors, like risk aversion and propensity to negotiate or compete, may help to explain not only some of the unexplained gender wage gap but also gender differences in occupations and fields of study, it is important to note that women may have advantages in some other areas, like interpersonal skills.

Finally, they briefly looked at institutional differences across countries, and noted that:

...the more compressed wage structures in many other OECD countries, due to the greater role of unions and other centralized wage-setting institutions in these countries, have served to lower the gender pay gap there relative to the United States by bringing up the bottom of the wage distribution. This appears to have also lowered female employment and raised female unemployment compared with men, as would be expected if higher wage floors are binding.

The article is a very thorough review of what we know about the gender wage gap (at least, up to 2017). The fact that it relies on U.S. data doesn't make the paper irrelevant to other western developed countries, because most of the same trends (particularly, a decreasing but still substantial gender wage gap) are apparent in those countries as well. If you are interested in the topic but not familiar with the extensive research literature, this article is an excellent (and broadly non-technical) place to start.

Read more: