Tuesday 31 March 2020

Coronavirus border controls are raising the price of illegal goods and services

InSight Crime reports on an unexpected effect of the coronavirus border controls:
Contacts in China provide Mexican criminal groups with everything from counterfeit luxury goods to chemical precursors for making fentanyl. But with the spread of the coronavirus, shipments from China have dried up and the cartels are feeling the pinch...
And this is not the only possible consequence for Mexico’s cartels. The Jalisco Cartel New Generation (Cartel Jalisco Nueva Generación — CJNG) is reportedly also struggling to source chemical precursors from China to make fentanyl, the synthetic opioid which has caused thousands of deaths in the United States and Mexico alike...
The global lockdown due to the coronavirus appears to be hitting legal and illegal economies equally hard, but it is likely the supply chain troubles of La Unión de Tepito and the CJNG are only the beginning.
Criminal groups across the region will feel the squeeze.
Countries across Latin America are shutting down borders and preventing air travel, which is likely to significantly disrupt criminal economies like drug trafficking, contraband smuggling and human trafficking.
With most aircraft grounded, illicit drug flights that have become a mainstay of drug trafficking in the region may become easier to track.
There are a lot of things going on here, but they all add up to a decrease in supply of illegal drugs. Difficulty in finding precursors to manufacture drugs reduces supply of drugs, as does a higher likelihood of shipments being intercepted. The diagram below illustrates the effect of these changes on the market for illegal drugs. The market was previously in equilibrium, where the price was P0 and the quantity of drugs traded was Q0. The decrease in supply (from S0 to S1) moves the market to a new equilibrium, where the price of drugs has increased to P1, and the quantity of drugs traded has decreased to Q1.
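The supply shift described above can be made concrete with a small numerical sketch. The linear demand and supply curves below (and all of the numbers) are invented purely for illustration, not taken from any real market data.

```python
# A minimal numerical sketch of the supply decrease described above.
# Demand: Qd = 100 - 2P.  Original supply S0: Qs = -20 + 3P.
# The decrease in supply (S0 to S1) is modelled as a fall in the
# supply intercept. All numbers are made up for illustration.

def equilibrium(a, b, c, d):
    """Solve demand Qd = a - b*P against supply Qs = c + d*P."""
    price = (a - c) / (b + d)
    return price, a - b * price

p0, q0 = equilibrium(100, 2, -20, 3)  # original equilibrium (P0, Q0)
p1, q1 = equilibrium(100, 2, -45, 3)  # after the supply decrease (P1, Q1)

print(f"Before: P0 = {p0:.0f}, Q0 = {q0:.0f}")  # P0 = 24, Q0 = 52
print(f"After:  P1 = {p1:.0f}, Q1 = {q1:.0f}")  # P1 = 29, Q1 = 42
```

Just as in the market diagram, the decrease in supply raises the equilibrium price (from 24 to 29 in this made-up example) and reduces the quantity traded (from 52 to 42).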


The drug market isn't the only one affected of course (although the effects are similar in terms of the market diagram above). The market for counterfeit goods is mentioned in the quote at the start of this post, but the article also notes:
Yet criminal groups are nothing if not able to find opportunities in a crisis. In Honduras, after the government locked down the borders due to the virus, human traffickers, known as “coyotes,” raised their prices to help people and contraband get in or out of the country illegally, El Diario de Hoy reported.
When goods (or people) become more difficult (more costly) to transport, the equilibrium price of those goods and services will rise. That is the case even when those goods and services are illegal.

[HT: Marginal Revolution]

Monday 30 March 2020

Blonds have more fun (or rather, they get paid more)

There's a lot of evidence that a beauty premium exists in the labour market. That is, more attractive people earn more (see here and here, as well as here for my review of Daniel Hamermesh's excellent book Beauty Pays). Taller people also earn more (see here), so beauty is not the only physical attribute that attracts a labour market premium. What about hair colour?

In a short 2012 article published in the Journal of Socio-Economics (sorry I don't see an ungated version), Nicolas Guéguen (Université de Bretagne-Sud) reported the results of an experiment he ran to find out. Guéguen used wigs to randomise the hair colour of waitresses at several restaurants in Brittany, and measured the effect on whether the customers gave a tip, and how much they tipped. He found that:
Men gave tips more often to a waitress with blond hair and, when they did, they gave her a larger amount of money. No hair color effect was found with female patrons. In this experiment, we found that the average rate of customers who left a tip to a blond waitress is 25.2% higher when compared with the three other combined hair color condition. The systematic use of this technique could increase their income from 1173.73€ to 1244.97€ a month (from 8.38€ to 8.89€ per hour).
Blond waitresses earned statistically significantly more than waitresses with black, brown, or red hair (there were no differences between the other three hair colours). You might worry that the waitresses acted differently when wearing the blond wig, but if that were the case, then there would have been differences in the tips earned from female customers (and there weren't). The experimental setting and the randomisation of wig colours across different days also rule out most of the concern that the result is mere correlation. These results are plausibly causal.
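For readers curious how a difference in tipping rates like this might be tested, here is a rough sketch of a two-proportion z-test. The counts below are entirely hypothetical, invented for illustration; they are not Guéguen's data.

```python
# A sketch of a two-proportion z-test for a difference in tipping rates.
# The counts are hypothetical, NOT the data from Guéguen's paper.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sample z-statistic for a difference in proportions (pooled)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 60 of 200 tables tipped the blond condition,
# versus 45 of 200 tables in the other hair-colour conditions
z = two_proportion_z(60, 200, 45, 200)
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```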

You might be willing to call this out as discrimination. However, as Daniel Hamermesh notes in his book, customers may be willing to pay more for the amenity of dealing with an attractive person (although, Hamermesh doesn't put it in quite those terms). Customer preferences matter, and that is one explanation for the higher wages for more attractive people. Employers pay attractive people more because they attract more customers. However, that argument doesn't really extend to the tipping behaviour of customers. So, it is an important finding that customer willingness-to-pay for physical attributes is also picked up by what they actually pay (as a tip), which is a more direct measure. It would be interesting to see if this extended to other physical attributes (attractiveness, height), although the challenges of designing an experiment for those attributes are much greater!

Read more:

Saturday 28 March 2020

The marginal benefit of peer review

I engage in a lot of peer review. Last year, I reviewed 24 papers submitted to 18 different journals. The year before was similar. And some of those articles went through multiple rounds of peer review. I typically avoid asking for anything new from authors in the second or subsequent rounds of peer review, but I have heard horror stories from colleagues of papers sitting in peer review for years, while reviewers ask for revision after revision, all the time with new analyses or robustness checks being appended to the paper. A legitimate question is: does all of that extra peer review time add value?

A recent IZA Discussion Paper by Aboozar Hadavand (Johns Hopkins University), Daniel Hamermesh (Columbia University), and Wesley Wilson (University of Oregon) provides us with an initial answer. They collated data from the journal Economic Inquiry over the period from 2009 to 2018. Interestingly, Economic Inquiry offers submitting authors two tracks:
Submitting authors could choose between a fast track, in which the article receives a simple yes or no; or a regular track, which might lead to acceptance with minor revisions, to a revise/re-submit response with subsequent additional refereeing, or to rejection...
Hadavand et al. then compare articles that were published through the fast track (which didn't suffer multiple rounds of peer review) with articles that were published through the regular track (which could have). Of course, authors self-select into the track they submit to, so the authors use a two-stage selection model to deal with that. When looking at the subsequent citation performance of articles (which is an indicator of article quality), they find that:
...no matter what vectors of covariates are included in this second stage, those regular-track articles that go through multiple rounds of refereeing have no greater scholarly impact than those that obtain only one set of referee reports... Compared to regular-track articles that go through only one round of refereeing, multiple rounds of refereeing never enhance an article’s subsequent scholarly impact in any of the econometric formulations that we have constructed.
In other words, the additional rounds of peer review seem to add no value (in terms of the article quality). To some extent, this makes sense. Economists recognise that marginal benefit is diminishing. So, each additional round of peer review should add less benefit than earlier rounds. However, I think many people would be surprised to know that the marginal benefit basically immediately falls to zero, and may even be zero for the first round of peer review.

This calls into question the cost-benefit calculation of peer review overall. Hadavand et al. provide a rough estimate that the time cost of peer review in economics is US$50 million per year, and US$1 billion in total across all disciplines. How much of that cost is essentially a waste? It would be good to get a sense of whether this study is an outlier, or whether it can be replicated for other journals, and perhaps more importantly, in other disciplines.

[HT: Marginal Revolution and Development Impact]

Friday 27 March 2020

The prisoners' dilemma, and why we all have to be kept in lockdown

The prisoners' dilemma is probably the most famous example of game theory in action. I've written about it many times, in many different contexts (for example here, here, here, and here). In the prisoners' dilemma, all players have a dominant strategy, which is a strategy that is always better for the player, no matter what any of the other players choose to do. However, if each player acts in their own self-interest, the outcome leads to payoffs that are worse for everyone than if they had cooperated. Robert Frank has described these games as leading to behaviour that is 'smart for one, dumb for all'.

And you can see this sort of behaviour all the time (once you know what to look for). Consider the current lockdown situation in New Zealand. If the lockdown was in any way voluntary (as it effectively was until Wednesday night), then each person has two options: (1) stay home; or (2) act normally. For most people, acting normally is a dominant strategy, at least in the early stages of the coronavirus spreading. They are better off acting normally if everyone else stays home (because they mostly get to go on with their lives as normal, and have low risk of catching the coronavirus; whereas staying home they would be giving up on things they like to do), and they are better off acting normally if everyone else is acting normally (because life goes on as normal, rather than giving up on things they like to do). So, individually people are better off acting normally. And so we saw things like this:
Hundreds of partygoers ignored the Government's advice and crowded into Wellington bars on Saturday night with no social distancing in sight.
Just hours after prime minister Jacinda Ardern confirmed the country was on a level two alert and urged people to work from home and keep a safe two metre distance from each other, young revellers took to bars around the country...
There were also reports of bars in other main cities including Auckland and Hamilton teeming with people.
And around the world:
Young German adults hold "corona parties" and cough towards older people.
A Spanish man leashes a goat to go for a walk to skirt confinement orders.
From France to Florida to Australia, kitesurfers, university students and others crowd the beaches...
"Some consider they're little heroes when they break the rules," French Interior Minister Christophe Castaner said. "Well, no. You're an imbecile, and especially a threat to yourself." 
These people aren't stupid. They are selfish, and acting in their own self-interest. Which is why we needed to go into full lockdown, and early, if we wanted to curtail the spread of coronavirus. Any voluntary or partial measures would simply be subject to the prisoners' dilemma. As I've noted before, in a repeated prisoners' dilemma (which this is, because we are essentially repeating it every day for the next four weeks), cooperation requires trust. And collectively, our behaviour up to this point hasn't earned that trust. We need to be locked down, so that the dominant strategy is no longer available to us.

Finally, an interesting research project for later would be to look at the difference in response, and the effectiveness of response, between countries that are more authoritarian and those that are more democratic, and between countries where the average 'respect for authority' (for want of a better term) is high or low. Even looking at the early data on the countries where the spread has been curtailed relatively quickly (China, South Korea) and those where it hasn't (Italy, Spain), I think we can see a pattern forming.

Wednesday 25 March 2020

Online classes may also make better students worse off

I've written a number of posts about various research papers looking at the impact of teaching online (see the links at the end of this post). One of the things I have taken away from that literature is that teaching online (or blended learning - a mix of online and face-to-face) appears to improve learning outcomes for highly motivated and engaged students, but makes things worse for less motivated or engaged students.

However, this 2016 article by Jennifer Heissel (Northwestern University), published in the journal Economics of Education Review (sorry, I don't see an ungated version online), makes that simple conclusion much less clear. Heissel investigates the effect of online teaching of Algebra I to relatively high performing eighth-graders in North Carolina. She makes use of an interesting natural experiment:
Columbus County Schools offer a potential natural experiment to exploit. Before 2011, CCS had no Algebra I option for their middle school students; instead, their advanced students received some supplemental learning as part of their regular eighth-grade math class. Beginning in 2011, CCS began offering NCVPS [North Carolina Virtual Public School] Algebra I to their advanced eighth graders. Classes of students met in their school computer lab and participated in the virtual Algebra I course. The virtual teacher was the primary instructor, but a local staff member supervised the class.
She compares the eighth graders who took the online Algebra I course with eighth graders who took Algebra I in a traditional (face-to-face) class, and with ninth graders. Since Algebra I is typically taken in ninth grade, eighth graders who take it tend to be near the top of the performance distribution for mathematics (and academically overall). Using data from the 2010/11 and 2011/12 academic years, for nearly 200,000 students (of whom only 719 took Algebra I via NCVPS), she finds that:
...NCVPS students have between 0.28 and 0.31 standard deviations lower test scores than students in traditional classrooms. Passing rate differences are not consistently statistically significant at conventional levels...
When looking at the results by student mathematical ability, she finds that:
...the top-performing quintile in CCS had scores 0.30 standard deviations below expected after the policy change. The second-highest quintile had scores 0.24 standard deviations below expected...
The results are robust to various differences in econometric specification. She concludes that:
 Virtual students appear to underperform relative to similar peers in traditional middle school Algebra I classrooms and relative to students who wait until ninth grade to take the course. Policymakers should carefully weigh these tradeoffs.
The trade-offs here are the (supposedly) lower cost of teaching online vs. the lower academic performance of these students. Besides highlighting that trade-off, these results are interesting to me in particular because we teach some accelerated students in my ECONS101 class, through the UniSTART programme. That programme allows senior high school students to take university-level papers in an online format. Those students tend to perform quite well in ECONS101, and in the past these students have frequently dominated the ranks of the top ten students (and beyond) overall. However, the students who are allowed into the UniSTART programme tend to be those who are relatively high achievers. It makes me wonder - could they have done even better if they had waited to study on-campus?

In any case, this paper adds to the growing evidence that suggests to me that online teaching is not an optimal format. And that is highly relevant given the current lockdown, which has driven all of our university teaching online.

Read more:

Friday 20 March 2020

The oil price war is actually just a return to equilibrium

The 'war against coronavirus' may be just getting under way, but it wasn't the only war in the news in the past couple of weeks. Saudi Arabia and Russia have fired the first shots in an oil price war. The New Zealand Herald reported:
Expect to see petrol prices fall sharply in coming days after fears of a global price war sparked an historic collapse in crude oil barrel prices this morning.
Oil prices plunged as much as 30 per cent as markets opened this morning - the biggest one day fall since the start of the first Iraq war in 1991.
Brent crude is currently down about 20 per cent.
But this follows falls of 10 per cent at the weekend on fears that Russia and Saudia [sic] Arabia will launch a full-scale price war.
Saudi Arabia slashed the price of its crude and upped production after Opec and Russia failed to agree on a supply response to coronavirus.
The industry had hoped that major players would agree on production cuts to mitigate the impact on global demand.
But talks in Vienna failed late on Saturday (NZT).
Global crude oil production is an example of an oligopoly - a market where there are many buyers, but few sellers. When there are few sellers, it is in the sellers' best interests to work together as a cartel. A cartel essentially acts like a monopoly seller - it is able to use market power to extract greater economic rent from the market (in the form of higher profits, arising from higher prices), than the countries would be able to extract if they were competing with each other.

This is another example of game theory in action. Let's say that there are two players - Saudi Arabia and Russia. Each player has two strategies - high production (which leads to lower prices and lower profits for oil producers), or low production (which leads to higher prices and higher profits). If one country has high production and the other low production, the high production country benefits more. However, if both countries have high production, both are worse off. These outcomes and payoffs are illustrated in the diagram below (the payoff numbers represent profits, but are just made up to illustrate this example).


To find the Nash equilibrium in this game, we use the 'best response method'. To do this, we track: for each player, for each strategy, what is the best response of the other player. Where both players are selecting a best response, they are doing the best they can, given the choice of the other player (this is the definition of Nash equilibrium). In this game, the best responses are:
  1. If Russia chooses high production, Saudi Arabia's best response is to choose high production (since 4 is a better payoff than 2) [we track the best responses with ticks, and not-best-responses with crosses; Note: I'm also tracking which payoffs I am comparing with numbers corresponding to the numbers in this list];
  2. If Russia chooses low production, Saudi Arabia's best response is to choose high production (since 10 is a better payoff than 8);
  3. If Saudi Arabia chooses high production, Russia's best response is to choose high production (since 3 is a better payoff than 1); and
  4. If Saudi Arabia chooses low production, Russia's best response is to choose high production (since 6 is a better payoff than 4).
Note that Russia's best response is always to choose high production. This is their dominant strategy. Likewise, Saudi Arabia's best response is always to choose high production, which makes it their dominant strategy as well. The single Nash equilibrium occurs where both players are playing a best response (where there are two ticks), which is where both countries choose high production.
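The best-response reasoning in the list above is mechanical enough to check in a few lines of code, using the made-up payoff numbers from the example (ordered as Saudi Arabia's payoff, then Russia's).

```python
# Payoffs (Saudi Arabia, Russia) for each (SA strategy, Russia strategy),
# using the made-up numbers from the example above.
payoffs = {
    ("high", "high"): (4, 3),
    ("high", "low"):  (10, 1),
    ("low",  "high"): (2, 6),
    ("low",  "low"):  (8, 4),
}
strategies = ["high", "low"]

# Each player's best response to each strategy of the other player
sa_best = {r: max(strategies, key=lambda s: payoffs[(s, r)][0]) for r in strategies}
ru_best = {s: max(strategies, key=lambda r: payoffs[(s, r)][1]) for s in strategies}

# A dominant strategy is a best response to everything the other player does
print("SA best responses:", sa_best)      # 'high' either way: dominant strategy
print("Russia best responses:", ru_best)  # likewise 'high'

# Nash equilibrium: each strategy is a best response to the other
nash = [(s, r) for s in strategies for r in strategies
        if sa_best[r] == s and ru_best[s] == r]
print("Nash equilibrium:", nash)  # [('high', 'high')]
```

The check confirms the working above: high production is a dominant strategy for both players, and the single Nash equilibrium is where both countries choose high production.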

Notice that both countries would be unambiguously better off if they chose low production. However, both will choose high production, which makes them both worse off. This is a prisoners' dilemma game (it's a dilemma because, when both players act in their own best interests, both are made worse off).

That's not the end of this story though, because the simple example above assumes that this is a non-repeated game. A non-repeated game is played once only, after which the two players go their separate ways, never to interact again. Most games in the real world are not like that - they are repeated games. In a repeated game, the outcome may differ from the equilibrium of the non-repeated game, because the players can learn to work together to obtain the best outcome.

And that is what happens when a cartel forms. If Saudi Arabia and Russia work together and both agree to choose low production, both countries benefit. That is what they were doing, up until a couple of weeks ago. The problem here is that both countries choosing low production is not an equilibrium. Each country individually realises that it can benefit by choosing high production (knowing that the other country is choosing low production), thereby cheating on the agreement. If both countries cheat, then the agreement breaks down and we end up at the Nash equilibrium (which appears to be what has happened). So essentially, the current oil 'price war' is simply a return to the equilibrium in this game.

However, it probably won't last. After a little while at the Nash equilibrium, it is likely that both countries will realise their folly, and work towards a new cartel agreement based on low production. And the cycle will begin all over again. Essentially, this has been the story of OPEC since soon after its formation - agreement by all (or most) parties, followed by selective cheating, followed by a breakdown in the agreement, followed by a new agreement.

So, make the most of low petrol prices. The coronavirus pandemic may keep them low for a while yet (due to low demand), but eventually Saudi Arabia and Russia will get their act together, and we'll be back to a world of higher oil prices (and consequently higher petrol prices).

[Updated 15/04/2020: To correct the positions of ticks and crosses in the payoff table]

Thursday 19 March 2020

Supermarket toilet paper discounts are part of their long game to attract and retain customers

Yesterday, I wrote a post about the Great Toilet Paper Crisis, using game theory to explain why people are panic buying. However, consumers' behaviour isn't the only behaviour that seems a little strange at this time. Last week, when doing my weekly shopping, I walked past the toilet paper and there was a woman who was just putting her third package of toilet paper into her trolley. She saw me looking at her, and said "What? It's on special!". And you know what? She was right. Pak'n'Save was selling some toilet paper at a discount, at a time when it is in high demand due to panic buying.

On the surface, discounting an item when demand is high makes no sense. Basic demand theory from ECONS101 suggests that when demand is higher, prices should increase, not decrease. Over on The Visible Hand of Economics blog, Matt Nolan recounts a similar story to mine, and offers some suggestions to explain the unusual discounting behaviour of the supermarkets. I want to start by focusing on this potential explanation:
Supermarkets do not sell just one good. As a result, if a special on toilet paper – along with stacks and stacks of toilet paper from wall to wall in the store – will get people in the door, then that also ensures that people will buy OTHER goods and services from the supermarket.
In other words, toilet paper and other supermarket goods are complements, and the discounts and advertising of toilet paper is a way supermarkets can get you in the door to purchase these other goods.
This broad concept has a common name called the “Halo effect”. However, as that post notes this effect is quite unclear as it is the mix of two things, the complementarity of products due to their co-location, and brand spillovers.
In this instance it is just the former we are meaning. In fact there is a better term in this context, where the supermarket may be willing to sell toilet paper at a loss to get people in the door – a loss leader.
Because of the fear of COVID-19, people are trying to find something they can control to give themselves a sense of protection – in this case toilet paper purchases.
Seeing this, supermarkets recognise that people are especially responsive to toilet paper availability and prices and so use these sales to increase demand for their other – higher margin – products.
When I cover pricing strategy in ECONS101, one of the elements of that topic is considering circumstances where a firm is better off deviating from the short-run profit maximising price. One of those circumstances occurs when the firm sells multiple products, and can increase its total profit by selling one or more products at a loss - the so-called loss leader product.

An ideal loss leader product is one that will encourage a lot of extra customers to visit the store. That usually suggests a loss leader product is one that has relatively elastic demand (so that, when price is reduced, it attracts a lot more customers to the store). In a time of panic buying of toilet paper though, it isn't clear to me that demand is relatively elastic. In fact, it is likely that demand is relatively inelastic for goods like toilet paper, hand sanitiser, and other products that people are hoarding right now. People really want these products, and would be willing to pay a high premium to avoid missing out (so they are less sensitive to price - demand is relatively inelastic).
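The elasticity argument can be illustrated with a quick back-of-the-envelope calculation. The constant-elasticity demand curve and all of the numbers below are invented for illustration only.

```python
# An illustrative constant-elasticity demand curve, Q = scale * P**elasticity,
# with invented numbers, showing why an inelastic good makes a poor loss
# leader: a discount on it attracts few extra customers.

def quantity(price, elasticity, scale=1000.0):
    """Quantity demanded at a given price, Q = scale * P**elasticity."""
    return scale * price ** elasticity

for label, eps in [("relatively elastic (-2.0)", -2.0),
                   ("relatively inelastic (-0.4)", -0.4)]:
    q_full = quantity(10.0, eps)  # quantity at full price
    q_disc = quantity(8.0, eps)   # quantity after a 20% discount
    pct_extra = (q_disc / q_full - 1) * 100
    print(f"{label}: a 20% price cut raises quantity demanded by {pct_extra:.0f}%")
```

With the elastic good, the discount pulls in roughly 56% more buyers; with the inelastic good, only about 9%. Discounting a good with inelastic demand gives up margin without drawing many new customers through the door.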

You could argue that, since toilet paper is in short supply, if a particular supermarket has toilet paper and other supermarkets don't, then that would attract customers to the supermarket with toilet paper. But, if that were the case, they wouldn't need the discount to attract customers - they could just post a big sign that says: "WE HAVE TOILET PAPER", and sell at full price (or more!).

So, if it's not loss leading, what are supermarkets doing? Another pricing tactic that firms use is to price low in order to foster a long-term relationship with their customers. Developing a reputation as being a 'fair player' in the market is an important part of that. In this instance, a supermarket doesn't want to be seen as taking advantage of their customers by jacking up the price of toilet paper. And, if raising the price makes the supermarket the 'bad guy', then perhaps they believe that lowering the price makes them the 'good guy': "Not only can customers continue to buy toilet paper from us, we're making it more affordable for them to do so".

Supermarkets are playing a long game here (yes, more game theory, like yesterday's post). Customers tend to be reasonably loyal to their preferred supermarket. If a simple tactic like discounting toilet paper when it is in short supply can encourage some customers to switch, and after switching they become loyal to the new supermarket, then the long-run profit gains may well outweigh any foregone potential profits on the toilet paper.

Of course, if one supermarket engages in this tactic, then it makes sense for all of them to do so. Otherwise, the supermarkets that don't follow suit face the risk of losing long-term customers. So, supermarkets start out discounting toilet paper to attract new customers away from their competitors, but end up discounting in order to retain their existing customers and stop them being lured away.

This is actually an example of a prisoners' dilemma game (see this post for another example). All supermarkets would be better off if they continued to charge full price (or more) for toilet paper, but it is in every supermarket's individual best interest to discount toilet paper to try and steal customers away from the others (or to retain their existing customers, if other supermarkets are discounting). And so, we end up in a situation where the supermarkets are selling toilet paper at a discount, even as the shelves are being left bare.

This is not loss leading, at least not in the normal sense. But it is long-run profit maximising behaviour by the supermarkets.

Wednesday 18 March 2020

The Great Toilet Paper Crisis coordination game

This week in my ECONS101 class, we've been covering game theory. Which means I no longer have to hold back on blogging about this article in The Conversation from a couple of weeks ago, by Alfredo Paloyo (University of Wollongong):
Shoppers in Australia, Japan, Hong Kong and the United States have caught toilet paper fever on the back of the COVID-19 coronavirus. Shop shelves are being emptied as quickly as they can be stocked.
This panic buying is the result of the fear of missing out. It’s a phenomenon of consumer behaviour similar to what happens when there is a run on banks.
A bank run occurs when depositors of a bank withdraw cash because they believe it might collapse. What we’re seeing now is a toilet-paper run...
Both banking and the toilet-paper market can be thought of as a “coordination game”. There are two players – you and everyone else. There are two strategies – panic buy or act normally. Each strategy has an associated pay-off.
If everyone acts normally, we have an equilibrium: there will be toilet paper on the shop shelves, and people can relax and buy it as they need it.
But if others panic buy, the optimal strategy for you is to do the same, otherwise you’ll be left without toilet paper. Everyone is facing the same strategies and pay-offs, so others will panic buy if you do.
The result is another equilibrium – this one being where everyone panic buys.
Let's work through Paloyo's example systematically. In this game, there are two players: you, and everyone else. However, instead I'm going to use two named players (Sam and Chris) - the result would be the same if we used Paloyo's players, but I just find it easier to refer to named players. There are two strategies: panic buy, and act normally. Assuming that this is a simultaneous game (both players' decisions about strategy are revealed at the same time), then we can lay out the game as a payoff table, like this:


The payoffs are measured in utility (satisfaction, or happiness), for Sam and Chris. If both players act normally, then no one misses out on toilet paper, and both players receive 'normal' utility (utility = 0). If one player panic buys and the other doesn't, then the panic buyer is worse off by a little (they have to pay the costs of storing their panic purchases; utility = -2), but the player acting normally is much worse off (there is a good chance that the store runs out of stock and they have to search around for toilet paper; utility = -10). If both players panic buy, then both are worse off (costs of storing, and a good chance that the store runs out of stock, but at least they have some toilet paper once they find some that is available to buy; utility = -5).

Paloyo identifies two equilibriums in this game (everyone acts normally, and everyone panic buys). They are what we call Nash equilibriums, and to confirm that they are Nash equilibriums in our game, we can use the 'best response method'. To do this, we track: for each player, for each strategy, what is the best response of the other player. Where both players are selecting a best response, they are doing the best they can, given the choice of the other player (this is the textbook definition of Nash equilibrium). In this game, the best responses are:
  1. If Chris chooses to panic buy, Sam's best response is to panic buy (since -5 is a better payoff than -10) [we track the best responses with ticks, and not-best-responses with crosses; Note: I'm also tracking which payoffs I am comparing with numbers corresponding to the numbers in this list];
  2. If Chris chooses to act normally, Sam's best response is to act normally (since 0 is a better payoff than -2);
  3. If Sam chooses to panic buy, Chris's best response is to panic buy (since -5 is a better payoff than -10); and
  4. If Sam chooses to act normally, Chris's best response is to act normally (since 0 is a better payoff than -2).
Notice that, as Paloyo notes in his article, there are two Nash equilibriums in this game: (1) where both players choose to panic buy; and (2) where both players choose to act normally.
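For readers who like to tinker, the best response method can be sketched in a few lines of Python. The payoffs are the ones described above; the function names and code structure are just for illustration:

```python
# A minimal sketch of the 'best response method' for the 2x2 panic-buying game.
# Payoffs: both act normally (0, 0); one panic buys (-2) while the other acts
# normally (-10); both panic buy (-5, -5).

STRATEGIES = ["panic buy", "act normally"]

# payoffs[(sam_strategy, chris_strategy)] = (Sam's utility, Chris's utility)
payoffs = {
    ("panic buy", "panic buy"): (-5, -5),
    ("panic buy", "act normally"): (-2, -10),
    ("act normally", "panic buy"): (-10, -2),
    ("act normally", "act normally"): (0, 0),
}

def best_responses(player):
    """For each strategy the other player could choose, return this player's
    best response (the strategy giving them the highest payoff)."""
    responses = {}
    for other in STRATEGIES:
        if player == "Sam":
            responses[other] = max(STRATEGIES, key=lambda s: payoffs[(s, other)][0])
        else:
            responses[other] = max(STRATEGIES, key=lambda s: payoffs[(other, s)][1])
    return responses

def nash_equilibriums():
    """A strategy pair is a Nash equilibrium when each player's strategy is a
    best response to the other player's strategy."""
    sam_br, chris_br = best_responses("Sam"), best_responses("Chris")
    return [(s, c) for s in STRATEGIES for c in STRATEGIES
            if sam_br[c] == s and chris_br[s] == c]

print(nash_equilibriums())
# Both symmetric outcomes are equilibriums: everyone panic buys, or everyone acts normally.
```

Running this confirms the two equilibriums: (panic buy, panic buy) and (act normally, act normally).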

Which of these two equilibriums will obtain depends on what each player thinks the other player will do. So, if there is good reason to believe that the other player is acting normally, your best response is acting normally (and vice versa for the other player). But, if there is good reason to believe that the other player is panic buying, your best response is to panic buy too (and vice versa for the other player).

Notice that the acting normally equilibrium is better for both players than the panic buying equilibrium - everyone would be better off if everyone just acted normally. We refer to that as a Schelling Point (named after the Nobel Prize winner Thomas Schelling). In a coordination game (a game with more than one Nash equilibrium), a Schelling Point is the equilibrium that would be more likely to obtain - that's because everyone can see that the Schelling Point equilibrium is better for at least one player, and makes no player worse off (compared with any other Nash equilibrium). However, even though acting normally is a Schelling Point, and should be more likely (after all, it is the usual state of affairs), once people start panic buying, it suddenly becomes a better option for everyone to panic buy.

Which happens to be what we are seeing, with people desperately stockpiling toilet paper. Few people are stocking up on toilet paper because they think they need that much toilet paper. But no one wants to miss out on toilet paper because everyone else bought it up first. The outcome is everyone panic buying. Hopefully, some sanity will be restored by supermarkets, which are starting to impose limits on buyers. If acting normally is imposed on some buyers, and people can believe that others are being forced to act normally, then the Schelling Point equilibrium will be restored. And then the Great Toilet Paper Crisis will be over.

Monday 16 March 2020

Book review: Hidden Order

I just finished reading David Friedman's 1997 book Hidden Order: The Economics of Everyday Life. The Foreword by Steven Landsburg (author of, among many other books, Can You Outsmart an Economist?, which I reviewed earlier this year) notes that the book "can serve simultaneously as an introduction to mainstream economic theory and an introduction to the extraordinary mind of David Friedman". I can't speak to the second of these, but the book certainly is an introduction to economics.

What was most interesting to me about this book was the sheer number of examples and theoretical explanations that were part of the teaching of ECON100 since before I was involved in the paper (which goes back to 2003). To be honest, I wouldn't be at all surprised to learn that this book, or at least the writings of David Friedman, were the inspiration for many of the explanations and examples that we continue to use in ECONS101 today.

However, the book is beginning to show its age. The inclusion of numerous graphs is not out of place in a textbook, and the explanations are clear enough, but graphs like these would be unlikely to be seen on the pages of more recently published pop economics books. That makes this book less of a layman's introduction to economics, and more of an easy-reading textbook. However, even textbooks have come far since the 1990s, and this book no longer has an edge over the textbook market either.

Having said all of that, it was still a good read, and had many interesting bits, and I made a lot of notes on things that I can build into my ECONS101 and ECONS102 classes. Take this bit about burglar alarms:
Defensive expenditures by the victims are rent seeking as well - the function of a burglar alarm is to make sure that the property remains in the hands of its original owner.
Rent seeking occurs when a party undertakes socially wasteful expenditure in order to protect a monopoly position or some other source of market power. In this case, it is expenditure to protect property ownership. That sort of slightly surprising insight - linking an everyday observation to economic theory, in this case rent seeking - is the real value of this book, and there are many instances of it. However, they don't all work. For example, I don't buy that an effect of price controls is to shift the demand curve, and I suspect that adopting that approach would be incredibly confusing for students.

I'm glad I read this book, and there is much for an economics student to gain by reading it. However, I wouldn't recommend it to a general audience, particularly when there are many other options available. They should try Tim Harford's The Undercover Economist instead.

Sunday 15 March 2020

One additional positive effect of cancelling March Madness

The NCAA cancelled its season-ending basketball tournament (known as March Madness) earlier this week, due to the ongoing COVID-19 outbreak. This was the first time since the tournament began in 1939 that it had not gone ahead, and it has left fans and players distraught (especially number-one-ranked Kansas). However, other than the benefits in terms of reduced virus spread, there is at least one other margin on which the cancellation is likely to be a positive.

This 2019 article by Dustin White (University of Nebraska at Omaha), Benjamin Cowan (Washington State University), and Jadrian Wooten (Pennsylvania State University), published in the journal Contemporary Economic Policy (open access), looked at the relationship between March Madness participation and college drinking during the 1993, 1997, 1999, and 2001 seasons.

They use data from the Harvard School of Public Health College Alcohol Study, and compare individual-level student alcohol consumption between students at colleges that qualified for the tournament, and students at colleges that did not qualify. They have data on around 26,000 students across those four seasons. White et al. find that:
...tournament participation increases the binge drinking rate of male students by approximately 47% (relative to the average binge rate among males at tournament schools). Furthermore, we find that students are more likely to self-report drunk driving during the tournament.
When comparing the results across genders, they find that:
...the measured increase in binge drinking and number of drinks consumed is concentrated almost exclusively among male students.
When their team qualifies for the NCAA tournament, male students drink more, and binge drink more often, but female students are unaffected. So, with no March Madness this year, college campuses will likely be, at least on one dimension, a little less mad.

Friday 13 March 2020

The economic benefit of becoming a US state

After the Mexican-American War (1846-1848), a large chunk of northern Mexico was ceded to the United States in the so-called Mexican Cession (as a result of the Treaty of Guadalupe Hidalgo). This included the modern-day states of California, Nevada, and Utah, most of Arizona, and parts of New Mexico, Colorado, and Wyoming. We could also include the state of Texas, the annexation of which laid the seeds for the war.

This raises some obvious questions. Texas obviously thought they were better off as a state of the US. Is that actually what happened? What about the other states that joined the US as part of the Mexican Cession? And what about other territories that could easily have become states, such as Cuba or Puerto Rico?

Those are the questions that this recent working paper by Robbert Maseland (University of Groningen) and Rok Spruk (University of Ljubljana) sought to address. They use state-level and country-level data on GDP per capita, and attempt to estimate the economic growth impact of a state joining the US. This paper is interesting for several reasons, which is why we discussed it in the Economics Discussion Group at Waikato this week.

First, the paper illustrates the importance of the counterfactual - what would have happened if history had been different. In this particular case, what would economic growth have been in the Mexican Cession states, if they had remained part of Mexico? We don't know for sure of course, because we are only able to observe what happened after they became states, and we don't observe what would have happened if they didn't. Similarly, for the territories that didn't become US states that Maseland and Spruk look at, we don't observe what would have happened if they did become states, only what happened when they didn't.

Maseland and Spruk address the counterfactual problem by using the synthetic control method. Essentially, they create a model of each Mexican Cession state's observed variables (including GDP per capita) up to the time before the Mexican Cession, based on the observed variables of other countries. This creates, for each state, a synthetic version of the state that is made up of proportions of the data from other countries. For example, they find that:
...the growth pattern of prestatehood California is best reproduced by growth and development covariate values of Canada (49%), South Africa (19%), New Zealand (15%), Egypt (10%), Norway (5%), and United Kingdom (3%), respectively. On the other hand, the synthetic control group for Arizona consists of Egypt (70%), Jamaica (18%), United Kingdom (11%), and Greece (1%), respectively.
As an interesting aside, New Zealand was part of the synthetic versions of California (15%), Colorado (11%), Nevada (58%!), and New Mexico (1%).
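To see the mechanics behind those donor weights, here is a deliberately tiny sketch of the synthetic control idea. The 'treated' series and the two donor countries below are invented numbers, not Maseland and Spruk's data, and real applications use many donors and several covariates rather than a single series:

```python
# An illustrative sketch of the synthetic control method: choose non-negative
# donor weights summing to one so that the weighted combination of donors
# tracks the treated unit's pre-treatment series as closely as possible.

# Pre-treatment GDP per capita (hypothetical) for a 'treated' state and two donors.
treated = [10.0, 11.0, 12.0]
donors = {"Country A": [8.0, 9.0, 10.0], "Country B": [14.0, 15.0, 16.0]}

def pre_treatment_gap(w_a):
    """Sum of squared gaps between the treated series and the weighted donor mix."""
    w_b = 1.0 - w_a
    gaps = [(t - (w_a * a + w_b * b)) ** 2
            for t, a, b in zip(treated, donors["Country A"], donors["Country B"])]
    return sum(gaps)

# With two donors there is one free weight (the other is 1 - w), so a simple
# grid search over [0, 1] is enough for this illustration.
best_w = min((i / 1000 for i in range(1001)), key=pre_treatment_gap)
print(f"Country A weight: {best_w:.3f}, Country B weight: {1 - best_w:.3f}")
```

In this toy example the treated series sits two-thirds of the way towards Country A, so the search recovers weights of roughly 67% and 33% - the same kind of percentages reported in the quote above.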

The second interesting aspect of the paper is the results themselves. Maseland and Spruk compare the actual economic growth performance of each state with the growth performance of the synthetic version of itself. They find that there are:
...strong and pervasive effects of the admission on long-run growth. The underlying post-admission gap coefficients are both large, positive and statistically significant at 1%, respectively, and readily suggest that the effect of joining the US appears to be specific to the treated states.
For example, here's a graph of the economic growth path for actual and synthetic California:


The solid line is the actual trajectory for California, while the dashed line is the synthetic control. It is clear that they deviate from each other around 1850, and the synthetic control does much worse. This demonstrates how much better off California was after statehood.

Maseland and Spruk then go on to show that the opposite is true of Mexican states that did not join the US, finding that there are:
...large gains from the hypothetical admission of Mexican states to the United States although the variation in the long-run growth effect is notable... the states next to the US border appear to be most adversely affected by not joining the United States.
Maseland and Spruk then look at the hypothetical case of several territories joining the US, including Cuba, Puerto Rico, and the Philippines, (all of which were occupied by the US after the Spanish-American War in 1898), and Greenland (which was granted home rule by Denmark in 1979, and could hypothetically have joined the US as a state then). They find that:
By 2015, the synthetic Cuba as a hypothetical US states would be seven times richer than its real counterpart... By 2015, the difference between the synthetic Philippines as a US state, and the real Philippines is about a factor [of] 10.
...synthetic Puerto Rico as a US state would be 51 percent richer than the real non-state Puerto Rico... In terms of magnitude, the gap between synthetic Greenland as a US state and its real version as a Danish territory is 32 percent, respectively.
Finally, they look into the reasons for these substantial differences. Does statehood allow easy access to a larger domestic market and therefore drive economic growth, or does statehood lead to the adoption of better institutions? Maseland and Spruk find evidence in favour of the role of institutions:
On balance, our estimates indicate that the joint temporal and spatial variation in the level of electoral democracy can explain up to 41 percent of the statehood-induced growth premium. The corresponding variation in the level of liberal democracy accounts for up to 56 percent of the long-run growth benefits stemming from improved institutional quality upon the admission to the United States. The institutional quality bonus received by joining the US apparently drives a large part of the performance boost. This lends support to the second leg of the American Exceptionalism thesis, which is that it is America's political institutions that gave the US its unique advantage.
So, institutional quality was key to the improved economic growth performance of the Mexican Cession states after they joined the US.

There is a third reason why this paper is interesting. The original Constitution of Australia provided an opportunity for New Zealand to become a state of Australia. We chose not to. On the other hand, Western Australia became a state at that time following a referendum, when instead they could have become independent. An interesting exercise for an enterprising honours student might be to look at these two cases (data permitting) and follow the process outlined by Maseland and Spruk to construct the relevant counterfactuals for New Zealand and Western Australia. Was New Zealand better off going it alone?

[HT: Marginal Revolution]

Thursday 12 March 2020

Differing perspectives on introductory economics textbooks

The latest issue of the Journal of Economic Literature has two very interesting articles, offering different perspectives on teaching introductory economics. The first article (ungated earlier version here) is by Samuel Bowles (Santa Fe Institute) and Wendy Carlin (University College London), two of the principals of the CORE team, the authors of the free e-text The Economy. Their article is interesting because it talks about the development of textbook resources over time, with a particular focus on the contribution of Paul Samuelson's seminal 1948 textbook:
Samuelson was aware even then that a substantial fraction of all students in higher education would take an introduction to the subject; those who would go on in economics were a minority...
That key issue (that most first-year economics students will do no economics after that first paper) has not changed in the 70-plus years since Samuelson's text was first published. Bowles and Carlin then go on to present some reasonably sophisticated textual analysis comparing Samuelson's text with two widely used modern textbooks, one by Gregory Mankiw, and one by Paul Krugman and Robin Wells. They then compare the CORE textbook with Krugman and Wells. They identify several important differences in the modern textbooks compared with Samuelson's textbook, such as the:
...shift away from Samuelson’s early engagement with the most pressing economic problems of the day to a focus on economics as individual decision making, “thinking like an economist,” and the application of market-clearing supply and demand models to a larger domain of economic problems.
In contrast, the CORE text is explicitly written with a focus on the issues of our times, like climate change and financial crises, firmly in mind. The other key difference between CORE and other modern textbooks is the topic ordering. Mankiw and Krugman/Wells essentially start with supply and demand, and branch off from there to teach alternative market structures as exceptions to the perfectly competitive market. However, the CORE text goes back to Samuelson's approach, about which Bowles and Carlin note that:
Even within part three, Samuelson adopts an unconventional ordering of topics both by previous and by today’s standards. The firm’s output and pricing decisions are presented first for the monopolistically competitive firm (“includes most firms and industries” p. 492) and then finally a section on the perfectly competitive firm (“includes a few agricultural industries”) in which he introduces right at the start “decreasing costs and the breakdown of competition” (p. 505).
That 'unconventional ordering' of monopolistic competition as the norm, with perfect competition (supply and demand) as an exception, is the model applied in the CORE text. We use the CORE textbook in ECONS101 at Waikato, and I have a lot of sympathy for their approach, not least because it engages students who have done economics at high school in topics that they are unfamiliar with from the beginning (such as constrained optimisation and game theory), demonstrating that first-year economics is not simply a do-over of Year 13 economics.

The second article (ungated earlier version here) is by Gregory Mankiw (Harvard University), author of the best-selling introductory economics textbook. In the article, Mankiw explains his philosophy for writing textbooks:
I have always thought that instructors, especially in introductory courses, are like ambassadors for the economics profession... Just as ambassadors are supposed to faithfully represent the perspective of their nations, instructors in introductory courses (and intermediate courses as well) should faithfully represent the views shared by the majority of professional economists.
There is a clear difference in emphasis from the CORE authors. Bowles and Carlin take a much more explicitly student-centred approach, while Mankiw seems to me to take a discipline- or content-centred approach. Or at least, that's my reading of the two articles. Mankiw then goes on to defend the focus on supply and demand in his textbook:
To understand how market economies work, the single most useful tool is the model of supply and demand. When teaching the introductory course, therefore, this model should be developed as fully and consistently as possible. This tenet was my guiding beacon as I drafted my principles text. 
According to Mankiw, one of the key advantages of utilising supply and demand as a framework is that it allows for a clearer focus on welfare economics. I have a lot of sympathy for this argument as well. We used the Mankiw textbook for more than a decade in ECON100, before switching to the CORE textbook when we introduced ECONS101. Mankiw is still the required textbook for my ECONS102 class (Economics and Society), which has a more explicit focus on welfare economics.

Reading these two articles together was good for gaining an understanding of two competing perspectives on the teaching of introductory economics. It would have been more interesting had the two articles explicitly referenced each other - what would Bowles and Carlin say in response to Mankiw's insistence on supply and demand and the implication that the CORE text undervalues welfare economics? What does Mankiw think about the downplaying of the concept of elasticity in the CORE text? These and many other questions come to mind.

Although these two approaches appear contradictory, they need not be. I think it is useful to expose our economics students to both perspectives, which is why I am glad that they see the CORE text in ECONS101, and the Mankiw text in ECONS102 (albeit with lots of extra material from a range of perspectives added in). I'm increasingly of the opinion that the difference in focus of the two texts sets our students up well for future study in economics. Bowles and Carlin make the case for pluralism in their article, but argue against 'pluralism by juxtaposition', instead arguing that:
Pluralism can also be pursued, as Samuelson aspired to do, by integrating the insights of differing schools of thought and knowledge from other disciplines into a coherent paradigm. This can give students analytical tools borrowed from many schools or disciplines and help them to do economics rather than simply to talk about it. We call this pluralism by integration.
I'd argue that my students who do both ECONS101 and ECONS102 get a more thorough pluralism by integration, involving two different perspectives. There is no perfect way to teach introductory economics. However, providing students with a range of tools and perspectives on how to understand economic problems is a great way to start.

Monday 9 March 2020

The New Zealand Initiative's new student success measure both goes too far, and not far enough

The New Zealand Initiative released a new research note today, by Joel Hernandez. It's part of an ongoing series of research they've been conducting into secondary school performance in New Zealand (see also my earlier post on their work here). In this new research note, Hernandez looks at secondary school performance, in terms of how well students perform at each level of NCEA and in University Entrance. Specifically, he looks at schools' relative performance (that is, their ranking compared with other schools) in terms of a raw performance measure, and a measure that has been adjusted for a bunch of student-level (family background) and school-level characteristics. The rationale for adjusting the measure is straightforward. As Hernandez notes:
...a more complex statistical model is required to separate the contribution of family background from the contribution of each school. Without it, the Ministry, principals and parents cannot identify a school’s true performance.
The adjusted ranking for a school provides an assessment of how well the school is doing, compared with what would be expected based on the family background of their students, and then ranks schools on that basis. Schools that do better than expected will rank higher on the adjusted measure than schools that do worse than expected. That all seems fair enough.
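The adjustment logic can be illustrated with a toy calculation. The schools, pass rates, and background index below are invented for illustration; Hernandez's actual model uses many student-level and school-level controls, not a single predictor:

```python
# A minimal sketch (with invented numbers) of the adjustment idea: regress raw
# pass rates on a background measure, then rank schools on the residual -- how
# much better or worse each school does than its student mix predicts.

# (school, raw NCEA pass rate, hypothetical family-background index)
schools = [("School X", 0.90, 9.0), ("School Y", 0.70, 3.0), ("School Z", 0.78, 7.0)]

# Ordinary least squares with a single predictor: pass_rate = a + b * background.
n = len(schools)
mean_x = sum(s[2] for s in schools) / n
mean_y = sum(s[1] for s in schools) / n
b = (sum((s[2] - mean_x) * (s[1] - mean_y) for s in schools)
     / sum((s[2] - mean_x) ** 2 for s in schools))
a = mean_y - b * mean_x

# A school's 'value added' is its actual pass rate minus the predicted one.
value_added = {s[0]: s[1] - (a + b * s[2]) for s in schools}
ranking = sorted(value_added, key=value_added.get, reverse=True)
print(ranking)  # schools ordered by performance relative to expectations
```

Notice that a school can have the highest raw pass rate and still rank poorly on the adjusted measure, if its student background predicted an even higher pass rate - which is exactly the point of the adjustment.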

The research note then presents case studies for three schools, and shows that their ranking differs (in some cases quite substantially) between the raw measure, and the measure adjusted for student and school characteristics. The implication is that ranking schools on the basis of raw NCEA pass rates (or similar measures) does a poor job of capturing school quality, based on the body of students that each school actually has. No arguments here. Hernandez then concludes that the adjusted measure has value for the Ministry of Education, school principals, Boards of Trustees, and the public (that is, parents). I would have to disagree - giving this information to parents would not be a good thing. To see why, let's consider the decision-making process for parents.

Parents, to the extent they are able given other constraints, are looking to provide their children with the best education possible. However, information about school quality is seriously incomplete - most parents probably have little idea whether each available school is good, bad, or indifferent. Even the schools themselves may not know how high a quality the education they provide is, relative to other schools. So parents rightly look for some signal of quality - something that plausibly separates good schools from not-so-good schools. Many parents currently use the schools' decile ratings for this purpose - they assume that schools with a higher decile rating are higher quality schools.

However, the decile rating is not a measure of school quality - it is a measure of the socio-economic background of the families of students who attend the school. At the risk of over-simplification, students from more affluent families tend to go to higher decile schools. So, parents clamour to get their children into higher decile schools, which drives up house prices in school zones for those schools. That makes it even more difficult for the average family to afford to send their children to a high decile school. All of that in spite of the fact that decile rating should convey little information about the quality of the education provided by the school, because that's not what it's designed for.

The decile rating's failure to measure school quality is the problem that the New Zealand Initiative's new measure was designed to improve upon. And it does, but not necessarily in an entirely helpful way. The adjusted measure that Hernandez uses essentially captures the average 'value added' that the school provides its students - it is a measure of how much better (or worse) off they are by attending that school, relative to what would be expected. It essentially assumes that, having controlled for family background, all students would receive the same (average) impact of attending that school. That creates two problems.

First, it may actually make the issue of sorting even worse than before. At the moment, many parents select a school for their children based on the decile rating of the available schools. If the new measure of school quality is released to the public, parents now have some extra information on which to base their choice of school. They might look at the higher decile schools, find the high decile schools with the best quality, and then try to select into those. This will simply drive demand for those schools even higher than before. Previously, all high decile schools were treated similarly. This new measure will mean that not all high decile schools will be the same - some will become much more attractive than others. And the reverse will be true of low decile schools, especially low decile schools that perform poorly on the measure of school quality. In this way, the measure goes too far (if released to the public). Hernandez argues that the measure is "not designed to create new league tables" but that view is seriously naive.

Second, the measure doesn't really provide what parents are actually looking for. Parents want to know which school is going to provide their child with the best quality education. Taking the measure at face value, an uninitiated parent might conclude that 'high quality' means the same thing at every school. But it doesn't - school quality in this measure is based on a comparison of how well students fare compared with what is expected based on their characteristics. It captures the average effect on students who have the average characteristics of students at that school. Since virtually no student has exactly the average characteristics of students at their school, the measure doesn't strictly apply to any individual student. So, just because a school adds a lot of value to its average student, it doesn't mean that it adds the same value to all of its students. Parents should really want a measure of value added for students with the same characteristics as their child, not the average. In this way, the adjusted measure doesn't go far enough.

Neither of those issues is a knock on the quality work that Hernandez has done. I think it's important. I'm just not sure that releasing this information to the public is necessarily a good thing. The implications need to be carefully thought through first.

Even releasing the results to the schools might not be a good thing. We already see schools trumpeting their raw NCEA performance on the big noticeboards at the school gates, soon after the results are released. I can imagine that schools that do well on this measure would want to publicise their results as widely as possible ("Look at us! We're a high quality school"), while schools that don't do so well would want to keep their results quiet. If you doubt this, consider Hernandez's research note - the one school that did better than expected in the adjusted measure is named in the research note, while the other two schools (that did worse than expected) did not want their identities revealed.

It is good to know that this research has moved on from a consideration of teacher value-added, and is now looking at the school level rather than the teacher level. This measure would provide good data to the Ministry of Education as to which schools are performing well, and which schools are performing not so well. This could easily be incorporated into ERO reports if desired (but noting the caveats above on releasing results to schools).

This approach also has potentially wider application. I wonder whether it would be possible to look at subject rankings (rather than school rankings), or rankings based on grouping particular standards - that is, an analysis at the subject or standard level, rather than the school level. That might provide some interesting results in terms of revising the NCEA curriculum itself.

[HT: New Zealand Herald]

Saturday 7 March 2020

Relative prices, incentives, and the move to automated telephone switching

This week in my ECONS101 class, we talked about the Industrial Revolution. Specifically, we used a relatively simple production model to show how changes in the relative price of coal and labour in England created incentives for firms to switch from using labour-intensive production technologies to using coal-intensive production technologies.

That model, and the associated explanation, can be used in a range of situations. For example, I previously used it in a post about taxing robots. However, in this post I want to talk about telephone switching. The Federal Reserve Bank of Richmond's Econ Focus had an article about telephone operators late last year:
Users of the telephone in the late 19th century and early 20th century couldn't dial their calls themselves. Instead, they picked up their handset and were greeted by an operator, almost always a woman, who asked for the desired phone number and placed the call. Technology to automate the process emerged quickly, however: The first automated telephone switching system — a replacement for human operators and their switchboards — came into use with much fanfare in La Porte, Ind., on Nov. 3, 1892, 16 years after Alexander Graham Bell's patent on the telephone.
Yet telephone companies continued relying on the women long afterward. In 1910, only around 300,000 telephone subscribers had automatic service — that is, service in which they dialed calls themselves rather than interacting with an operator — out of more than 11 million subscribers total. The companies of the Bell System did not install their first fully automated office until Dec. 10, 1921, and did not install an automated system in a large city until the following year, two decades after the technology had been demonstrated. Those telephone users who did have access to automated calling were customers of independent phone companies, mostly in small towns and rural areas.
This raises a couple of questions. First, why did the telephone companies switch from human operators to automated switching systems? Second, why did this change happen in small towns and rural areas before big cities? The simple production model can help us explain.

Let's start by setting up the simple production model. It is shown in the diagram below, with capital (e.g. automated switching) on the y-axis and labour on the x-axis. Let's say that there are only two production technologies available to telephone companies, A and B, and that both production technologies would produce the same quantity (and quality) of output (connected telephone calls) for the combination of inputs (capital and labour) shown on the diagram. Production technology A is a labour-intensive technology (human operators) - it uses a lot of labour, and supplements the labour with a bit of capital. Production technology B is a capital-intensive technology (automated telephone switching) - it uses a lot of capital, and a small amount of labour (for keeping the switching machines operating).


How should a telephone company choose between the two competing production technologies A and B? If the firm is trying to maximise profits, then given that both production technologies produce the same quantity and quality of output (connected telephone calls), the firm should choose the technology that is the lowest cost. We can represent the firm's costs with iso-cost lines, which are lines that represent all the combinations of labour and capital that have the same total cost. The iso-cost line that is closest to the origin is the iso-cost line that has the lowest total cost. The slope of the iso-cost line is the relative price between labour and capital - it is equal to -w/p (where w is the wage, and p is the cost of a 'unit' of capital).

First, consider the case where labour is relatively cheap and capital is relatively expensive. The iso-cost lines will be relatively flat, since w is small relative to p (so the magnitude of -w/p is small). In this case, the iso-cost line passing through A (ICA) is closer to the origin than the iso-cost line passing through B (ICB), as shown in the diagram below. So production technology A is the least-cost production technology, and firms should use the relatively labour-intensive production technology (human operators).


Now consider what happens if wages are higher, or the cost of capital is lower. The relative price of labour to capital (w/p) would increase, and the iso-cost lines would get steeper. This is shown in the diagram below, where the iso-cost line passing through B (ICB') is now closer to the origin than the iso-cost line passing through A (ICA'). So, now production technology B is the least-cost production technology, and firms should use the relatively capital-intensive production technology (automated telephone switching).
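This least-cost comparison is just arithmetic on the iso-cost formula (total cost = wL + pK). A minimal sketch, where the input bundles for technologies A and B are hypothetical numbers chosen purely for illustration:

```python
# Least-cost choice between two production technologies that produce
# the same output: A (labour-intensive) and B (capital-intensive).
# The (labour, capital) bundles below are hypothetical, for illustration only.

def total_cost(labour, capital, w, p):
    """Total cost of an input bundle at wage w and capital price p."""
    return w * labour + p * capital

def least_cost_technology(w, p, bundle_a=(10, 2), bundle_b=(2, 10)):
    """Return the cheaper technology ('A' or 'B') at prices (w, p)."""
    cost_a = total_cost(*bundle_a, w, p)
    cost_b = total_cost(*bundle_b, w, p)
    return "A" if cost_a <= cost_b else "B"

# Cheap labour, expensive capital: the labour-intensive technology wins.
print(least_cost_technology(w=1, p=5))   # A

# Higher wages (or cheaper capital): the capital-intensive technology wins.
print(least_cost_technology(w=5, p=1))   # B
```

Flipping the relative price of labour to capital is exactly what flips the firm's choice, which is the story the iso-cost diagrams tell.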


So, we already know that telephone companies switched from A to B, and we have an explanation of why it might have happened. Is our model's explanation consistent with the facts? In terms of wages, most telephone operators were women. Going back to the article in Econ Focus:
At first, the telephone industry hired men and boys as operators. But the practice was short-lived. The first woman operator, Emma Nutt, was hired by a telephone service in Boston in 1878, and the hiring of women spread quickly. Women operators were viewed by the companies as more polite to customers, more patient, more reliable, and faster — not to mention cheaper.
That last point is relevant to our model - wages of telephone operators were low. What about the cost of capital? From the article:
The difference with automatic telephone switching was that the cost structure, perhaps surprisingly, favored the smaller firms with their smaller customer bases. With the electromechanical systems of the day, each additional customer was more, not less, expensive. Economies of scale weren't in the picture. To oversimplify somewhat, a network with eight customers needed eight times eight, or 64, interconnections; a network with nine needed 81.
"You were actually getting increasing unit costs as the scope of the network increased," says Mueller. "You didn't get entirely out of the telephone scaling problem until digital switching in the 1960s."
So the cost of automated telephone switching was higher in larger markets (e.g. large urban areas) than in smaller markets (small towns or rural areas). It seems likely, then, that the relative price of labour to capital (w/p) was low (low wages, high cost of capital), and even more so in large urban areas. This makes the iso-cost lines flat (and flatter in large urban areas), which favours the labour-intensive technology.
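The scaling rule quoted above is easy to check in a few lines. This sketch follows the article's admittedly oversimplified n × n rule (a full mesh would actually need n(n-1)/2 links), and shows why per-customer costs rose as the network grew:

```python
# The article's oversimplified scaling rule for electromechanical switching:
# a network of n customers needs roughly n * n interconnections, so the
# per-customer cost rises as the network grows - diseconomies of scale.

def interconnections(n):
    """Approximate interconnections needed for n customers (article's rule)."""
    return n * n

def per_customer(n):
    """Interconnections per customer - equals n, so it grows with network size."""
    return interconnections(n) / n

print(interconnections(8))   # 64, as in the article
print(interconnections(9))   # 81 - one more customer, 17 more interconnections
print(per_customer(8))       # 8.0
print(per_customer(9))       # 9.0 - each new customer raises the unit cost
```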

But later:
Together with refinements in the technology, probably the foremost factor was wage inflation during and after the Great War — what is known today as World War I.
Following the war, a steep rise in the wages of the labor pool from which telephone companies drew telephone operators was enough to jolt Bell management into rethinking its attitude toward automatic switching. Thus the Bell System began planning in 1919 to adopt automation.
The increase in wages increases the relative price of labour to capital (w/p), making the iso-cost lines steeper, and favouring the capital-intensive technology. The change to capital-intensive technology didn't happen overnight though. Sometimes incentives take a while to be acted on:
In 1965, the year after Baker's forecast, the Bell System installed its first permanent fully electronic switching system in Succasunna, N.J.
Digital switching, which was introduced in the 1960s, lowered the price of capital, which in our model makes the iso-cost lines even steeper, increasing the incentives for telephone companies to switch to automated (digital) telephone switching. As you can see, the simple production model that my ECONS101 class uses in the very first week is quite versatile in explaining changes in production over time.

[HT: Marginal Revolution]

Friday 6 March 2020

Mobile phone use and academic performance

I've posted a couple of times about the effect of laptop use on student performance (see here and here), and more recently about the effect of mobile phones on student learning. That last study, which was based on a mobile phone ban in secondary schools, provided some suggestive but weak evidence that mobile phones reduced student test scores. I recently read another study (open access), by Andrew Lepp, Jacob Barkley, and Aryn Karpinski (all Kent State University), published in 2015 in the journal SAGE Open, that comes to a similar conclusion.

Lepp et al. base their results on an analysis of a survey of 518 university students, who were asked about their total mobile phone use in minutes per day (for "calling, texting, video games, social networking, surfing the Internet, software-based applications, etc."), and whose survey results were matched to their academic record (in particular, their GPA). They found that:
...there was a significant, negative relationship between total daily cell phone use and college GPA...
They then conclude that:
These results suggest that given two college students from the same university with the same class standing, same sex, same smoking habits, same belief in their ability to self-regulate their learning and do well academically, and same high school GPA—the student who uses the cell phone more on a daily basis is likely to have a lower GPA than the student who uses the cell phone less. 
That may be true, but it doesn't answer the question that we really want answered, which is: "Does mobile phone use reduce learning?" In my ECONS101 class this week, we talked about the difference between causation and correlation. What Lepp et al. found (and they acknowledge this in their conclusion) is a negative correlation between mobile phone use and academic performance. That could be because mobile phone use causes reduced learning. To that end, Lepp et al. suggest that:
...the negative relationship between cell phone use and academic performance identified here could be attributed to students’ decreased attention while studying or a diminished amount of time dedicated to uninterrupted studying.
However, as I noted in the ECONS101 lecture, just because you can tell a good story about why an observed relationship is causal, that doesn't make it causal. In this case, it is possible there is reverse causation (maybe doing worse in class makes students want to distract themselves more, and they use their mobile phone to do that), or more likely some third factor is affecting both mobile phone use and GPA (maybe more conscientious students use their phone less and also do better in class).

We're going to have to wait for a better study (perhaps purely experimental, or based on a natural experiment) before we'll have a clear idea of whether mobile phones are bad for learning, or not. In particular, even if mobile phone use causes worse student learning, presumably it isn't all forms of phone use, but particular types of use (or timing of use), that are the real problem. Lepp et al.'s study doesn't provide us with any answers to those questions, so it doesn't really tell us much that we didn't already know (or suspect).



Tuesday 3 March 2020

Why study economics? Return on investment edition...

When prospective students (or current students) ask me why they should study economics, I usually talk about the interesting and varied nature of the work (see some examples in the long list of links at the end of this post), and how it provides an analytical mindset and skills that are useful across many different occupations. Economics is a great complement to any other programme of study (in fact, thinking about recent tutors I have had in my papers or my research students, they've variously been studying economics with strategic management, chemistry, environmental sciences, political science, computer science, mathematics, and teaching).

However, another good reason to study economics is that economics graduates earn more than almost any other graduate. To illustrate, the BBC reported earlier this week:
Higher pay still makes it financially beneficial to go to university for most students in England, says research from the Institute for Fiscal Studies...
The study on projected earnings, based on tax data, shows wide variations in the financial winners and losers between different subjects.
  • For women, the financial gains of studying creative arts and languages are "close to zero"
  • Medicine will bring an extra £340,000 for women, economics £270,000 and £260,000 for law
  • Men studying creative arts subjects are projected to lose £100,000, compared to their counterparts who did not go to university 
  • For men in the top-earning subject areas of medicine and economics, the likely gain is £500,000
Those are the gains after you subtract the cost of studying, including the cost of course fees, and the foregone earnings while studying. It's based on real earnings data from England, but the results would be very similar in New Zealand (and anywhere else, for that matter). In case you're wondering why the returns are lower for women, the gender wage gap is important here, as is time out of the workforce (since the data are based on lifetime earnings).
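The netting-out calculation described above is simple to sketch. The figures in the example below are hypothetical placeholders, not the IFS estimates:

```python
# Back-of-the-envelope net return to a degree, following the article's
# definition: lifetime earnings premium minus course fees and minus
# foregone earnings while studying. All figures below are hypothetical.

def net_return(gross_lifetime_premium, annual_fees, years, foregone_per_year):
    """Net financial gain from a degree, after the costs of studying."""
    study_cost = years * (annual_fees + foregone_per_year)
    return gross_lifetime_premium - study_cost

# e.g. a hypothetical £400,000 gross lifetime premium, three years of study
# at £9,250 in fees and £18,000 in foregone earnings per year:
print(net_return(400_000, 9_250, 3, 18_000))  # 318250
```

The same arithmetic run with a small (or zero) earnings premium is how a subject can come out as a net financial loss, as the IFS found for men studying creative arts.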

If post-study earnings are important to you, then it is hard to beat economics as a programme of study, regardless of your gender. And you can double down on post-study earnings if you pair economics up with accounting or finance (although I maintain that the most interesting jobs probably pair economics up with something different from everyone else).

[HT: Gemma]


Sunday 1 March 2020

The rise and fall of foot-binding in China

Foot-binding was the customary practice in some parts of China that involved tightly binding young girls' feet in order to modify their size and shape. Bound feet involved a trade-off for the family - the girl would increase her chances of marrying well, but at a cost of a lifetime of pain and discomfort, as well as a reduction in the girl's contribution to agricultural labour.

The practice of foot-binding arose in the 10th Century C.E., and continued until it was finally banned in the early 20th Century. Why did such a practice arise, and why did it persist for so long? As with most things, it has to do with incentives.

In a recent job market paper, Xinyu Fan (UCLA) and Lingwei Wu (Hong Kong University of Science and Technology) looked at the changing incentives for foot-binding. They identified a key change in the 10th Century that created the incentives for foot-binding - the introduction of the keju (the civil service examination system). As they explain:
The system was established during the Sui (581-618) and the Tang (618-907) dynasties, consolidated and expanded during the Song (960-1276) and fully institutionalized during the Ming (1368-1644) and Qing (1644-1911). During the post-Song period, the most important central and local officials and bureaucrats were selected through this system...
The exam system generated a social hierarchy and deeply affected social mobility in historical China, serving as a major social ladder for men to climb up.
Prior to the introduction of the keju, marriage was almost entirely within-class. Upper class women would marry upper class men (who would often be civil officials, like their fathers), and lower class women would marry lower class men. The keju system shook up this arrangement, because now the appointment of officials was meritocratic - within the upper-class men, those that performed better on the exams would rise up the civil service ranks faster. This increased the competition for the 'best' men, and one way to attract those men was for young women to increase their attractiveness through foot-binding. As Fan and Wu note:
In a stratified society where marriage is completely assortative in terms of family status, there is no incentive for foot-binding. The introduction of a gender-biased examination system significantly increased men’s mobility and quality dispersion. Historically, this led to the emergence of foot-binding among upper class women. When meritocracy increases further, as happened historically in China, marrying-up benefits continue to increase, and foot-binding diffuses from upper class women to lower class women, exactly the sequence observed in China.
Fan and Wu provide a variety of evidence supporting their theory, including the opportunity cost of foot-binding. In areas of rice cultivation, where more agricultural labour is required, the opportunity cost of foot-binding (and losing the girl's agricultural labour) is higher than in areas of wheat cultivation, where less agricultural labour is required. As predicted by the theory:
...our empirical analysis using county-level data from the Republican archives shows that higher suitability of rice relative to wheat and higher suitability for household handicraft are associated with less/more foot-binding among lower class women respectively, and the county exam quota predicts a higher incidence of foot-binding.
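The rice/wheat prediction rests on a simple opportunity-cost comparison, which can be sketched as follows (the payoffs below are hypothetical, not taken from Fan and Wu's paper):

```python
# A stylised sketch of the trade-off Fan and Wu describe: a family binds
# feet only if the expected marriage-market gain exceeds the opportunity
# cost of the girl's lost agricultural labour. All payoffs are hypothetical.

def binds_feet(marriage_gain, labour_cost):
    """Family binds feet when the marriage-market gain exceeds the cost."""
    return marriage_gain > labour_cost

rice_labour_cost = 10   # labour-intensive crop: high opportunity cost
wheat_labour_cost = 4   # less labour-intensive: lower opportunity cost
marriage_gain = 7       # payoff from a daughter marrying up

print(binds_feet(marriage_gain, rice_labour_cost))   # False: less foot-binding
print(binds_feet(marriage_gain, wheat_labour_cost))  # True: more foot-binding
```

With the same marriage-market gain everywhere, the higher opportunity cost in rice-growing areas is enough to flip the decision, matching the county-level pattern in the data.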
Opportunity cost also explains the decline of foot-binding:
...after lasting for more than a thousand years, the gender-biased exam system collapsed in 1905. After the Opium Wars in the mid-19th century, Christian missionaries spread God’s message in China. An important part of the missionary work was the establishment of girl’s schools, mostly in coastal cities. During the Republican years, girls had greater opportunity to attend school. The increasing equality of educational opportunities promoted women’s upward mobility, and women’s quality dispersion began to catch up with that of men. In this case, the payoff of foot-binding as a costly beauty investment decreased...
Another economic force driving Chinese women out of foot-binding was the modern industrialization process. Starting in the late Qing, industrialization imposed transport infrastructure, and more integrated markets, which deeply altered the market structure for textile production... the opportunity cost of foot-binding increased, because women now had to leave their homes to work in distant factories...

Fan and Wu don't provide empirical evidence in support of this, but perhaps that is an opportunity for some future research.

In economics, we are particularly interested in the role of incentives. Foot-binding arose because of a change in incentives for young girls (and their families), and eventually died out as a cultural practice when the incentives changed again (albeit, nearly a thousand years later).

[HT: Marginal Revolution, early last year]