Thursday, 31 December 2020

Book review: Reinventing the Bazaar

I just finished reading John McMillan's 2002 book Reinventing the Bazaar. Diane Coyle at The Enlightened Economist has recommended this book many times (including earlier this year), and it's a recommendation that I am glad I finally followed, and one with which I fully concur.

The subtitle is "A Natural History of Markets", but I don't think the title or subtitle quite do the book justice. Most treatments of economics, including most textbooks, take markets as a given, and fail to really explain the inner workings of market design. McMillan's book does an outstanding job of filling that gap. It has become rarer over time that I finish a book with pages of notes on little bits that I can add to my teaching, but that was certainly the case with this book.

McMillan is neither a market evangelist nor a market skeptic. He cleanly dissects the more extreme arguments on either side, especially in the concluding chapter, and charts a more centrist path throughout the book. I particularly liked this passage:

The market is not omnipotent, omnipresent, or omniscient. It is a human invention with human imperfections. It does not necessarily work well. It does not work by magic or, for that matter, by voodoo. It works through institutions, procedures, rules, and customs.

The focus of the book is on explaining institutions and market design, using real-world examples of what has worked and what has not. There are several pages devoted to the 1980s reforms in New Zealand, which was interesting to see (although based at Stanford University, McMillan is a kiwi, so the use of New Zealand as an example should come as no surprise). However, I don't think everyone would quite agree with the conclusions about the success of taking the 'shock therapy' approach.

McMillan also commits a sin in my eyes by defining health care and education as public goods (which they are not, because they are excludable). I can forgive him that one indiscretion though, because the book is wonderfully written and easy to read. It helps that he and I appear to share similar views about the market, as his conclusion demonstrates:

The market is like democracy. It is the worst form of economy, except for all the others that have been tried from time to time.

I really enjoyed this book, and I highly recommend it to everyone. 

Wednesday, 30 December 2020

Try this: Riding the Korean Wave and using K-Pop to teach economics

I use a bit of popular culture to illustrate economics concepts in class, but some of the videos I use are getting a bit dated (although The Princess Bride movie is a timeless classic). Maybe it's time for a refresh?

In a new working paper, Jadrian Wooten (Pennsylvania State University), Wayne Geerling (Monash University), and Angelito Calma (University of Melbourne) describe using K-Pop examples to illustrate economic concepts. K-Pop is incredibly popular worldwide, and increasingly going mainstream, so it is something that students will likely be familiar with.

Wooten et al. have created videos with English subtitles and linked them to the sort of economics that is taught in principles classes. Specifically, in the paper they illustrate the approach with three examples:

  1. EXO-CBX, "Ka-Ching", which can be used to illustrate scarcity, trade-offs, and opportunity costs;
  2. Blackpink, "Kill this love", which can be used to illustrate sunk costs and decision-making; and
  3. BTS, "No", which can be used to illustrate comparative advantage, negative externalities, arms races, and zero-sum games.
The music4econ.com website has many other videos as well (not just K-Pop). Using music videos in teaching is not a new idea (I've posted about it before here), and there are many examples of other forms of pop culture being used to teach economics (such as Broadway musicals, The Office, and The Big Bang Theory). This website can be added to the list.

Enjoy!

[HT: Marginal Revolution]

Monday, 28 December 2020

The gravity model and cultural trade in restaurant meals

One of my favourite empirical models to work with is the gravity model. It performs extremely well (in terms of both in-sample and out-of-sample forecast accuracy) when used in migration and trade contexts, and it is quite intuitive. Essentially, in a gravity model the flow (of goods and services, or people) from area i to area j is negatively related to the distance between i and j (so, if i and j are further apart, the flows are smaller, most likely because it costs more to move from i to j), and positively related to the 'economic mass' of i and j (so, if i and/or j is larger, the flows from i to j will be larger).
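For anyone who wants to see what that looks like in practice, here is a minimal sketch of the standard log-linear gravity specification, estimated on a handful of made-up observations (the variable names, numbers, and specification are purely illustrative assumptions, not any particular study's model):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical bilateral flows: one row per origin-destination pair, with the
# flow (e.g. migrants, or trade value), the 'economic mass' of each area
# (e.g. population or GDP), and the distance between them.
flows = pd.DataFrame({
    "flow":     [1200, 300, 4500, 80, 950, 60],
    "mass_i":   [5.0, 5.0, 1.5, 1.5, 0.3, 0.3],   # mass of origin i
    "mass_j":   [1.5, 0.3, 5.0, 0.3, 5.0, 1.5],   # mass of destination j
    "distance": [130, 640, 130, 510, 640, 510],   # km between i and j
})

# Log-linear gravity model:
#   ln(flow_ij) = a + b1*ln(mass_i) + b2*ln(mass_j) + b3*ln(distance_ij) + e_ij
# We expect b1, b2 > 0 (bigger areas generate larger flows) and b3 < 0
# (flows shrink with distance). With only six made-up rows the estimates are
# meaningless - the point is the structure of the regression.
model = smf.ols(
    "np.log(flow) ~ np.log(mass_i) + np.log(mass_j) + np.log(distance)",
    data=flows,
).fit()
print(model.params)
```

In real applications the specification would usually also include things like common-border and common-language indicators (and often origin and destination fixed effects), and trade economists frequently estimate it by Poisson pseudo-maximum likelihood rather than OLS.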

I have used the gravity model myself (e.g. see this post), and so have my students (e.g. see this post). I particularly like it when I find examples of unexpected uses of the gravity model. For instance, there was this paper on running the gravity model in reverse to find lost ancient cities (which I blogged about here). 

Most of the time, a gravity model of trade involves goods and services that cross borders. However, that is not the case in this recent article by Joel Waldfogel (University of Minnesota), published in the Journal of Cultural Economics (appears to be open access, but just in case there is an ungated earlier version here). Waldfogel is probably best known for his work on the deadweight loss of Christmas (see also here), but in this research he looks at the cultural trade in restaurant dining.

The interesting thing about this article is that the data isn't really trade data at all. Restaurant meals don't cross borders. Instead, it is the intellectual property that is crossing borders, which is why this article relates to the literature on cultural economics. Waldfogel uses:

...Euromonitor data on aggregate and fast-food restaurant expenditure by country, along with TripAdvisor and Euromonitor data on the distribution of restaurants by cuisine.

He links each cuisine to an origin country (i in the description of the gravity model above), and treats the country where each restaurant is located as the destination country (j in the gravity model description). He then calculates measures of 'trade flows' in restaurant meals, both including and excluding fast food, and runs a gravity model using those data. He finds that:

As in many models of trade, distance matters: a 1% increase in distance reduce trade by about 1%... Common language and common colonial heritage also matter.

Those are pretty standard results in the trade literature using gravity models. Then:

Which cuisines are most appealing after accounting for rudimentary gravity factors?... Excluding fast food, the ten most appealing origins are Italy, China, and Japan, which all have similar levels of appeal, followed by the USA, India, France, Mexico, Thailand, Spain, and Turkey. When fast food is included, the USA rises to the top, and the others remain in the same order.

Finally, on the balance of trade in restaurant meals, he finds that, of 44 selected countries:

...three are substantial net exporters: Italy (with net exports of $158 billion), Japan ($44 billion), and Mexico ($17 billion). Substantial net importers include the USA ($134 billion), Brazil ($39 billion), the UK ($20 billion), and Spain ($20 billion).

I was a little surprised that Spain was such a net importer of cuisine from other countries. I guess that reflects that Spanish cuisine isn't as available outside of Spain as many other European cuisines are. The US and UK being large net importers is not a surprise though.

The results are mostly uncontroversial. However, I did take issue with some of the choices. Waldfogel codes all "pizza" restaurants as Italian. I'm not convinced that Pizza Hut or Domino's count as Italian food - more like generic fast food, most of which was coded to the US. It would be interesting to see whether re-coding pizza would make any difference to the results - possibly not, as Waldfogel does test for the impact of coding "fried chicken" as either domestic or US and that appears to make little difference.

The gravity model is clearly very versatile, and deserves much greater attention in research than it currently receives. This research demonstrates a slightly new direction for it.

[HT: Offsetting Behaviour, last year]

Sunday, 27 December 2020

Putting demography back into economics

As a population economist, I am receptive to the idea that population concepts like migration, population growth and decline, population ageing and the age distribution, etc. are important things for economics students to understand. So, I read with interest this 2018 article by Humberto Barreto (DePauw University), published in the Journal of Economic Education (sorry, I don't see an ungated version online).

In the article, Barreto uses an interactive Excel spreadsheet that creates population pyramids from dummy or live data to illustrate population concepts for students. The spreadsheet is freely available online here. It's mostly reasonably intuitive, but to get full value from it you probably need to read the full gated article and work through a couple of the exercises Barreto describes. As far as putting demography back into economics is concerned, Barreto quotes 1982 Nobel Prize winner George Stigler (from this 1960 article):

"In 1830, no general work in economics would omit a discussion of population, and in 1930, hardly any general work said anything about population.”

However, the key problem with using a tool like this is not the tool itself, but the opportunity cost of incorporating population concepts into an economics paper - what will be left out to accommodate them? Barreto does acknowledge this point in the concluding paragraph of his article. In an already crowded curriculum, it is difficult to see where population concepts could best fit. It would be challenging to squeeze them into an introductory economics paper, for instance. In the overall programme of study for an economics student, encountering population concepts would probably make most sense as part of macroeconomics, where there are strong complementarities, and where the Solow model already appears (but where population change is treated as exogenous). On the other hand, in universities that already have a demography or population studies programme, teaching population concepts might create an uncomfortable situation, with economists teaching in an area of expertise that belongs to another discipline. However, I've never been one to respect disciplinary boundaries, with various bits of political science, psychology, and marketing appearing in my papers (albeit through an economics lens).

Anyway, coming back to the topic of this post, Barreto's spreadsheet is interesting and may be of value in helping to understand population concepts that are important for economics students. Enjoy!

Thursday, 24 December 2020

The economics of Christmas trees and the cobweb model of supply and demand

Zachary Crockett at The Hustle has an excellent article on the economics of Christmas trees. It's somewhat specific to the U.S., but there is a lot that will probably be of wider interest. I recommend you read it all, but I want to focus my post on this bit:

...even if all goes well, Christmas tree farmers still have to forecast what the market is going to look like 10 years out: Planting too many trees could flood the market; planting too few could cause a shortage.

History has shown that the industry is a case study in supply and demand:

- In the 1990s, farmers planted too many Christmas trees. The glut resulted in rock-bottom prices throughout the early 2000s and put many farms out of business.

- During the recession in 2008, ailing farmers planted too few trees. As a result, prices have been much higher since 2016.

That reminded me of the cobweb model of supply and demand, which we cover in my ECONS101 class. A key feature of a market that can be characterised by the cobweb model is that there is a production lag -  suppliers make a decision about how much to supply today (based on expectations about the price, which might naively be the observed price today), but the actual price that they receive is not determined until sometime later. In the case of Christmas trees, this is much later (emphasis theirs):

What makes a Christmas tree an unusual crop is its extremely long production cycle: one tree takes 8-10 years to mature to 6 feet.

So, a Christmas tree farmer has to form an expectation of how much demand for Christmas trees there will be in 8-10 years' time, and plant today to try to satisfy that demand. Now, let's start with an assumption that Christmas tree farmers are naive - they assume that the price in the future will be the same as the price today, and that's the equilibrium price shown in the diagram below, P*, where the initial demand curve (D0) and supply curve (S0) intersect. Then, the Global Financial Crisis (GFC) strikes. Consumers' incomes fall, and the demand for Christmas trees falls to D1. However, the GFC recession is temporary (as all recessions are), and demand soon returns to normal (D2, which is the same as the original demand curve D0).

Now consider what happens to prices and quantities in this market. During the GFC recession, the price falls to P1 (where the demand curve D1 intersects the supply curve S1). Christmas tree farmers are deciding how much to plant for the future, and they observe the low price P1, which because they are naive, they assume will persist into the future. They decide to plant Q2 trees (this is the quantity supplied on the supply curve S2, when the price is P1). By the time those trees are harvested though, demand has returned to D2, so the price the farmers receive when they harvest the trees will increase to P2 (this is the price where the quantity demanded, from the demand curve D2, is exactly equal to Q2). Now, the farmers are deciding how much to plant for the future again, and they observe the high price P2. They assume the high price will persist into the future, so they decide to plant Q3 trees (this is the quantity supplied on the supply curve S3, when the price is P2). By the time those trees are harvested, the farmers will accept a low price P3 in order to sell them all (this is the price where the quantity demanded, from the demand curve D3, is exactly equal to Q3). Now the farmers will plant less because the price is low and they assume the low price will persist... and so on. Essentially, the market follows the red line (which makes it look like a cobweb - hence the name of the model), and eventually the market gets back to long-run equilibrium (price P*, quantity Q*).

However, in the meantime, there is a cycle of high prices when the number of trees planted was too low, and low prices when the number of trees planted was too high. And that appears to be what has happened in the market for Christmas trees described in the quote from the article above.
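If you prefer numbers to diagrams, here is a minimal simulation sketch of the same dynamics, with linear demand and supply and naive price expectations (the parameter values are made up; the only requirement for the cobweb to converge is that quantity supplied responds less to price than quantity demanded does):

```python
# Cobweb model with naive price expectations:
#   demand:  Qd_t = a - b * P_t
#   supply:  Qs_t = c + d * E[P_t], with naive expectations E[P_t] = P_{t-1}
# Each period, the market clears at whatever price sells the quantity that was
# planted on the basis of last period's price. Parameters are made up; the
# cobweb converges back to equilibrium as long as d < b.

a, b = 100.0, 2.0   # demand intercept and slope
c, d = 10.0, 1.5    # supply intercept and slope

p_star = (a - c) / (b + d)   # long-run equilibrium price
price = 0.6 * p_star         # start from a 'recession' price below equilibrium

for t in range(1, 11):
    quantity = c + d * price      # planting decision, based on last period's price
    price = (a - quantity) / b    # market-clearing price when the trees are harvested
    print(f"period {t:2d}: quantity = {quantity:6.2f}, price = {price:6.2f}")

print(f"long-run equilibrium: quantity = {c + d * p_star:.2f}, price = {p_star:.2f}")
```

Running this shows the price oscillating above and below P* with smaller and smaller swings, which is exactly the cobweb traced out in the diagram.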

A savvy Christmas tree farmer could have anticipated this cycle, and used a strategy of going against the flow. Essentially, that involves taking some counter-intuitive actions - planting more trees when the price is low, and planting fewer trees when the price is high. As I discuss in my ECONS101 class though, there are three conditions that you have to be pretty confident about before the strategy of going against the flow should be adopted:

  1. That this is a market with a production lag (which is pretty clear for Christmas trees);
  2. That the change in market conditions (that kicks the cobweb off) is temporary (you would hope a recession is temporary, and that consumers aren't going to permanently switch to fake trees); and
  3. That other firms have not already realised the profit opportunities in this market (in this case, you should look at what other Christmas tree farmers are doing - if they are planting less during the recession, you should probably be planting more).
If one or more of those conditions don't hold, then going against the flow is not likely to be a good idea. However, if they do hold, the profit opportunities are likely to be high. It seems that there are a lot of Christmas tree farmers in the U.S. who could do with a better understanding of business economics.

Merry Christmas everyone!

[HT: Marginal Revolution]

Sunday, 20 December 2020

Publishing research open access is a double-edged sword

Many universities are encouraging their staff to publish their research in open access journals, or as open access articles in journals that allow this (for a fee). The theory is that open access publications are easier to access, and so are read more often and attract more citations. The number of citations is a key metric for research quality for journal articles, and this flows through into measures of research quality at the institutional level, which contribute to university rankings (e.g. see here for one example).  If open access leads to more citations, then encouraging open access publication is a sensible strategy for a university.

However, it relies on a key assumption - that open access articles receive more citations. In a new NBER working paper (ungated version available here), Mark McCabe (Boston University) and Christopher Snyder (Dartmouth College) follow up on their 2014 article published in the journal Economic Inquiry (ungated version here), where they questioned this assumption. The earlier article showed, looking at the level of the journal issue, that open access increased citations only for the highest quality research, but decreased citations for lower quality research.

In the latest work, McCabe and Snyder first outline a theoretical model that could explain the diverging effects of open access on high-quality and low-quality articles. They explain that:

...open access facilitates acquisition of full text of the article. The obvious effect is to garner cites from readers who cannot assess its relevance until after reading the full text. There may be a more subtle effect going in the other direction. Some readers may cite articles that they have not read, based on only superficial information about its title or abstract, perhaps rounding out their reference list by borrowing a handful of references gleaned from other sources. If the cost of acquiring the article’s full text is reduced by a move to open access, the reader may decide to acquire and read it. After reading it, the reader may find the research a poorer match than initially thought and may decide not to cite it. For the lowest-quality content, the only hope of being cited may be “sight unseen” (pun intended). Facilitating access to such articles may end up reducing their citation counts...

A distinctive pattern is predicted for the open-access effect across the quality spectrum: the open-access effect should be increasing in quality, ranging from a definitively negative open-access effect for the worst-quality articles to a definitively positive effect for the best-quality articles. 

McCabe and Snyder then go on to test their theory, using the same dataset as their 2014 article, but looking at individual journal articles rather than journal issues. Specifically, their dataset includes 100 journals that publish articles on ecology from 1996 to 2005, with over 230,000 articles and 1.2 million observations. They identify high-quality and low-quality articles based on the number of cites in the first two years after publication, then use the third and subsequent years as the data to test the difference in citation patterns between articles that are available open access, and those that are not. They find that:

...the patterns of the estimates across the quality bins correspond quite closely with those predicted by theory. The open-access effect is roughly monotonic over the quality spectrum. Articles in the lowest-quality bins (receiving zero or one cite in the pre-study period) are harmed by open access; those in the middle experience no significant effect; only those in the top bin with 11 or more cites in the pre-study period experience a benefit from open access. Moving from open access through the journal’s own website to open access through PubMed Central pivots the open-access effect so that it is even more sensitive to quality, resulting in greater losses to low-quality articles and greater gains to high-quality articles. PubMed Central access reduces cites to articles in the zero- or one-cite bins by around 14% while increasing cites to articles in the bin with 11 or more cites by 11%.

So, it appears that publishing open access is not necessarily an optimal strategy for a researcher - this would only be true for those researchers who are confident that a particular article is in the top quintile of research quality. For some researchers this is true often, but by definition it cannot be true often for every researcher (unless they work for Lake Wobegon University). Moreover, for most universities, where the majority of their staff are not publishing in the top quintile of research quality, a policy of open access for all research must certainly lower citation counts, the perceived quality of research, and university rankings that rely on measures of research quality.
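For those interested in the mechanics, the core of the empirical design - sort articles into quality bins using early citations, then compare later citations between open-access and gated articles within each bin - can be sketched roughly as follows (hypothetical data and a deliberately simplified regression, not the authors' actual estimation):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical article-level data: citations in the pre-study period,
# citations in the study period, and an open-access indicator.
articles = pd.DataFrame({
    "early_cites": [0, 1, 0, 1, 3, 5, 8, 2, 6, 9, 12, 15, 11, 14],
    "later_cites": [0, 2, 1, 1, 4, 5, 9, 3, 6, 10, 22, 16, 18, 15],
    "open_access": [1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
})

# Sort articles into quality bins based on early citations
articles["quality_bin"] = pd.cut(
    articles["early_cites"],
    bins=[-1, 1, 10, float("inf")],
    labels=["low (0-1 cites)", "middle (2-10)", "high (11+)"],
)

# Estimate the open-access effect separately within each bin; the theoretical
# prediction is a negative effect in the lowest bin and a positive effect in
# the highest bin (the real paper controls for much more than this).
for name, group in articles.groupby("quality_bin", observed=True):
    fit = smf.ols("later_cites ~ open_access", data=group).fit()
    print(name, "OA coefficient:", round(fit.params["open_access"], 2))
```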

[HT: Marginal Revolution]

Saturday, 19 December 2020

The challenges of projecting the population of Waikato District

Some of my work was referenced on the front page of the Waikato Times earlier this week:

Towns between Hamilton and Auckland are crying out for more land and houses for the decades ahead, as population is set to rapidly climb.

Waikato District, between Hamilton and Auckland, will need nearly 9,000 new houses by 2031, as up to 19,000 more people are projected to move there.

Pōkeno has already transformed from a sleepy settlement to a sprawling town, while Ngāruawāhia, north of Hamilton, is in demand for its new housing developments...

Waikato University associate professor of economics Michael Cameron told Stuff there had been “rapid” growth in Waikato District in the last 10 years, which he expects to continue.

In the last decade, the population grew by 27 per cent, or 18,647 people.

“In relative terms, Waikato has been one of the districts experiencing the fastest population growth in New Zealand.

“We’ve projected growth faster than Statistics NZ, and the growth has even overtaken our own projections,” Cameron said.

Most of that growth has been from Aucklanders spilling over the Bombay Hills into Pōkeno, and Cameron tipped that town to keep growing faster.

“There’s a lot of emphasis on central Auckland, but a lot of growth and industry is happening in the South.

“You can see it when you drive along the motorway. Drury is growing, Pukekohe is growing and Pōkeno is growing.”

Waikato District is in a bit of a sweet spot, strategically located between two fast-growing cities (Auckland and Hamilton), and not far from one of the fastest growing areas in the country (the western Bay of Plenty). It isn't much of a stretch to project high future growth for Waikato District.

The main challenges occur when trying to project where in the district that growth will occur. Most territorial authorities in New Zealand are either centred on a single main settlement (e.g. Rotorua, or Taupo), or have a couple of large towns (e.g. Cambridge and Te Awamutu, in Waipa District). Waikato District has several mid-sized towns (Ngaruawahia, Huntly, Pokeno, Raglan), and a bunch of smaller settlements that are also likely to attract population growth (e.g. Tuakau, Taupiri, and Te Kauwhata). So, there are lots of options as to where a growing future population might be located.

That makes small-area population projections that are used by planners (such as those that I have produced for Waikato District Council) somewhat endogenous. If the projections say that Pokeno is going to grow, the council zones additional land for residential growth in Pokeno, and developers develop that land into housing, and voila!, Pokeno grows. The same would be true of any of the other settlements, and this is a point that Bill Cochrane and I made in this 2017 article published in the Australasian Journal of Regional Studies (ungated earlier version here).

The way we solved the challenge was to outsource some of the endogeneity, by using a land use change model to statistically downscale the district-level population to small areas (that's what that 2017 article describes). However, that doesn't completely solve the problem of endogeneity, because the land use model includes assumptions about the timing of zoning changes and the availability of land for future residential growth - if those assumptions placed future land use change in other locations, or changed the order in which zoned land is opened up, the projected population growth would follow.
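To caricature the downscaling step (this is nothing like the actual land use change model - just a toy illustration of why the zoning and capacity assumptions matter so much):

```python
# District-level projected growth is shared out to small areas in proportion
# to the dwelling capacity that the zoning assumptions make available in each
# area. Change the capacity assumptions and the 'projected' growth moves with
# them - which is the endogeneity problem described above. Numbers are made up.
district_growth = 19_000   # projected extra people for the district

zoned_capacity = {         # assumed additional dwelling capacity by settlement
    "Pokeno": 3_000,
    "Te Kauwhata": 1_500,
    "Ngaruawahia": 1_200,
    "Huntly": 800,
    "Raglan": 500,
}

total_capacity = sum(zoned_capacity.values())
for area, capacity in zoned_capacity.items():
    share = capacity / total_capacity
    print(f"{area}: {share:.0%} of capacity -> ~{district_growth * share:,.0f} extra people")
```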

It is possible to develop more complicated models that incorporate both supply and demand of housing in order to project the location of future growth. However, from what I have seen of these approaches, the added complexity does not improve the quality of the projection (though it might improve the believability of those projections). For now, models based on land use change are about as good as we can get.

Thursday, 17 December 2020

The (short-term) downturn in the New Zealand and Australian funeral industry

In most countries, unfortunately, the funeral services industry is having a bit of a boom time. Not so in New Zealand though, as the National Business Review (maybe gated) reports:

Death in the age of Covid-19 is no laughing matter, to be sure. But a perverse result of the pandemic is that New Zealanders and Australians are having to deal with loss far less often than usual.

Death rates in both countries have dropped dramatically thanks to the near extinction of winter ‘flu bugs, a lower road toll, and people generally staying closer to home and out of harm’s way.

In the year to September, 1473 fewer New Zealanders died than in the same period last year, a fall of 4.3%. Between March and September the statistical drop was much more stark, with 1680 fewer Kiwis passing on than at the same time in 2019.

October figures indicate we continue to cling to life, with 2558 deaths in four weeks compared with 2662 in a corresponding period last year.

Australia has seen a similar trend, with the death rate down 2-4% in 2020...

This is a good thing, unless you are a funeral director.

The resulting effects on the ‘death care’ sector, as it calls itself, can be seen in the latest results from InvoCare and Propel Funeral Partners, ASX-listed funeral service firms with large operations across Australasia.

InvoCare has around 23% and 21% of the Australian and New Zealand markets respectively, while Propel accounts for 6.3% of the Australian industry and is rapidly growing its presence in this country. InvoCare also has operations in Singapore.

InvoCare’s results for the half year to June 30 clearly show what happens when fewer people die.

Its revenue was down 6.2% to A$226.5m as a direct result of Covid-19, it told the ASX.

Operating earnings after tax fell 48% to A$11.7m, and the company reported an after-tax loss of A$18m although this was driven by a market adjustment to funds held for prepaid funerals, it said.

In New Zealand, Invocare's underlying ebitda for the half was down 23% to A$4.5m, despite the help of $1.6m in government wage subsidies. Revenue fell 16% to A$23.3m. 

Not only has the demand for funerals declined, but the lockdown period has demonstrated that lavish funerals are not as necessary as many thought, and so spending on each funeral has also declined:

Not only are funeral directors receiving fewer customers, but the ones they are looking after are spending less.

This is hardly surprising given Australian Covid restrictions limited the number of mourners at funerals to 10, while New Zealand banned funerals outright during the first lockdown.

This led to families choosing “lower value brands and direct cremation offerings”, InvoCare told the market.

The industry concedes that now people have seen it is possible to lay loved ones to rest without putting on a buffet and buying out a florist’s stock, there is growing awareness that funerals can be done more cost-effectively. 

All may not be lost though. The funeral industry is hurting in New Zealand and Australia right now, but everyone dies eventually. You'd be hard-pressed to find an industry with a more stable long-term future. And it is possible that the funeral industry might actually be due for a boom after the coronavirus has passed.

Some researchers put forward the 'dry tinder' hypothesis (see here and here) as an explanation for the high proportion of coronavirus deaths occurring in Sweden. They argued that, because Sweden had a relatively light flu season in 2019 with few deaths among older people, there were more weak or high-risk older people in the population when the coronavirus struck, leading to a higher number of deaths. The hypothesis is supported by some data, but as far as I am aware it is not widely accepted. If the hypothesis is true though, it may be that New Zealand and Australia, having had a quiet flu season in 2020 as well as few coronavirus deaths, may be due a worse season in 2021. Or in 2022, if social distancing, hand washing and wearing masks, etc. are still widespread practices next year (which seems likely).

It is probably not time just yet to sell off your shares in the funeral industry.


Wednesday, 16 December 2020

Is Twitter in Australia becoming less angry over time?

A couple of days ago, I posted about Donald Trump's lack of sleep and his performance as president. One of the findings of the research I posted about was that Trump's speeches and interviews were angrier after a night where he got less sleep. However, if you're like me, you associate Twitter with the angry side of social media, so late-night Twitter activity would seem likely to make anyone angry, sleep deprivation or not.

One other thing that seems to make people angry is the weather, particularly hot weather. So, it seems kind of natural that sooner or later some researchers would look at the links between weather and anger on social media. Which is what this recent article by Heather Stevens, Petra Graham, Paul Beggs (all Macquarie University), and Ivan Hanigan (University of Sydney), published in the journal Environment and Behavior (sorry, I don't see an ungated version, but there is a summary available on The Conversation), does.

Stevens et al. looked at data on emotions coded from Twitter, average daily temperature, and assault rates, for New South Wales for the calendar years 2015 to 2017. They found that:

...assaults and angry tweets had opposing seasonal trends whereby as temperatures increased so too did assaults, while angry tweets decreased. While angry tweet counts were a significant predictor of assaults and improved the assault and temperature model, the association was negative.

In other words, there were more assaults in hot weather, but angry tweets were more prevalent in cold weather. And surprisingly, angry tweets were associated with lower rates of assault. Perhaps assault and angry Twitter use are substitutes? As Stevens et al. note in their discussion:

It is possible that Twitter users are able to vent their frustrations and hence then be less inclined to commit assault.

Of course, this is all correlation and so there may be any number of things going on here. However, the main thing that struck me in the article was this figure, which shows the angry tweet count over time:

The time trend in angry tweets is clearly downward sloping (see the blue line) - angry tweeting is decreasing over time on average. Stevens et al. don't really make a note of this or attempt to explain it. You might worry that this is driving their results, since temperatures are increasing slowly over time due to climate change. However, their key results include controls for time trends. Besides, you can see that there is a seasonal trend to the angry tweeting data around the blue linear trend line.

I wonder - is this a general trend, or is there something special about Australia, where Twitter is becoming more hospitable? The mainstream media seems to suggest that Twitter is getting angrier over time, not less angry. Or, is this simply an artefact of the data, which should lead us to question the overall results of the Stevens et al. paper? You can play with the WeFeel Twitter emotion data yourself here, as well as downloading tables. It clearly looks like anger is decreasing over time, but that may reflect changes in how Twitter is used (or in who is using it), and especially changes in language use over time.
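If you do download the tables, separating the long-run trend from the seasonal cycle is straightforward. Here is a minimal sketch using a synthetic stand-in series (the real WeFeel data and column names will obviously differ):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for a downloaded WeFeel series: daily counts of 'angry'
# tweets with an annual cycle plus a slow decline (all numbers are made up).
rng = np.random.default_rng(0)
dates = pd.date_range("2015-01-01", "2017-12-31", freq="D")
t = np.arange(len(dates))
doy = dates.dayofyear
angry = 500 - 0.05 * t + 40 * np.cos(2 * np.pi * doy / 365.25) + rng.normal(0, 20, len(dates))

df = pd.DataFrame({
    "angry_count": angry,
    "t": t,
    "sin_season": np.sin(2 * np.pi * doy / 365.25),
    "cos_season": np.cos(2 * np.pi * doy / 365.25),
})

# A negative coefficient on t indicates angry tweeting is declining over time,
# separately from any seasonal (temperature-related) cycle.
fit = smf.ols("angry_count ~ t + sin_season + cos_season", data=df).fit()
print(fit.params)
```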

I would want to see some additional analysis on other samples, and using other methods of scoring the emotional content of Twitter activity, before I conclude that Twitter is angrier when it is colder, or that Twitter anger is negatively associated with assault. On the plus side, the WeFeel data looks like something that may be worth exploring further in other research settings, if it can be shown to be robust.

Tuesday, 15 December 2020

Using economics to explain course design choices in teaching

Given that I am an economist, it probably comes as no surprise that I use economics insights in designing the papers I teach. So, extra credit tasks that incentivise attendance in class can be explained using opportunity costs and marginal analysis (marginal costs and benefits), and offering choice within and between assessment tasks can be explained using specialisation. I also illustrate related concepts in the examples I use. For example, in my ECONS102 class the first tutorial includes a question about the optimal allocation of time for students between two sections of the test.

Although economics underpins many of my course design choices, I don't make the rationale for those design choices clear to students, but that's exactly what this 2018 article by Mariya Burdina and Sue Lynn Sasser (both University of Central Oklahoma), published in the Journal of Economic Education (sorry, I don't see an ungated version online), promotes. They recommend that:

...instructors of economics use economic reasoning when providing the rationale for course policies described in the syllabus. In this context, the syllabus becomes a learning tool that not only helps economics instructors increase students’ awareness of course policies, but at the same time makes course content more applicable to real-life situations.

To me, the problem is that making these choices clear and using economics to explain them engages students with concepts and explanations that they are not necessarily prepared for, since those concepts will not be fully explained until later in the paper. It's also not clear from Burdina and Sasser's article that the approach even has positive benefits. They present some survey evidence comparing one class that had economic explanations as part of the syllabus discussion and one class that didn't, but they find very few statistically significant differences in attitudes towards the course policies. This is very weak evidence that the approach changes students' understanding of the rationale for course design.

Overall, I'm not convinced that Burdina and Sasser's approach is worth adopting. I'll continue to use economic rationales for course design decisions, but I will continue to make those rationales clear only when students explicitly ask about them.

Monday, 14 December 2020

The poor performance of 'Sleepy Donald' Trump

Back in 2018, I wrote a post about how sleep deprivation (as measured by late-night activity on Twitter) affected the performance of NBA players. It appears the research that post was based on inspired other researchers, as this new paper by Douglas Almond and Xinming Du (both Columbia University) published in the journal Economics Letters (appears it may be open access, but just in case there is an ungated version here) demonstrates. [*] Almond and Du look at the late-night Twitter activity of U.S. President Donald Trump over the period from 2017 to 2020. Specifically, they first show that Trump's Twitter activity begins about 6am each morning (and this is fairly constant across the whole period examined). So, when Trump tweeted after 11pm at night, Almond and Du infer that he must have had less than seven hours' sleep that night.
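That sleep proxy is simple enough to sketch in a few lines of code (using hypothetical timestamps; this is not the authors' code, just the logic as I read it):

```python
import pandas as pd

# Hypothetical tweet timestamps (local time). Activity resumes around 6am, so
# any tweet between 11pm and 6am implies less than seven hours' sleep that night.
tweets = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2020-05-01 07:12", "2020-05-01 23:40",   # an 11pm+ tweet -> a short-sleep night
        "2020-05-02 06:05", "2020-05-02 21:30",   # no late tweet -> a normal night
        "2020-05-04 01:15",                       # an after-midnight tweet counts towards the previous night
    ]),
})

# Assign each tweet to the 'night' it belongs to: tweets before 6am count
# towards the previous calendar day's night.
ts = tweets["timestamp"]
night = ts.dt.normalize().where(ts.dt.hour >= 6, ts.dt.normalize() - pd.Timedelta(days=1))
late = (ts.dt.hour >= 23) | (ts.dt.hour < 6)

late_nights = late.groupby(night).any()
print(late_nights)   # True = a late-tweeting (inferred short-sleep) night
```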

Using their dataset, Almond and Du first demonstrate that Trump's late-night Twitter activity has been increasing over time, meaning that he has been getting less sleep over time:

The likelihood of late tweeting increases by 0.22 in 2019 and 0.38 in 2020 relative to the omitted year (2017). This is equivalent to a 183% and 317% increase relative to the 2017 mean, respectively. Additionally, the number of late-night tweets increases over time. He posts roughly one more tweet per night in 2020, a sixfold increase compared with 2017 when he tweeted late about once per week...

So, by 2020 Trump was having more than four times as many late nights as in 2017, and tweeting late at night more frequently. So, what effects do these late nights have? Almond and Du turn to some proxy measures of the quality of the President's work the next day - the number of likes, retweets, and replies that are received by his tweets the day after a late night, compared with tweets the day after he gets more sleep. They find that:

...tweets after a late-tweeting night receive 7400 fewer likes, 1300 fewer retweets and 1400 fewer replies, or 8%, 6.5% and 7% fewer reactions relative to the mean. We interpret these less-influential postings as lower tweet quality.

And, just like your average toddler, a late night affects the President's mood (measured by the emotion of his speeches and interviews as recorded by the Fact Checker website):

...the proportion of happy transcripts decreases 4.4 percentage points (4.9%) following a late night. Despite his being happy in 88% [of] transcripts, late-tweeting nights and more late tweets appear to make him less happy the following day... Meanwhile, the proportion of angry transcripts increases by 2.9 percentage points after a late night, a nearly three-fold increase compared with the mean 1.1%.

Finally, they look at the betting odds of Trump (and his main opponent/s) winning the 2020 Presidential election, and find that:

...a significant relationship between late tweeting and his competitor’s odds. After a late night, more people believe the leading candidate other than Trump is more likely to win and wager on Trump’s opponent. The implied chance of his competitor’s winning increases by .6 percentage points, or 4.8% relative to the mean.

Overall, it appears that lack of sleep negatively affected Trump's performance (just as it does for NBA players). It may even have reduced his chance of re-election last month. Perhaps he should have worried less about 'Sleepy Joe' Biden, and more about 'Sleepy Donald' Trump?

*****

[*] Although the last line of my 2018 post posed the question of whether President Trump's late-night Twitter activity affected his Presidential performance, I can unfortunately take no credit for this research.

Sunday, 13 December 2020

Climate change risk, disaster insurance, and moral hazard

One problem that insurance companies face is moral hazard - where one of the parties to an agreement has an incentive, after the agreement is made, to act differently than they would have acted without the agreement, bringing additional benefits to themselves at the expense of the other party. Moral hazard is a problem of 'post-contractual opportunism'. The moral hazard problem in insurance occurs because the insured party passes some of the risk of their actions onto the insurance company, so the insured party has less incentive to act carefully. For example, a car owner won't be as concerned about keeping their car secure if they face no risk of loss in the case of the car being stolen.

Similar effects are at play in home insurance. A person without home insurance will be very careful about keeping their house safe, including where possible, lowering disaster risk. They will avoid building a house on an active fault line, or on an erosion-prone clifftop, for example. In contrast, a person with home insurance has less incentive to avoid these risks [*], because much of the cost of a disaster would be borne by the insurance company.

Unfortunately, there are limited options available to deal with moral hazard in insurance. Of the four main ways of dealing with moral hazard generally (better monitoring, efficiency wages, performance-based pay, and delayed payment), only better monitoring is really applicable in the case of home insurance. That would mean the insurance company closely monitoring homeowners to make sure they aren't acting in a risky way. However, that isn't going to work in the case of disaster insurance. Instead, insurance companies tend to try to shift some of the risk back onto the insured party through insurance excesses (the amount that the loss must exceed before the insurance company is liable to pay anything to the insured - this is essentially the amount that the insured party must contribute towards any insurance claim).
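To make the excess mechanism concrete, here is a trivial sketch of how a claim is split between the homeowner and the insurer (the dollar figures are made up):

```python
def split_loss(loss, excess):
    """Split a claimable loss between the insured party and the insurer,
    where the insured bears the first `excess` dollars of each claim."""
    insured_share = min(loss, excess)
    insurer_share = max(loss - excess, 0)
    return insured_share, insurer_share

# With a $10,000 excess, small losses fall entirely on the homeowner, and the
# homeowner still contributes $10,000 to any large claim - which preserves some
# incentive to keep the risk of a loss low.
for loss in [4_000, 10_000, 250_000]:
    print(loss, split_loss(loss, excess=10_000))
```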

And that brings me to this article from the New Zealand Herald from a couple of weeks ago:

Thousands of seaside homes around New Zealand could face soaring insurance premiums - or even have some cover pulled altogether - within 15 years.

That's the stark warning from a major new report assessing how insurers might be forced to confront the nation's increasing exposure to rising seas - sparking pleas for urgent Government action.

Nationally, about 450,000 homes that currently sit within a kilometre of the coast are likely to be hit by a combination of sea level rise and more frequent and intense storms under climate change...

The report, published through the Government-funded Deep South Challenge, looked at the risk for around 10,000 homes in Auckland, Wellington, Christchurch and Dunedin that lie in one-in-100-year coastal flood zones.

That risk is expected to increase quickly.

In Wellington, only another 10cm of sea level rise - expected by 2040 - could push up the probability of a flood five-fold - making it a one-in-20-year event.

International experience and indications from New Zealand's insurance industry suggest companies start pulling out of insuring properties when disasters like floods become one-in-50-year events.

By the time that exposure has risen to one-in-20-year occurrences, the cost of insurance premiums and excesses will have climbed sharply - if insurance could be renewed at all.

Because insurance companies have few options for dealing with moral hazard associated with disaster risk, their only feasible option is to increase insurance excesses, and pass more of the risk back onto the insured. Increasing the insurance premium also reflects the higher risk nature of the insurance contract. Even then, in some cases it is better for the insurance company to withdraw cover entirely from houses with the highest disaster risk.

I'm very glad that the Deep South report and the New Zealand Herald article avoided the trap of recommending that the government step in to provide affordable insurance, or subsidise insurance premiums for high-risk properties. That would simply make the moral hazard problem worse. Homeowners (and builders/developers) need the appropriate incentives related to building homes in the highest risk areas. Reducing the risk to homeowners (and buyers) by subsidising their insurance creates an incentive for more houses to be built in high-risk locations. However, as the New Zealand Herald article notes:

Meanwhile, homeowners were still choosing to buy, develop and renovate coastal property, and new houses were being built in climate-risky locations, said the report's lead author, Dr Belinda Storey of Climate Sigma.

"People tend to be very good at ignoring low-probability events.

"This has been noticed internationally, even when there is significant risk facing a property.

"Although these events, such as flooding, are devastating, the low probability makes people think they're a long way off."

Storey felt that market signals weren't enough to effect change - and the Government could play a bigger role informing homeowners of risk.

Being unable to insure these properties creates a pretty strong disincentive to buying or building them. Perhaps withdrawal of insurance cover isn't a problem - it's the solution to a problem?

*****

[*] Importantly, the homeowner with insurance still faces some incentive to avoid disaster risk: while the loss of the home and contents may be covered by insurance, there is still a risk of loss of life, injury, etc. in the case of a disaster, and they will want to reduce that risk.

Wednesday, 2 December 2020

Edward Lazear, 1948-2020

I'm a little late to this news, but widely respected labour economist Ed Lazear passed away last week. The New York Times has an excellent obituary, as does Stanford, where Lazear had been a professor since the mid-1990s.

Lazear is perhaps best known to most people as the chairman of George W. Bush's Council of Economic Advisors at the time of the Global Financial Crisis. However, my ECONS102 students would perhaps recognise him as the originator of the idea of tournament effects in labour economics, as an explanation for why a small number of workers in certain occupations receive much higher pay than others that are only slightly less productive. His contributions to economics ranged across labour economics and the economics of human resources, as well as the economics of education, immigration, and productivity. Many past Waikato economics students would have been exposed to his work in a third-year paper on the economics of human resources that, sadly, we no longer teach.

Lazear's book, Personnel Economics, has been recommended to me by several people, but I have yet to purchase a copy. You may eventually see a book review of it here. Similarly, I anticipate additional bits of content from Lazear popping up in my ECONS102 topic on the labour market. Unfortunately, he was never in the conversation for a Nobel Prize, with many other labour economists likely higher up in the queue. Nevertheless, he will be missed.

[HT: Marginal Revolution]


Sunday, 29 November 2020

Climate change denial, narratives, and loss aversion

The Conversation had an interesting Climate Explained article a few weeks ago by Peter Ellerton (University of Queensland), which answered the question:

Why do humans instinctively reject evidence contrary to their beliefs?

The question was, of course, asked in the context of climate change denial. Ellerton's response included this:

We understand the world and our role in it by creating narratives that have explanatory power, make sense of the complexity of our lives and give us a sense of purpose and place.

These narratives can be political, social, religious, scientific or cultural and help define our sense of identity and belonging. Ultimately, they connect our experiences together and help us find coherence and meaning.

Narratives are not trivial things to mess with. They help us form stable cognitive and emotional patterns that are resistant to change and potentially antagonistic to agents of change (such as people trying to make us change our mind about something we believe).

If new information threatens the coherence of our belief set, if we cannot assimilate it into our existing beliefs without creating cognitive or emotional turbulence, then we might look for reasons to minimise or dismiss it.

I like the framing about understanding the world through the narratives we tell ourselves. However, Ellerton could easily have gone a bit further in his explanation, linking the unwillingness to accept new information that threatens our narrative to the behavioural economics concepts of endowment effects and loss aversion, as I have previously done in this 2018 post:

People are loss averse. We value losses much more than equivalent gains (in other words, we like to avoid losses much more than we like to capture equivalent gains). Loss aversion makes people subject to the endowment effect - we are unwilling to give up something that we already have, because then we would face a loss (and we are loss averse). Or at least, there would have to be a big offsetting gain in order to convince us to give something up that we already have. The endowment effect applies to objects (the original Richard Thaler experiment that demonstrated endowment effects gave people coffee mugs), but it also applies to ideas.

I've thought for a long time that ideology was simply an extreme example of the endowment effect and loss aversion in practice. Haven't you ever wondered why it's so difficult to convince some people of the rightness of your way of thinking? It's because, in order for them to agree with you, that other person would have to give up their own way of thinking, and that would be a loss (and they are loss averse). It seems unlikely that the benefits of agreeing with you are enough to offset the loss they feel from giving up their prior beliefs, at least for some people. Once you consider loss aversion, it's easy to see how ideologies can become entrenched. An ideology is simply lots of people suffering from loss aversion and the endowment effect.

Climate change denial is a good example of an ideological viewpoint. People are endowed with a particular view about climate change. They are unwilling to give up that view, because that would involve a loss to them (a loss of one of their beliefs), and people are loss averse (they want to avoid losses). So, people are reluctant to adjust their internal narratives about climate change, even in the face of overwhelming evidence, because they are loss averse.

Monday, 23 November 2020

Book review: The Soulful Science

I've been a bit quiet on the blog the last week or so, because I've been busy with population projections work (more on that in a future post). However, I have found time to read, and I just finished Diane Coyle's 2008 book The Soulful Science. The book contains a summary of recent advances (at least, up to 2008) in economics research, and to my mind she does an even better job of it than other books of a similar vintage, such as Economics 2.0 (which I reviewed here). Coyle has an ambitious goal for the book, expressed in its first sentences:

I want to persuade you that economics gets an unfairly bad press. Even though economists are widely criticized for either failing to predict the financial crash, or for causing it, or sometimes both, economics is nevertheless entering a golden age.

It seems to me that those who might benefit most from reading a book with that purpose in mind are unlikely to be willing to read it. As Coyle notes:

The popular unpopularity of economics rests on perceptions which are twenty or thirty years out of date and were always a bit of a caricature anyway.

In my experience, people have a misguided view of what economics is and what economists do. Towards the end of the book, Coyle discusses what the nonspecialist sees of economics, which is:

...largely the kind of macroeconomic debate covered in the news programs and newspapers: the forecasts about how much the economy will grow or how severe the recession will be, what will happen to inflation or the dollar, whether the financial markets will go up or down. Most of this economics is:

(a) of poor quality and spuriously precise, as it's not possible to forecast these things in any detail...

(b) jargon-ridden and possibly not understood even by the person - often not actually an economist but an investment manager or media pundit - spouting the jargon on television; and

(c) being used for a purpose such as advancing one political party or gaining one's employer some good PR.

Nothing has really changed in the twelve years since this book was published. The problems of the misunderstanding of what economics is, and negative public perception of economics, are not new issues, and aren't going away any time soon.

So, one book isn't going to unwind economics' 'popular unpopularity'. However, I do see great value in this book, for beginning economics students trying to get a sense of the possibilities. Even though the book, and the research to which it refers, is getting a bit dated, it still outlines a number of interesting (and relatively recent) developments in economics research, including advances in economic history, models of economic growth, life satisfaction, and the economics of information. I certainly made a number of notes that I will use in teaching next year.

Coyle is also an excellent writer, and incredibly well-read (as a few minutes spent on her blog, The Enlightened Economist, will amply demonstrate). She also embeds a lot of subtle (and sometimes not so subtle) humour into the book. For instance, where else could you read about Jeremy Bentham's testicles?

In fact, Bentham was an all-round eccentric. He was ahead of his time in wearing knitted woollen underpants (so it was discovered post mortem by Mr. Smith): most Victorian men simply tucked their shirt tails between their legs. And, whether or not due to the scratch of wool on his testicles, Bentham was the intellectual father of utilitarianism, the philosophy that can be summed up as "the greatest happiness of the greatest number."

I really enjoyed this book, and I highly recommend it to current and future students of economics. It captures more 'mainstream' economics research than books like Freakonomics, but does so in a way that is engaging and entertaining. I wish there were more books like this, capturing economics research advances in the twelve years since.

Saturday, 14 November 2020

PhD students shouldn't be picking fruit

An email from the University of Auckland to some PhD students blew up in the media on Thursday and Friday. As the New Zealand Herald reported:

Auckland University has advised its doctoral students to "take a holiday from your academic work" - and go fruit picking.

The email, sent to all PhD students in the university's School of Cultures, Languages and Linguistics, has drawn astonished and sarcastic comments from students.

"As we near the end of the year, some of you may be wondering about whether to take a holiday from your academic work schedule," the email says.

"If it is possible for you to take a break, we really recommend that you do so. It has been a very difficult year, and most of us have not left Auckland at all. A break out of the city doing a very different activities [sic] can refresh the mind and body and help you have a productive year in 2021.

This rightly led to a collective 'WTF?' from PhD students. However, much of the commenting online seems to be of the form "When I was a student, we picked fruit during summer...". This demonstrates how little the general public understands what it is that PhD students do, and a failure to consider the opportunity cost of fruit picking.

Most undergraduate students are not studying over summer. So, taking two or three months of their summer to pick fruit will have no effect on how long it takes those students to graduate. The opportunity cost of fruit picking is fairly low for undergraduate students - these students give up time that they could have spent on summertime leisure pursuits, or for most of them, in some other employment.

In contrast, a PhD student is undertaking a sustained period of self-directed study (under the supervision of a panel of academic staff). If a PhD student takes two or three months off to pick fruit, then that means it will take them two or three months longer to complete their thesis. The opportunity cost of fruit picking is that the student potentially gives up three months of much higher earnings that they could have achieved after they have completed their PhD. Sure, those higher earnings don't happen until up to three years later, but no one's discount rate is so high that a barely-above-minimum-wage fruit picking job is a superior alternative. On top of that, many international PhD students are spending time away from their family, and taking longer to complete their thesis means more time away.
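A rough back-of-the-envelope comparison makes the point (all of the dollar figures and discount rates here are hypothetical):

```python
# Compare three months of fruit-picking wages now against the three months of
# post-PhD earnings that get pushed back by finishing the thesis three months
# later. All figures are hypothetical.
picking_wages_now = 3 * 3_500    # ~3 months at a bit above minimum wage
post_phd_earnings = 3 * 7_000    # ~3 months at a graduate salary
delay_years = 3                  # those deferred earnings arrive ~3 years later

# The break-even annual discount rate here is about 26% - far higher than any
# plausible personal discount rate - so finishing the PhD sooner wins.
for r in [0.05, 0.10, 0.20, 0.50]:
    pv_deferred = post_phd_earnings / (1 + r) ** delay_years
    better = "picking fruit" if picking_wages_now > pv_deferred else "finishing the PhD sooner"
    print(f"discount rate {r:.0%}: PV of deferred earnings = ${pv_deferred:,.0f} -> {better}")
```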

I'm sure that some students will be looking for work over summer, and some of those students will end up working in the horticulture industry. But PhD students should not be among them, and the University of Auckland should know better.

Thursday, 12 November 2020

Climate and internal migration in Kiribati, and what that might tell us about climate refugee flows

I've posted a few times about the effect of climate change on migration, including on my own research about the effects of climate change on internal migration in New Zealand. I'm often asked about how climate change is going to affect international migration to New Zealand, and whether we will face a flood of 'climate refugees' in the near future. In particular, people tend to focus on the effect of sea level rise on island nations.

There are a number of important points to make in relation to climate refugee flows to New Zealand. First, New Zealand is a long way away from the most populous places that will be most affected by sea level rise, including Bangladesh, the Mekong Delta, and the Red River delta in South and South East Asia (there are many places more distant that may be even more affected). Travelling a long distance entails a high cost that is prohibitive to most people. Second, the places that are relatively close and affected by sea level rise, like the Pacific Islands, generally do not have large populations. The most populous islands tend to also be larger and have inland areas that will be less affected by sea level rise. Migrating inland is a much lower cost alternative than migrating internationally (albeit with its own challenges). Third, even if people choose to migrate internationally, New Zealand is just one of many alternative destinations they could choose, including Australia, the United States, or other Pacific islands. Finally, if New Zealand chose to admit a large number of climate refugees on humanitarian grounds and covered the cost of their travel and settlement, that still only covers the monetary cost. People have an attachment to place, and moving away entails a psychic cost (for example, see here). That explains why many people prefer to migrate short distances rather than long distances, even when they have the means to migrate further away.

It is on the basis of those points that I make the case that climate refugee flows to New Zealand are likely to be small. There is a theoretical model underlying most of those points, and that is related to the costs of migration.

A new article by Hugh Roland and Katherine Curtis (both University of Wisconsin-Madison), published in the journal Population and Environment (sorry, I don't see an ungated version online), demonstrates the importance of costs in the migration decision, in the context of environmental change. Roland and Curtis use five-year origin-destination migration data from the Kiribati Censuses of 2005 and 2015. They set out to compare two competing environment-migration theories:

According to the traditional, dominant framework known as the environmental scarcity thesis, poor environmental conditions may prompt out-migration in search of more hospitable natural environments and better livelihoods. In contrast, the environmental capital thesis asserts that resource scarcity and limited financial means associated with poor environmental conditions may actually restrict out-migration...

Notice that the environmental capital thesis reinforces the points I made above in relation to migration costs. Roland and Curtis test these theories by comparing how migration rates to the main island of Tarawa have changed over time between islands that are more isolated from Tarawa and those that are less isolated. As they explain:

Isolation dampens the migration-promoting effect of declining natural resources asserted in the environmental scarcity thesis. However, isolation exacerbates the migration-prohibiting effect of declining natural resources outlined in the environmental capital thesis. With this theoretical distinction in mind, we anticipate that the migration-incentivizing role that environmental and economic challenges play in the environmental scarcity hypothesis only pertains to contexts in which migration costs are reasonable and, associated, distances to potential destinations are short. In remote settings, the environmental capital thesis is likely the more applicable framework.

Kiribati has been experiencing acute and increasing impacts of climate change over the period Roland and Curtis study. They expect to confirm that the environmental capital thesis dominates, and indeed, that is what they find based on cross-sectional comparisons:

Analysis of migration probabilities shows that out-migration to Tarawa is higher among the least geographically isolated islands as compared with the more isolated islands... Consistent with the environmental capital thesis, probabilities of Tarawa-bound migration from the more spatially proximate North and Central Gilbert Islands in 2000–2005 are generally larger than probabilities for the more distant South Gilbert Islands... While small numbers, the direction of the differences in out-migration is consistent with the environmental capital thesis and contrasts with the environmental scarcity thesis.

Then, looking at changes over time:

At first glance, the increase in out-migration among the North and Central Gilbert Islands appears consistent with the environmental scarcity thesis: as environmental, related economic, and other conditions decline, residents migrate to new places in search of better opportunities and livelihoods. For more geographically isolated islands, however, we generally find negative changes in migration probabilities. Such declines are consistent with the environmental capital thesis: isolation exacerbates the migration-prohibiting influence of environmental degradation. The positive change in outmigration probabilities for the North and Central Gilbert Islands contrasts with the negative changes in out-migration probabilities found for most of the more isolated islands...

The differences in the changes in out-migration probabilities between more and less geographically isolated islands support the environmental capital thesis. Migration is markedly lower from more isolated islands than from less isolated islands and generally decreases during a period in which environmental and economic conditions worsened.

Migration costs are an important constraint on migration. If climate change reduces access to the resources necessary to fund migration, people will not be able to migrate, even as the climate continues to worsen. The environmental capital thesis may be thought of as a type of climate-induced poverty trap. I think that this research is also instructive in terms of wider migration flows arising from climate change in the Pacific, because this dynamic is likely to apply (perhaps even more so) in the case of international migration.

It would be really interesting to conduct a similar study looking at how Pacific international migration flows are changing over time, and how isolation, or a more direct measure of migration costs, affects those flows. I would expect to see something similar, justifying my contention that we are unlikely to face a flood of climate refugees from the Pacific in the near future.

Monday, 9 November 2020

Economists in schools of public affairs, and compensating differentials

New Zealand doesn't have any schools of public affairs, but they are a reasonably common feature of U.S. universities (Victoria University does have a School of Government, but I'm not sure that is quite the same). Economists in the U.S. are employed in both economics departments, and in schools of public affairs. You would think that they would be paid the same (conditional on their 'quality' as an academic) regardless of which school or department they are employed in. Not so, according to this 2019 working paper by Lori Taylor, Kalena Cortes, and Travis Hearn (all Texas A&M University).

They first compile salary and demographic data from 2152 academics employed in schools of public affairs, economics departments, or political science departments, from the 33 public universities with schools of public affairs ranked in the top 50 in the U.S. (the other 17 are private universities, where data on salaries are not readily available). Their data demonstrate three key facts:

First, leading schools of public affairs employ a large number of economists. The 33 leading public affairs departments in our sample employed more than 100 economists, or 12 percent of the economists in the sample.

Second, a disproportionate number of the economists employed by schools of public affairs were female... 19 percent of the faculty in departments of economics were female; whereas 23 percent of the economists in departments of political science were female and 35 percent of the economists in schools of public affairs were female...

Third, average salaries in schools of public affairs were lower than those in traditional economics departments. On average—and without adjustment for faculty rank or institution reporting differences—salaries were 33.5 percent higher in departments of economics than they were in schools of public affairs. Among economists, average salaries were 11 percent higher in departments of economics than they were in schools of public affairs.

Taylor et al. then go on to explore the differences in salaries between schools of public affairs and departments of economics (and political science) in a bit more detail. In particular, given that schools of public affairs employ more women and there is a gender gap in salaries, they are interested in whether the proportion of female faculty makes a difference. It turns out that it doesn't, and they report that:

...we found a significant, negative differential for female faculty members, but controlling for gender did not eliminate the public affairs discount...

...female faculty members were paid significantly less than male faculty members regardless of discipline or department.

Interestingly, there was no difference arising from the seniority of faculty in the schools or departments either. However, when they go on to investigate measures of 'research productivity' (or quality), they find that:

...controlling for citation metrics as well as years since degree and faculty rank, we estimated that economists in schools of public affairs earned at least 28 percent more than otherwise similar faculty members, and economists in departments of economics earned 17 percent more than economists in schools of public affairs. On the other hand, political scientists were better paid in a school of public affairs than in a traditional department of political science, even after controlling for research productivity.

Including citation metrics (as a measure of research productivity) rendered the gender difference in salaries statistically insignificant, suggesting that the difference in salaries was capturing differences in research productivity. That is quite a different result from much of the earlier literature on this topic (for example, see my earlier post here).
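
To see what controlling for citation metrics is doing here, consider a minimal sketch on synthetic data (the variable names, coefficients, and data-generating assumptions are mine, not Taylor et al.'s). If women have lower citation counts on average and citations are rewarded in salaries, then a raw gender gap can be absorbed once citations are controlled for, while a genuine public affairs discount remains:

```python
# Minimal illustration of how adding a research-productivity control can absorb
# a raw gender gap while leaving a department-level salary discount in place.
# Synthetic data; all coefficients are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "public_affairs": rng.integers(0, 2, n),    # 1 = employed in a school of public affairs
    "female": rng.integers(0, 2, n),
    "years_since_phd": rng.integers(1, 31, n),
})
# Assume women have lower citation counts on average (illustrative only)
df["citations"] = rng.poisson(30 - 8 * df["female"], n)
# Salaries reward citations and experience, and carry a public affairs 'discount'
df["log_salary"] = (11.5
                    + 0.010 * df["years_since_phd"]
                    + 0.005 * df["citations"]
                    - 0.100 * df["public_affairs"]
                    + rng.normal(0, 0.1, n))

raw = smf.ols("log_salary ~ public_affairs + female", data=df).fit()
controlled = smf.ols("log_salary ~ public_affairs + female + citations + years_since_phd",
                     data=df).fit()
print(raw.params[["public_affairs", "female"]])          # raw gender gap appears
print(controlled.params[["public_affairs", "female"]])   # gender gap shrinks; discount remains
```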

This working paper seems a bit unfocused to me. It starts with the premise of comparing salaries between schools of public affairs and departments of economics, but quickly strays into evaluating the gender gap in salaries. The analysis and the exposition would be a lot clearer if they could distinguish those two aims. The finding that the gender gap is explained by differences in research productivity needs a bit more unpacking, especially given that it contradicts the previous literature.

However, the most interesting finding to me is the salary penalty for economists who work in schools of public affairs, and that the penalty remains statistically significant even after controlling for research productivity. Departments of economics have been labelled a toxic environment for women (for example, see my earlier post here). Could the public affairs salary penalty reflect a compensating differential? Are (female) economists willing to accept a reduction in salary in order to locate in a department that has a more welcoming environment (and in reverse, do they require a higher salary to compensate for the toxic working environment in a department of economics)? The public affairs salary penalty for economists was higher for women than men, and was statistically significant in some (but not all) of Taylor et al.'s results, after controlling for research productivity (see Table 4 in the working paper). That provides some further evidence that the salary penalty might be capturing a compensating differential that is more salient for women than for men.

On the face of it, the salary penalty for working in a school of public affairs would tend to suggest that economists should avoid working there. That probably holds true for male economists. However, if it reflects a positive compensating differential, which it probably does for women, then that changes the parameters of the decision and may favour jobs at the schools of public affairs. The problem with taking that interpretation though, is that it may 'lock in' the cultural differences between schools and departments that are the very source of the compensating differential. Food for thought.

[HT: Marginal Revolution, last year]

Sunday, 8 November 2020

The story of sexual economics should not be written by psychologists

On Thursday, I posted about the economics of sex robots. In particular, I drew attention to search models as a way of thinking through the economics related to sex. The key driver in a search model is the relative bargaining power of the parties to the agreement. If some change gives a party more relative bargaining power, they will get a better deal.

However, search models are not the only way to think about the economics of sex. This 2017 article by Roy Baumeister (Florida State University) and co-authors, published in the Journal of Economic Psychology (open access), instead uses the workhorse model of microeconomics - supply and demand. They note that:

Sexual economics theory rests on standard basic assumptions about economic marketplaces, such as the law of supply and demand. When demand exceeds supply, prices are high (favoring sellers, that is, women). In contrast, when supply exceeds demand, the price is low, favoring buyers (men)...

Importantly, they aren't describing the market for sexual services (i.e. prostitution). Instead:

 ...often what is sold is not just sex but exclusive access to sex with a particular person.

How do Baumeister et al. justify their theory? As follows:

The core idea is that women are the sellers and men are the buyers. This starts with the abundant evidence that ‘‘everywhere sex is understood to be something females have that males want”...

Because the man typically wants sex more than the woman, she has a power advantage. According to the ‘‘principle of least interest,” the person who desires something less has greater control and can demand that the other (more desirous) person sweeten the deal by offering additional incentives or concessions... Hence sexual economics theory begins with the assumption that female sexuality has exchange value, whereas male sexuality does not...

In return for sex, women can obtain love, commitment, respect, attention, protection, material favors, opportunities, course grades or workplace promotions, as well as money. Throughout the history of civilization, one standard exchange has been that a man makes a long-term commitment to supply the woman with resources (often the fruits of his labor) in exchange for sex — or, often more precisely, for exclusive sexual access to that woman’s sexuality. Whether one approves of such exchanges or condemns them is beside the point. Rather, the key fact is that these opportunities exist almost exclusively for women. Men usually cannot trade sex for other benefits.

The onset of a sexual relationship thus involves the man and woman choosing each other. In perhaps overly simple terms, he chooses her presumably on the basis of her sex appeal, that is, how much he expects to enjoy having sex with her. Meanwhile, she chooses him on the basis of the resources he can provide, that is, on the basis of nonsexual benefits he can furnish to her. This exchange defines the nature of the same-sex competition. Women compete to seem more sexually attractive than their rivals. Men compete to seem a better provider than their rivals.

The rest of the article describes differences in competition between women, and between men. It is interesting to read, but I'm more concerned about the framing of the model. It's not clear to me that the authors, all of whom are psychologists of various types, have really thought through the plausibility of the economic model they are attempting to use.

The basic model of supply and demand relates to a perfectly competitive market, which has a number of characteristics: (1) many buyers and sellers; (2) homogeneous 'products'; (3) complete information; and (4) no barriers to entry into or exit from the market. Under those conditions, neither buyers nor sellers have any control over the price - they are 'price takers', and the market 'price' is determined by the interaction of supply and demand.

Now, thinking about sexual economics, it's not clear to me that either (2) or (3) is satisfied. Every person is different, with different preferences, so the assumption of homogeneous products cannot be fulfilled. Also, we don't know everything about other people we might like to match with (at least, not at first), so complete information is also not available. The market is therefore not perfectly competitive, and so cannot be described by supply and demand curves. [*]

Another important problem is that, in the supply and demand model, sellers can sell more than one unit of the product (in fact, they will continue to sell until the point where their marginal cost of production is equal to the price), and can sell to more than one buyer. And buyers will buy more than one unit of the product. None of this seems to be a fair characterisation of sex (unless psychologists inhabit quite a different world from the rest of us).

Now, if you read through the quote from the Baumeister et al. article above, and then go back and read my description of search models from Thursday's post, it should be immediately clear that the search model is a better characterisation of sexual economics. Moreover, it doesn't rely on the assumptions of homogeneous products or complete information, and definitely copes with faithful matches between single individuals.

The sad thing is that the rest of the Baumeister et al. article is, as I said above, really interesting. And the narrative probably stands up well if you take out the supply and demand framing and replace it with one based on a search model. Someone needs to write an improved version of that article.

*****

[*] I'm being a little bit harsh here. As I note in my ECONS101 class, even though the demand and supply model relates to perfectly competitive markets, it still does a good job of describing qualitatively the changes in the price and quantity that will result from a change in market conditions, even when the market is not perfectly competitive.

Thursday, 5 November 2020

The economics of sex robots

In my ECONS101 class, we cover search models of the labour market. Unlike the supply and demand model, search models do not rely on a concept of market equilibrium. Instead, it is the relative bargaining power of the parties (employers and workers) that determines the wage.

The simple explanation works like this. Each matching of a worker to a job creates a surplus that is shared between the worker and the employer. Because job matching creates a surplus, this provides the worker with a small amount of market power (or bargaining power). That is because if the worker rejects the job offer, the employer has to start looking for someone else to fill the vacancy. The employer is somewhat reluctant to start their search over, so the worker can use that to their advantage. The division of the surplus created by the match, and therefore the wage, will depend on the relative bargaining power of the worker and employer. If the worker has relatively more bargaining power, the wage will be higher. And if the employer has relatively more bargaining power, the wage will be lower.
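
To see how the surplus split works, here is a minimal sketch assuming a simple linear (Nash-style) bargaining rule. The dollar figures and the functional form are illustrative assumptions, not drawn from any specific model in this post:

```python
# Minimal sketch of wage setting as surplus splitting under a linear bargaining rule.
# All values are illustrative assumptions.

def bargained_wage(match_value, worker_outside_option, employer_outside_option, worker_power):
    """Worker gets their outside option plus a share of the match surplus."""
    surplus = match_value - worker_outside_option - employer_outside_option
    return worker_outside_option + worker_power * surplus

match_value = 100_000          # total annual value created by the worker-job match
worker_outside = 30_000        # e.g. value of continued job search to the worker
employer_outside = 20_000      # e.g. value of re-advertising the vacancy to the employer

for power in [0.2, 0.5, 0.8]:  # worker's relative bargaining power
    wage = bargained_wage(match_value, worker_outside, employer_outside, power)
    print(f"worker bargaining power {power:.0%}: wage = ${wage:,.0f}")
```

The same logic applies to any matching situation: whatever shifts relative bargaining power shifts the division of the surplus.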

This search model doesn't just apply to the labour market. You can also apply it to many situations that involve matching two or more parties. Which brings me to this post on sex robots by Diana Fleischman. Sex involves matching (unless you go it alone). The agreement on the what-where-how of sex will depend on the relative bargaining power of the sexual partners. The increasing availability of increasingly realistic sex robots looks likely to shake things up, because sex robots and women are substitutes (see also this earlier post on pornography and marriage as substitutes). As Fleischman explains:

What does this mean for women? When the sex ratio changes, so too do sexual norms; sex robots are going to emulate an increase in the ratio of women to men. Contrary to a prediction based on the idea that men would wield greater patriachal [sic] control if they were in higher numbers, a larger percentage of women relative to men on University campuses is associated with women who are more likely to have casual sex and less likely to be virgins. When there are more men than women, women are much less likely to have casual sex. The majority sex (in this case men) competes for the minority sex (in this case women) and the minority sex calls the shots. When there is a female majority in the population, women compete for access to mates with casual sex. Whereas a male majority competing for access to scarce women compete with long-term commitment.

Sex robots will emulate a majority women ratio, shifting women to compete for men’s attention by requiring less courtship and commitment in exchange for sex.

Taking a heteronormative perspective, the availability of sex robots reduces the relative bargaining power of women, and therefore increases the relative bargaining power of men. That means that men may be able to extract more of the surplus from potential sexual liaisons. That is, men may be able to get more of what they want. Fleischman notes that:

The long-term ramifications are unclear, especially the way long-term technologies and cultural norms will interact. Perhaps women will discover they have to make the costs of courtship both low and transparent to compete with sex robots.

Women, having to compete with sex robots, may have to offer men more. But not so fast:

Or, perhaps, new technology could enable women to recombine their genes with one another, making men enamored with sex robots (or men generally) totally redundant.

New technology for recombining genes and completely excluding men won't rebalance bargaining power back towards women. The technology necessary to reproduce without involving sex has existed for some time. Fleischman is conflating the reproductive goal of sex with the pleasure goal of sex. To rebalance bargaining power back towards women, women need their own sex robots. Sex robots for all!

[HT: Marginal Revolution]

Tuesday, 3 November 2020

Charles Plott's strategies for getting published

There are plenty of critiques of peer review, one of which is that it favours incremental work and leads to the most novel research failing to get published. If you are a researcher doing incredibly novel research, using methods that reviewers don't fully understand because those methods are not yet widely employed, then you could have real difficulty in getting your best work into top journals. That has been the case for many emerging fields. In economics, in relatively recent times (prior to the 1990s), that applied to behavioural economics and to experimental economics. The challenge, then, is how to get your work accepted if you are one of these researchers employing novel or misunderstood methods.

In a new article published in the journal Oxford Economic Papers (ungated version here), Andrej Svorencik (University of Mannheim) documents how one of the pioneers of experimental economics, Charles Plott, overcame the reluctance of editors and journal reviewers to accept the validity of laboratory experiments. As Svorencik writes:

Whereas there were no public detractors of experimentation in economics, the early and most prolific experimenters, such as Charles Plott and Vernon Smith, encountered skeptics and systematic rejections of their submitted papers. Getting them published required tenacity on the writers’ part to go through several rounds of often heated discussions with editors and referees. These iterations present a unique perspective on the arguments raised against experiments in economics and the specific strategies developed by experimental economists to counter them.

Svorencik draws on Plott's 'research corpus', covering letters and responses to editors and reviewers dating from the mid-1970s to the mid-1990s, and identifies nine different strategies that Plott employed to disarm reviewers and convince editors that his papers using experimental economics should be published:

S1 Asking for knowledgeable referees because previous referees were ignorant of experimental economics;
S2 Claiming that results are interesting, relevant for theory, and have applications;
S3 Claiming that the experiments present real situations;
S4 Claiming that the theory applies to simple cases;
S5 Citing basic research;
S6 Conducting more experiments;
S7 Shifting of the burden of proof;
S8 Steering clear of a specialized journal;
S9 Claiming that field has been confused with method.

One example of strategy S4 struck me as particularly important. From a 1979 letter that Plott wrote to George Borts, the editor of the American Economic Review:

The laboratory processes are simple and very special markets... but they are nevertheless real markets which should be governed by the same principles that are supposed to govern all markets. The justification for studying them is the same as the justification for studying the simple special cases and special types of any complicated phenomenon.

In order to see why these markets are real, one need only apply directly the theory of derived demand. It works as follows. Let Ri(xi) be the revenue received by individual i from some source expressed as a function of the number of units (xi) he has to sell. Standard derived demand theory tells us that δRi/δxi is limit price (inverse demand) function for this individual. It is important to note that the theory places no restriction upon the source of the revenue so when the source is an experimenter the derived limit price function for this individual is just as real as when the source is a business. Furthermore, the theory places no restriction on what x is called (unless the individual gets consumption pleasures from it) so the theory applies equally as xi becomes baseball cards, shirts, food, or ‘commodities’ created especially for the purposes of an experiment. There are no ‘side payments’ or incidental sources of enjoyment so as long as the individual prefers more money to less we can be assured the preferences for units of x have been induced. The individual is indeed a ‘demander.’

The supply side of the market is handled similarly. Each supplier, j, faces an individualized cost function Cj(xj) which indicates what j must pay the experimenter as a function of units purchased for resale. Profits to j, which are j’s to keep, are simply the revenues received by j over costs Cj(xj). Clearly that δCj(xj)/δxj is a real marginal cost function. The fact that it was constructed by the experimenter makes the concept no less relevant because the concept is intended to apply universally.

We have then a valued and scarce resource. Almost any textbook will say that those conditions are sufficient for the existence of an economic problem. The laboratory markets are thus real markets and the principles of economics should apply to them as readily as they are supposed to apply to any other market.
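
Plott's induced-value logic can be illustrated with a small sketch: the experimenter pays each buyer a redemption value per unit and charges each seller a cost per unit, which induces real demand and supply curves, and the standard competitive prediction follows. The specific values below are illustrative assumptions, not figures from any of Plott's experiments:

```python
# Minimal sketch of induced values in a laboratory market. The experimenter's
# redemption values and costs induce demand and supply; the competitive
# prediction is the quantity traded and the market-clearing price range.
# All numbers are illustrative assumptions.

buyer_values = [10.0, 9.0, 8.0, 7.0, 5.0, 4.0]   # marginal redemption values, highest first
seller_costs = [3.0, 4.0, 5.0, 6.0, 7.0, 8.0]    # marginal induced costs, lowest first

# Trade every unit whose induced value is at least its induced cost
quantity = sum(1 for v, c in zip(buyer_values, seller_costs) if v >= c)
print(f"predicted quantity traded: {quantity}")

# With these values, the clearing price lies between the cost and value of the marginal unit
low, high = seller_costs[quantity - 1], buyer_values[quantity - 1]
print(f"predicted price range: {low} to {high}")
```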

Plott's responses to editors and reviewers were very forceful, and it appears that, more often than not, he got his way. And generations of experimental economists have benefited from his efforts, as by the 1990s economics research using laboratory experiments had been broadly accepted and was regularly being published in top journals. Svorencik's article provides a key insight into how this process happened, and is a really interesting contribution to the history of economic thought.