Thursday 31 December 2020

Book review: Reinventing the Bazaar

I just finished reading John McMillan's 2002 book Reinventing the Bazaar. Diane Coyle at The Enlightened Economist has recommended this book many times (including earlier this year). It's a recommendation I am glad I followed through on, and one I fully concur with.

The subtitle is "A Natural History of Markets", but I don't think the title or subtitle quite do the book justice. Most treatments of economics, including most textbooks, treat markets as a given, and fail to really explain the inner workings of market design. McMillan's book does an outstanding job of filling that gap. It has become rarer over time for me to finish a book with pages of notes on little bits I can add to my teaching, but that was certainly the case with this book.

McMillan is neither a market evangelist nor a market skeptic. He cleanly dissects the more extreme arguments on either side, especially in the concluding chapter, and charts a more centrist path throughout the book. I particularly liked this passage:

The market is not omnipotent, omnipresent, or omniscient. It is a human invention with human imperfections. It does not necessarily work well. It does not work by magic or, for that matter, by voodoo. It works through institutions, procedures, rules, and customs.

The focus of the book is on explaining institutions and market design, using real-world examples of what has worked and what has not. There are several pages devoted to the 1980s reforms in New Zealand, which was interesting to see (although based at Stanford University, McMillan is a kiwi, so the use of New Zealand as an example should come as no surprise). However, not everyone would agree with his conclusions about the success of the 'shock therapy' approach.

McMillan also commits a sin in my eyes by defining health care and education as public goods (which they are not, because they are excludable). I can forgive him that one indiscretion though, because the book is wonderfully written and easy to read. It helps that he and I appear to share similar views about the market, as his conclusion demonstrates:

The market is like democracy. It is the worst form of economy, except for all the others that have been tried from time to time.

I really enjoyed this book, and I highly recommend it to everyone. 

Wednesday 30 December 2020

Try this: Riding the Korean Wave and using K-Pop to teach economics

I use a bit of popular culture to illustrate economics concepts in class, but some of the videos I use are getting a bit dated (although The Princess Bride movie is a timeless classic). Maybe it's time for a refresh?

In a new working paper, Jadrian Wooten (Pennsylvania State University), Wayne Geerling (Monash University), and Angelito Calma (University of Melbourne) describe using K-Pop examples to illustrate economic concepts. K-Pop is incredibly popular worldwide, and increasingly going mainstream, so it is something that students will likely be familiar with.

Wooten et al. created videos with English subtitles and link them to the sort of economics that is taught in principles classes. Specifically, the paper illustrates the approach with three examples:

  1. EXO-CBX, "Ka-Ching", which can be used to illustrate scarcity, trade-offs, and opportunity costs;
  2. Blackpink, "Kill this love", which can be used to illustrate sunk costs and decision-making; and
  3. BTS, "No", which can be used to illustrate comparative advantage, negative externalities, arms races, and zero-sum games.
The music4econ.com website has many other videos as well (not just K-Pop). Using music videos in teaching is not a new idea (I've posted about it before here), and there are many examples of other forms of pop culture being used to teach economics (such as Broadway musicals, The Office, and The Big Bang Theory). This website can be added to the list.

Enjoy!

[HT: Marginal Revolution]

Monday 28 December 2020

The gravity model and cultural trade in restaurant meals

One of my favourite empirical models to work with is the gravity model. It performs extremely well (in terms of both in-sample and out-of-sample forecast accuracy) in migration and trade contexts, and it is quite intuitive. Essentially, in a gravity model the flow (of goods and services, or people) from area i to area j is negatively related to the distance between i and j (so, if i and j are further apart, the flows are smaller, most likely because it costs more to move from i to j), and is positively related to the 'economic mass' of i and j (so, if i and/or j is larger, the flows from i to j will be larger).
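
To make that a little more concrete, here is a minimal sketch in Python of how a log-linear gravity equation is typically estimated. The data are entirely made up (this is not taken from any of the papers discussed in this post), but the estimated coefficients recover the gravity intuition:

    # A minimal sketch of a log-linear gravity model, estimated by OLS on
    # made-up data. In practice you would use observed flows (trade values,
    # migrant counts) between pairs of areas.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n_pairs = 500

    df = pd.DataFrame({
        'mass_i': rng.uniform(1, 100, n_pairs),   # 'economic mass' of origin i
        'mass_j': rng.uniform(1, 100, n_pairs),   # 'economic mass' of destination j
        'dist': rng.uniform(50, 5000, n_pairs),   # distance between i and j
    })

    # Simulate flows consistent with the gravity intuition: larger masses mean
    # larger flows, and greater distance means smaller flows (plus noise)
    df['flow'] = (df['mass_i'] * df['mass_j'] / df['dist']) * rng.lognormal(0, 0.5, n_pairs)

    # Estimate: ln(flow_ij) = b0 + b1*ln(mass_i) + b2*ln(mass_j) + b3*ln(dist_ij)
    model = smf.ols('np.log(flow) ~ np.log(mass_i) + np.log(mass_j) + np.log(dist)',
                    data=df).fit()
    print(model.params)   # expect coefficients near +1, +1, and -1 respectively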

I have used the gravity model myself (e.g. see this post), and so have my students (e.g. see this post). I particularly like it when I find examples of unexpected uses of the gravity model. For instance, there was this paper on running the gravity model in reverse to find lost ancient cities (which I blogged about here). 

Most of the time, a gravity model of trade involves goods and services that cross borders. However, that is not the case in this recent article by Joel Waldfogel (University of Minnesota), published in the Journal of Cultural Economics (appears to be open access, but just in case there is an ungated earlier version here). Waldfogel is probably best known for his work on the deadweight loss of Christmas (see also here), but in this research he looks at the cultural trade in restaurant dining.

The interesting thing about this article is that the data isn't really trade data at all. Restaurant meals don't cross borders. Instead, it is the intellectual property that is crossing borders, which is why this article relates to the literature on cultural economics. Waldfogel uses:

...Euromonitor data on aggregate and fast-food restaurant expenditure by country, along with TripAdvisor and Euromonitor data on the distribution of restaurants by cuisine.

He links each cuisine to an origin country (i in the description of the gravity model above), and treats the country where the restaurant is located as the destination country (j in the gravity model description). He then calculates measures of 'trade flows' in restaurant meals, both including and excluding fast food, and runs a gravity model using those data. He finds that:

As in many models of trade, distance matters: a 1% increase in distance reduce[s] trade by about 1%... Common language and common colonial heritage also matter.

Those are pretty standard results in the trade literature using gravity models. Then:

Which cuisines are most appealing after accounting for rudimentary gravity factors?... Excluding fast food, the ten most appealing origins are Italy, China, and Japan, which all have similar levels of appeal, followed by the USA, India, France, Mexico, Thailand, Spain, and Turkey. When fast food is included, the USA rises to the top, and the others remain in the same order.

Finally, on the balance of trade in restaurant meals, he finds that, of 44 selected countries:

...three are substantial net exporters: Italy (with net exports of $158 billion), Japan ($44 billion), and Mexico ($17 billion). Substantial net importers include the USA ($134 billion), Brazil ($39 billion), the UK ($20 billion), and Spain ($20 billion).

I was a little surprised that Spain was such a net importer of cuisine from other countries. I guess that reflects that Spanish cuisine isn't as available outside of Spain as many other European cuisines are. The US and UK being large net importers is not a surprise though.

The results are mostly uncontroversial. However, I did take issue with some of the choices. Waldfogel codes all "pizza" restaurants as Italian. I'm not convinced that Pizza Hut or Domino's count as Italian food - more like generic fast food, most of which was coded to the US. It would be interesting to see whether re-coding pizza would make any difference to the results - possibly not, as Waldfogel does test for the impact of coding "fried chicken" as either domestic or US and that appears to make little difference.

The gravity model is clearly very versatile, and deserves much greater attention in research than it currently receives. This research demonstrates a slightly new direction for it.

[HT: Offsetting Behaviour, last year]

Sunday 27 December 2020

Putting demography back into economics

As a population economist, I am receptive to the idea that population concepts like migration, population growth and decline, population ageing and the age distribution, etc. are important things for economics students to understand. So, I read with interest this 2018 article by Humberto Barreto (DePauw University), published in the Journal of Economic Education (sorry, I don't see an ungated version online).

In the article, Barreto uses an interactive Excel spreadsheet that creates population pyramids from dummy or live data, to illustrate population concepts for students. The spreadsheet is available freely online here. It's mostly reasonably intuitive, but to get full value from it you probably need to read the full gated article and work through a couple of the exercises Barreto describes. As far as putting demography back into economics is concerned, Barreto quotes 1982 Nobel Prize winner George Stigler (from this 1960 article):

"In 1830, no general work in economics would omit a discussion of population, and in 1930, hardly any general work said anything about population.”

However, the key problem with using a tool like this is not the tool itself, but the opportunity cost of including population concepts in an economics paper - what will be left out in order to accommodate them? Barreto does acknowledge this point in the concluding paragraph of his article. In an already crowded curriculum, it is difficult to see where population concepts could best fit. It would be challenging to squeeze them into an introductory economics paper, for instance. In the overall programme of study for an economics student, encountering population concepts would probably make most sense as part of macroeconomics, where there are strong complementarities and where the Solow model already appears (but where population change is treated as exogenous). On the other hand, in universities that already have a demography or population studies programme, teaching population concepts might create an uncomfortable situation, with economists teaching in another discipline's area of expertise. However, I've never been one to respect disciplinary boundaries, with various bits of political science, psychology, and marketing appearing in my papers (albeit through an economics lens).

Anyway, coming back to the topic of this post, Barreto's spreadsheet is interesting and may be of value in helping to understand population concepts that are important for economics students. Enjoy!
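
As an aside, for anyone who prefers code to spreadsheets, here is a minimal sketch of the population pyramid idea in Python. The numbers are entirely made up, and this is not Barreto's spreadsheet, just an illustration of the same kind of chart:

    # A minimal population pyramid, with hypothetical age-sex counts
    import matplotlib.pyplot as plt
    import numpy as np

    age_groups = ['0-14', '15-29', '30-44', '45-59', '60-74', '75+']
    males = np.array([300, 320, 310, 280, 200, 110])     # hypothetical counts (000s)
    females = np.array([285, 310, 315, 290, 220, 150])

    y = np.arange(len(age_groups))
    fig, ax = plt.subplots()
    ax.barh(y, -males, label='Males')    # males plotted to the left of zero
    ax.barh(y, females, label='Females')
    ax.set_yticks(y)
    ax.set_yticklabels(age_groups)
    ax.set_xlabel('Population (000s)')
    ax.set_title('Hypothetical population pyramid')
    ax.legend()
    plt.show()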

Thursday 24 December 2020

The economics of Christmas trees and the cobweb model of supply and demand

Zachary Crockett at The Hustle has an excellent article on the economics of Christmas trees. It's somewhat specific to the U.S., but there is a lot that will probably be of wider interest. I recommend you read it all, but I want to focus my post on this bit:

...even if all goes well, Christmas tree farmers still have to forecast what the market is going to look like 10 years out: Planting too many trees could flood the market; planting too few could cause a shortage.

History has shown that the industry is a case study in supply and demand:

- In the 1990s, farmers planted too many Christmas trees. The glut resulted in rock-bottom prices throughout the early 2000s and put many farms out of business.

- During the recession in 2008, ailing farmers planted too few trees. As a result, prices have been much higher since 2016.

That reminded me of the cobweb model of supply and demand, which we cover in my ECONS101 class. A key feature of a market that can be characterised by the cobweb model is that there is a production lag -  suppliers make a decision about how much to supply today (based on expectations about the price, which might naively be the observed price today), but the actual price that they receive is not determined until sometime later. In the case of Christmas trees, this is much later (emphasis theirs):

What makes a Christmas tree an unusual crop is its extremely long production cycle: one tree takes 8-10 years to mature to 6 feet.

So, a Christmas tree farmer has to make a decision about how much demand for Christmas trees there will be in 8-10 years' time, and plant today to try to satisfy that demand. Now, let's start with an assumption that Christmas tree farmers are naive - they assume that the price in the future will be the same as the price today, and that's the equilibrium price shown in the diagram below, P*, where the initial demand curve (D0) and supply curve (S0) intersect. Then, the Global Financial Crisis (GFC) strikes. Consumers' incomes fall, and the demand for Christmas trees falls to D1. However, the GFC recession is temporary (as all recessions are), and demand soon returns to normal (D2, which is the same as the original demand curve D0).

Now consider what happens to prices and quantities in this market. During the GFC recession, the price falls to P1 (where the demand curve D1 intersects the supply curve S1). Christmas tree farmers are deciding how much to plant for the future, and they observe the low price P1, which, because they are naive, they assume will persist into the future. They decide to plant Q2 trees (this is the quantity supplied on the supply curve S2, when the price is P1). By the time those trees are harvested though, demand has returned to D2, so the price the farmers receive when they harvest the trees will increase to P2 (this is the price where the quantity demanded, from the demand curve D2, is exactly equal to Q2).

Now, the farmers are deciding how much to plant for the future again, and they observe the high price P2. They assume the high price will persist into the future, so they decide to plant Q3 trees (this is the quantity supplied on the supply curve S3, when the price is P2). By the time those trees are harvested, the farmers will accept a low price P3 in order to sell them all (this is the price where the quantity demanded, from the demand curve D3, is exactly equal to Q3). Now the farmers will plant less because the price is low and they assume the low price will persist... and so on.

Essentially, the market follows the red line (which makes it look like a cobweb - hence the name of the model), and eventually the market gets back to long-run equilibrium (price P*, quantity Q*).
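
For anyone who would rather see the dynamics play out in numbers than trace them around a diagram, here is a minimal sketch of the cobweb process with naive expectations. The demand and supply parameters are made up, the demand shift is abstracted away (the market simply starts at a low 'recession' price), and the parameters are chosen so that the cycle dampens back towards equilibrium:

    # A minimal sketch of the cobweb model with naive price expectations.
    # Demand: Qd = a - b*P ; supply planted with a lag: Qs = c + d*E[P],
    # where naive farmers expect E[P] to equal last period's price.
    # All numbers are made up purely for illustration.
    a, b = 100.0, 2.0    # demand intercept and slope
    c, d = 10.0, 1.5     # supply intercept and slope (d < b, so the cycle dampens)

    p_star = (a - c) / (b + d)    # long-run equilibrium price
    price = p_star * 0.6          # start from a low 'recession' price

    for t in range(1, 11):
        quantity = c + d * price     # farmers plant based on last period's price
        price = (a - quantity) / b   # market-clearing price when the trees are sold
        print(f"period {t}: planted {quantity:5.1f}, price {price:5.2f} (P* = {p_star:.2f})")

The printed prices bounce above and below P*, with the swings getting smaller each period - exactly the cobweb pattern described above.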

However, in the meantime, there is a cycle of high prices when the number of trees planted was too low, and low prices when the number of trees planted was too high. And that appears to be what has happened in the market for Christmas trees described in the quote from the article above.

A savvy Christmas tree farmer could have anticipated this cycle, and used a strategy of going against the flow. Essentially, that involves taking some counter-intuitive actions - planting more trees when the price is low, and planting fewer trees when the price is high. As I discuss in my ECONS101 class though, there are three conditions that you have to be pretty confident about before the strategy of going against the flow should be adopted:

  1. That this is a market with a production lag (which is pretty clear for Christmas trees);
  2. That the change in market conditions (that kicks the cobweb off) is temporary (you would hope a recession is temporary, and that consumers aren't going to permanently switch to fake trees); and
  3. That other firms have not already realised the profit opportunities in this market (in this case, you should look at what other Christmas tree farmers are doing - if they are planting less during the recession, you should probably be planting more).
If one or more of those conditions don't hold, then going against the flow is not likely to be a good idea. However, if they do hold, the profit opportunities are likely to be high. It seems that there are a lot of Christmas tree farmers in the U.S. who could do with a better understanding of business economics.

Merry Christmas everyone!

[HT: Marginal Revolution]

Sunday 20 December 2020

Publishing research open access is a double-edged sword

Many universities are encouraging their staff to publish their research in open access journals, or as open access articles in journals that allow this (for a fee). The theory is that open access publications are easier to access, and so are read more often and attract more citations. The number of citations is a key metric for research quality for journal articles, and this flows through into measures of research quality at the institutional level, which contribute to university rankings (e.g. see here for one example).  If open access leads to more citations, then encouraging open access publication is a sensible strategy for a university.

However, it relies on a key assumption - that open access articles receive more citations. In a new NBER working paper (ungated version available here), Mark McCabe (Boston University) and Christopher Snyder (Dartmouth College) follow up on their 2014 article published in the journal Economic Inquiry (ungated version here), where they questioned this assumption. The earlier article showed, looking at the level of the journal issue, that open access increased citations only for the highest quality research, but decreased citations for lower quality research.

In the latest work, McCabe and Snyder first outline a theoretical model that could explain the diverging effect of open access status on high-quality and low-quality articles. The theoretical model explains that:

...open access facilitates acquisition of full text of the article. The obvious effect is to garner cites from readers who cannot assess its relevance until after reading the full text. There may be a more subtle effect going in the other direction. Some readers may cite articles that they have not read, based on only superficial information about its title or abstract, perhaps rounding out their reference list by borrowing a handful of references gleaned from other sources. If the cost of acquiring the article’s full text is reduced by a move to open access, the reader may decide to acquire and read it. After reading it, the reader may find the research a poorer match than initially thought and may decide not to cite it. For the lowest-quality content, the only hope of being cited may be “sight unseen” (pun intended). Facilitating access to such articles may end up reducing their citation counts...

A distinctive pattern is predicted for the open-access effect across the quality spectrum: the open-access effect should be increasing in quality, ranging from a definitively negative open-access effect for the worst-quality articles to a definitively positive effect for the best-quality articles. 

McCabe and Snyder then go on to test their theory, using the same dataset as their 2014 article, but looking at individual journal articles rather than journal issues. Specifically, their dataset includes 100 journals that publish articles on ecology from 1996 to 2005, with over 230,000 articles and 1.2 million observations. They identify high-quality and low-quality articles based on the number of cites in the first two years after publication, then use the third and subsequent years as the data to test the difference in citation patterns between articles that are available open access, and those that are not. They find that:

...the patterns of the estimates across the quality bins correspond quite closely with those predicted by theory. The open-access effect is roughly monotonic over the quality spectrum. Articles in the lowest-quality bins (receiving zero or one cite in the pre-study period) are harmed by open access; those in the middle experience no significant effect; only those in the top bin with 11 or more cites in the pre-study period experience a benefit from open access. Moving from open access through the journal’s own website to open access through PubMed Central pivots the open-access effect so that it is even more sensitive to quality, resulting in greater losses to low-quality articles and greater gains to high-quality articles. PubMed Central access reduces cites to articles in the zero- or one-cite bins by around 14% while increasing cites to articles in the bin with 11 or more cites by 11%.
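
McCabe and Snyder's actual estimation is considerably more sophisticated than a simple comparison of means, but a minimal sketch with synthetic data can illustrate the basic structure of the exercise: bin articles by their early citations, then compare later citations for open-access versus gated articles within each bin. Here the predicted pattern is deliberately baked into the fake data, so the table simply reproduces it:

    # Synthetic illustration of binning by early citations and comparing
    # later citations by open-access status (not the authors' data or code)
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 10_000
    early = rng.poisson(3, n)      # citations in the first two years
    oa = rng.integers(0, 2, n)     # 1 = open access, 0 = gated

    # Quality bins based on early citations (cut-offs as in the quote above)
    bins = pd.cut(early, bins=[-1, 1, 10, np.inf],
                  labels=['0-1 cites', '2-10 cites', '11+ cites'])

    # Bake in the predicted pattern: open access lowers later citations for
    # the worst articles and raises them for the best articles
    oa_effect = np.select([early <= 1, early >= 11], [-0.5, 2.0], default=0.0)
    later = rng.poisson(np.clip(2 + early + oa_effect * oa, 0.1, None))

    df = pd.DataFrame({'bin': bins, 'open_access': oa, 'later_cites': later})
    print(df.groupby(['bin', 'open_access'])['later_cites'].mean().unstack())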

So, it appears that publishing open access is not necessarily an optimal strategy for a researcher - it would only be so for those researchers who are confident that a particular article is in the top quintile of research quality. For some researchers this is true often, but by definition it cannot be true often for every researcher (unless they work for Lake Wobegon University). Moreover, for most universities, where the majority of their staff are not publishing in the top quintile of research quality, a policy of open access for all research would almost certainly lower citation counts, the perceived quality of research, and university rankings that rely on measures of research quality.

[HT: Marginal Revolution]

Saturday 19 December 2020

The challenges of projecting the population of Waikato District

Some of my work was referenced on the front page of the Waikato Times earlier this week:

Towns between Hamilton and Auckland are crying out for more land and houses for the decades ahead, as population is set to rapidly climb.

Waikato District, between Hamilton and Auckland, will need nearly 9,000 new houses by 2031, as up to 19,000 more people are projected to move there.

Pōkeno has already transformed from a sleepy settlement to a sprawling town, while Ngāruawāhia, north of Hamilton, is in demand for its new housing developments...

Waikato University associate professor of economics Michael Cameron told Stuff there had been “rapid” growth in Waikato District in the last 10 years, which he expects to continue.

In the last decade, the population grew by 27 per cent, or 18,647 people.

“In relative terms, Waikato has been one of the districts experiencing the fastest population growth in New Zealand.

“We’ve projected growth faster than Statistics NZ, and the growth has even overtaken our own projections,” Cameron said.

Most of that growth has been from Aucklanders spilling over the Bombay Hills into Pōkeno, and Cameron tipped that town to keep growing faster.

“There’s a lot of emphasis on central Auckland, but a lot of growth and industry is happening in the South.

“You can see it when you drive along the motorway. Drury is growing, Pukekohe is growing and Pōkeno is growing.”

Waikato District is in a bit of a sweet spot, strategically located between two fast-growing cities (Auckland and Hamilton), and not far from one of the fastest growing areas in the country (the western Bay of Plenty). It isn't much of a stretch to project high future growth for Waikato District.

The main challenges occur when trying to project where in the district that growth will occur. Most territorial authorities in New Zealand are either centred on a single main settlement (e.g. Rotorua, or Taupo), or have a couple of large towns (e.g. Cambridge and Te Awamutu, in Waipa District). Waikato District has several mid-sized towns (Ngaruawahia, Huntly, Pokeno, Raglan), and a bunch of smaller settlements that are also likely to attract population growth (e.g. Tuakau, Taupiri, and Te Kauwhata). So, there are lots of options as to where a growing future population might be located.

That makes small-area population projections that are used by planners (such as those that I have produced for Waikato District Council) somewhat endogenous. If the projections say that Pokeno is going to grow, the council zones additional land for residential growth in Pokeno, and developers develop that land into housing, and voila!, Pokeno grows. The same would be true of any of the other settlements, and this is a point that Bill Cochrane and I made in this 2017 article published in the Australasian Journal of Regional Studies (ungated earlier version here).

The way we solved the challenge was to outsource some of the endogeneity, by using a land use change model to statistically downscale the district-level population to small areas (that's what that 2017 article describes). However, that doesn't completely solve the problem of endogeneity, because the land use model includes assumptions about the timing of zoning changes and the availability of land for future residential growth - if those assumptions placed that land use change in other locations, or changed the order in which zoned land is opened up, the population growth would follow.
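
To see why those assumptions matter so much, here is a deliberately over-simplified sketch of the downscaling idea, with hypothetical numbers and a much cruder allocation rule than the land use change model we actually use. Whatever capacity assumptions go in largely determine where the projected growth comes out:

    # A purely illustrative share-based downscaling of a district projection
    # to settlements, in proportion to assumed (hypothetical) dwelling capacity
    district_projection_2031 = 100_000   # hypothetical district population

    capacity = {                          # hypothetical zoned capacity by settlement
        'Pokeno': 6000,
        'Ngaruawahia': 4000,
        'Huntly': 3500,
        'Te Kauwhata': 2500,
        'Raglan': 2000,
        'Rest of district': 12000,
    }

    total_capacity = sum(capacity.values())
    for area, cap in capacity.items():
        share = cap / total_capacity
        print(f"{area:17s}: {share:5.1%} of capacity -> {district_projection_2031 * share:8.0f} people")

Change the capacity numbers (or the order in which land is assumed to be opened up) and the 'projected' growth moves with them - that is the endogeneity problem in a nutshell.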

It is possible to develop more complicated models that incorporate both the supply and demand of housing in order to project the location of future growth. However, from what I have seen of these approaches, the added complexity does not improve the quality of the projections (although it might improve their believability). For now, models based on land use change are about as good as we can get.

Thursday 17 December 2020

The (short-term) downturn in the New Zealand and Australian funeral industry

In most countries, unfortunately, the funeral services industry is having a bit of a boom time. Not so in New Zealand though, as the National Business Review (maybe gated) reports:

Death in the age of Covid-19 is no laughing matter, to be sure. But a perverse result of the pandemic is that New Zealanders and Australians are having to deal with loss far less often than usual.

Death rates in both countries have dropped dramatically thanks to the near extinction of winter ‘flu bugs, a lower road toll, and people generally staying closer to home and out of harm’s way.

In the year to September, 1473 fewer New Zealanders died than in the same period last year, a fall of 4.3%. Between March and September the statistical drop was much more stark, with 1680 fewer Kiwis passing on than at the same time in 2019.

October figures indicate we continue to cling to life, with 2558 deaths in four weeks compared with 2662 in a corresponding period last year.

Australia has seen a similar trend, with the death rate down 2-4% in 2020...

This is a good thing, unless you are a funeral director.

The resulting effects on the ‘death care’ sector, as it calls itself, can be seen in the latest results from InvoCare and Propel Funeral Partners, ASX-listed funeral service firms with large operations across Australasia.

InvoCare has around 23% and 21% of the Australian and New Zealand markets respectively, while Propel accounts for 6.3% of the Australian industry and is rapidly growing its presence in this country. InvoCare also has operations in Singapore.

InvoCare’s results for the half year to June 30 clearly show what happens when fewer people die.

Its revenue was down 6.2% to A$226.5m as a direct result of Covid-19, it told the ASX.

Operating earnings after tax fell 48% to A$11.7m, and the company reported an after-tax loss of A$18m although this was driven by a market adjustment to funds held for prepaid funerals, it said.

In New Zealand, Invocare's underlying ebitda for the half was down 23% to A$4.5m, despite the help of $1.6m in government wage subsidies. Revenue fell 16% to A$23.3m. 

Not only has the demand for funerals declined, but the lockdown period has demonstrated that lavish funerals are not as necessary as many thought, and so spending on each funeral has also declined:

Not only are funeral directors receiving fewer customers, but the ones they are looking after are spending less.

This is hardly surprising given Australian Covid restrictions limited the number of mourners at funerals to 10, while New Zealand banned funerals outright during the first lockdown.

This led to families choosing “lower value brands and direct cremation offerings”, InvoCare told the market.

The industry concedes that now people have seen it is possible to lay loved ones to rest without putting on a buffet and buying out a florist’s stock, there is growing awareness that funerals can be done more cost-effectively. 

All may not be lost though. The funeral industry is hurting in New Zealand and Australia right now, but everyone dies eventually. You'd be hard-pressed to find an industry with a more stable long-term future. And it is possible that the funeral industry might actually be due for a boom after the coronavirus has passed.

Some researchers put forward the 'dry tinder' hypothesis (see here and here) as an explanation for the high proportion of coronavirus deaths occurring in Sweden. They argued that, because Sweden had a relatively light flu season in 2019 with few deaths among older people, there were more weak or high-risk older people in the population when the coronavirus struck, leading to a higher number of deaths. The hypothesis is supported by some data, but as far as I am aware it is not widely accepted. If the hypothesis is true though, New Zealand and Australia, having had a quiet flu season in 2020 as well as few coronavirus deaths, may be due a worse season in 2021. Or in 2022, if social distancing, hand washing and wearing masks, etc. are still widespread practices next year (which seems likely).

It is probably not time just yet to sell off your shares in the funeral industry.


Wednesday 16 December 2020

Is Twitter in Australia becoming less angry over time?

A couple of days ago, I posted about Donald Trump's lack of sleep and his performance as president. One of the findings of the research I posted about was that Trump's speeches and interviews were angrier after a night where he got less sleep. However, if you're like me, you associate Twitter with the angry side of social media, so late-night Twitter activity would seem likely to make anyone angry, sleep deprivation or not.

One other thing that seems to make people angry is the weather, particularly hot weather. So, it seems kind of natural that sooner or later some researchers would look at the links between weather and anger on social media. That is exactly what this recent article by Heather Stevens, Petra Graham, Paul Beggs (all Macquarie University), and Ivan Hanigan (University of Sydney), published in the journal Environment and Behavior (sorry, I don't see an ungated version, but there is a summary available on The Conversation), does.

Stevens et al. looked at data on emotions coded from Twitter, average daily temperature, and assault rates, for New South Wales for the calendar years 2015 to 2017. They found that:

...assaults and angry tweets had opposing seasonal trends whereby as temperatures increased so too did assaults, while angry tweets decreased. While angry tweet counts were a significant predictor of assaults and improved the assault and temperature model, the association was negative.

In other words, there were more assaults in hot weather, but angry tweets were more prevalent in cold weather. And surprisingly, angry tweets were associated with lower rates of assault. Perhaps assault and angry Twitter use are substitutes? As Stevens et al. note in their discussion:

It is possible that Twitter users are able to vent their frustrations and hence then be less inclined to commit assault.

Of course, this is all correlation and so there may be any number of things going on here. However, the main thing that struck me in the article was this figure, which shows the angry tweet count over time:

The time trend in angry tweets is clearly downward sloping (see the blue line) - angry tweeting is decreasing over time on average. Stevens et al. don't really make a note of this or attempt to explain it. You might worry that this is driving their results, since temperatures are increasing slowly over time due to climate change. However, their key results include controls for time trends. Besides, you can see that there is a seasonal trend to the angry tweeting data around the blue linear trend line.

I wonder - is this a general trend, or is there something special about Australia, where Twitter is becoming more hospitable? The mainstream media seems to suggest that Twitter is getting angrier over time, not less angry. Or, is this simply an artefact of the data, which should lead us to question the overall results of the Stevens et al. paper? You can play with the WeFeel Twitter emotion data yourself here, as well as downloading tables. It clearly looks like anger is decreasing over time, but that may be a result of changes in how Twitter is used (and who is using it) over time, and especially of changes in language use over time.

I would want to see some additional analysis on other samples, and using other methods of scoring the emotional content of Twitter activity, before I conclude that Twitter is angrier when it is colder, or that Twitter anger is negatively associated with assault. On the plus side, the WeFeel data looks like something that may be worth exploring further in other research settings, if it can be shown to be robust.

Tuesday 15 December 2020

Using economics to explain course design choices in teaching

Given that I am an economist, it probably comes as no surprise that I use economics insights in designing the papers I teach. So, extra credit tasks that incentivise attendance in class can be explained using opportunity costs and marginal analysis (marginal costs and benefits), and offering choice within and between assessment tasks can be explained using specialisation. I also illustrate related concepts in the examples I use. For example, in my ECONS102 class the first tutorial includes a question about the optimal allocation of time for students between two sections of the test.

Although economics underpins many of my course design choices, I don't make the rationale for those choices explicit to students. Making the rationale explicit is exactly what this 2018 article by Mariya Burdina and Sue Lynn Sasser (both University of Central Oklahoma), published in the Journal of Economic Education (sorry, I don't see an ungated version online), promotes. They recommend that:

...instructors of economics use economic reasoning when providing the rationale for course policies described in the syllabus. In this context, the syllabus becomes a learning tool that not only helps economics instructors increase students’ awareness of course policies, but at the same time makes course content more applicable to real-life situations.

To me, the problem is that making these choices clear and using economics to explain them engages students with concepts and explanations that they are not necessarily prepared for, since those concepts will not be fully explained until later in the paper. It's also not clear from Burdina and Sasser's article that the approach even has positive benefits. They present some survey evidence comparing one class that had economic explanations as part of the syllabus discussion and one class that didn't, but they find very few statistically significant differences in attitudes towards the course policies. This is very weak evidence that the approach changes students' understanding of the rationale for course design.

Overall, I'm not convinced that Burdina and Sasser's approach is worth adopting. I'll continue to use economic rationales for course design decisions, but I will continue to make those rationales clear only when students explicitly ask about them.

Monday 14 December 2020

The poor performance of 'Sleepy Donald' Trump

Back in 2018, I wrote a post about how sleep deprivation (as measured by late-night activity on Twitter) affected the performance of NBA players. It appears the research that post was based on inspired other researchers, as this new paper by Douglas Almond and Xinming Du (both Columbia University) published in the journal Economics Letters (appears it may be open access, but just in case there is an ungated version here) demonstrates. [*] Almond and Du look at the late-night Twitter activity of U.S. President Donald Trump over the period from 2017 to 2020. Specifically, they first show that Trump's Twitter activity begins at about 6am each morning (and this is fairly constant across the whole period examined). So, when Trump tweeted after 11pm at night, Almond and Du infer that he must have had less than seven hours of sleep that night.
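
The classification step itself is straightforward. Here is a minimal sketch (with hypothetical timestamps, and not the authors' code) of how a late-tweeting night might be flagged from tweet times:

    # Flag nights with tweets at or after 11pm as short-sleep (< 7 hours) nights,
    # given that daily Twitter activity resumes at around 6am
    from datetime import datetime

    tweets = [                           # hypothetical timestamps
        datetime(2020, 5, 3, 23, 40),
        datetime(2020, 5, 4, 6, 15),
        datetime(2020, 5, 4, 14, 2),
        datetime(2020, 5, 5, 6, 5),
    ]

    late_nights = set()
    for t in tweets:
        if t.hour >= 23:
            # a tweet after 11pm, with activity resuming around 6am, implies
            # less than seven hours of sleep that night
            late_nights.add(t.date())

    print(late_nights)   # {datetime.date(2020, 5, 3)}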

Using their dataset, Almond and Du first demonstrate that Trump's late-night Twitter activity has been increasing over time, meaning that he has been getting less sleep over time:

The likelihood of late tweeting increases by 0.22 in 2019 and 0.38 in 2020 relative to the omitted year (2017). This is equivalent to a 183% and 317% increase relative to the 2017 mean, respectively. Additionally, the number of late-night tweets increases over time. He posts roughly one more tweet per night in 2020, a sixfold increase compared with 2017 when he tweeted late about once per week...

So, by 2020 Trump was having more than four times as many late nights as in 2017, and tweeting late at night more frequently. What effects do these late nights have? Almond and Du turn to some proxy measures of the quality of the President's work the next day - the number of likes, retweets, and replies received by his tweets the day after a late night, compared with tweets the day after he gets more sleep. They find that:

...tweets after a late-tweeting night receive 7400 fewer likes, 1300 fewer retweets and 1400 fewer replies, or 8%, 6.5% and 7% fewer reactions relative to the mean. We interpret these less-influential postings as lower tweet quality.

And, just as for your average toddler, a late night affects the President's mood (measured by the emotion of his speeches and interviews as recorded by the Fact Checker website):

...the proportion of happy transcripts decreases 4.4 percentage points (4.9%) following a late night. Despite his being happy in 88% [of] transcripts, late-tweeting nights and more late tweets appear to make him less happy the following day... Meanwhile, the proportion of angry transcripts increases by 2.9 percentage points after a late night, a nearly three-fold increase compared with the mean 1.1%.

Finally, they look at the betting odds of Trump (and his main opponent/s) winning the 2020 Presidential election, and find that:

...a significant relationship between late tweeting and his competitor’s odds. After a late night, more people believe the leading candidate other than Trump is more likely to win and wager on Trump’s opponent. The implied chance of his competitor’s winning increases by .6 percentage points, or 4.8% relative to the mean.

Overall, it appears that lack of sleep negatively affected Trump's performance (just as it does for NBA players). It may even have reduced his chance of re-election last month. Perhaps he should have worried less about 'Sleepy Joe' Biden, and more about 'Sleepy Donald' Trump?

*****

[*] Although the last line of my 2018 post posed the question of whether President Trump's late-night Twitter activity affected his Presidential performance, I can unfortunately take no credit for this research.

Sunday 13 December 2020

Climate change risk, disaster insurance, and moral hazard

One problem that insurance companies face is moral hazard - where one of the parties to an agreement has an incentive, after the agreement is made, to act differently than they would have acted without the agreement, capturing additional benefits for themselves at the expense of the other party. Moral hazard is a problem of 'post-contractual opportunism'. The moral hazard problem in insurance occurs because the insured party passes some of the risk of their actions onto the insurance company, so the insured party has less incentive to act carefully. For example, a car owner won't be as concerned about keeping their car secure if they face no risk of loss in the case of the car being stolen.

Similar effects are at play in home insurance. A person without home insurance will be very careful about keeping their house safe, including where possible, lowering disaster risk. They will avoid building a house on an active fault line, or on an erosion-prone clifftop, for example. In contrast, a person with home insurance has less incentive to avoid these risks [*], because much of the cost of a disaster would be borne by the insurance company.

Unfortunately, there are limited options available to deal with moral hazard in insurance. Of the four main ways of dealing with moral hazard generally (better monitoring, efficiency wages, performance-based pay, and delayed payment), only better monitoring is really applicable in the case of home insurance. That would mean the insurance company closely monitoring homeowners to make sure they aren't acting in a risky way. However, that isn't going to work in the case of disaster insurance. Instead, insurance companies tend to try to shift some of the risk back onto the insured party through insurance excesses (the amount that the loss must exceed before the insurance company is liable to pay anything to the insured - this is essentially the amount that the insured party must contribute towards any insurance claim).
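
As a simple illustration of how an excess shifts some of the risk back onto the insured (with hypothetical numbers):

    # With an excess, the insurer only pays the part of the loss above the
    # excess; the insured bears the rest (hypothetical numbers)
    def insurer_payout(loss, excess):
        return max(loss - excess, 0)

    excess = 5_000
    for loss in (3_000, 20_000, 250_000):
        paid = insurer_payout(loss, excess)
        print(f"loss ${loss:>7,}: insurer pays ${paid:>7,}, insured bears ${loss - paid:>6,}")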

And that brings me to this article from the New Zealand Herald from a couple of weeks ago:

Thousands of seaside homes around New Zealand could face soaring insurance premiums - or even have some cover pulled altogether - within 15 years.

That's the stark warning from a major new report assessing how insurers might be forced to confront the nation's increasing exposure to rising seas - sparking pleas for urgent Government action.

Nationally, about 450,000 homes that currently sit within a kilometre of the coast are likely to be hit by a combination of sea level rise and more frequent and intense storms under climate change...

The report, published through the Government-funded Deep South Challenge, looked at the risk for around 10,000 homes in Auckland, Wellington, Christchurch and Dunedin that lie in one-in-100-year coastal flood zones.

That risk is expected to increase quickly.

In Wellington, only another 10cm of sea level rise - expected by 2040 - could push up the probability of a flood five-fold - making it a one-in-20-year event.

International experience and indications from New Zealand's insurance industry suggest companies start pulling out of insuring properties when disasters like floods become one-in-50-year events.

By the time that exposure has risen to one-in-20-year occurrences, the cost of insurance premiums and excesses will have climbed sharply - if insurance could be renewed at all.

Because insurance companies have few options for dealing with moral hazard associated with disaster risk, their main feasible response is to increase insurance excesses, passing more of the risk back onto the insured. Increasing insurance premiums also reflects the higher-risk nature of the insurance contract. Even then, in some cases it is better for the insurance company to withdraw cover entirely from houses with the highest disaster risk.

I'm very glad that the Deep South report and the New Zealand Herald article avoided the trap of recommending that the government step in to provide affordable insurance, or subsidise insurance premiums for high-risk properties. That would simply make the moral hazard problem worse. Homeowners (and builders/developers) need the appropriate incentives related to building homes in the highest risk areas. Reducing the risk to homeowners (and buyers) by subsidising their insurance creates an incentive for more houses to be built in high-risk locations. However, as the New Zealand Herald article notes:

Meanwhile, homeowners were still choosing to buy, develop and renovate coastal property, and new houses were being built in climate-risky locations, said the report's lead author, Dr Belinda Storey of Climate Sigma.

"People tend to be very good at ignoring low-probability events.

"This has been noticed internationally, even when there is significant risk facing a property.

"Although these events, such as flooding, are devastating, the low probability makes people think they're a long way off."

Storey felt that market signals weren't enough to effect change - and the Government could play a bigger role informing homeowners of risk.

Being unable to insure one of these properties creates a pretty strong disincentive to buying or building them. Perhaps the withdrawal of insurance cover isn't a problem, but rather the solution to one?

*****

[*] Importantly, the homeowner with insurance still faces some incentive to avoid disaster risk, because while the loss of the home and contents may be covered by insurance, there is still a risk of loss of life, injury, etc. in the case of a disaster, and they will want to reduce that risk.

Wednesday 2 December 2020

Edward Lazear, 1948-2020

I'm a little late to this news, but widely respected labour economist Ed Lazear passed away last week. The New York Times has an excellent obituary, as does Stanford, where Lazear had been a professor since the mid-1990s.

Lazear is perhaps best known to most people as the chairman of George W. Bush's Council of Economic Advisors at the time of the Global Financial Crisis. However, my ECONS102 students would perhaps recognise him as the originator of the idea of tournament effects in labour economics, as an explanation for why a small number of workers in certain occupations receive much higher pay than others who are only slightly less productive. His contributions to economics ranged across labour economics and the economics of human resources, as well as the economics of education, immigration, and productivity. Many past Waikato economics students would have been exposed to his work in a third-year paper on the economics of human resources that, sadly, we no longer teach.

Lazear's book Personnel Economics has been recommended to me by several people, but I have yet to purchase a copy. You may see a book review of it here at some point in the future. Similarly, I anticipate additional bits of content from Lazear popping up in my ECONS102 topic on the labour market. Unfortunately, he was never in the conversation for a Nobel Prize, with many other labour economists likely higher up in the queue. Nevertheless, he will be missed.

[HT: Marginal Revolution]