Sunday, 26 February 2017

Book Review: Diversity Explosion

I just finished reading "Diversity Explosion: How new racial demographics are remaking America", by William Frey. I thought this would be quite relevant to my work on the CADDANZ (Capturing the Diversity Dividend for Aotearoa New Zealand) project, and a good follow-up to reading Thomas Schelling's "Micromotives and Macrobehavior" (which I reviewed a couple of weeks ago).

Overall, I found the book to be a really interesting read, though that was mainly down to the first few chapters and the last few. Frey essentially uses the book to outline the demographic (specifically, ethnic) changes that the US has undergone over the period 2000-2010. He takes a longer perspective in parts, but mostly the focus is on the (sometimes dramatic) subnational changes that have occurred and will continue into the future. A couple of the trends were quite surprising to me, such as this bit:
...white-only flight to the suburbs is a thing of the past. In fact, nearly one-third of large metropolitan suburbs showed a loss of whites between 2000 and 2010, and Hispanics are now the biggest drivers of growth of the nation's metropolitan population in both cities and suburbs. Today, it is racial minorities, in their quest for the suburban dream, who are generating new growth and vitality in the suburbs, just as immigrant groups did in the cities in an earlier era.
And this bit:
Thirteen of the 20 cities with the largest black populations (including nine of the ten largest) registered declines in their black populations in 2000-10...
The latter trend is related to the former, in the sense that there has been a huge increase in suburbanisation of the black population since 2000, to the extent that Hispanics (rather than blacks) now constitute the largest minority group in U.S. cities as a whole.

Some parts of the book seemed rather quaint to me (or may simply seem odd to you), such as the old-fashioned treatment of race vs. ethnicity, where Hispanic is an ethnic classification but all others are racial. All the papers I have read that talk about the "non-Hispanic white" population suddenly made a bit more sense, although the classification itself is really worrying. To be fair, Frey himself notes:
It can be argued that the distinction between race and ethnicity, as the Census Bureau applies it to the Hispanic population, is an artificial one.
We can only work with the data that are available, and until the US Census Bureau gets its act together on ethnic classification, US studies are pretty much stuck with the current situation.

This bit also made me smile:
It was punctuated by the arrival in 2011 of the first "majority-minority" birth cohort: the first cohort in which the majority of U.S. babies were nonwhite minorities...
I smiled because it reminded me of this cartoon, which sums up the white-centric nature of the above quote (and to some extent, the whole book):


The early chapters introduce the idea of the 'cultural generation gap' between "the diverse youth population and the growing, older, still predominantly white population". This generation gap will be exacerbated by the faster population growth of the more youthful minorities (especially Hispanics) compared with slower population growth among whites.

Most of the middle chapters will not be of interest to the general reader (unless you are really interested in the finer points of the subnational distribution of ethnic groups in the U.S.), but the final chapters are more interesting, including where Frey attempts to draw out the political or electoral implications of the changing ethnic mix. I wish I had read this book before the 2016 presidential election, and in light of that result there are some sentences that Frey might want back, such as:
If Romney could have eked out a victory, perhaps with greater white voter turnout, it would probably have been the "last hurrah" for a party strategy that relied primarily on whites as its base.
Which, of course, was exactly Trump's strategy (if you believe there was a strategy). A map later in that chapter was really interesting, because it laid out the states that Obama won in the 2012 election on the strength of minority votes rather than white votes. With the benefit of hindsight, perhaps viewing that map would have given some of the Clinton boosters pause.

Anyway, there are a number of interesting graphics in the book, with thoughtful (and thought-provoking) commentary. I will probably look to reproduce some of the graphics (but using New Zealand data) sometime in the future. This book would be a good read for anyone interested in ethnicity statistics, particularly in the context of the U.S.

[HT: Tahu Kukutai, for the cartoon, which she used in her Pathways conference presentation (PDF) last year]

Saturday, 25 February 2017

Kenneth Arrow, 1921-2017

Last week one of the greatest minds in economics passed away. In 1972, at the age of 51, Kenneth Arrow was the youngest ever to win the Nobel Prize in economics (a record he still holds), and five of his students have gone on to win Nobel Prizes as well, including Eric Maskin, John Harsanyi, Michael Spence, and Roger Myerson. Maskin's student Jean Tirole has also won a Nobel Prize, so that is a 'family tree' that is really unmatched (at least in economics).

Arrow's contributions are really too many to describe in detail (A Fine Theorem has made a start - the first of four promised posts is here), but I see four main areas:

  1. Social choice theory - Arrow's impossibility theorem, which we discuss in ECON110, details how there can be no 'perfect' voting system for ranking three or more alternatives that satisfies a set of axioms that most people would expect from a 'fair' voting system (a small demonstration of the underlying problem follows after this list);
  2. General equilibrium - Arrow, along with Gerard Debreu, proved the (theoretical) existence of a market-clearing equilibrium set of prices across all markets, and that the resulting equilibrium would be efficient (this is one of the bases for economists' claims that competitive markets are efficient);
  3. Health economics - Arrow wrote the first paper that teased out the complexities of health care as a good (which we discuss in ECON110), including the fundamentals of asymmetric information that Spence and others built on; and
  4. Measurement of risk - Arrow outlined the mathematical concepts that economists use to measure risk.
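To see the problem that Arrow generalised, consider the Condorcet paradox, which the impossibility theorem builds on. A minimal sketch (my own toy example, not anything from Arrow's proof):

```python
# Toy illustration of the Condorcet paradox behind Arrow's impossibility
# theorem: pairwise majority voting over three candidates can cycle.
from itertools import combinations

# Each voter ranks candidates A, B, C from most to least preferred.
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")
# A beats B, and B beats C, yet C beats A: a cycle, so pairwise majority
# voting produces no coherent 'social ranking' of the three candidates.
```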
Of course, he made many other contributions than those outlined above. Tim Harford has a great post here, and the New York Times obituary is here. And here is a recent (2016) interview with the great man. He will be missed.

[Update: The second post from A Fine Theorem is here]

Friday, 24 February 2017

Climate change won't much affect internal migration in NZ

Climate change is likely to be one of the key challenges facing humankind over the coming century (or more). We are likely facing increases in mean temperature, desertification, rising sea levels, and increasing frequency and intensity of extreme weather. But how big is the impact likely to be on a country like New Zealand, anyway?

In a new working paper, I evaluate the impact of climate change on internal migration in New Zealand, and what that means for the future spatial distribution of population. That is, which regions are likely to gain population from climate change, and which will lose population? I make use of a gravity modelling framework (which I have written about before). Essentially, a gravity model suggests that the migration flow between two regions is positively related to the population of the origin and the population of the destination, and negatively related to the distance between the two places. I tried out a bunch of climate variables from NIWA to find those that appeared to have the biggest impact on internal migration, using data on inter-regional migration from the last four Censuses (1991-2013).
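For readers who want to see the mechanics, here is a minimal sketch of the standard log-linear gravity specification, fitted to made-up data (the variables and numbers are hypothetical; the working paper's actual specification also includes the climate variables and other controls):

```python
# Minimal gravity-model sketch with made-up data (hypothetical; not the
# paper's dataset or full specification):
#   log(flow_ij) = b0 + b1*log(pop_i) + b2*log(pop_j) + b3*log(dist_ij) + e
import numpy as np

rng = np.random.default_rng(42)
n = 200
pop_origin = rng.uniform(5e4, 1.5e6, n)   # origin populations
pop_dest = rng.uniform(5e4, 1.5e6, n)     # destination populations
dist = rng.uniform(20, 1000, n)           # inter-regional distances (km)

# Simulate flows consistent with gravity (elasticities 0.8, 0.7, -1.0)
log_flow = (0.8 * np.log(pop_origin) + 0.7 * np.log(pop_dest)
            - 1.0 * np.log(dist) + rng.normal(0, 0.3, n))

# Estimate the elasticities by ordinary least squares
X = np.column_stack([np.ones(n), np.log(pop_origin),
                     np.log(pop_dest), np.log(dist)])
beta, *_ = np.linalg.lstsq(X, log_flow, rcond=None)
print(beta.round(2))  # recovers roughly [0, 0.8, 0.7, -1.0]
```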

Three climate variables are found to have statistically significant associations with internal migration: (1) mean sea level pressure in the destination; (2) surface radiation in the origin; and (3) wind speed at ten metres in the destination. The signs of the effects suggest that migrants are attracted to areas with more settled weather (higher mean sea level pressure); that migrants are less likely to move away from areas with more sunlight hours (but, interestingly, don't move towards those areas); and that migrants prefer to avoid moving to areas that are windier.

I then embedded the gravity model within a cohort-component population projection model, which is something that Jacques Poot and I have been working on for a number of years. I used the projections model to evaluate the effect of different climate change scenarios on regional populations out to a horizon of 2100.
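Stripped right down, one projection step of a cohort-component model looks something like the sketch below (a toy version with one sex, three broad age groups, two regions, and invented rates; the real model is far more detailed, and the net migration flows come from the gravity model):

```python
# Toy cohort-component projection step (invented rates; illustration only).
import numpy as np

# Rows = age groups (young, working-age, older); columns = two regions
pop = np.array([[40_000., 30_000.],
                [50_000., 35_000.],
                [60_000., 45_000.]])

survival = np.array([0.995, 0.99, 0.90])  # per-period survival by age group
fertility = 0.45                          # births per working-age person
net_mig = np.array([[500., -500.],        # net internal migration by age
                    [800., -800.],        # group and region (in the paper,
                    [-200., 200.]])       # these come from the gravity model)

def project_one_period(pop):
    new_pop = np.zeros_like(pop)
    new_pop[0] = fertility * pop[1]                # births enter youngest group
    new_pop[1] = survival[0] * pop[0]              # survivors age upwards
    new_pop[2] = survival[1] * pop[1] + survival[2] * pop[2]
    return new_pop + net_mig                       # then add net migration

for period in range(3):
    pop = project_one_period(pop)
print(pop.round(0))
```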

Including the three climate variables in the population projection model makes a small difference to the regional population distribution. The inclusion of climate variables increases the projected populations of Northland, Bay of Plenty, Gisborne, Hawke’s Bay, Taranaki, and Nelson. The overall impact is quite small, as you can see from the diagram below for Northland. The orange line tracks the projected population of Northland excluding any impact of climate, while the grey line includes the impact of climate. Bear in mind that Northland shows the biggest effects in relative terms - the effects on other regions are smaller.


I also looked at the effect of different climate change scenarios, and the difference between different climate scenarios is negligible. The diagram below shows the projections under different climate scenarios for the Southland region. As you can see, there is little difference between them (and that result is similar for other regions as well).



Overall, the results suggest that, while statistically significant, climate change will have a negligible effect on the population distribution of New Zealand at the regional level. This is not to say that climate change will not have important and substantial effects at very localised levels, as a result of sea level rise, for instance. However, most if not all of the displacement of people will be within regions. For example, maybe those displaced by sea level rise simply move a little further inland, or we build walls to keep the sea at bay.

Read the full working paper here.

Thursday, 23 February 2017

Price discrimination in tourism... India edition

This short post by Alex Tabarrok at Marginal Revolution contained this picture:


So, foreign nationals at the National Museum of India in Delhi pay more than thirty times the price that Indian nationals do. Of course, this type of price discrimination is a topic I've written about before. Here's what I said then (in the context of entry to Ayutthaya in Thailand, but the principle is identical):
Of course, this is an example of price discrimination - where different consumers (or groups of consumers) are charged different prices for the same good or service, and where the difference in price does not arise because of a difference in cost. So, in this case there are two groups (foreigners and locals) paying different prices for the same thing (entry into Ayutthaya, or some other tourist attraction).
How can they get away with this? Well first, price discrimination is not illegal. If it were, then you couldn't haggle over any prices (and haggling is almost mandatory if you are shopping at the markets in Thailand and don't want to get ripped off!). Second, the seller needs some degree of market power - they need to be able to set the price. Since there are few substitutes for seeing Ayutthaya and there is only one supplier, that guarantees some market power here. Ok, so that's the basic market condition for price setting sorted.
For price discrimination to work though, you need to meet three conditions:
1. Different groups of customers (a group could be made up of one individual) who have different price elasticities of demand (different sensitivity to price changes);
2. You need to be able to deduce which customers belong to which groups (so that they get charged the correct price); and
3. No transfers between the groups (since you don't want the low-price group re-selling to the high-price group).
Those conditions are generally met in the case of tourist attractions. Foreign tourists have low sensitivity to price (low price elasticity of demand) for a few reasons - there are few substitutes for visiting Ayutthaya (or other tourist attraction). Foreign tourists have usually also travelled a long way at great cost to get to Thailand, so the cost of entry into Ayutthaya is pretty small in the overall cost of their holiday. For these reasons, the foreign tourists are relatively insensitive to price and raising the price of entry isn't going to keep them away in great numbers.
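To put some toy numbers on that logic: with constant-elasticity demand, a price-setting seller's profit-maximising price is a markup over marginal cost of e/(1+e), where e is the (negative) price elasticity of demand. A quick sketch with invented elasticities (nothing here is based on the museum's actual demand):

```python
# Toy third-degree price discrimination example (invented numbers, not
# the museum's actual demand). With constant-elasticity demand
# q = A * p**e, the profit-maximising price is p* = mc * e / (1 + e).
def optimal_price(mc, elasticity):
    assert elasticity < -1, "demand must be elastic at the optimum"
    return mc * elasticity / (1 + elasticity)

mc = 10  # marginal cost of serving one extra visitor (hypothetical)

p_local = optimal_price(mc, elasticity=-3.0)    # locals: price-sensitive
p_tourist = optimal_price(mc, elasticity=-1.1)  # tourists: price-insensitive

print(f"locals pay {p_local:.0f}, tourists pay {p_tourist:.0f}")
# locals pay 15, tourists pay 110: a modest difference in elasticity
# is enough to support a very large difference in price.
```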
Tabarrok asks:
 Is this fair or ethical? Would it be legal in the United States?
I don't know about the US, but it's not illegal here (or, evidently in India or Thailand). In fact, as I argued in that earlier post I'm surprised we don't see more of it in New Zealand. Price discrimination is a legitimate way for firms to extract additional profits from consumers who are willing to pay higher prices. It's just that it isn't usually quite as overt as in Alex's example.

Wednesday, 22 February 2017

Judge rules that snuggies are blankets, not clothes

Whatever faults you think New Zealand's GST regime has, the one thing it is definitely not is overly complex. At least, not in comparison to similar VAT or GST regimes in other countries. New Zealand's GST applies at the same rate (currently 15%) to pretty much all goods and services, with the exception of exempt services like dwelling rents, interest payments, and some other specified financial services (see here if you are interested, or if you are an insomniac looking for a cure). Overall, this makes it fairly easy to work out which goods and services attract the tax, and what rate applies. It also avoids costly and stupid legal cases like the great Jaffa Cake controversy in the UK (a controversy that was reignited by an episode of The Great British Bake Off - I kid you not). Anyway, on the original controversy, the Guardian reported:
As a nation we face many challenges and issues that no doubt we should be keenly debating. However, when gathered around their kettles you are just as likely to hear a couple of Brits trying to decide on the precise phylogeny of the biscuits, bars and assorted cakes that they are about to dispatch with their brew. To many of us, trying to win acceptance for our world view on whether a certain item is a cake or biscuit seems like quite an important task.
Typically, this debate has raged most fiercely over the Jaffa Cake, which despite being called a cake and having sponge cake as its base will always be viewed by many as a biscuit. However, within the industry it's long been known that there is a Cinderella figure in the "cake or biscuit" debate, the teacake, which was once a cake, then became a biscuit and then a cake again.
"Does it really matter?" you may wearily sigh. Well, in the case of the Jaffa Cake it mattered a great deal to the taxman, as the somewhat bizarre rules concerning which products were subject to VAT and which escaped it due to zero-rating were a bit woolly in regard to the inhabitants of the biscuit kingdom. As a cake, the Jaffa was zero-rated, and given how many of them we see off as a nation that equates to a great deal of missed revenue. Biscuits are zero-rated too, unless they are "luxury" items, which according to the guidelines includes any that have chocolate on top. Cakes, no matter how opulent or fancy, are always classed as a staple food and zero-rated. In 1991 McVitie's and the taxman famously had their day in court and after a 12-inch-wide Jaffa Cake was produced as evidence they found, that while the product also had characteristics of biscuits or confectionery which was not cake, it had sufficient characteristics of cakes to be a cake for the purposes of zero-rating.
That's right. A court was asked to rule on whether a Jaffa Cake was a cake (and therefore attracted no VAT) or a 'luxury biscuit' (in which case VAT would apply). In a similar vein, I read with interest this story about Snuggies from last week:
Snuggies -- the wearable fleece coverings found on infomercials -- is now considered a blanket and cannot be taxed as clothing following a decision by the United States Court of International Trade.
The U.S. Justice Department argued in court that Snuggies are apparel and should be subject to higher taxes than blankets. The court disagreed, and found that Snuggies should be considered blankets and taxed at a lower amount.
Judge Mark Barnett's ruling means that instead of paying 14.9 percent duties when bringing Snuggies into the U.S., importers will only have to pay 8.5 percent duties, according to Bloomberg News.
Chalk it up to the craziness of tax and tariff administration. Luckily we have no such foolishness here. Yet. Which is why we should avoid any temptation to carve out exemptions or reduced rates of GST, such as for 'fresh food'. I'm sure based on the two examples above you can imagine some interesting cases of what constitutes 'fresh food' if this sort of proposal went ahead.


Monday, 20 February 2017

The irrationality of NFL play-callers

I recently read two papers that both essentially conclude (based on different aspects) that NFL play callers are not rational (or more specifically, not rational and risk neutral - an important point I'll return to at the end of the post). Recall that a rational decision-maker weighs up the costs and benefits of a decision, and when faced with mutually exclusive options (such as choosing which play to run in an NFL game), they should choose the option with the greatest net benefit (benefits minus costs).

The first paper (by Jonathan Hartley, an MBA student at the Wharton School at the University of Pennsylvania) looks at play-callers' choices between an extra point attempt and a two-point attempt following a touchdown. A rational and risk-neutral play-caller should choose whichever play provides the greatest expected benefit (expected number of points). In this case, Hartley found:
Between 2002 and 2014, the extra point conversion rate was 99.2% (out of 7738 attempts). As the average two point conversion rate remained 0.475, the expected points from a two-point conversion remains 0.95, below the automatic 0.992 points...
Over 2 seasons since the implementation of the new rules [increasing the distance the extra point try is attempted from], the extra point conversion rate has fallen from 0.992 to 0.95. Moreover, the total number of 2 point conversion attempts per season has nearly doubled...
In other words, when the NFL changed the extra point to being attempted from a greater distance (thereby making it more difficult), the expected value of an extra point try fell from 0.992 to 0.95 points. The expected value of a two-point conversion remained steady at 0.95 points. So, a rational and risk-neutral play-caller should now be indifferent between an extra point attempt and a two-point conversion. However, as Hartley shows in the paper, most teams still attempt very few two-point conversions, even those teams that have a history of success at them. The paper itself is pretty rough, but I wish the MBA students here could do this sort of work!
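The arithmetic is simple enough to check directly, using the conversion rates quoted above (the variance calculation is my addition, and foreshadows the risk-aversion caveat at the end of this post):

```python
# Expected points and risk, using the conversion rates quoted above.
p_extra_old, p_extra_new = 0.992, 0.95  # extra point, before/after rule change
p_two = 0.475                           # two-point conversion success rate

print("old rules:", 1 * p_extra_old, "vs", 2 * p_two)  # 0.992 vs 0.95
print("new rules:", 1 * p_extra_new, "vs", 2 * p_two)  # 0.95 vs 0.95

# Equal expected points under the new rules, but not equal risk: the
# variance of a payoff of k points with success probability p is
# k**2 * p * (1 - p).
var_extra = 1**2 * p_extra_new * (1 - p_extra_new)  # ~0.05
var_two = 2**2 * p_two * (1 - p_two)                # ~1.00
print("variances:", round(var_extra, 3), round(var_two, 3))
```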

The second paper, by Noha Emara (Rutgers), David Owens (Haverford College), John Smith (Rutgers), and Lisa Wilmer (Florida State), is forthcoming in the Journal of Behavioral and Experimental Economics (ungated earlier version here), and looks at serial correlation in play-calling. Serial correlation occurs when you have a time series (like a series of plays) where each observation is related (positively or negatively) to earlier observations in the series. Obviously, an NFL offensive play-caller wants to call plays in a random way - what we call a mixed strategy. Mixed strategies are particularly important in sports - think of the choice of where to serve in tennis, or where to shoot a penalty or which way to dive as a goalkeeper in soccer (see here or here for more on this). If an NFL offensive play-caller doesn't effectively randomise their play calling, then the defence can potentially exploit some prior knowledge of the play about to be called.

Humans are rubbish at trying to create random series, and indeed that's what Emara et al. found, based on their dataset of more than 200,000 plays from the 2000-2012 NFL seasons:
...the previous pass variable is negative and significant in each specification. This provides evidence that, even after controlling for down, distance, field position, and other observables, play calling exhibits significant negative serial correlation. The Previous pass-Previous failure interaction estimate is negative significant in both of the specifications where it appears, suggesting that play calling becomes even more negatively serially correlated following a failed play.
To translate, play-callers are significantly more likely to call a running play after a previous passing play, and to call a passing play after a previous running play, than would be expected if they were selecting plays randomly. And on top of that, if the previous play was a failure (e.g. if it lost yards), then they are even more likely to change the play type on the following play.
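Here is a minimal sketch of what negative serial correlation in play calling looks like (simulated plays, not the authors' data or code):

```python
# Simulated play calling: a coach who switches play type 70% of the
# time, versus a truly random 50/50 coach (illustration only).
import numpy as np

rng = np.random.default_rng(0)

def simulate_plays(p_switch, n=10_000):
    plays = [1]  # 1 = pass, 0 = rush
    for _ in range(n - 1):
        if rng.random() < p_switch:
            plays.append(1 - plays[-1])  # switch play type
        else:
            plays.append(plays[-1])      # repeat play type
    return np.array(plays)

for label, p_switch in [("over-switching coach", 0.7), ("random coach", 0.5)]:
    plays = simulate_plays(p_switch)
    # correlation between each play and the one before it
    serial_corr = np.corrcoef(plays[:-1], plays[1:])[0, 1]
    print(f"{label}: serial correlation = {serial_corr:.2f}")
# The over-switching coach shows clearly negative serial correlation
# (about -0.4); the random coach is near zero. The first pattern is
# exactly what a defence could exploit.
```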

To make things worse, Emara et al. find evidence that teams would be better off if they ran more plays that were the same as the previous play:
We find that a rush following a rush gains 0.14 more yards than a rush following a pass. We also find that a pass following a pass gains 0.21 more yards than a pass following a rush. Estimates are more pronounced when we also control for whether the previous play was a failure. We find that a rush gains 0.24 more yards following a failed rush than following a failed pass. Also, a pass gains 0.34 more yards following a non-failed pass than following a non-failed rush.
In summary, we find evidence that the efficacy of a play, as measured by yards gained, increases if it follows a play of the same type.
The result is even stronger on second-down plays, but not so much for third-down plays. However, the take-away message, like that of the first paper, is that play-callers are not being purely rational.

However, there is a caveat here. If we think, based on this evidence, that play-callers should be calling more two-point conversions and switching up play types less often, then we may be forgetting that there is also a wider game at play here. If play-callers are risk averse, then this affects their decision-making. The two-point conversion may have the same expected value as an extra point attempt, but it is riskier (see also this post on NBA three-pointers from last week), so a risk averse play-caller may avoid the two-point conversion more than the simple comparison of expected values would suggest.

But what about the play-callers in the second paper? Emara et al. have thought about this, and this is what they offer:
Perhaps teams feel pressure not to repeat the play type on offense, in order to avoid criticism for being too “predictable” by fans, media, or executives who have difficulty detecting whether outcomes of a sequence are statistically independent. Further, perhaps this concern is sufficiently important so that teams accept the negative consequences that arise from the risk that the defense can detect a pattern in their mixing.
Making play calls that the fans think are predictable (but which are actually more random) may put the play-caller at risk of losing their job (or at least, of looking like they are doing a poor job). So, play-callers may attempt to make their play calls look more random by switching (from run to pass or vice versa) more often than they should, even though this is actually less random and costs the team in terms of yards gained per play.

The question is, now that these trends are known, will any team want to exploit them?

[HT: Marginal Revolution, here and here]

Sunday, 19 February 2017

Is social media reducing teen pregnancy?

A couple of years ago, I wrote a post about some Melissa Kearney research showing that the MTV show 16 and Pregnant reduced teen pregnancy. Here's what I said then:
So, the overall conclusion? If you believe in the IV approach, which many economists do, then 16 and Pregnant caused a significant reduction in teen births. If you don't believe in it, then watching more MTV is related to a significant reduction in teen births. Either of these is an interesting result in its own right.
Closer to home, New Zealand's teenage birth rate peaked at 32.85 per 1000 women aged 15-19 in 2008 and has declined since (to 24.89 in 2012; lower than the 29.4 in the U.S. in 2012). I wonder how many girls in New Zealand are watching 16 and Pregnant?
James from last year's ECON110 class shared this story with me (from March last year), which I've been holding onto for a while:
Teen pregnancies are at their lowest in eight years and some experts think social media could be partly responsible.
Data from Statistics New Zealand has revealed the number of teenage pregnancies in New Zealand among women under 20 years old has almost halved since 2007 - the year that social media became a global phenomenon.
In 2007, 4955 women under 20 fell pregnant. But last year there were just 2865 births to under 20-year-olds and a large majority of those were to 18- and 19-year-olds.
Some researchers have credited the stark drop to better access to contraception, better sex education and better parenting.
Others, however, have suggested social media may have played a part.
A leading paediatrics expert, University of Auckland Associate Professor Simon Denny, said it was possible social media had contributed to a reduction in "risk behaviours" including teenagers having unprotected sex.
"What we have seen is this reduction and at the same time we have had this explosion in social media," Dr Denny said.
"There are some suggestions that young people are spending more time inside rather than going outside and engaging in risk behaviours but there is no hard evidence on this at this point.
That, folks, is mistaking correlation for causation. While it makes a plausible story that teenagers are too busy on Snapchat and Instagram to have sex, the relationship may not be causal. Perhaps the causality runs in the opposite direction - teenagers are having less sex, so they spend more time on social media instead? Perhaps there is some third factor that has caused both an increase in social media use and a decrease in unprotected sex? Or more plausibly, perhaps these are just two trends that look like they're moving together but may actually be unrelated. If you doubt that the last of those would happen, try this highly significant correlation:


If you need any more, I suggest you go to Tyler Vigen's excellent site, Spurious Correlations (I particularly like that the number of movies Nicolas Cage has appeared in correlates highly with the number of drownings in swimming pools).
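It is easy to manufacture this sort of thing yourself. A quick simulation (which has nothing to do with the real social media or teen birth data):

```python
# Two independently generated trending series will still show a huge
# correlation (simulation only; no real data involved).
import numpy as np

rng = np.random.default_rng(1)
years = 15
social_media_use = np.cumsum(rng.uniform(0, 1, years))  # upward trend
teen_births = 35 - np.cumsum(rng.uniform(0, 1, years))  # downward trend

corr = np.corrcoef(social_media_use, teen_births)[0, 1]
print(f"correlation = {corr:.2f}")
# Strongly negative (close to -1), even though the two series were
# generated completely independently of each other.
```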

Anyway, on a more serious note, whether social media use and teen pregnancy are more than just spuriously correlated is an interesting research question. I'm sure a suitably motivated honours or masters student could dig into the data for New Zealand (and/or other countries) to find out.

[HT: James from my ECON110 class]

Saturday, 18 February 2017

Tim Harford on trade

Tim Harford quite often writes about trade. I particularly liked some of the things he wrote in this post last week, since they echo ideas that I discuss in both ECON100 and ECON110:
Economists disagree about most things, but for a couple of centuries they’ve agreed about the merits of free trade, basically for the reasons outlined above. But some readers may be faintly aware of cracks in that consensus — haven’t economists realised that free trade is sometimes bad?
Broadly, the answer is “no” — economists remain thoroughly persuaded of the merits of international trade. But there are cautionary notes. First, modern trade agreements tend to be loaded with rules — food safety, financial regulation, intellectual property — that are not about tariffs. Some of these rules are closely connected with trade itself: long arguments at customs can restrict trade just as surely as a border tariff. But others have little to do with trade, and sometimes the rules are simply bad. So you can favour free trade yet oppose some “free-trade” agreements, as many economists do.
I encourage you to read all of Harford's post, if you have any doubts about why free trade is a good idea. However, 'free trade' is not the same as 'free trade agreements', since most free trade agreements aren't about making trade more free at all, in the sense of reducing tariffs and other trade barriers. The Trans-Pacific Partnership is a case in point. While New Zealand may have gained from reduced tariffs on our exports into the U.S. and Japan, the neo-colonial provisions on intellectual property and investor-state dispute resolution made it, to me, a hard sell that the net effect would be positive. And when you consider the distributional impacts of the agreement (where the gains would be concentrated among farmers and other exporters, with the potential costs borne by everyone), it becomes an even harder sell. And it is this last point (the distributional consequences of trade) that has arguably contributed to Brexit, Trumpism, and other populist movements.

Harford also covers a similar point:
But deep down, trade is just another kind of productive technology — a technology that turns Minis into camembert [MC: To understand that point, you need to read Harford's whole post]. Like any productive technology, it makes us richer. But it creates winners and losers, and the winners may take their good fortune for granted while the losers are acutely aware of what they’ve lost. The losers have votes too. And if they’re frustrated about China, let’s see what happens if self-driving vehicles put several million truckers and taxi drivers out of work.
It's important for us not to lose sight of the fact that there are winners and losers in trade, and as this post notes, those who lose may face long-term consequences for which governments have not traditionally provided adequate compensation.

Friday, 17 February 2017

Why Pokemon Go probably won't save us from obesity

When Pokemon Go came out last year, many people lauded the potential of the augmented reality game to increase physical activity and improve health. I even wrote a semi-serious piece advocating that the government subsidise the game. However, the bubble has burst, and the game is pretty much dead now (at least, compared to where it began).

Aside from the overall decline in player numbers though, it seems that any benefits in terms of increased physical activity have been grossly overestimated. In its annual Christmas issue, the British Medical Journal had an (open access) article by Katherine Howe (Harvard) and others, on the effect of Pokemon Go on physical activity. The authors conducted a survey of 1182 iPhone 6 users in the U.S., and used screen captures of the number of steps the Pokemon Go players and non-players took. They compared players (before and after starting to play Pokemon Go) with non-players (before and after the median start date for the players), in a difference-in-difference analysis. Here's what they found:
Playing Pokémon GO was common across various subgroups of the population... players, however, tended to be younger, have a lower education and household income, and be obese, and were more likely to be single and less likely to be black compared with non-players... In the four weeks before installation of Pokémon GO, participants who played the game took on average 4256 (SD 2697) steps daily. The corresponding number for non-players in the four weeks preceding 8 July (median date of Pokémon GO installation among the players) was 4126 (SD 2930). After installation of the game, the daily steps among players increased sharply before gradually returning to the pre-installation levels, whereas the number of daily steps for non-players remained at similar levels throughout the study period...  The difference in difference analysis confirmed the pattern: Pokémon GO was associated with an increase in daily steps of 955 (95% confidence interval 697 to 1213) during the first week, the effect was gradually attenuated over the subsequent weeks, and by week 6 it was not significant...
In other words, Pokemon Go had a very small effect on physical activity - the authors note that an extra 1000 steps is about 11 minutes of walking, which is much less than any guidelines recommend. And that effect had more than halved within four weeks, and was essentially gone within six weeks. Hardly cause to hail the game as a solution to obesity. Back to the treadmill, I guess.
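The difference-in-differences logic reduces to simple arithmetic on group means. In the sketch below, the pre-period step counts are from the paper, but the post-period numbers are invented purely to show the mechanics:

```python
# Difference-in-differences mechanics. The 'before' means come from the
# paper; the 'after' means are invented for illustration only.
players_before, players_after = 4256, 5211        # 'after' is hypothetical
nonplayers_before, nonplayers_after = 4126, 4126  # 'after' is hypothetical

did = (players_after - players_before) - (nonplayers_after - nonplayers_before)
print(f"estimated effect of playing: {did} extra steps per day")
# With these made-up post-period numbers, the estimate is 955 steps,
# matching the paper's reported first-week effect. The point is that
# subtracting the non-players' change nets out any background trend.
```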

[HT: Stats Chat, back in December]

Wednesday, 15 February 2017

Brexit negotiations as a game of chicken

In a post last month, Tim Harford perceptively characterised the posturing between Britain and the European Union over Brexit as a game of chicken:
First: to be an effective negotiator often means accepting some risk of disaster. The simplest model of this is the game of “Chicken”, in which two leather-clad rebels get into their cars, and drive towards each other at a furious pace. The first one to veer off the road loses his dignity, unless neither of them swerve, in which case both of them will lose a lot more than that.
Chicken is an idiotic game, whose players have little to gain and much to lose. But Chicken teaches us that you can gain an advantage by limiting your own options. Imagine detaching your steering wheel and flamboyantly discarding it as you race headlong towards your opponent. Victory would be guaranteed. Nobody would drive straight at a car that cannot steer out of the way. But here’s a worrisome prospect: what if, as you hurl your own steering wheel out of the window, you notice that your rival has done exactly the same thing?
All this matters because both the UK and the EU are doing their best to give the impression that they’ve thrown their steering wheels away. Control of immigration is non-negotiable, says Theresa May. Fine, says the EU — in that case membership of the single market is out of the question. Fine, says May: we’re out. Don’t let the door hit you as you leave, says the EU.
It’s easy to see why both sides are behaving like this — it’s the logic of Chicken. But the eventual result may be something no sane person wants: a car crash. In May’s recent speech, she set out her willingness to risk such a crash by saying she might walk away without a deal. That does make some sense: it’s how you act if you want to win a game of Chicken. But there are games of Chicken that nobody wins.
The Brexit negotiations chicken game is laid out in the table below. The EU and the UK can choose to 'make concessions', or to 'play hardball'. If both make concessions, the outcome is essentially pretty neutral (a payoff of zero for both of them). However, if either the EU or the UK plays hardball while the other makes concessions, whichever of them plays hardball comes out better off (positive payoff) at the expense of the other (negative payoff).  Finally, if both play hardball, both will be much worse off (very negative payoffs).


Where are the Nash equilibriums in this game? To identify them, we can use the 'best response' method. To do this, we track: for each player, for each strategy, what is the best response of the other player. Where both players are selecting a best response, they are doing the best they can, given the choice of the other player (this is the definition of Nash equilibrium).

For our game outlined above:
  1. If the UK makes concessions, the EU's best response is to play hardball (since + is better than 0) [we track the best responses with ticks, and not-best-responses with crosses; Note: I'm also tracking which payoffs I am comparing with numbers corresponding to the numbers in this list];
  2. If the UK plays hardball, the EU's best response is to make concessions (since - is better than --);
  3. If the EU makes concessions, the UK's best response is to play hardball (since + is better than 0); and
  4. If the EU plays hardball, the UK's best response is to make concessions (since - is better than --).
Note that there are two Nash equilibriums, where one of the EU or the UK plays hardball, and the other makes concessions. However, both of them want to be the one playing hardball. This is a type of anti-coordination game (each side wants the other to be the one that concedes), and it is likely that both the EU and the UK will try to play hardball, leading both to incur big losses!
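Here is the best-response bookkeeping in code, with illustrative numbers standing in for the +, 0, -, and -- payoffs in the table (my numbers; only the ordering matters):

```python
# Pure-strategy Nash equilibria of the Brexit chicken game, with
# illustrative payoffs replacing the table's symbols:
# + = 5, 0 = 0, - = -5, -- = -20 (only the ordering matters).
strategies = ["concede", "hardball"]
# payoffs[(uk_strategy, eu_strategy)] = (UK payoff, EU payoff)
payoffs = {
    ("concede", "concede"): (0, 0),
    ("concede", "hardball"): (-5, 5),
    ("hardball", "concede"): (5, -5),
    ("hardball", "hardball"): (-20, -20),
}

for uk in strategies:
    for eu in strategies:
        uk_pay, eu_pay = payoffs[(uk, eu)]
        # Is each player's choice a best response to the other's choice?
        uk_best = all(uk_pay >= payoffs[(alt, eu)][0] for alt in strategies)
        eu_best = all(eu_pay >= payoffs[(uk, alt)][1] for alt in strategies)
        if uk_best and eu_best:
            print(f"Nash equilibrium: UK plays {uk}, EU plays {eu}")
# Prints the two asymmetric equilibria: one side plays hardball while
# the other concedes.
```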

The solution to getting your preferred equilibrium outcome in the chicken game is to make a credible commitment (such as removing the steering wheel, as Harford suggests for the classic chicken game). In the case of Brexit though, it isn't clear how either side can make a credible commitment to the hardball strategy, and both are already moving their feet towards the accelerator. But if neither is willing to make concessions, the outcome is clear.


Tuesday, 14 February 2017

Gender differences in risk-taking in pro basketball

One of the challenges of economics research is that you usually can't set up controlled experiments. For instance, you can't easily randomly assign people to different treatments (e.g. gender) to see if or how their behaviour or outcomes are affected. Sure, you can run laboratory or field experiments, but then you wonder whether the laboratory conditions affect the results and whether they hold in the 'real world'. Sports data provide an interesting solution to some of these issues, since sports involve known rules and the incentives are usually pretty clear (most sports competitors are trying to win).

Which brings me to this IZA discussion paper from last year by René Böheim, Christoph Freudenthaler, and Mario Lackner (all from Johannes Kepler University Linz). In the paper, the authors use data from the NBA and WNBA playoffs from 2002-03 to 2013-14 to investigate whether there are differences in risk taking behaviour between men and women. This is an important question to investigate, since it may go some of the way towards explaining the gender wage gap, for instance. Risk-taking behaviour is notoriously difficult to observe, but in sports data it is readily apparent to the knowledgeable fan when a player is undertaking a high-risk strategy.

Specifically, Böheim et al. look at the probability that a player attempts a three-point shot in the closing minutes of a playoff game. A three-point shot is more risky than a standard two-point shot - it carries a higher reward (greater probability of winning, particularly if your team is down by two points), but a higher risk (since the probability of successfully sinking a three-pointer is less than for a two-pointer). They find that:
male teams increase their risk-taking towards the end of matches when they are trailing by a small amount and a successful risky strategy could secure the winning of the match. Our key finding shows that female teams, in contrast, reduce their risk-taking in these situations. The less time left in a match, the larger is the gap. A detailed investigation shows however that this difference is the result of risk-taking in matches where the costs of an unsuccessful risky strategy are relatively lower. In situations where the costs of an unsuccessful risky strategy are large - losing the match and, in consequence, the round in the elimination tournament - we find no difference in risk-taking between male and female teams. One potential explanation for our results might be male overconfidence. If this were the reason for our findings, we would expect to observe a lower probability to throw successfully or to win the match. We do not find such evidence.
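To see why the three-pointer is the 'risky' strategy, consider a toy end-of-game calculation (my numbers, not the paper's): a team down by two on the final possession can attempt a two-pointer to force overtime, or a three-pointer to win outright.

```python
# Toy end-game calculation (invented shooting percentages, not taken
# from the paper): down 2 points, final possession.
p_two, p_three = 0.50, 0.35  # assumed success rates for each shot type
p_win_overtime = 0.50        # assume overtime is a coin flip

win_prob_two = p_two * p_win_overtime  # make the two, then win overtime
win_prob_three = p_three               # make the three, win outright

print(f"attempt two:   win probability = {win_prob_two:.2f}")    # 0.25
print(f"attempt three: win probability = {win_prob_three:.2f}")  # 0.35
# With these numbers the riskier three has the higher win probability,
# but it is all-or-nothing: miss it and the game is simply lost. That
# is exactly the gamble a risk-averse play-caller might avoid.
```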
The results are interesting, but I don't fully buy their interpretation. To get more specific, they found that male teams increase their risk-taking (i.e. they attempt more three-pointers) when they are behind in the game (by up to two points), but this effect is only present when the team is ahead or tied in the playoff series overall. A team that is trailing in the playoff series (e.g. down 1-2 in a best-of-seven NBA finals series) does not engage in significantly more risk taking when down by one or two points.

My prior here would be that teams that are trailing in the series would be more likely to throw caution to the wind in an attempt to catch up, but Böheim et al. find the opposite (but only for men). The authors put this finding down to the costs of an unsuccessful risky strategy being lower (for those teams that are leading or tied in the series, compared with those who are trailing). However, I would expect that the benefits of the risky strategy are also lower (for teams that are leading or tied in the series), so there isn't a clear theoretical case for their interpretation (although the empirical case is that the risk-reward is positive for teams that are ahead in the series).

An alternative explanation may be to do with the difference in the cost-benefit calculation for the individual player versus the team. If your team is ahead in the series, then the series MVP is more likely to come from your team, so running up your personal score carries a high individual benefit (in addition to the team's benefit), whereas the cost is borne mostly by the team since no one will remember the three-pointers you missed when your team was ahead in the series. On the other hand, if your team is behind in the series and you undertake the risky strategy of attempting three-pointers, both the benefits (the fans remember you as the guy who got them back into the series) and the costs (the fans may remember you as the guy who wasted all those scoring opportunities by putting up crazy three-point attempts) fall on the individual. So, overall the benefit-cost calculation tends towards more risky strategies by the individual player if the team is ahead in the series.

But only for men. Is that because women are more team-oriented, so the difference in individual vs. team costs and benefits is less important? Or perhaps the monetary and status rewards for being a series MVP are greater in the NBA than the WNBA? Men may take more risks, but the key question of why is hardly settled.

Bonus video clip (this sort of shot wasn't included in the dataset, as there is little in the way of strategy when it comes to buzzer beaters; and yes, I know it's not pro basketball, but it is cool nonetheless):



[HT: Marginal Revolution, back in July last year]

Sunday, 12 February 2017

Inflation for the rich and inflation for the poor

A couple of weeks ago, I wrote a post about Statistics New Zealand's new household living-cost price indexes. Overall, these indexes showed slightly higher inflation for those in the lowest income quintile (the lowest income earners) compared with inflation for those in higher quintiles. Xavier Jaravel (Stanford) has a new paper on a similar topic (using U.S. data). From the abstract:
Using detailed barcode-level data in the US retail sector, I find that from 2004 to 2013 higher-income households systematically experienced a larger increase in product variety and a lower inflation rate for continuing products. Annual inflation was 0.65 percentage points lower for households earning above $100,000 a year, relative to households making less than $30,000 a year. I explain this finding by the equilibrium response of firms to market size effects: (A) the relative demand for products consumed by high-income households increased because of growth and rising inequality; (B) in response, firms introduced more new products catering to such households; (C) as a result, continuing products in these market segments lowered their price due to increased competitive pressure.
More evidence that we should be careful how we interpret the overall inflation rate based on a single Consumer Price Index, and that it is probably appropriate for benefits, superannuation, and the minimum wage to be indexed to the median wage (or a similar measure of income) rather than the CPI.
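A toy example of how group-specific indexes can diverge, using made-up budget shares and price changes:

```python
# Toy group-specific inflation calculation with invented budget shares
# and price changes: the same prices imply different inflation rates
# for households with different spending patterns.
price_change = {"rent": 0.05, "food": 0.03, "electronics": -0.02, "travel": 0.02}

budget_shares = {
    "low income":  {"rent": 0.45, "food": 0.30, "electronics": 0.05, "travel": 0.20},
    "high income": {"rent": 0.25, "food": 0.15, "electronics": 0.25, "travel": 0.35},
}

for group, shares in budget_shares.items():
    inflation = sum(shares[item] * price_change[item] for item in shares)
    print(f"{group}: inflation = {inflation:.2%}")
# Low-income households, who spend more of their budget on rent and
# food, face higher measured inflation than high-income households.
```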

[HT: Marginal Revolution]

Wednesday, 8 February 2017

Book Review: Micromotives and Macrobehavior

When Thomas Schelling passed away last year, I mentioned that I hoped to read his 1978 book, "Micromotives and Macrobehavior", soon. I'm happy to say that I have now done so, and I even posted a little snippet from it last week. Overall, I think the book has aged really well. There is one chapter where Schelling essentially talks about choosing the chromosomes of our children; the technology has since moved well beyond what he envisaged as being possible, but if one reads that chapter and mentally replaces every instance of "chromosome" with "gene", it still seems to fit very well.

What is the book about? Schelling sums it up best himself, on p.13 (emphasis in the original):
What this book is about is a kind of analysis that is characteristic of a large part of the social sciences, especially the more theoretical part. That kind of analysis explores the relation between the behavior characteristics of the individuals who comprise some social aggregate, and the characteristics of the aggregate.
Schelling is very good at expressing mathematical models in ways that make them relatively easy to understand. That is not to say that you will understand them if you don't understand basic mathematics (especially algebra), only that people who do understand basic mathematics will quickly pick up the ideas that Schelling is putting forward.

This book makes a number of contributions that are interesting to the general reader. The first is how people sort or segregate themselves, and may do so based on very weak preferences for not being in the minority. This idea (the so-called checkerboard model of segregation) has underpinned a lot of interesting empirical research on ethnic segregation (which one of my PhD students is currently working on as well, in a study of ethnic diversity in Auckland). The second contribution is about how people's choices are influenced by the choices of others (as represented by population-level totals or averages). Do you prefer to do the same things as others, or different things? The third contribution is the analysis of the multi-person prisoners' dilemma, where the payoff to each player in the game depends on the choices made by many others (recall that in the standard prisoners' dilemma, there are only two prisoners). This contribution defies a simple exposition, so I encourage you to read it for yourself.
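For the curious, the checkerboard model is simple enough to sketch in a few lines of code. This is a minimal version of the idea only (not Schelling's original setup, and not my student's research):

```python
# Minimal Schelling checkerboard model: agents of two types prefer that
# at least 30% of their neighbours are like them. That weak preference
# is enough to generate strong segregation.
import numpy as np

rng = np.random.default_rng(7)
SIZE, THRESHOLD = 30, 0.3
# Grid cells: 1 and -1 are the two types of agent, 0 is an empty cell
grid = rng.choice([1, -1, 0], size=(SIZE, SIZE), p=[0.45, 0.45, 0.10])

def like_neighbour_share(grid, r, c):
    block = grid[max(0, r - 1):r + 2, max(0, c - 1):c + 2]
    neighbours = np.count_nonzero(block) - 1          # exclude self
    same = np.count_nonzero(block == grid[r, c]) - 1  # exclude self
    return same / neighbours if neighbours else 1.0

for step in range(50):
    unhappy = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r, c] != 0 and like_neighbour_share(grid, r, c) < THRESHOLD]
    empties = list(zip(*np.where(grid == 0)))
    rng.shuffle(unhappy)
    for (r, c) in unhappy:
        dest = empties.pop(rng.integers(len(empties)))  # random empty cell
        grid[dest] = grid[r, c]                         # agent moves there
        grid[r, c] = 0
        empties.append((r, c))

occupied = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r, c] != 0]
avg_same = np.mean([like_neighbour_share(grid, r, c) for r, c in occupied])
print(f"average share of like neighbours: {avg_same:.2f}")  # well above 0.3
```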

I also found this bit interesting (in relation to choosing children's genes):
IQ might be treated as a competitive trait; valuable as it may be for its own sake, it may be construed particularly valuable in a competitive society, whether the competition is based on IQ measurements themselves, on the school success to which IQ may contribute, or on competitive success in one's career. If it were widely believed that the genetic mixtures within most parents made it possible by chromosomal selection to raise the expected IQ of a child by many points above what it would have been by chance selection of the chromosomes; and if it became widely believed in certain social classes that nearly everybody was taking advantage of this opportunity; parents might feel coerced into practicing selection not out of any dissatisfaction with the prospective intelligence of their children, but to keep up with the new generation.
This is really interesting since I think it links to Robert Frank's ideas about positional competition (see for example here). While the resources necessary to select children's genes for higher intelligence (or for other desirable physical traits) are scarce (and costly), this is an option that is only available to the wealthy, and only a few genes might be targeted. The middle class might then desire similar selection for their children, and demand increases. To keep their relative advantage in the gene selection process, the wealthy might select on even more genes (a more costly process), and so on. Some food for thought anyway.

Overall, as I noted above, this book has aged really well, is still very relevant to many things, and is definitely worth a read.

Saturday, 4 February 2017

Students, rental shortages, and renting over the summer

The simple model of supply and demand teaches us that, when there is a shortage of a good, the price should rise. This is easily explained. As shown in the diagram below, if the current rent (R1) is below the equilibrium rent (R0), the quantity of rental properties demanded (QD1) exceeds the quantity of rental properties supplied (QS1). There is a shortage. In other words, at least some of the tenants who want to rent at the current market rent (R1) miss out on a property. So, what do they do? If they are willing and able to pay a higher rent, they could find themselves a willing landlord, and offer to pay slightly more than R1, to ensure they don't miss out. So, tenants will bid the rent up, until eventually the market reaches equilibrium at R0, where the quantity demanded and quantity supplied are both equal to Q0.


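To put the diagram's logic into numbers, here is a tiny sketch with made-up linear demand and supply curves:

```python
# The diagram's logic with made-up linear demand and supply curves:
#   demand:  Qd = 1000 - 2*R      supply:  Qs = -200 + 2*R
def quantity_demanded(rent):
    return 1000 - 2 * rent

def quantity_supplied(rent):
    return -200 + 2 * rent

# Equilibrium: Qd = Qs  =>  1000 - 2R = -200 + 2R  =>  R0 = 300
R0 = 300
print(quantity_demanded(R0), quantity_supplied(R0))  # 400 and 400: market clears

R1 = 250  # rent held below the equilibrium
shortage = quantity_demanded(R1) - quantity_supplied(R1)
print(f"at rent {R1}: shortage of {shortage} properties")  # 500 - 300 = 200
# Tenants who would otherwise miss out bid the rent up towards R0.
```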
Now, consider this story from the New Zealand Herald from earlier this week:
Student tenants have paid thousands of dollars over summer for empty flats and apartments in a bid to secure their accommodation for 2017 as Auckland faces a growing rental shortage.
Occupancy levels reached a record high at Ray White's city branch with tenants continuing to pay rent when they returned home for holidays rather than lose their accommodation.
"A year ago students would end their tenancy in November, go home for the summer and return in February to rent another apartment," Delanie Horrobin of Ray White said.
"Now they are staying on because they are concerned they won't have somewhere to return to."...
"Our waitlist is at a record high of 50 and it is going to get worse with February and March our busiest months."
Peter Thompson from Barfoot and Thompson said there were inevitable price increases whenever rental accommodation was in short supply.
He said the start of the year was always more expensive as students scrambled to find accommodation and families settled for the school year...
"January is always busiest for us and the shortage means rent increases," Thompson said. "Come March the prices should come down."
Whenever students come back into university cities, the demand for rental properties increases, and shortages become apparent (we see the same stories about rents every January, as I have remarked on many times). Rents go up for everyone, but as in the story above, by March the rents have gone down again.

Why would a student be willing to pay rent over the summer, when they aren't even there? As noted in the analysis above, we expect the rent to increase when demand is high. And if you consider the rental 'price' to be the rent paid over a whole year, this is another example of a rent increase (albeit a somewhat hidden one). If, in order to secure a rental property, a student tenant now has to pay for 52 weeks of rent instead of 40 (because now they pay for November-March as well as the rest of the year), then the annual rent paid by a tenant for that property increases (assuming it would otherwise have sat vacant and un-rented over the summer). And we should expect nothing less from landlords. Why would you rent a property to students for 40 weeks, if there are willing tenants ready to pay rent for the full year?
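A quick worked example of that hidden price increase, with made-up numbers:

```python
# Made-up numbers showing how renting over the summer raises the annual
# 'price' of the property even if the weekly rent never changes.
weekly_rent = 250

old_annual = 40 * weekly_rent  # rent paid for the academic year only
new_annual = 52 * weekly_rent  # rent paid year-round to secure the flat

print(old_annual, new_annual)  # 10000 vs 13000
print(f"effective increase: {new_annual / old_annual - 1:.0%}")  # 30%
# The landlord collects 30% more per year for the same property, with
# no change at all in the advertised weekly rent.
```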


Thursday, 2 February 2017

The rise and fall of craft beer?

I was interested in this long article by Michael Donaldson in the New Zealand Herald last week, entitled "End of the golden age of craft beer" (the Herald also had a follow-up editorial the next day). Donaldson writes:
But despite massive growth in the industry, [Epic brewer Luke] Nicholas fears there's a regression of sorts at work - it's almost as hard now to sell beer as it was back in the day when he was knocking on doors until his "knuckles are bleeding".
"The other day I was trying to figure when we were in the 'golden age' of having the right number of breweries and customers and I think it was 2012 - that was the time before all the me-toos started coming on board.
"Every wannabe homebrewer, every person with money who wanted to buy a brewery, every branding guy who says, 'Look at this double-digit growth, I'm jumping on that.'
"People who are doing that now are already late because things are going to get tough."
That bit struck me because it reminded me of a model that we discuss in some detail in ECON100, based on dynamic supply and demand (or what Steven Lim calls in our ECON100 class 'the cyclical patterns model'). Here's how it works.

Consider a perfectly competitive market, as shown in the diagram on the left below. The diagram on the right will track changes in firms' profits over time. Initially (at Time 0) the market is at equilibrium (where demand D0 meets supply S0) with price P0, and firms are making profits π0. Now say there is a permanent increase in demand at Time 1, to D1 (this increase in demand is the discovery of craft beer by hipsters). Prices increase to P1, and firm profits also increase (to π1). There are no barriers to entry (this is a perfectly competitive market), so the higher profits encourage new firms to enter this market (new brewers flood the market, as noted in the quote by Nicholas above). Supply increases to S2 (more producers) at Time 2. Price falls to P2, and firm profits also fall (to π2). This is where we are probably heading now.


Of course, that's not the end of this little story. Now, at Time 2 profits are low and some firms will exit the market (no barriers to exit, because this is a perfectly competitive market). Supply decreases to S3 (fewer producers) at Time 3. Price increases to P3, and firm profits increase to π3. As you can see from the profits over time (in the right-hand diagram), a cycle of high profits, then low profits, then high profits, and so on is created.
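A toy version of those dynamics (a cobweb-style sketch with invented numbers) shows the damped cycle:

```python
# Cobweb-style entry/exit sketch with invented numbers: price falls as
# firms enter, entry responds to last period's profit, and a demand
# shock sets off a damped profit cycle.
a, b = 10.0, 0.1  # inverse demand: price = a - b * n_firms
c = 4.0           # average cost per unit
k = 15.0          # speed of entry/exit in response to profit

n_firms = (a - c) / b  # start at the old long-run equilibrium (60 firms)
a = 12.0               # hipsters discover craft beer: demand shifts up

for t in range(10):
    price = a - b * n_firms
    profit = price - c
    print(f"t={t}: firms={n_firms:5.1f}, price={price:.2f}, profit={profit:+.2f}")
    n_firms += k * profit  # firms enter when profitable, exit when not
# Firm numbers overshoot the new equilibrium (80), profit swings from
# positive to negative and back, and the oscillations gradually damp,
# just like the right-hand diagram.
```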

How realistic is this model? If you have a long memory, you may recall something similar in the kiwifruit industry in the 1980s (see for example page 6 of this paper about Katikati). Initially, a few farms made big profits. More farms planted kiwifruit. A few years later, the price collapsed due to oversupply. Many kiwifruit farmers went bust, and investors lost big. There are other similar examples as well, like venison in the 1990s.

All that is required are two things: (1) a perfectly competitive market (or one that is close to it); and (2) some shock to kick things off. In the case of craft beer, is it a competitive market? Remember that the characteristics of a competitive market are: (1) there are many buyers and sellers; (2) all products are homogeneous (the same); (3) information flows quickly and accurately; and (4) there is freedom of entry into and exit from the market. The most problematic of these in the case of craft beer is the last one. Is there free entry into and exit from the market? Now that craft brewers can simply lease existing plant from other brewers, I'd argue that there are probably low barriers to entry, which makes this market somewhat (but not perfectly) competitive.

There are a couple of other things to take away from the model above. First, a smart player in this market would adopt a 'hit-and-run' strategy. If prices and profits (and the costs of entry) are low, this might be a good time to get into the industry, but if prices and profits are high, this might be a good time to cash out (and bank a large capital gain). Maybe think twice before investing in that new craft brewer, unless you are confident the boom will continue (how confident are you that we haven't yet reached peak hipster?).

Second, the amplitude of the cycles on the right of the diagram gets smaller over time. This is because people are learning, i.e. more people recognise the cycle and try to take advantage of it, such as by getting out before it hits the peak. That may go some way towards explaining the decisions of the previous owners of Emerson's and Tuatara to sell out to the major brewers.

Lots of investors got burned in the kiwifruit industry in the 1980s. Will craft beer burn investors next?