Tuesday 31 January 2017

Thomas Schelling on equilibrium

I've been reading Thomas Schelling's book "Micromotives and Macrobehaviour". Usually I would wait and write a review (which I will later), but in this case I wanted to share a quote from the book now rather than wait. Schelling was an excellent writer (as many others noted at the time of his death - see here for some links), and this bit on equilibrium is important, particularly given that economics is often criticised for the focus on equilibrium outcomes (that may never arrive):
The point to make here is that there is nothing particularly attractive about an equilibrium. An equilibrium is simply a result. It is what is there after something has settled down, if something ever does settle down. The idea of equilibrium is an acknowledgement that there are adjustment processes; and unless one is particularly interested in how dust settles, one can simplify analysis by concentrating on what happens after the dust has settled...
There may be many things wrong with "equilibrium analysis", including the possibility that it oversimplifies by neglecting processes of adjustment, or exaggerates the prevalence of equilibrium by neglecting shifts in the parameters that determine the equilibrium. But nobody should resist "equilibrium analysis" for fear that, if he acknowledges that something is in equilibrium, he will have acknowledged that something is all right. The body of a hanged man is in equilibrium when it finally stops swinging, but nobody is going to insist that the man is all right. An unnecessary source of distrust of economic analysis is the assumption that when an economist discusses equilibrium he is expressing approval. I believe that assumption is usually - not always, but usually - a mistake.
Well said. More on Schelling's book to come in a future book review post.

Thursday 26 January 2017

The minimum wage increase and the 'right' measure of inflation

The minimum wage increases to $15.75 per hour on 1 April. The New Zealand Herald had an interesting article yesterday on this:
It will deliver $65m a year in higher wages for the 119,500 people now earning below $15.75 an hour - implying that private employers will have to pay about $35.6m extra on top of the extra cost to taxpayers.
The 3.3 per cent increase from the current minimum of $15.25 an hour is at the high end of expectations, considering that consumer prices rose by only 0.4 per cent in the year to last September.
Business NZ urged the Government to keep increases to "no greater than inflation as measured by the consumers price index" until a full review of the minimum wage policy.
Should we be comparing the 3.3 per cent increase in the minimum wage to the 0.4 per cent increase in consumer prices? I argue not, for three reasons.

First, and most obviously, new inflation data out today shows that the annual rate of inflation in the December year was actually 1.3 per cent. But this is far from the most important reason.

Second, the CPI measure of inflation does not take into account the (rapidly rising) cost of home ownership, so certainly understates the increase in the cost of living. And as noted by the New Zealand Initiative, reported here:
New Zealand is not really suffering from an "inequality crisis" but instead a crisis of rising housing costs hurting the poor more than the rich...
That's still not the most important reason though, since I imagine that most people earning the minimum wage are not homeowners (at least, not in Auckland given the out-of-control house prices there!). Rents are included in the CPI, so it probably does reflect changes in the cost of living for renters on the minimum wage.

Third, and most important, the CPI measures the level of consumer prices for all consumers on average. And no one is an average consumer. Fortunately, Statistics New Zealand now publishes a series of household living-cost price indexes. The latest data is from the September 2016 quarter (when overall the CPI increased by 0.3 per cent), so we can expect the latest numbers when they are released to be somewhat higher. The great thing about these indexes is that they show the change in prices experienced by different households, including by income level. This is based on the prices most relevant to households within that group (so you can look at a price index for beneficiaries, superannuitants, etc. as well as by income quintile).

Someone on the minimum wage full-time is likely to be in the lowest income quintile (the bottom 20% of income earners). For the September quarter, their price index increased by 0.5 per cent, which was greater than the 0.1 per cent for all households (and the 0.3 per cent for the second quintile). In fact, in almost all quarters going back to the start of that data in 2009, lower income households have experienced a higher rate of price inflation than all households on average.

So, quite apart from arguing that we should be narrowing the gap between the minimum wage and a living wage (however measured), it is appropriate that the minimum wage rises faster than the overall increase in consumer prices, simply because the prices relevant to lower income households generally have been rising faster than the rate of inflation overall.
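A quick back-of-the-envelope calculation makes the point. Using the figures quoted above, the real (inflation-adjusted) increase in the minimum wage depends on which price index you deflate by. A sketch only, in Python:

```python
# Real change in the NZ minimum wage under different inflation
# measures (figures as quoted in the post above).
old_wage, new_wage = 15.25, 15.75
nominal = new_wage / old_wage - 1  # about 3.3%

for label, inflation in [("CPI, September year", 0.004),
                         ("CPI, December year", 0.013)]:
    real = (1 + nominal) / (1 + inflation) - 1
    print(f"{label}: real increase {real:.1%}")
```

Even deflating by the higher December figure, the real increase is around 2 per cent - and as argued above, the price indexes most relevant to minimum wage earners have generally been rising faster than the headline CPI, so the true real increase for them is smaller still.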

Tuesday 24 January 2017

Sir Tony Atkinson, 1944-2017

I was saddened to hear of the passing of the British economist Sir Tony Atkinson recently. Atkinson was one of the giants of the economic analysis of poverty and inequality and a sometime co-author of both Thomas Piketty and Joseph Stiglitz. His latest book, "Inequality: What Can Be Done?" is currently sitting in my to-be-read-soon pile (along with Branko Milanovic's latest book on global inequality). The Financial Times has a good obituary of Atkinson. Here's one bit:
For more than 50 years Atkinson battled for economics to take poverty and inequality seriously, crediting his interest in the subject to a stint of voluntary service working at a deprived hospital in Hamburg in the mid-1960s. From 1967 when he took up a fellowship at St John’s College, Cambridge, he dedicated himself to the theory and the practicalities of understanding differences in society.
Tutor2u.net has collected tributes to Atkinson here, and this piece by Beatrice Cherrier is a nice reminiscence that covers his many contributions. He will be missed.

Monday 23 January 2017

Birkin handbags and the signalling value of Veblen goods

I was recently intrigued by this Brooke Unger article, published in 1843 magazine last year, about the economics of Birkin handbags. The usual story about very high-end luxury goods (like Birkin handbags) is that they are Veblen goods - goods where the extremely high price is a signal of the high status of the purchaser. This conspicuous consumption can come in many forms. However, when it comes to Birkin handbags it turns out there is more to the story than simple conspicuous consumption:
So-called Veblen goods reverse the normal logic of economics. With most goods, demand falls as price rises; with Veblen goods, the higher the price, the higher the demand, for the more expensive they are, the more effectively they proclaim the status of their owners. The gap between the cost of producing a Birkin and the price tag suggests that it falls into this category.
Yet in a couple of ways, Birkins do not look like classic Veblen goods. First, they’re not all that conspicuous. Almost everyone can identify the provenance of Gucci’s double-G spangled Dionysus shoulder bag; only initiates can spot a Birkin. So Veblen’s theory needs to be adapted to explain the power of inconspicuous but expensive goods. The authors of “Signalling status with luxury goods: the role of brand prominence”, which appeared in the Journal of Marketing in 2010, do so by dividing the rich into two groups: “parvenus”, who want to associate themselves with other rich people and distinguish themselves from have-nots, and “patricians”, who want to signal to each other but not to the masses. They theorise that more expensive luxury goods, aimed at patricians, will have less obvious branding than cheaper ones. Sure enough, they found that Gucci and Louis Vuitton charge more for quieter handbags and Mercedes slaps bigger emblems on its cheaper cars. People who cannot afford luxury but want to look as if they can (“poseurs”) go for big logos: counterfeiters usually copy louder goods.
The interesting bit is that the luxury goods that people buy depend on to whom they want to send signals. Remember that a signal is only effective if it has two characteristics: (1) it is costly; and (2) it is costly in a way that makes it unattractive for those with 'low-quality' attributes to attempt. With a Birkin handbag, the first characteristic is assured. What about the second? One of the ways the second characteristic can be achieved is if it is more costly (in some way) for those with 'low-quality' attributes. Which brings me to this bit from the article:
You cannot walk into an Hermès boutique and expect to walk out with a violet ostrich 30cm bag with palladium hardware, or indeed a Birkin of any description. You have to place an order, and wait. Hélène Le Blanc, then a lawyer working in Paris, was initially rebuffed when she approached the flagship shop in Faubourg Saint-Honoré several years ago. Once she persuaded the saleswoman that she was serious, and willing to wait, she was presented with binders filled with leather samples and hardware options, and allowed to place an order...
In an episode of “Sex and the City” from 2001 Samantha jumps a five-year queue by claiming she wants the bag for actress Lucy Liu.
Yes, the second bit of that quote is fictional, but let's say for argument's sake that celebrities and other sought-after clientele don't have to wait as long to get a bag. The time spent waiting is part of the cost, so if you are an average Jane wanting a Birkin bag, then the waiting time will be longer (and hence the cost for Jane will be higher than for a celebrity).

This bit also struck me:
But as Solca observes, there are good commercial reasons why rationing by queue rather than price can make sense. First, it gives Hermès a buffer: even if demand drops, sales will not. Second, it creates surplus demand for the bags, which overflows into demand for other Hermès products. Much of the firm’s business consists of selling consolation prizes: wallets, belts, beach towels and so on. As J.N. Kapferer of the Inseec Luxury Institute in Paris observes, the wait induces “impatient buyers to switch to other products of the brand, to calm their hunger until the much-awaited object of desire is achieved.”
In ECON100, we talk about selling complementary goods as a way of capturing value and increasing profits, and this is another example of that. Again, the unwillingness to wait for the handbag (and instead buying a belt or beach towel) further demonstrates the signalling value of the handbag itself.

[HT: Marginal Revolution]

Thursday 19 January 2017

Book review: Economics of the Undead

I just finished reading "Economics of the Undead", edited by Glen Whitman and James Dow - a collection of chapters applying economics (and some other social sciences) to better understand vampires and zombies. It sounded like a really interesting book (though I admit it has been on my bookshelf for a couple of years), but it took me a while to get through it. Mainly, that was because I found the book to be quite uneven. Some chapters were excellent (such as the chapter What happens next? Endgames of a zombie apocalypse, by Kyle William Bishop, David Tufte, and Mary Jo Tufte), but some were quite weak (I won't single any particular chapter out for this). To be fair, this is a problem with many edited volumes where many different authors contribute chapters on related topics. In this case, it often felt like the examples were forced (maybe they were glamoured?) when they didn't quite fit, while several opportunities were missed.

However, there were several highlights, including this from James Dow in the chapter Packing for the zombie apocalypse:
If you wanted the country to plan for a zombie apocalypse, you might think the most important thing would be to set up installations to preserve modern technology. However, existing books do that pretty well (although less well each year as the Internet - which would not survive the apocalypse - increasingly takes over). What is really needed is a setting that preserves the older technologies today; knowledge that might have been lost otherwise but will be needed after the zombies come.
Indeed, I've often pondered what skills would be in demand if there were a zombie apocalypse. For instance, who is going to make the shoes you need to outrun the zombie hordes? If you think specialisation and trade are bad, in a zombie apocalypse it would become very clear very quickly why doing everything yourself is not a good idea. What do you mean there isn't enough time to fight zombies and grow food?

I also liked this bit, from the aforementioned Bishop et al. chapter:
...economics and biology are complementary, and both suggest the existence of another endgame. What might that look like? In economic terms, zombies face a tragedy of the commons: each individual zombie wants to eat more humans, but if they all do this, then no zombie will have any humans left to eat.
The solution is obvious, isn't it: private property rights for zombies over humans - let each zombie farm their own humans to eat. Ok, Bishop et al. didn't say that, but I thought it needed to be said.

Michael O'Hara's chapter on Zombies as an invasive species gives us this:
The fact that zombies fit standard definitions of invasive species cannot be disputed. The Global Invasive Species Program (GISP) states that "biological invasion occurs when a species enters a new environment, establishes itself there and begins to change the populations of species that existed there before, as well as disturbing the balance of plant and animal communities." In the case of a zombie invasion, this change of the existing population of species is quite literal.
The chapter essentially concludes that control is probably more cost effective than prevention, when it comes to the zombie apocalypse (but try telling that to the first people turned into zombies!).

Dan Farhat (University of Otago) contributed a chapter on using agent-based modelling to model a vampire population within a (human) town, which builds on a similar paper that I blogged about three years ago.

Overall, the book will be of most interest to those who like both economics and undead in pop culture, although even then (like me) not all chapters will appeal.

Tuesday 17 January 2017

Tax cuts, and misinterpreted average and marginal tax rates

Many, many people don't understand the difference between average and marginal tax rates. Articles like this one by Leicester Gouwland don't help:
The bulk of the default tax collected is from people who have moved from the 17.5 per cent bracket to the 30 per cent bracket, that is a 71 per cent increase in tax.
The people who have moved from the 10.5 per cent bracket to the 17.5 per cent have suffered a 66 per cent increase in tax, however, if they now earn more than $24,000, they qualify for the independent earner rebate.
Neither of those two statements is correct. To see why, let's take a step back first. The average (income) tax rate for a taxpayer is the tax that they pay divided by their income. So, if a taxpayer has income of $50,000 and pays $8,020 in tax, their average tax rate is 16.04% (8,020 / 50,000). The marginal tax rate is the proportion of the next dollar they earn that would be paid in tax.

Most income tax systems are described in terms of marginal tax rates, as Gouwland does early in his article for New Zealand:
The current tax brackets on personal income are: 10.5 per cent for income up to $14,000; 17.5 per cent for income up to $48,000; 30 per cent for income up to $70,000; and then 33 per cent for any income more than $70,000.
So, for our taxpayer that has income of $50,000, their marginal tax rate (the tax rate they would pay on the next dollar they earn) is 30%, even though their average tax rate is only 16.04%.

Now, this is where most people go wrong. A person who moves tax brackets doesn't face a huge increase in their average tax rate, only their marginal tax rate. It's not correct to say that, for someone who moves "from the 17.5 per cent bracket to the 30 per cent, that is a 71 per cent increase in tax". To illustrate, let's take a taxpayer who was previously earning $45,000 (in the 17.5 per cent tax bracket) and give them a pay rise to $50,000 (in the 30 per cent tax bracket). Does their tax payment go up by 71% (as Gouwland suggests)? Hell no. It only goes up by 16.3%, from $6,895 to $8,020. [*] That's still a big increase, but it's certainly not 71%! And remember that their before-tax income has gone up by 11.1% as well.

Similarly, has someone who has moved "from the 10.5 per cent bracket to the 17.5 per cent ...suffered a 66 per cent increase in tax"? Let's take a taxpayer who was previously earning $12,000 (in the 10.5 per cent tax bracket), and increase their income to $16,000 (in the 17.5 per cent tax bracket). Their tax payment goes from $1,260 to $1,820, an increase of 44.4% (but in the context of a 33.3% increase in pre-tax income). [**]

So, more care is needed in interpreting average and marginal tax rates, and articles like Gouwland's certainly don't help.
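For anyone who wants to replicate the arithmetic, here's a short sketch of the bracket calculation, using the rates quoted above. Each rate applies only to the slice of income within its bracket:

```python
# NZ personal income tax brackets as quoted above:
# (upper bound of bracket, marginal rate)
BRACKETS = [(14_000, 0.105), (48_000, 0.175),
            (70_000, 0.30), (float("inf"), 0.33)]

def income_tax(income):
    """Total tax: each rate applies only to income within its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

for income in (45_000, 50_000):
    tax = income_tax(income)
    print(f"${income:,}: tax ${tax:,.0f}, average rate {tax/income:.2%}, "
          f"marginal rate applies to the next dollar only")
```

Running this reproduces the footnoted figures: $6,895 at $45,000 and $8,020 at $50,000 - a 16.3% increase in tax, nothing like 71%.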

*****

[*] The tax paid by a taxpayer with an income of $45,000 is calculated as [$14,000 x 0.105] + [($45,000 - $14,000) x 0.175] = $6,895. The tax paid by a taxpayer with an income of $50,000 is calculated as [$14,000 x 0.105] + [($48,000 - $14,000) x 0.175] + [($50,000 - $48,000) x 0.3] = $8,020.

[**] The tax paid by a taxpayer with an income of $12,000 is calculated as [$12,000 x 0.105] = $1,260. The tax paid by a taxpayer with an income of $16,000 is calculated as [$14,000 x 0.105] + [($16,000 - $14,000) x 0.175] = $1,820.

Monday 16 January 2017

I should have changed my name to Aaron A. Aardvark

Alphabetic discrimination in economics was a topic of casual conversation while I was completing my PhD, mainly as a result of this paper (ungated earlier version here) by Liran Einav (Stanford) and Leeat Yariv (CalTech). I was reminded of it when I read the William Olney paper that I blogged about a couple of weeks ago, which included the first initial of the researchers' surnames as a control variable.

Now, Matthias Weber (Bank of Lithuania, and Vilnius University) has a new and very readable paper that reviews the literature on alphabetical discrimination. It is argued that economists with surnames closer to the start of the alphabet (e.g. A, or B) have an advantage in terms of career progression, etc. compared with economists with surnames closer to the end of the alphabet (e.g. Y, or Z). This arises because of the convention in economics to list authors on multi-authored papers in alphabetical order.

How does this lead to an advantage? There are a couple of likely mechanisms. First, only the name of the first author appears in a citation when there are three or more authors. The rest of the authors disappear into the 'et al.' So, if visibility is important for name recognition, then authors who are more often the first author (those with names closer to the start of the alphabet under an alphabetical ordering convention) would benefit most. Second, disciplines that don't use an alphabetical ordering convention typically order the authors by their relative contribution, so that the first author made the largest or most important contribution to the article. Therefore, readers in these disciplines may assume that the first author under an alphabetical ordering was also the author who made the largest contribution to the article, and give them greater credit (again, advantaging those with names closer to the start of the alphabet).

The alphabetical author ordering convention is peculiar to economics and a few other disciplines - Weber notes "‘Business & Finance’, ‘Economics’, ‘Mathematics’, and ‘Physics, Particles & Fields’" as the disciplines that use this system, with others using a contributions-based ordering.

There are three main papers that have investigated the effect of alphabetical discrimination in economics, and Weber provides a useful summary of all three (along with other papers of interest). The first paper is the Einav and Yariv article linked above. They use data on faculty from the top 35 economics departments in the U.S., and find that:
Faculty with earlier surname initials are significantly more likely to receive tenure at top ten economics departments, are significantly more likely to become fellows of the Econometric Society, and, to a lesser extent, are more likely to receive the Clark Medal and the Nobel Prize.
The latter results (on the John Bates Clark Medal and the Nobel Prize) are not statistically significant, so are suggestive at best. The size of the effect on tenure is quite large:
In the regression for top five departments, each letter closer to the front of the alphabet increases the probability of being tenured by about 1 percent.
The same result holds for the top ten economics departments, but fades as lower quality departments are included (probably because the best economists with names early in the alphabet are already employed at the top universities).

The second paper (ungated version here) is one I hadn't read before, by Georgios Efthyvoulou (Birkbeck College, University of London). Efthyvoulou extends the Einav and Yariv analysis by looking at the top 17 and bottom 51 (of 68) economics departments in the U.S. He finds that, when restricting the sample to full professors only:
...having a last name initial “A” instead of “Z” increases the probability of being employed by a top economic institution, by slightly more than 20%.
The results for economists at all academic ranks are statistically insignificant. Efthyvoulou also finds similar results for the U.K. (but again they are statistically insignificant). More interestingly, he looks at the number of downloads and citations on RePEc, and finds that:
...being an A-author, and not a Z-author, increases (the logarithm of) the total number of file downloads by 13% and (the logarithm of) the total number of abstract views by 11%.
The third paper is this one (ungated earlier version here) by Mirjam van Praag and Bernard van Praag (both University of Amsterdam). The van Praags use data on all articles published in eleven top economics journals from 1997 to 1999, and look at the effect of alphabetical ranking (i.e. the first initial of the author's surname) on productivity. They find:
...a significantly negative effect of 'letter' on scientific performance; this indicates a reputational advantage of A-authors over Z-authors, resulting in an increased scientific output of 3.4 articles... and a 0.16... -article higher annual productivity.
All of this suggests that economists with an "A" surname have a distinct advantage over those with a "Z" surname, and all of the articles linked above have recommended some move towards either a contributions-based ordering or a random ordering of authors (the latter being a point that Debraj Ray and Arthur Robson have also made). However, no such change has yet been made, so clearly I missed an opportunity to change my name to Aaron A. Aardvark.

Saturday 14 January 2017

The limit to human lifespan, or not?

Back in October, an article was published in the journal Nature by Xiao Dong, Brandon Milholland, and Jan Vijg (all of the Albert Einstein College of Medicine in New York), entitled "Evidence for a limit to human lifespan". Dong et al. present what they argue is evidence that the natural human lifespan reached a peak in the mid-1990s, at about 114.9 years. Here is their key graph, which shows the maximum reported age at death for each year in their dataset (from the International Database on Longevity):


You can clearly see that, prior to 1995, the trend-line is upward sloping, and after 1995, the trend-line is downward sloping. This result was widely reported in the media (see for example here, and here, and here).

However, these results were also rubbished (see for example this piece by Hester van Santen). Van Santen makes the obvious point that the authors chose 1995 as their potential break point in the data:
An important bit of information: Vijg assumed this break in the trend in advance, he told NRC on the phone – in Dutch, as the biologist was born in Rotterdam. Vijg then had the computer calculate two underlying ‘trends’, one for the period before 1995 and one for after. These are the lines seen on the graph.
That’s not how these things are supposed to be done.
“No,” confirms statistician Van der Heijden. “You need to have solid theoretical substantiation before you start. When you infer that kind of turnaround using only the data, there’s a good chance that what you’re seeing is mere coincidence.”
The key point is that there wasn't anything special about 1995 in particular that makes it the best theoretical choice. If they had chosen some other year to break their data, the result might disappear (which van Santen suggests, but doesn't actually demonstrate for us).

I have another gripe with the Nature paper, however. The data points used to construct their regressions are the annual maximum recorded age at death, so there is only one observation per year, and the orange regression line (1995 onwards) is based on just twelve data points. A trend estimated from twelve observations is far too imprecise to carry much weight, and it would be statistically illiterate to pretend otherwise.
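The post-hoc breakpoint problem is easy to demonstrate by simulation. The sketch below uses entirely made-up, trendless data (not the IDL data) and searches for the break year that produces the most dramatic apparent 'reversal' - pure noise will reliably yield one:

```python
# Fit separate linear trends either side of every candidate break
# year in simulated, trendless data, and pick the break with the
# biggest up-then-down 'reversal'. Purely illustrative.
import random

random.seed(42)
years = list(range(1968, 2007))
ages = [110 + random.gauss(0, 2) for _ in years]  # no true trend

def slope(xs, ys):
    """Ordinary least squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Try every break year (keeping at least 5 points on each side)
best = max(
    (slope(years[:i], ages[:i]) - slope(years[i:], ages[i:]), years[i])
    for i in range(5, len(years) - 5)
)
print(f"Most dramatic 'reversal' at {best[1]}: slope change {best[0]:.3f}/yr")
```

If the break year is chosen by looking at the data first (as Vijg apparently did with 1995), an apparent trend reversal is close to guaranteed, whether or not one really exists.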

Van Santen concludes (emphasis theirs):
Statistical evidence to support the assertion that the oldest living people haven’t gotten any older since 1995 is weak, according to two professors. We find the idea that the maximum human lifespan is 115 years to be unfounded.
However, it's important not to overstate this conclusion. Just because one of Dong et al.'s results is suspect, that is not enough to suggest that there is no maximum human lifespan, only that Dong et al. haven't provided enough evidence to support it. I really like this post by Matt Ridley from a few years ago, which makes some good points:
For all the continuing improvements in average life expectancy, the maximum age of human beings seems to be stuck. It’s still very difficult even for women to get to 110 and the number of people who reach 115 seems if anything to be falling. According to Professor Stephen Coles, of the Gerontology Research Group at University of California, Los Angeles, your probability of dying each year shoots up to 50 per cent once you reach 110 and 70 per cent at 115...
The lack of any increase in people living past 110 is surprising. Demographers are so used to rising average longevity that they might expect to see more of us pushing the boundaries of extreme old age as well. Instead there is an enormous increase in 100-year-olds and not much change in 110-year-olds...
Next time you hear some techno-optimist say that the first person to live to 250, or even 1,000, may already have been born, remind them of these numbers. The only way to get a person past the “Calment limit” of (say) 125 will be some sort of genetic engineering.
I'd go a little bit further. If the size of the cohorts reaching age 100 is increasing (because of improved health at younger ages, especially at very young ages), then we should also expect to see larger cohorts achieving age 110, age 115, and age 120. However, there isn't any evidence of these hyper-aged cohorts getting much bigger. [*] This suggests to me that the mortality rate for those aged 100 years and over might be increasing over time, thereby offsetting the effect of larger cohorts achieving age 100. If I get some spare time, this is a research question that deserves further attention.

*****

[*] Although an absence of evidence is not evidence of absence. There's probably an opportunity for an honours or Masters project to look at New Zealand longitudinal census data on those aged 100 years and over.

Tuesday 10 January 2017

January tradition continued... Rent increases again

Maybe journalists have a recurring annual diary note to write about rent increases in January. Both last January and the January before that, there were complaints in the media about rising rents in Auckland. This January is no exception. Here's the New Zealand Herald today:
Auckland rents are set to increase this month with landlords blaming housing shortages and an unprecedented interest in their properties.
The Auckland Property Investors Association warns that rent hikes could be more dramatic than in previous years as landlords look to pass on the cost of rising interest rates to their tenants.
The association acknowledged landlords could be the target of scorn if they increase rents...
Auckland University Students' Association president Will Matthers said renting had become unaffordable for students in the city.
"In 2014, we found the average rent for a student was $218 which is already a higher amount than what full-time students are entitled to receive. And that figure has certainly increased," Matthers said.
He said students were also faced with increased competition in the rental market with fewer houses available.
As I said last year (and the year before), this is simply a story of increasing costs for landlords (decreasing supply) and increasing demand for housing (from immigrants, students, etc.). The predictable outcome is an increase in the rental price. The diagram below outlines this change, with the supply curve shifting up and to the left from S0 to S1, and the demand increasing from D0 to D1. The market moves from the equilibrium E0 to the new equilibrium E1, where the equilibrium rent is higher (R1 rather than R0). 
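For completeness, that diagram can be sketched numerically. The linear demand and supply curves below are entirely invented for illustration - only the direction of the two shifts matches the story:

```python
# Linear sketch of the rental market shift described above.
# All coefficients are made up for illustration.
def equilibrium(a_d, b_d, a_s, b_s):
    # demand: Q = a_d - b_d * R;  supply: Q = b_s * R - a_s
    rent = (a_d + a_s) / (b_d + b_s)  # rent where demand meets supply
    return rent, a_d - b_d * rent

r0, q0 = equilibrium(a_d=100, b_d=2, a_s=20, b_s=2)  # D0 and S0
r1, q1 = equilibrium(a_d=110, b_d=2, a_s=30, b_s=2)  # D1 (more demand), S1 (less supply)
print(f"Equilibrium rent rises from {r0:.0f} to {r1:.0f}")
```

Rent unambiguously rises (here from 30 to 35). The equilibrium quantity could move either way, since the two shifts push it in opposite directions; with these made-up numbers they exactly offset, leaving quantity unchanged.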


The media need some new stories. Maybe something about Donald Trump?

Monday 9 January 2017

The game theory of withholding supply

Back in October, Ford stopped producing the Falcon XR8 Sprint. What happened? According to the Daily Telegraph:
FORD dealers are charging a staggering $30,000 more than the recommended retail price — up from $60,000 to $90,000 — for the final Falcon V8 sedans as buyers try to secure a future classic.
The last batch of Falcon V8s was thought to have sold out, but some dealers held a secret stash to release them onto the market in the final days of production so they could jack up the price.
Why did the price rise? Ford Australia boss Graeme Whickman explains:
“It’s supply and demand,” said Mr Whickman. “We set a wholesale price and recommended retail price … but at the end of the day the dealer and the customer decide what the vehicle is going to be sold and bought for,” he said, referring to high prices being charged for the initial shipment of Mustangs last year.
Demand for the Falcon XR8 was high, and the sellers withheld some supply - high demand and lower supply ensure that prices will increase, in this case by up to 50%.

So, why don't firms do this all the time? Why not withhold supply all the time? A bit of game theory can help, as laid out in the payoff table below [*]. The seller can choose to withhold stock, or not. The buyer can choose to buy now, or wait and buy later.


Where is the Nash equilibrium in this game? Consider the seller's choice first. If the buyers choose to buy now, the seller is better off choosing to withhold some stock, because profits will be higher. If the buyers choose to wait and buy later, the seller is better off choosing to not withhold stock, because profits will be higher. So, the seller doesn't have a dominant strategy (a strategy that is always better for them, no matter what the buyers choose to do).

Now consider the buyers' choice. If the seller chooses to withhold some stock, the buyers are better off choosing to wait and buy later, since they will force the price to fall when the withheld stock is released. If the seller chooses not to withhold stock, the buyers are better off buying now, or they will miss out on a car. So, the buyers don't have a dominant strategy either.

Neither player has a dominant strategy, so what is the solution to this game? We can find it using the best response method (which we've already described in the last two paragraphs). Any combination of strategies where both players are choosing their best response to the other player's strategy is a Nash equilibrium. However, in this case there is no Nash equilibrium (at least, no equilibrium in pure strategies). Instead, there will be a mixed strategy equilibrium where both the seller and the buyers should randomise their actions. The seller should sometimes withhold stock, and other times not.
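The best response method is easy to mechanise. The short Python sketch below uses hypothetical payoff numbers (they are not from the post, just chosen so that each player wants to do the opposite of what the other expects, as in the game described above). It checks every pure strategy profile for a Nash equilibrium, finds none, and then solves the indifference conditions for the mixed strategy equilibrium.

```python
from itertools import product

# Hypothetical payoffs (seller, buyer) - illustrative numbers only,
# chosen to match the best-response pattern described in the text.
payoffs = {
    ("withhold", "buy_now"): (10, 2),
    ("withhold", "wait"):    (4, 6),
    ("release",  "buy_now"): (6, 5),
    ("release",  "wait"):    (8, 1),
}
seller_moves = ["withhold", "release"]
buyer_moves = ["buy_now", "wait"]

def is_nash(s, b):
    """A profile is a Nash equilibrium if neither player gains by deviating unilaterally."""
    s_pay, b_pay = payoffs[(s, b)]
    seller_best = all(payoffs[(s2, b)][0] <= s_pay for s2 in seller_moves)
    buyer_best = all(payoffs[(s, b2)][1] <= b_pay for b2 in buyer_moves)
    return seller_best and buyer_best

pure_equilibria = [(s, b) for s, b in product(seller_moves, buyer_moves) if is_nash(s, b)]
print("Pure-strategy equilibria:", pure_equilibria)  # -> [] (none, as in the text)

# Mixed strategy: the seller withholds with probability p that makes the
# buyer indifferent between buying now and waiting:
#   2p + 5(1-p) = 6p + 1(1-p)
p = (5 - 1) / ((6 - 1) - (2 - 5))
# The buyers buy now with probability q that makes the seller indifferent:
#   10q + 4(1-q) = 6q + 8(1-q)
q = (8 - 4) / ((10 - 6) - (4 - 8))
print(f"Seller withholds with p = {p}, buyers buy now with q = {q}")
```

With these particular payoffs both probabilities come out at 0.5, but any payoffs with the same best-response pattern would give some interior mixing probabilities - the point is simply that neither side should be perfectly predictable.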

Of course, that analysis assumes the game is not repeated. Car sellers sell new models each year, so really this is a repeated game. How does this change the game? If a game is repeated, then players can develop reputations. So, this actually reinforces the importance of the seller not constantly withholding stock. If they develop a reputation for withholding stock to sell later, then buyers will recognise that the seller's prices are artificially high and will wait until the seller releases the withheld stock, lowering the price.

So, provided buyers (as a group) learn from this experience, then there is little to fear from Ford dealers repeating the exercise every time a popular car is withdrawn from production. The problem is, of course, that it is unlikely to be the same group of buyers next time around.

*****

[*] This game is presented as a simultaneous game, even though the seller clearly chooses their strategy before the buyer. This is because when the buyer chooses to buy now or wait, they don't know whether the seller has withheld stock or not. For simplicity, we're also assuming that all buyers act the same way, when of course they won't. However, the overall point still stands even if we have multiple buyers in the game.

Sunday 8 January 2017

The economics of kidnapping and ransom insurance

Anja Shortland (King's College London) recently wrote a piece for the Washington Post's Monkey Cage, on the economics of kidnapping and ransom insurance, based on her forthcoming article in the journal Governance (sorry I don't see an ungated version online). I was intrigued, so I read the journal article, and it has lots of bits of interest, such as:
The market for kidnap insurance is characterized by externalities: Cash-rich victim stakeholders can increase kidnappers’ ransom expectations and encourage new kidnappings. Insurers need to prevent quick payments of premium ransoms, but can only do so at a cost...
For the market to be stable and risks to be calculable, this externality needs to be managed. Given the impossibility of enforcing “proper” ransom negotiations through contracts due to high transaction costs, the Coasean prediction would be a single supplier of kidnap insurance...
The Lloyd’s solution is, therefore, an ingenious answer to the problems presented by kidnap insurance. All kidnap insurance is underwritten or reinsured at Lloyd’s...
Kidnap insurance at Lloyd’s is, therefore, a perfect example of the way in which externalities and transactions costs shape the institutions, which make up the economic system...
In ECON100 and ECON110, we talk about the private solutions to an externality problem, one of which is integration - the party creating the externality and the party suffering from the externality combine into a single entity (the classic example is a beekeeper and an orchard, who combine into a single firm - an example which I've written about before). Shortland argues that this is also the case for kidnapping and ransom insurance. If victims' families pay too much ransom, then they impose additional costs on future kidnap victims' families, because kidnappers will expect larger ransoms to be paid. Because all kidnap insurance is underwritten or reinsured at Lloyd’s, this is like integration (though not exactly, since this is integration into a single market space and not into a single firm). It allows easy sharing of information about kidnappings and ransoms paid, and ensures that all insurers follow the 'rules' that reduce any externality problems. Because Lloyd's has complete control over who can be part of this market, it is relatively easy to enforce standards of behaviour for the insurers.

It also explains why these insurers offer pro bono advice to the uninsured. By controlling negotiations between the uninsured victims' families (or employers) and kidnappers, the insurers (or rather, their agents) can reduce any spillover externalities. That is, they ensure that future ransoms will not escalate because of excessive ransom payments by uninsured victims' families or employers.

Now, we know insurance leads to moral hazard problems. Moral hazard arises when, after an agreement is made, one of the parties has an incentive to change their behaviour (usually to take advantage of the terms of the agreement) in a way that harms the other party. In the case of kidnapping and ransom insurance, once a person has insurance their incentives change slightly - they may engage in riskier behaviour safe in the knowledge that any future ransom will be paid by the insurer. So, we might expect to see people take fewer precautions to avoid being kidnapped. So, I found this bit from the article interesting:
The maximum cover is limited to the ransom the client (corporate or family) can raise themselves. The ransom is reimbursed after payment and the insurance contract cannot be used as collateral—meaning the victim stakeholders actually have to raise the ransom themselves initially. Employers are not allowed to discuss the insurance with their employees—doing so invalidates the insurance cover... All these stipulations serve to reduce moral hazard among the insured.
So, because the victim doesn't fully avoid the costs of being kidnapped (and probably does not even know they are insured!), the moral hazard problems are reduced.

Finally, back to externalities, Shortland identifies a real problem with governments negotiating directly with kidnappers on behalf of victims' families:
Governments that intervene on behalf of kidnapped citizens regularly pay premium ransoms... Unlike the participants in the private governance regime, governments often act myopically and under media pressure. They have neither binding budget constraints nor a profit motive to contain ransoms. There is no mechanism to internalize the spillovers of one government’s settlements on other negotiations. Paying multimillion dollar ransoms solves political problems in the short term but confers significant externalities on concurrent and future victims, their governments, and the insurance sector.
I'd suggest that there is a case for governments to also rely on the services of the insurers' agents, rather than handling these negotiations themselves. If they can do so anonymously (as the insurers do), then the outcome would be better, in terms of lower future ransoms and lower incentives for kidnappers.

[HT: Marginal Revolution]

Wednesday 4 January 2017

English speakers have an advantage in the economics profession

If you ever thought that native English speakers have an advantage publishing in economics journals, it turns out you were right. In a new paper published in the journal Economic Inquiry (ungated earlier version here), William Olney (Williams College) looks into this question. He motivates it based on some choice quotes from editors of economics journals, like:
Robert Moffitt (former editor of the American Economic Review) said “I should also note that non-native-English speakers should work hard to get the English right and, if necessary, hire native English speakers to edit their papers. It is no doubt unfair, but editors and referees often take poor English as a signal of low quality.”
Remember that an effective signal is one that is costly, and is costly in a way that makes it unattractive to those with low-quality attributes to attempt. Good English academic writing is difficult, and is arguably more difficult for those who will write low-quality papers, suggesting that the quality of English writing in an academic paper is a good (albeit imperfect) signal of the quality of the paper. So it is unsurprising then that journal editors use the quality of English writing as a signal of the quality of the research.

Economics is a good choice of discipline to investigate the effect of native English speaking because: (1) there is a good objective ranking of publication quality (RePEc, which this paper makes use of); and (2) the highly mathematical/statistical nature of most economics publications means that English proficiency should potentially be less important than in other disciplines (so if it is important in economics, then it should be more important in other social sciences or humanities disciplines).

Olney first looks at the share of native English speakers among U.S. economics PhDs, the top 2.5% of economists (as measured by RePEc), and Nobel Prize winners. His Figure 1 is reproduced below, and the results are striking. Compared with all U.S. economics PhDs, there is a higher proportion of native English speakers in the top 2.5% of economists, and a higher proportion again among Nobel Prize winners.


Of course, that's not the end of the story. Olney then runs regression models that control for individual characteristics (among the top 2.5% of economists) and finds that:
...after controlling for other characteristics of the economist, native English speakers have a significant advantage in the economics profession. Specifically, being born in an English-speaking country increases the rank of an economist by about 100 spots. Furthermore, native English speakers have an advantage in both components of the ranking: they are more highly ranked according to quality-adjusted publications and according to quality-adjusted citations.
An advantage of 100 places in the ranking of the top 1059 economists is quite substantial. It's probably less helpful for someone at my level (my latest RePEc ranking was in the 19,000s, not helped by many of my publications (including some of my most cited ones) not being published in economics journals), even if the analysis could be extended that far.

Olney runs a battery of additional models that effectively exclude other explanations for the results, and sensibly concludes that native English speaking confers an advantage for top economists (and this is in spite of non-native English speakers at this level probably having access to the resources necessary to substantially improve their English writing).

Finally, a couple of the additional results are also interesting in their own right. When separating the sample by age, into those born before or after 1955, he finds that the positive effect of native English speaking is larger among the younger group - English language ability is becoming more important over time. Second, in that analysis he finds that male economists had an advantage in the older group, but that advantage was not statistically significant for the younger economists. Score one for growing gender equality.

[HT: Marginal Revolution, back in June]

Monday 2 January 2017

The distributional consequences of rising world dairy prices

In the week before Christmas, the New Zealand Herald reported that Fonterra will raise the price of milk for its domestic customers:
Fonterra - the country's biggest dairy company - said in a letter to customers that pricing across the range of fresh milk, flavoured milk and UHT products would increase by 9.1 cents a litre and fresh cream by 41.4 cents a litre, effective from January 2.
"We try and absorb the fluctuations in dairy commodity prices as far as possible, they've been steadily rising over the last six months, so we're having to make an increase to our wholesale list price in the new year," the co-operative said in a subsequent media statement.
Dairy product prices have been rising sharply over the last few months in response to declining production in the major dairy producers - New Zealand, Australia and the European Union.
There are both good news and bad news stories in this. Higher world prices for dairy products obviously make New Zealand dairy farmers better off, and this will flow through to greater spending in the economy by those farmers, and so on. However, higher domestic prices for dairy products make consumers worse off. Both of these effects are illustrated in the diagram below. Say that the initial world price of milk is PW0. Notice that the world price is higher than the domestic price (PD), which is why New Zealand is an exporting country (since farmers, i.e. Fonterra, can obtain a higher price for milk on the world market than they can in the domestic market). This is also why the domestic consumers have to pay the world price - why would the farmers (Fonterra) sell to the domestic market for PD, when they can sell on the world market (and receive PW0) instead? The consumers will only demand the quantity Qd0 at that price, but the farmers can sell much more (Qs0), with the difference (between Qd0 and Qs0) being the quantity of exports.

Turning to measures of economic welfare, the consumer surplus (the difference between the maximum consumers are willing to pay, and what they actually pay) in this market (with the price PW0) is the area ABK, the producer surplus (the difference between the price the farmers receive and their costs) is the area KCD, and total welfare is the area ABCD.


Now, if the world price increases to PW1, farmers will receive a higher price but consumers must also pay a higher price (as in the article linked above). Domestic consumer demand will fall to Qd1, but farmers will be willing to supply more at the higher price (Qs1), and the quantity of exports increases (to the difference between Qd1 and Qs1). In terms of economic welfare, consumer surplus falls to the area AFG (consumers are made worse off, because they must now pay a higher price, and they buy less as a result), producer surplus increases to the area GHD (farmers receive a higher price, and they sell more milk), and total welfare increases to the area AFHD (society overall is made better off).
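To see these welfare changes with concrete numbers, here is a minimal sketch assuming hypothetical linear demand and supply curves (Qd = 10 - P and Qs = P, so the domestic-only equilibrium price would be 5); the numbers are purely illustrative, not estimates for the actual dairy market. The surplus areas are the usual triangles above and below the world price.

```python
# Illustrative linear curves (hypothetical numbers):
#   demand: Qd = 10 - P, supply: Qs = P
def welfare(world_price, demand_intercept=10):
    qd = demand_intercept - world_price  # domestic quantity demanded
    qs = world_price                     # quantity supplied by farmers
    cs = 0.5 * (demand_intercept - world_price) * qd  # consumer surplus triangle
    ps = 0.5 * world_price * qs                       # producer surplus triangle
    return {"Qd": qd, "Qs": qs, "exports": qs - qd,
            "CS": cs, "PS": ps, "total": cs + ps}

before = welfare(world_price=6)  # initial world price PW0
after = welfare(world_price=8)   # higher world price PW1

print(before)  # CS = 8.0, PS = 18.0, total = 26.0, exports = 2
print(after)   # CS = 2.0, PS = 32.0, total = 34.0, exports = 6
```

Running this shows exactly the pattern in the diagram: consumer surplus falls, producer surplus rises by more than consumers lose, exports increase, and total welfare rises - which is why the distributional question in the next paragraph matters even though society is better off "on average".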

However, it is worth unpicking that last point a little bit more. Although society as a whole is made better off (on average) by the increase in the world milk price, this ignores any distributional impacts. Domestic consumers are being made worse off, and on top of that the increase in the price of milk is likely to be regressive - since low income people probably spend a higher proportion of their income on dairy products than do high income people, the impact of the change in milk prices is likely to be more keenly felt by those on lower incomes. Changes in world dairy prices are often reported here as being good for New Zealand when they rise, and bad for New Zealand when they fall. It is worth thinking through all the consequences of these changes though, since the distributional impacts are clearly important.

One final point is that the above analysis assumes that Fonterra doesn't have the ability to influence the global price of milk. Clearly that assumption doesn't really hold. If milk prices rise and New Zealand farmers respond by supplying more milk, that additional supply will go into the export market, lowering the global milk price. For more on this (and related points) see my earlier post on the cobweb model and the NZ dairy market.