Friday, 19 January 2018

Professional tennis players are optimisers

With plenty of action in Melbourne at the Australian Open this week, it seems timely for me to write a post about tennis. I've already noted in an earlier post that tennis players appear to be loss averse. But are they optimising nonetheless? Do they make decisions that maximise their chances of winning (which would also be consistent with loss aversion)?

A recent paper by Jeffrey Ely (Northwestern University), Romain Gauriot (University of Sydney), and Lionel Page (Queensland University of Technology), published in the Journal of Economic Psychology (sorry, I don't see an ungated version), provides us with some answers. The authors look specifically at the risk behaviour of servers on first and second serves:
When serving, players can opt for risky serves which are more likely to fail but are harder to return if successful or more conservative serves which are less likely to fail but are also easier to return.
The key is whether players behave differently on first and second serves (more on that in a moment). However, simply comparing first and second serves is not so straightforward. The authors correctly note that there is:
...a potential caveat with raw data on tennis serve: it can be characterised by a selection problem. First serves are always observed while second serves are only observed when the first serve failed. This means that second serves may be more likely to be observed when serving is harder than usual either for natural reasons (e.g wind conditions), fitness (e.g. tiredness late in the match) or strategic reasons (e.g. opponent having learned how to return the player’s serve).
Their solution is quite ingenious:
To cleanly compare first and second serves one ideally wants to observe some random events which determines in a given situation whether a serve is going to be a first or a second serve. We argue that such a situation occurs when the ball hits the tape (top of the net) on the first serve. The impact with the net gives the ball an unpredictable trajectory leading the ball to be either in or out. It introduces the required randomness as a first serve follows a ball let which lands in the court and a second serve follows a ball which lands outside the court.
The serve immediately following a 'let serve' is randomly either a first serve (if the 'let serve' landed in) or a second serve (if the 'let serve' landed out). Ely et al. use a dataset from 3,188 matches, involving over 690,000 serves, of which 7,605 follow a 'let serve' and are the core sample of interest. They test four conditions which would imply that players are correctly maximising their chance of winning:

  1. That first serves are more risky than second serves (the probability that a serve lands in is lower for first serves);
  2. That first serves are harder to return than second serves (players are more likely to win the point on their first serve);
  3. Using two first serves is a suboptimal strategy (it leads to a lower probability of winning the point); and
  4. Using two second serves is also a suboptimal strategy.
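The four conditions can be illustrated with a quick back-of-the-envelope calculation. Everything numeric below is an assumption for illustration (the in-probabilities and win-probabilities are made up, not taken from the paper); the point is just how the conditions fit together:

```python
# Illustrative serve parameters (assumed, not from the paper):
# a risky serve lands in less often but is harder to return;
# a safe serve lands in more often but is easier to return.
SERVES = {
    "risky": {"p_in": 0.60, "p_win_if_in": 0.75},
    "safe":  {"p_in": 0.90, "p_win_if_in": 0.52},
}

def p_win_point(first, second):
    """Probability of winning the point with a given (first, second) serve pair.

    The server wins if the first serve lands in and they win the rally, or if
    the first serve faults and the second serve lands in and wins the rally
    (a second fault is a double fault, losing the point).
    """
    f, s = SERVES[first], SERVES[second]
    return (f["p_in"] * f["p_win_if_in"]
            + (1 - f["p_in"]) * s["p_in"] * s["p_win_if_in"])

# Evaluate all four first/second serve pairings.
strategies = {(a, b): p_win_point(a, b) for a in SERVES for b in SERVES}
best = max(strategies, key=strategies.get)
```

With these assumed numbers, the risky-then-safe pairing wins the point about 63.7% of the time, beating two risky serves (63.0%) and two safe serves (51.5%), which is exactly the pattern conditions 3 and 4 require.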
They find that:
...the serves from professional tennis players meet four conditions which make them consistent with the optimal strategy of risk taking between first and second serves. This result is observed both overall and when splitting the sample by gender and ranking.
So, it appears that professional tennis players are optimisers, which we should expect: they are trained professionals who have developed their skills in strategic play over many years.

Thursday, 18 January 2018

Why men earn more when they marry

Alexandra Killewald (Harvard) and Ian Lundberg (Princeton) wrote in the IUSSP online magazine in June last year about their paper published in the journal Demography:
On average, in the United States, men earn more per hour when married than when single, even after adjusting for differences such as age and education. However, despite the suggestive evidence that marriage may exert a causal effect on men’s wages, we argue that closer inspection reveals little evidence of such a link.
There are a number of theories as to why marriage might cause an increase in wages for married men, compared with unmarried men. The Nobel prize-winner Gary Becker suggested that it was because of specialisation - wives contributing to unpaid labour at home freed up men to concentrate more on paid labour. Alternatively, it might be because marriage leads to a change of motivation - men who have become primary breadwinners for a family are more motivated to work harder and earn more to provide for their family. A third explanation is discrimination - employers may see marriage as a signal of stability for a male worker, and be willing to offer better pay or more work to men who are married.

However, it is also possible that the causation works in the other direction - that men who earn more are more marriage-worthy suitors and therefore more likely to be able to convince a woman to marry them. Alternatively, maybe there is actually no causal relationship between marriage and wages at all, but there is some third variable that causes both increases in marriage and increases in wages. One example is simply maturity - more mature men are more likely to marry, and as men mature they earn more (due to increased work experience).

In their paper, Killewald and Lundberg aren't able to directly test which direction causality runs, but they do gather some reasonably convincing evidence that marriage doesn't cause increases in earnings for men, using data on 4,218 men from the National Longitudinal Study of Youth 79 (NLSY79). First, they show that there is an apparent marriage wage premium of 3.8%, similar to other studies.

Second, they show that the increase in earnings happens before marriage, which seems to rule out specialisation as an explanation, since specialisation cannot easily occur before marriage [*]. However, that result might strengthen the case for motivation as an explanation, since some men will anticipate future marriage and begin to work harder before marriage. It also suggests that reverse causation might be at play. That is, men who are earning more are more marriage-worthy.

Third, they show that shotgun marriages (those that were followed by a birth within seven months) are no different than other marriages in terms of effects. That would rule out the increase in earnings arising from anticipation of future marriage, since shotgun weddings are less anticipated [**].

Fourth, they compare the results for men who marry at different ages. They find that men who marry after age 26 have no marriage premium, so the marriage premium is entirely among younger men. This seems to rule out both motivation and discrimination as explanations, as well as reverse causality.

What does that leave? Killewald and Lundberg suggest that maturation is the most likely explanation, and that the observed relationship between marriage and earnings for men is therefore spurious. They conclude in their paper that:
These results are consistent with the claim that marriage is associated with wage gains simply because the timing of marriage is correlated with the transition to adulthood. It may also be consistent with delay of marriage until financial thresholds are met, which may especially affect younger men, who have lower average wages.
I guess sometimes even Gary Becker can be wrong.

*****

[*] When I was reading the paper, I thought they had missed the obvious point that cohabitation can precede marriage, but they include a separate control for cohabitation in their models. They also tested models in their robustness checks that "...described wage patterns relative to the start of a first coresidential partnership (either marriage or cohabitation)...", and "...we found results very similar to those in the main models...".

[**] It is worth noting, though, that shotgun weddings only occur for those men who are willing to marry. They will generally be more responsible, and hence more similar to those who plan ahead, than the less responsible men who knock up their girlfriends and then don't marry them. I'm unsure whether it biases their results, but it is certainly one explanation for why there are no differences between those two groups.

Wednesday, 17 January 2018

Dolphins, incentives, and unintended consequences

In ECON110, when I teach about incentives and unintended consequences in the first week of class, one of the tutorial examples involves a story about paleontologists in China, who offered to pay peasant villagers for each dinosaur fossil fragment they found. The villagers responded to the incentive by giving the paleontologists lots of fossil fragments. However, they obtained the fossil fragments by breaking larger fossils into smaller fragments. Incentives can (and often do) lead to unintended consequences.

Now, it turns out, at least one group of dolphins is responding in a very similar way to a similar set of incentives, as the Guardian reports:
At the Institute for Marine Mammal Studies in Mississippi, Kelly the dolphin has built up quite a reputation. All the dolphins at the institute are trained to hold onto any litter that falls into their pools until they see a trainer, when they can trade the litter for fish. In this way, the dolphins help to keep their pools clean.
Kelly has taken this task one step further. When people drop paper into the water she hides it under a rock at the bottom of the pool. The next time a trainer passes, she goes down to the rock and tears off a piece of paper to give to the trainer. After a fish reward, she goes back down, tears off another piece of paper, gets another fish, and so on. This behaviour is interesting because it shows that Kelly has a sense of the future and delays gratification. She has realised that a big piece of paper gets the same reward as a small piece and so delivers only small pieces to keep the extra food coming. She has, in effect, trained the humans.
Her cunning has not stopped there. One day, when a gull flew into her pool, she grabbed it, waited for the trainers and then gave it to them. It was a large bird and so the trainers gave her lots of fish. This seemed to give Kelly a new idea. The next time she was fed, instead of eating the last fish, she took it to the bottom of the pool and hid it under the rock where she had been hiding the paper. When no trainers were present, she brought the fish to the surface and used it to lure the gulls, which she would catch to get even more fish. After mastering this lucrative strategy, she taught her calf, who taught other calves, and so gull-baiting has become a hot game among the dolphins.
No one who creates an incentive will ever be as smart as all the people (or dolphins) out there scheming to take advantage of the incentives.

[HT: Marginal Revolution]

Monday, 15 January 2018

Should we worry about non-randomness in multiple choice answers?

The human brain is built (in part) to recognise and act on patterns - it is "one of the most fundamental cognitive skills we possess". Often, we can see patterns in what is essentially random noise. Another way of thinking about that is that humans are pretty bad at recognising true randomness (for example, see here or here), and perceive bias or patterns in random processes.

Now, consider a multiple choice test with four options for each question (A, B, C, or D). When most teachers prepare multiple choice tests, they probably aren't thinking about whether the answers will appear sufficiently random to students. That is, they probably aren't thinking about how students will perceive the sequence of answers, and whether that sequence will affect how students answer. That can lead to some interesting consequences. For instance, in a recent semester in ECON100, we had five answers of 'C' in a row (and of the 15 questions, 'C' was the correct answer for eight of them). I didn't even realise we had set that up until I was writing up the answers (just before the students sat the test), and it made me a little worried.

Why was I worried? Consider this 'trembling hand hypothesis': say that a student is very unsure of the answer, but they think it might be 'C'. They are also aware that there are four possible answers, and they believe that the teacher is likely to spread the answers around in a fairly random way. If this student had answered 'C' to the previous question, that might not be a problem. But if they had answered 'C' to the previous four questions, that might cause them to reconsider. Their uncertainty may then lead them to change their answer (or one of the earlier answers they are unsure of), even though 'C' might be the correct answer (or at least their preferred answer).
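As a rough sketch of how a trembling hand could cost marks, here is a toy model. All of the parameters (how many students know the answer outright, how strongly the unsure ones lean towards the right letter, and how much each extra repeat shakes them) are invented for illustration:

```python
# A back-of-the-envelope 'trembling hand' model. Assumed parameters:
# each student knows the answer outright with probability P_KNOW; otherwise
# they lean towards the correct letter with probability P_LEAN, but abandon
# that lean at rate TREMBLE for every immediately preceding question with
# the same correct letter.
P_KNOW, P_LEAN, TREMBLE = 0.5, 0.7, 0.15

def expected_score(answer_key):
    """Expected number of correct answers over a multiple-choice answer key."""
    score, streak, prev = 0.0, 0, None
    for letter in answer_key:
        streak = streak + 1 if letter == prev else 0
        # Unsure students get shakier the longer the streak runs.
        p_stick = P_LEAN * (1 - TREMBLE) ** streak
        score += P_KNOW + (1 - P_KNOW) * p_stick
        prev = letter
    return score

streaky = expected_score("AACCCCCBDA")   # five C's in a row
balanced = expected_score("ACBDACBDAC")  # no repeated letters
```

On these assumptions, a key with five 'C's in a row costs students about half a mark out of ten relative to a balanced key, entirely through the unsure students switching away from a correct answer they were leaning towards.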

Conversations with students after that ECON100 test with the many 'C' answers suggested to me that it probably didn't cause too many students to change their answers, but it did raise their anxiety levels. However, a new paper by Hubert János Kiss (Eötvös Loránd University, Hungary) & Adrienn Selei (Regional Centre For Energy Policy Research, Hungary), published in the journal Education Economics (sorry I don't see an ungated version online), looks at this in more depth.

Kiss and Selei use data from 153 students who sat one (or more) of five exams at Eötvös Loránd University over a two-week period. All five exams were for the same course (students could choose which exam time they attended, but the exam questions were different at each time). The authors ensured that half of students in each exam session had an exam paper where there were 'streaks' of correct answers that were the same letter, and half of the students had an exam paper with a more 'usual' distribution of answers. They then tested the differences between the two groups, and found:
Treatment has a significant effect at date 1 [for the first exam]. Points obtained in the multiple-choice test are 3 points lower in the treated group even if we control for the other variables. However, at the other dates and when looking at the overall data, treatment has no significant effect.
They then concluded that there was no treatment effect - that is, that the 'streaks' did not affect student performance. However, students in the first exam were significantly negatively affected (receiving about three fewer marks out of 100). Presumably, students talk to each other, and those in the first exam would have told other students about the unusual pattern of multiple choice answers they found (even though they didn't know the correct answers at that time). So, students in the later exams would probably have been primed not to be caught out by 'streaks' of answers. To be fair, the authors note this:
One may argue that after the first exam, students learned from their peers that streaks were not uncommon, causing the treatment effect to become insignificant later. Unfortunately, we cannot test if this is the case.
Indeed, but it doesn't seem unlikely. Kiss and Selei then go on to test whether students who give a particular letter answer to a question are more (or less) likely to give the same letter answer to the next question, and find that:
In half of the cases, the effect of having an identical previous correct answer (samecorrect1) is not significant at the usual significance levels... In the control treatments, we tend to observe a significant positive effect. Having two identical previous correct answers (samecorrect2) has a consistently negative impact on the probability of giving the correct answer to a given question, and this effect is significant... However, the effect of having three identical previous correct answers (samecorrect3) goes against our expectations, as in the cases where it has a significant effect, this effect is positive!
These results are a little unusual, but I think the problem is in the analysis. There are essentially two effects occurring here. First, good students are more likely to get the correct answer, regardless of whether it is the same letter answer as the previous question. Second, students may have a trembling hand when they observe a 'streak' of the same letter answer. Students who are willing to maintain a streak are likely to be the better students (since the not-so-good students eventually switch out of the streak due to the trembling hand, especially if the trembling hand effect increases with the length of the 'streak'). So, it doesn't at all surprise me that observing two previous same letter answers leads students on average to switch to an incorrect answer, but for that effect to become statistically insignificant for longer streaks - only the good students remain in the streak.

The authors control for student quality by using their results in essay questions, but that only adjusts for average (mean) differences between good and not-so-good students. It doesn't test whether the 'streaks' have different effects on different quality students. In any case, their sample size is probably too small to detect any difference in these effects.

All of which still leaves me wondering, should we worry about non-randomness in multiple choice answers? We'll have to wait for a more robust study for an answer to that question, and in the meantime, I'll make sure to check the distribution of answers to ECONS101 multiple choice questions. Just in case.

Saturday, 13 January 2018

Increases in the minimum wage are effectively paid by consumers, but they lower inequality anyway

This post includes some good news, and some bad news, about increases in the minimum wage. Tobias Renkin's (University of Zurich) job market paper, co-authored with Claire Montialoux (UC Berkeley), and Michael Siegenthaler (ETH Zurich), has the details. In the paper, Renkin et al. estimated the pass-through of increases in the minimum wage into grocery store prices. In other words, they estimate whether it is consumers (full pass-through), grocery stores (no pass-through), or some combination of the two, that faces the costs of increases in the minimum wage for grocery store employees. Why care about grocery stores? The authors explain:
Grocery stores employ a substantial number of minimum wage workers, and their marginal costs are therefore likely affected by minimum wage hikes. Moreover, groceries make up a large share of consumer expenditure, especially in poor households and grocery prices thus substantially affect the real incomes of workers.
Most studies of the minimum wage look at hospitality (e.g. restaurant) workers, so it's good to have an alternative. Their price data contains U.S. data on:
...weekly prices and quantities for 31 product categories sold at grocery and drug stores between January 2001 and December 2012. On average, the sample covers 1,916 stores and 60,600 products over this period... Stores are located in 530 counties, 41 states and belong to one of about 90 retail brands... The data covers 17% of US counties which are home to about 29% of the overall population.
Unlike previous studies though, the authors argue that the date at which grocery stores make their price adjustments is the date that the minimum wage increase is announced, rather than the date it takes effect (which is usually some time later). Their analysis seems to back this up, with statistically significant effects on and around the date of the legislation, and less so around the date of implementation, of minimum wage increases.

The key results are for the minimum wage elasticity of grocery prices. That is the percentage change in grocery prices divided by the percentage change in the minimum wage, and can be interpreted as the percentage increase in grocery prices that would result from a one percent increase in the minimum wage. The results suggest that this elasticity is around 0.02. That is, a 1 percent increase in the minimum wage would increase grocery prices by around 0.02%. That doesn't sound like a lot, but the authors explain that:
In our sample, the average minimum wage legislation increases the minimum wage by about 20% in several steps. Our estimates suggest that such an increase raises prices in grocery stores by about 0.4% over three months at the time when legislation is passed. By the time the minimum wage has actually risen to the level set in the new legislation, price adjustment is already long complete.
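The arithmetic behind those numbers is straightforward, and worth writing out. The 0.02 elasticity, the 20% average legislated increase, and the 1.1 pass-through estimate are the paper's figures; everything else below is just multiplication and division:

```python
# Minimum wage elasticity of grocery prices, as estimated in the paper.
price_elasticity = 0.02

# The average minimum wage legislation in the sample raises the minimum
# wage by about 20% (in several steps).
mw_change_pct = 20.0

# Percentage change in grocery prices
# = elasticity x percentage change in the minimum wage.
price_change_pct = price_elasticity * mw_change_pct  # 0.4%

# Pass-through compares the price response to the cost increase; a ratio
# of 1 means consumers bear all of it. Given the paper's pass-through of
# 1.1, the implied cost elasticity is price elasticity / pass-through.
pass_through = 1.1
cost_elasticity = price_elasticity / pass_through  # roughly 0.018
```

The implied cost elasticity of roughly 0.018 is close enough to the price elasticity of 0.02 that the pass-through estimate of 1.1 is statistically indistinguishable from full pass-through.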
The paper includes lots of robustness checks, which demonstrate that the result holds up well. To work out how much of the minimum wage increase is passed through to consumers, though, we also need to know the minimum wage elasticity of grocery store costs. That is, we need to know how much grocery store costs increase when the minimum wage increases by one percent, and then compare that to the minimum wage elasticity of grocery prices. The authors find that:
Our estimate for pass-through based on our baseline specification amounts to 1.1. We cannot reject the hypothesis that pass-through is equal to 1...
In other words, all of the increase in the minimum wage is passed through to consumers. Grocery stores essentially pay none of the cost of the minimum wage increase. That is the bad news.

That might make you wonder then, since low income households spend the highest proportion of their income on food, and minimum wage increases are passed through in higher prices to those households (and richer households, of course), is the minimum wage increase entirely eaten up by higher prices? The authors address this question as well, and find that:
Expressing the costs as a percentage of annual household incomes reveals the regressive impact of the price response. The costs make up about 0.2% of annual income for households in the poorest bracket, and just one tenth of that, i.e. 0.02% for households in the richest bracket.
As a percentage of household income, the increase in grocery store prices is disproportionately borne by lower income households. However, the good news is that:
As expected, minimum wages reduce income inequality... In terms of nominal gains, households in this bracket gain an additional 1.5% of household income over an inequality neutral policy. Taking into account the price response in grocery stores reduces the additional gains to 1.34% and further taking into account restaurants reduces the gains to 1.15%.
Even though lower income households spend a higher proportion of their income at grocery stores than higher income households, they also benefit proportionately more from the increase in the minimum wage, and the increase in household income is not all eaten up by increases in grocery prices.

[HT: Marginal Revolution]

Friday, 12 January 2018

Drinkers generally have almost no idea of their breath alcohol concentration

As I mentioned in yesterday's post, back in 2014 I, along with Matt Roskruge and some willing research assistants, spent five nights in November and December surveying every seventh person on the street in Hamilton CBD, and taking a breathalyser reading from every one of them that was willing (which was almost all of them). We also asked each of them to guess their breath alcohol concentration (BrAC) before we took the breathalyser reading, then we compared their guesses with the actual measured BrAC from the breathalyser. The results are now out in a new paper forthcoming in the journal Alcohol and Alcoholism (gated, but you can email me for an offprint), co-authored by myself, Matthew Roskruge (Massey University), Nic Droste and Peter Miller (both Deakin University).

The headline result is that drinkers generally have no idea of their breath alcohol concentration. The key figure is presented below. Each dot represents a combination of an estimated BrAC (what the person guessed it would be) and actual BrAC (what was measured on the breathalyser). If people were good at guessing their BrAC, these dots would lie close to the solid, 45-degree line, but as you can see, they are all over the place. Notice also that there are more dots above the 45-degree line than below it - drinkers were slightly more likely to overestimate their BrAC than to underestimate it. The dashed line is the trend, which at least shows that those with higher guesses were on average more intoxicated (higher BrAC), so the news in terms of guessing is not all bad (it wasn't all just random noise!).


You can also see that the dashed line crosses the 45-degree line at a level of about 487 mcg/L. At BrAC levels above that, drinkers on average tend to underestimate their BrAC, while below it they tend to overestimate. That is probably a good thing: if you want to prevent drink-driving, you want people close to the legal limit (now 250 mcg/L) to generally overestimate their BrAC, increasing the chances that they will be cautious and avoid driving (none of our sample who were over the legal limit admitted to an intention to drive).
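The crossing point is just where the fitted trend line meets the 45-degree line: if guessed BrAC = a + b × actual BrAC with slope b < 1, the two lines cross at a/(1 − b). The intercept and slope below are hypothetical values chosen only to reproduce the roughly-487 crossing, not the coefficients from our paper:

```python
def crossover(intercept, slope):
    """BrAC level where guessed = intercept + slope * actual meets the diagonal.

    Below this level drinkers overestimate on average (the trend line sits
    above the 45-degree line); above it they underestimate. Requires the
    trend line to be flatter than 45 degrees (slope < 1).
    """
    if slope >= 1:
        raise ValueError("requires a trend line flatter than 45 degrees")
    return intercept / (1 - slope)

# Hypothetical coefficients chosen to reproduce the ~487 mcg/L crossing:
print(crossover(intercept=292.2, slope=0.4))  # 487.0
```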

Finally, there was a good reason that we ran the survey in November and December of 2014. The legal BrAC limit for driving in New Zealand decreased from 400 mcg/L to 250 mcg/L on 1 December 2014. We hypothesised that drinkers had no idea how close they were to the previous limit, let alone the new limit. It turns out we were right. We didn't even bother presenting those results in the paper, since drinkers were so bad at guessing their BrAC.

However, if you don't know if you're over the limit or not, but you do know that the limit is decreasing, then perhaps that increases the chances that you take the less risky option of not driving, even if you've only had one or two drinks? Based on informal discussions we had with some of the research participants, it appears that might have been what was happening (although, it is also worth noting that there were many people who were completely unaware of the change in the legal BrAC limit, despite plenty of media coverage).

Thursday, 11 January 2018

The temporal gradient of intoxication in the night-time economy

It's pretty clear to most people who have been out late in the CBD of a city on a Friday or Saturday night that there are a lot of very intoxicated people about. But how intoxicated are people out and about on the streets at night? In a new journal article in the latest issue of the Journal of Studies on Alcohol and Drugs (sorry there isn't an ungated version, but you can email me for an offprint), Matthew Roskruge (Massey University), Nic Droste and Peter Miller (both Deakin University) and I set out to find out.

Essentially, we (along with some willing research assistants) spent five nights in November and December 2014 surveying every seventh person on the street, and taking a breathalyser reading from every one of them that was willing (which was almost all of them). The following diagram neatly summarises the results. The solid line is the moving average breath alcohol concentration (BrAC) at each point in time throughout the night. You can clearly see that it is pretty flat until about 9:30pm (the dinner crowd), then upward sloping until around midnight (the slow part of the night), then it flattens out again when things start to get busy. We referred to those changes in the average level of intoxication as the temporal gradient of intoxication. The other thing to note from the diagram is the difference between pre-drinkers (those who had something to drink before coming out to the CBD that night) and non-pre-drinkers. Pre-drinkers in the CBD at night are clearly more intoxicated, but their average BrAC levels out from about midnight, whereas the non-pre-drinkers continue to increase in intoxication throughout the night.
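For those curious, the solid line is just a moving average of the individual breathalyser readings ordered by interview time. The sketch below shows the generic calculation; the window size and the toy readings are illustrative, not our actual smoothing specification or data:

```python
def moving_average(readings, window=25):
    """Simple moving average over time-ordered BrAC readings.

    `readings` should already be sorted by interview time; the window (the
    number of neighbouring interviews averaged together) is an illustrative
    choice, not the one used in the paper.
    """
    if window > len(readings):
        raise ValueError("window larger than the number of readings")
    return [sum(readings[i:i + window]) / window
            for i in range(len(readings) - window + 1)]

# Toy readings: flat early evening, rising towards midnight, then levelling off.
brac = [50, 60, 55, 120, 180, 250, 310, 330, 340, 335]
smoothed = moving_average(brac, window=3)
```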


There's also an interesting difference between men and women, as shown in the second diagram below. Men continue to get more intoxicated (on average) throughout the night, but women's BrAC levels off from sometime around midnight. We don't have any firm details on why, but we can speculate that men are more likely to continue drinking at high levels when they are out on the town, but women are less likely to.


This research tells us a lot about what is happening in the CBD at night. The most surprising thing was the levelling off of average BrAC from about midnight, which from our observations was when the majority of pre-drinkers really started to arrive. There's clearly more work for us to do on this, especially around understanding the factors associated with pre-drinking, which we hope to look at in more depth in a follow-up study. We also have another paper out this week using the same dataset, and I'll blog about it in the next couple of days.

Tuesday, 9 January 2018

Households' fuel mix choices in Pakistan, and why policy change is necessary

As I mentioned in a post last June, indoor air pollution is a serious problem, killing an estimated 70,000 people in Pakistan and about 1.6 million people globally each year (see here). It is a particularly serious problem for developing countries, so understanding why households (or more accurately, the people in households who make the decisions) choose to use fuels that lead to high levels of indoor air pollution (solid fuels such as firewood, animal dung, and crop residues) is important.

To date, most studies of fuel use have treated fuel selection as independent. That is, those studies assume that each household decides whether or not to use a given fuel independently of its choices about other fuels. The worst of those studies only consider the fuel that households use the most, ignoring the other fuels that make up the household's fuel mix. Some better studies do look at fuel mixes, but the mixes investigated are pre-determined by the researchers, and therefore might not reflect the on-the-ground fuel mix selections of actual households.

In a new working paper, Muhammad Irfan, Gazi Hassan and I use household data from the 2013-14 Pakistan Living Standards Measurement Survey to look at the actual fuel mix selections of households and the non-price factors associated with fuel mix use. One important aspect of the paper is that we use cluster analysis to determine the fuel mixes that are used by households, and we identify seven fuel mix clusters, made up of different proportions of solid fuels (firewood, animal dung, and crop residues) and modern fuels (natural gas and LPG). Three of the fuel mixes use exclusively solid fuels (in different proportions), while the other four use a mixture of solid and modern fuels. For one of the latter four fuel mixes, households use on average 82% natural gas, 9.8% firewood, and small proportions of other fuels - we label this fuel mix as a 'clean' fuel mix, as it contains the highest proportion of the cleaner modern fuels.
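For readers unfamiliar with cluster analysis, the sketch below shows the basic idea applied to fuel shares: a plain k-means pass over invented share vectors. It is not our actual specification or data (we used the full survey and identified seven clusters; here k = 2 keeps the example readable):

```python
# Each household is a vector of fuel expenditure shares:
# (firewood, animal dung, natural gas). Figures invented for illustration.
households = [
    (0.80, 0.20, 0.00),  # solid-fuel-only households
    (0.70, 0.30, 0.00),
    (0.90, 0.10, 0.00),
    (0.10, 0.00, 0.90),  # gas-dominated, 'clean' mix households
    (0.15, 0.05, 0.80),
    (0.05, 0.05, 0.90),
]

def dist2(p, q):
    """Squared Euclidean distance between two share vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, centroids, iters=10):
    """Plain k-means: assign each point to its nearest centroid, move each
    centroid to the mean of its members, and repeat."""
    for _ in range(iters):
        labels = [min(range(len(centroids)),
                      key=lambda j: dist2(p, centroids[j]))
                  for p in points]
        for j in range(len(centroids)):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return labels

# Seed one starting centroid in each apparent group (a deliberately lucky
# start, to keep the toy example deterministic).
labels = kmeans(households, [list(households[0]), list(households[3])])
```

Each household ends up in the cluster whose average fuel mix it most resembles, which is the sense in which our fuel mixes are identified from the data rather than imposed in advance by the researchers.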

We then look at the factors associated with choosing each fuel mix in preference over the other six options. There are many comparisons, so I won't go through them in detail. To summarise though, households that have higher income and education, and those that are in urban areas, are more likely to choose the clean fuel mix, while agricultural households and larger households (those with more people) are more likely to choose the fuel mixes that are predominantly solid fuels.

Given that income is one of the determinants of clean fuel mix selection, it is reasonable to ask whether Pakistan (as a middle-income country) could simply grow out of using solid fuels. We look at this question directly and find that this is unlikely, especially in rural areas. The most feasible way for Pakistan to shift households to cleaner fuel mix use is to promote the take-up of piped natural gas connections, especially outside large urban areas. In other words, it requires a clear policy change to drive a shift away from solid fuel use and the indoor air pollution it generates.

Sunday, 7 January 2018

The optimal election strategy for conservative parties

A 2012 paper by Scott Eidelman (University of Arkansas) and co-authors, published in the journal Personality and Social Psychology Bulletin (ungated version here), demonstrates that low-effort thought is associated with political conservatism. Specifically, the paper describes four studies the authors undertook, but it was the first study that most caught my attention:
Study 1 was conducted in vivo at a local bar, with alcohol intoxication serving as a hindrance to effortful thinking; political attitudes of bar patrons were correlated with a measure of their blood alcohol content (BAC)...
To determine whether BAC was related to political conservatism, we regressed the 10-item conservatism index on participants’ self-identification as liberal/conservative, sex (0 = male; 1 = female), level of education, and BAC. Consistent with predictions, BAC was a significant predictor of political conservatism,... over and above ideological self-identification, sex, and education. 
In other words, people who were more intoxicated were more likely to agree with conservative statements than people who were less intoxicated, even after controlling for their self-identified affiliation as conservative or liberal. Of course, the authors note the main problem with this study:
Our data are correlational, and the possibility of reverse causality remains—political conservatives may drink more alcohol.
They measured how intoxicated people were when they left the bar, along with their agreement with the political statements at that time. Beyond the reverse causality the authors note (that conservatives may simply drink more), it is equally plausible that some third variable (social background or upbringing, say) affects both drinking behaviour and political views.
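The regression the authors describe - the conservatism index regressed on ideological self-identification, sex, education, and BAC - can be sketched with a small ordinary least squares routine. Only the variable list follows the paper; the data below are entirely fabricated for illustration.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting.
    Each row of X starts with a 1 for the intercept."""
    k = len(X[0])
    # build the augmented normal-equation system [X'X | X'y]
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))  # pivot
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    b = [0.0] * k
    for i in reversed(range(k)):  # back substitution
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# Fabricated bar patrons: ([1, self_id (1-9), sex, years of education, BAC], conservatism)
rows = [
    ([1, 3, 0, 16, 0.02], 3.1), ([1, 7, 1, 12, 0.10], 6.8),
    ([1, 5, 0, 14, 0.06], 5.0), ([1, 2, 1, 18, 0.01], 2.4),
    ([1, 6, 0, 13, 0.12], 6.9), ([1, 4, 1, 15, 0.05], 4.3),
]
X = [r for r, _ in rows]
y = [c for _, c in rows]
coefs = ols(X, y)  # coefs[4] is the BAC coefficient, holding the controls fixed
```

The BAC coefficient (`coefs[4]`) is the association between intoxication and conservatism conditional on the controls - though, as the authors themselves concede, nothing about such a regression establishes the direction of causality.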

The other studies in the paper are interesting too. The second study distracted half of the participants while they completed a survey, and the distracted participants were more likely to agree with conservative statements (and no less likely to agree with liberal statements). The third study placed half of the participants under greater time pressure, and those that were under more time pressure were more likely to agree with conservative statements (and no less likely to agree with liberal statements). Finally, the fourth study asked half of participants to put a lot of thought into their answers, and the other half not to think too hard about each question. That final study found that those who didn't think too hard were more likely to agree with conservative statements (and no less likely to agree with liberal statements).

So, what do we learn from this study? If you want more people to agree with conservative statements, get them drunk, distract them, put them under time pressure, and tell them not to think too hard. It shouldn't be too difficult to create a winning conservative election strategy from that, right?

[HT: Rolf Degen, via Marginal Revolution]

Friday, 5 January 2018

If you overindulged over the holidays, you might be able to blame your wine glass

There's a long-standing result in behavioural economics that demonstrates that the size of the plate affects the amount of food you eat (see here for a very brief summary, or here for the latest meta-analysis, although the latter is gated). If the same applies to wine glasses, then this new research paper (ungated), by Zorana Zupan, Alexandra Evans, Dominique-Laurent Couturier and Theresa Marteau (all University of Cambridge) and published in the Christmas issue of the British Medical Journal, might be a cause for concern.

The authors looked at changes in the average size of wine glasses in England over the period from 1700 to now, and found:
Wine glass capacity increased from 66 mL (standard deviation 21.69) in the 1700s to 417 mL (SD 170) in the 2000s, and the mean wine glass size in 2016-17 was 449 mL (SD 161).
That's almost a sevenfold increase in the size of wine glasses over the last 300 years. But do larger wine glasses make a difference?
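The size of the increase is easy to check from the reported means:

```python
# Mean wine glass capacities reported in the BMJ paper (mL)
cap_1700s = 66
cap_2000s = 417
cap_2016_17 = 449

fold_2000s = cap_2000s / cap_1700s     # increase from the 1700s to the 2000s
fold_2016 = cap_2016_17 / cap_1700s    # increase from the 1700s to 2016-17
print(round(fold_2000s, 1), round(fold_2016, 1))
```

That works out to roughly a 6.3-fold increase by the 2000s, and a 6.8-fold increase by 2016-17 - close to, but just short of, sevenfold.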

Another paper (ungated), by Rachel Pechey (University of Cambridge) et al. (including two co-authors of the above paper) and published in the journal BMC Public Health in 2016, shows that wine glass size affects drinking:
Daily wine volume purchased was 9.4 % (95 % CI: 1.9, 17.5) higher when sold in larger compared to standard-sized glasses.
The larger glasses were 370 mL, compared with the standard 300 mL glasses. Note that both of those sizes are smaller than the 449 mL mean glass size reported in the new study.

So, if you over-indulged over the holidays, you might be able to blame the size of your wine glass!

[HT: Marginal Revolution]

Thursday, 4 January 2018

Globalisation, economic union and the number of countries

Since long before I studied any economics, I've been fascinated by the number and distribution of countries. In particular, I wondered why the world went through a period of consolidation in the 19th Century (such as the unification of Germany and of Italy) where the number of independent states decreased, followed by a period after World War II where the number of independent states increased, which has continued unabated. There are theories in the fields of history and economic history that appear to explain one of these phenomena (the decrease in nations, or the increase), but until now I don't think there was a compelling theory that explained both.

A 2016 NBER Working Paper (revised in July 2017; ungated version here) by Gino Gancia, Giacomo Ponzetto, and Jaume Ventura (all Universitat Pompeu Fabra) seems to fill this gap nicely. They start with the observation (similar to mine above) that:
In 1820, the world was made up of 125 countries and long-distance trade was very modest— less than 5% of world output. Over the following century, international trade grew more than four-fold while the number of countries fell to merely 54. The interwar period witnessed a reversal of these trends: trade collapsed and the number of sovereign states rose to 76 by 1949. Until then, political and economic integration had proceeded together. But the end of World War II marked the beginning of a new era. After 1950 trade between nations has flourished to levels never seen before. But this time the process of economic integration has been accompanied by different changes in the world political structure. On the one hand, the number of countries has risen to a record high of more than 190, so that more trade is now accompanied by political fragmentation.
The paper is purely theoretical (though includes a narrative section that demonstrates that the theory is consistent with empirical observation), and relies on four assumptions:

  1. There are border effects - that is, trade is more costly between localities in different countries than between localities within the same country;
  2. There is preference heterogeneity over public services - each locality receives public services from the government, and each locality has different preferences for those services, but every locality in the same country must share the same undifferentiated bundle of public services;
  3. Government costs are subject to economies of scale - the larger the country, the lower the average cost of providing public services to each locality; and
  4. Government costs are subject to economies of scope - running a government at a single level (where the same government provides both public services and market regulation) is less costly than running a government at multiple levels (e.g. where public services are provided at the country level, but market regulation occurs at the level of an economic union that is larger than the country).
They interpret globalisation as lowering the costs of trade between localities, and (without going into the maths) they find that their model shows:
At early stages of globalization, the gains from trade are small and the benefit of creating an economic union does not justify the loss of economies of scope. Thus, a single-level governance structure is optimal. As globalization proceeds, localities remove borders by increasing the size of countries. The number of countries declines and the mismatch between each locality's ideal and actual provision of public services grows. Eventually, this mismatch is large enough to justify a move to a two-level governance structure. The world political structure shifts from a few large countries to many small countries within a world economic union. The two-level structure is more expensive, but it is nonetheless desirable because it facilitates trade and improves preference-matching in the provision of public services. Our result of a shift from a single-level to a two-level architecture of government is consistent with the seemingly opposite reactions of the world political structure to the first and second waves of globalization.
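The logic can be illustrated with a toy cost comparison. To be clear, this is not the paper's model: every functional form and parameter below is invented, and serves only to show how the four assumptions interact.

```python
L = 20  # total localities in the toy world

def avg_cost(n, trade_gain, union=False):
    """Toy per-locality cost of splitting L localities into countries of size n.
    - admin: a fixed governing cost F shared across the country (economies of scale)
    - mismatch: one-size-fits-all public services fit worse as the country grows
    - trade: cross-border gains from trade are forgone unless a union removes borders
    - overhead: a two-level structure (countries + union) loses economies of scope"""
    F, G, mismatch_rate = 10.0, 1.0, 0.5
    admin = F / n
    mismatch = mismatch_rate * (n - 1)
    trade_loss = 0.0 if union else trade_gain * (L - n) / L
    overhead = G if union else 0.0
    return admin + mismatch + trade_loss + overhead

def best_structure(trade_gain, sizes=(1, 2, 4, 5, 10, 20)):
    """Return the (cost, country size, union?) that minimises per-locality cost."""
    return min((avg_cost(n, trade_gain, u), n, u)
               for n in sizes for u in (False, True))

low = best_structure(0.5)    # small gains from trade
high = best_structure(10.0)  # large gains from trade
```

With these made-up numbers, when the gains from trade are small the cheapest arrangement is mid-sized countries with no union (the union's overhead isn't worth it). When the gains are large, the cheapest single-level option would be one enormous country, but a two-level union of small countries is cheaper still - the same shift from a few large countries to many small countries within an economic union that the paper derives.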
They extend the model to consider the development of empires with colonies, and wars (between colonial powers and colonies, but not between colonial powers seeking to take over each other's colonies or home territories - these latter two cases would be useful extensions of the theoretical model). They also show that their model can explain stylised facts about the expansion of the United States to the west over time, and the breakup of new territories into smaller states.

Like the last two papers I blogged about earlier in the week (see here and here), I am interested in the business implications as well. The four assumptions (economies of scale, economies of scope, border effects, and preference heterogeneity) might apply to small business units just as much as to localities. We've been observing an increase in large technology firms (or monopoly firms more generally) over time. Will we soon see a reversal into smaller firms?

One last point of interest is that the paper pointed me to the Interactive World History Atlas at geacron.com, which is very cool. It draws maps of countries and empires for each year from 3000 B.C. to the present. It's not very interesting for New Zealand (the only points of change it picks up are in 1840 and 1908), but for other parts of the world (especially Europe, the Americas, Asia and Africa), it's fascinating.

[HT: Marginal Revolution, back in August last year]

Wednesday, 3 January 2018

Social contracts, nation-building, and war

Social contract theory suggests that people have willingly given up some of their freedoms to the State and in exchange, the State agrees to protect their remaining rights. One conception of this is that we give up the freedom to retain all of our income (i.e. we grant the State the right to tax us), and in exchange the State protects our life, liberty and property. We submit to this because the State can provide for our collective needs things that none of us could provide individually or through exchange with others - in other words, public goods.

So, I found this NBER Working Paper from last year (ungated version here) by Alberto Alesina (Harvard), Bryony Reich (Northwestern) and Alessandro Riboni (Ecole Polytechnique) really interesting. The paper is very theoretical and maths-heavy. However, it was of interest to me nonetheless because it explains the process of nation-building that occurred progressively as nation states and their armies increased in size, and why the nation states moved increasingly towards providing public goods. The authors write:
Mass warfare favored the transformation from the ancient regimes (based purely on rent extraction) to modern nation states in two ways. First, the state became a provider of mass public goods in order to buy the support of the population. Second, the state developed policies geared towards increasing national identity and nationalism...
The citizens face punishment from illegally avoiding conscription and the soldiers from defecting or cowardice; however it is hard to imagine that wars can be won by soldiers who are fighting only to avoid punishment and citizens who are uncooperative. So, when war became a mass enterprise, the elites had to reduce their rents and spend on public goods which were useful to the populations.
Providing public goods was one way for the elites to ensure that citizens would cooperate with conscription. Where does nation-building come in? The authors explain that:
Besides promising monetary payoffs, the elites have two means to increase war effort. One is to provide public goods and services in the home country so that soldiers would lose a lot if the war is lost. This would lead to investment in "peaceful" public goods and contribute to state building from a different angle relative to the need to collect taxes to buy guns. Second, the elite may need to homogenize or indoctrinate the citizens to make them appreciate victory and dislike living under foreign occupation.
Engaging in nation-building instilled in the citizens a sense of nationhood and a dislike for other nations, thereby increasing war effort. Alesina et al. conclude with:
A key implication of our analysis is that as warfare technologies led to a military revolution with larger armies, the elite had to change the way it motivated the soldiers: from the loots of wars for relatively small armies of mercenaries to public goods and nation building and/or nationalism for large conscripted armies.
Like the paper I discussed on Monday about queens, it made me wonder if there are business implications that can be drawn from this paper. Can CEOs induce more effort from their workers by providing them with public goods and instilling in them a dislike for the competitors? Is this already what tech firms are doing when they install ping-pong tables, and take employees away for all-staff conferences?

[HT: Marginal Revolution, back in May last year]

Monday, 1 January 2018

The warmongering queens of Europe

I just finished reading an interesting NBER Working Paper by Oeindrila Dube (University of Chicago) and S.P. Harish (McGill University), simply entitled "Queens" (ungated version here). In the paper, the authors essentially investigate whether queens are more likely than kings to engage in war.

Why would that be? Dube and Harish suggest a couple of reasons:
We examine two potential accounts of why female rulers may have increased war participation. The first account suggests that queens may have been perceived as easy targets of attack. This perception—accurate or not—could have led queens to participate more in wars as a consequence of getting attacked by others.
The second account builds on the importance of state capacity. During this period, wars were fought primarily with the aim of expanding territory and economic power... Wars of this nature demanded financing, spurring states to develop a broader fiscal reach... As a result, states undertaking wars required greater capacity. Queenly reigns may have had greater capacity than kingly reigns for two reasons, both of which themselves reflect prevailing gender norms from this period. First, queenly reigns may have been able to secure more military alliances. While marriage brought alliances for both male and female monarchs, male spouses were typically more involved with the military of their home countries (than female spouses)...
Second, queens often enlisted their spouses to help them rule, in ways that kings were less inclined to do with their spouses—an asymmetry that again reflects gender identity norms. For example, queens put their spouses in charge of the military or economic reforms, which effectively meant there were two monarchs overseeing state affairs, as compared to one. This greater spousal division of labor may also have enhanced the capacity of queenly reigns, enabling queens to pursue more aggressive war policies.
Dube and Harish constructed a dataset of 193 reigns in 18 polities (essentially states) in Europe over the period 1480-1913, where at least one queen had reigned in each polity (19% of the decade-polity dataset had queens reigning). In their headline results, they find that:
...polities ruled by queens were 27% more likely to participate in inter-state conflicts, compared to polities ruled by kings. These estimates are economically important, representing a doubling of mean war participation over this period. In contrast, we find that queens were no more likely to experience civil wars or other types of internal instability.
In terms of which account of why queens are more likely to engage in wars, they find that:
...among married monarchs, queens were more likely than kings to fight as aggressors, and to fight alongside allies. Among unmarried monarchs, queens were more likely than kings to fight in wars in which their polity was attacked. These results provide some support for the idea that queens were targeted for attack: Unmarried queens, specifically, may have been perceived as weak and attacked by others. But this did not hold true for married queens who instead participated as aggressors. The results are consistent with the idea that the reigns of married queens had greater capacity to carry out war, and asymmetries generated by gender identity norms played a role in shaping this outcome...
In other words, unmarried queens were more likely to be perceived as weak and attacked, while married queens were more likely to be the aggressor. Good reason for newly crowned queens to quickly find a quality husband (preferably with military experience)?

When reading the paper, I was a little worried about the exclusion of polities that never had a queen reign (which could lead to biased results), but Dube and Harish show that including king-only polities doesn't affect the results drastically, alongside a battery of other robustness checks.

It's an interesting paper, and it makes me wonder: what, if anything, might it tell us about female CEOs in modern times, particularly in family-owned businesses, where succession is most similar to a monarchy?

[HT: Marginal Revolution, back in April last year]