Wednesday, 30 September 2015

The behavioural economics of choosing a flag

I've been mostly steering clear of the flag debate (I've got better things to do with my study leave time). However, Will Hayward (from the Psychology Department at the University of Auckland) wrote a really interesting article in the New Zealand Herald a couple of weeks ago that piqued (no, not a reference to Red Peak) my interest. Hayward wrote:
But psychologists know that our preferences are affected by lots of things that have nothing to do with our "taste."
For example, we like things more if we think they cost more; I can make you love the red wine I'm serving by telling you it cost $100 a bottle (you'll like it more than the same wine in a $10 bottle).
And it is not just how much they cost - it is also who owns them.
You like things you own more than things you don't own (as you know if you've ever sold a house; you can't believe people don't want to offer fair value on your great place, but then you can't believe how greedy people are when you visit an open home).
Finally, things become more appealing as they become more familiar.
Despite love representing the match of two soul-mates, a really good predictor of a successful match is how far two people live from each other.
Neither cost nor ownership nor familiarity represent inherent values or preferences; instead, they show that our minds continually change their interpretation of what we like as an object changes its relationship to us...
My guess is that whichever one of the tea towels is chosen, we'll rally around the original flag, some people because they always liked it and some people because they prefer Red Peak or another alternative.
What Hayward was talking about could also be interpreted as the behavioural economics of choosing a flag, so I thought it would be worthwhile to follow up by more explicitly making the links between choosing a flag and behavioural economics. In particular, I want to focus on status quo bias and loss aversion.

First, status quo bias: If we change flag, then that entails both a loss (the current flag) and a gain (the new flag). On the surface that might seem like a straight swap, one flag for another, so whichever flag we like more should take preference, even if we like it only a little more than the others. As an example, let's say the current flag provides you with 10 units of utility (a measure of satisfaction, or happiness), and one of the other flags (which I'll refer to as the "alternative flag", so I don't have to single out one of them) provides you with 15 units of utility. You'd choose the alternative flag, right?

It's not that simple. Due to loss aversion, we value losses much more than equivalent gains (in other words, we like to avoid losses much more than we like to capture equivalent gains). According to Nobel prize-winner Daniel Kahneman and Amos Tversky, we value losses roughly twice as much as we value gains (this is one of the insights of what they call prospect theory). So, in the example above, choosing the alternative flag would entail a gain of 15 units of utility, but a loss of 20 units of utility (from giving up the current flag). This would make you worse off, so you would choose to stick with the current flag (even though the alternative flag is 'better').
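As a quick sketch of that arithmetic (the function name and the utility numbers are just the illustrative ones from the example above; the 2-to-1 weighting is Kahneman and Tversky's rough loss-aversion coefficient):

```python
# Sketch of the flag example: switching means gaining the alternative
# flag's utility but losing the current flag's utility, with losses
# weighted twice as heavily as gains (loss aversion).
LOSS_AVERSION = 2.0

def net_utility_of_switching(utility_current, utility_alternative,
                             loss_aversion=LOSS_AVERSION):
    gain = utility_alternative              # what the new flag provides
    loss = loss_aversion * utility_current  # giving up the old flag hurts double
    return gain - loss

# Current flag worth 10 units, alternative worth 15 units:
print(net_utility_of_switching(10, 15))  # -5.0, so you stick with the status quo
```

On these numbers, the alternative flag needs to be worth more than twice the current flag (more than 20 units here) before switching looks worthwhile.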

It is this unwillingness to give up what we already have (even when something else would be slightly better), that gives rise to status quo bias. We tend to stick to what we already have, unless the alternative is much better. As Dan Ariely points out in Predictably Irrational, status quo bias keeps us in bad relationships, keeps us in jobs we dislike, and keeps us investing in failing projects. It's also closely related to the endowment effect.

What this all means is that it's highly likely that, at the end of the referendum process, we'll be left with the current flag. That's because it appears that for most people none of the alternative flags are substantially better than the current flag. Sticking with the current flag is what the polls are suggesting, including one poll by Waikato PhD student Alex Kravchenko (which you can still contribute to here).

So unless one of the other flag options suddenly starts to capture our interest, enough to overcome our loss aversion, then at the end of March next year we'll still be flying this:



Monday, 28 September 2015

All graduate students and econometrics students should read this book

Every now and again I read a book that has a very simple point which is heavily overstated and makes me wonder if I am somehow missing something important. This is what I found when reading "The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives" by Stephen Ziliak and Deirdre McCloskey. Anyway, I'll relate what I took away from the book, and hopefully someone will enlighten me if I somehow missed the point entirely.

Ziliak and McCloskey's overall premise is that generations of economists (and psychologists, medical researchers, and others) have been misusing statistics by relying solely on statistical significance, while ignoring practical significance (what they refer to as "oomph"). They dismiss the sole focus on statistical significance as "sizeless science" (and sometimes as "sign econometrics", where the researcher is only concerned about whether an effect is positive or negative, and not how big the effect is). That is certainly an important point, and one I am sure I am as guilty of as many others (in my blog as well as my writing, I don't doubt).

The remedy appears to be a re-focus on meaningful interpretation of the size of effects, which is hard to argue against. It's all very well for us to note when coefficients are statistically significant, but whether they are economically significant is the more important question. Oftentimes we find variables that are statistically significant but make essentially no difference at all (for one example from my own work, see the (lack of) effect of social media on election results here, or ungated here).

On the flip side, variables that are not statistically significant might still be important, with the statistical test simply being under-powered to detect the effect. Some of my most recent work, looking at the effect of a pastoral care intervention on pass rates in ECON100, illustrates this (I'll blog on that work at a later time).
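As a toy illustration of the first point (my own example, not one from the book): with a large enough sample, even a practically negligible effect shows up as 'statistically significant'. A minimal simulation, assuming numpy is available:

```python
import math
import numpy as np

# Simulate a huge sample where x has a tiny (practically negligible) effect on y.
rng = np.random.default_rng(42)
n = 1_000_000
x = rng.normal(size=n)
y = 0.005 * x + rng.normal(size=n)  # true slope: 0.005 - almost no "oomph"

# OLS slope, its standard error, and a normal-approximation p-value
slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
se = math.sqrt(np.var(y - slope * x) / (n * np.var(x)))
p_value = math.erfc(abs(slope / se) / math.sqrt(2))
print(f"slope = {slope:.4f}, p = {p_value:.2g}")
```

The p-value comes out far below 0.05, but the effect size is trivial: exactly the 'sizeless science' trap Ziliak and McCloskey warn about.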

However, I don't think it takes 250+ pages to make these points. I remember, during my graduate studies, really struggling through some McCloskey papers, and this book wasn't a whole lot easier to read. I also found the book rather repetitive. As one extreme example, the same quote from Karl Pearson appears on pages 198-199, and again one paragraph later.

Researchers who are fans of Bayesian analysis will no doubt find a lot of support for their approach in the book, although Ziliak and McCloskey state up front that this was not their intent. The authors could also have pointed us more directly to meta-analysis, which is growing in importance in economics as it already has in other fields such as public health. Indeed my colleague Jacques Poot has done a great amount of work on meta-analysis in economics (see here for example).

I was a little confused by the authors' preference for confidence intervals. To my mind, this suffers from the same problems as what they refer to as asterisk economics. Or worse, since you set only one level for the confidence interval, whereas with asterisks at least you are showing multiple levels of significance (though, admittedly, not the bounds). I'm unconvinced that confidence intervals will make authors interpret effect sizes more carefully (which is what Ziliak and McCloskey argue).

Anyway, despite the flaws this is definitely a book that all graduate students in economics should read, as well as undergraduate students of econometrics. See here and here (pdf) for some other reviews of the book. Finally, I give the last word on p-values here to xkcd:



Saturday, 26 September 2015

Adjuncts are better teachers than tenured professors in introductory courses

Over the past decade (or more) there has been a rise in the number of contingent (or casual) teaching positions at universities, in New Zealand and around the world. They come in various forms, and have various titles like teaching fellows, adjunct professors, and so on. What they have in common is that, most of the time, they are positions that only involve teaching, and usually only undergraduate classes. That is, there is no explicit research component to the position (although some of those working in these positions may be completing doctoral studies, etc.). They are also fixed-term casual positions, with contracts that may or may not be renewed.

Given that these positions are fixed-term, with no commitment from the institution to renew them, they are quite distinct from continuing (or tenured, or tenure-track) positions. In particular, if a fixed-term teacher does a poor job of teaching, then they may not be offered future teaching contracts. In contrast, tenured (or continuing) faculty members don't face the same pressure for teaching quality. So, one would expect that contingent teaching faculty would, on average, be better teachers than faculty who have continuing contracts (or those who are tenured).

A forthcoming paper in the Review of Economics and Statistics (ungated earlier version here) by David Figlio and Morton Schapiro (both Northwestern University) and Kevin Soter (Greatest Good) tests whether this is the case. Specifically, they attempt to answer the question: "Do undergraduates taught by contingent faculty learn as much as those taught by tenure track/tenured faculty?".

The authors use data on eight years (2001-2008) of freshmen students at Northwestern (over 15,000 observations in total), examining the probability that these students take another class in the same subject, and their grades in that subsequent class. There are a number of interesting findings, starting with:
...a contingent faculty member increases the likelihood that a student will take another class in the subject by 7.3 percentage points (9.3 percentage points when limited to classes outside the student's intended major) and increases the grade earned in that subsequent class by slightly more than one-tenth of a grade point (with a somewhat greater impact outside the intended major).
In their preferred specification, the effect on grades is about half as large as reported in that quote, but is still statistically significant. They also look at what is driving the results:
...the most outstanding contingent faculty members and most outstanding tenure track/tenured faculty members perform essentially identically... But the bottom quarter of the tenure track/tenured faculty have lower value added than the bottom quarter of the contingent faculty...
In other words, the best teachers (tenured or contingent) are similar in their effect on student learning, but the worst tenured professors are much worse than the worst contingent teachers. This supports the expectation I laid out at the beginning of the post. The authors also find that the differences between contingent and tenured faculty are largest and most significant among faculty with the most experience, for the subjects with the toughest grading standards, and for the subjects that attract the highest-quality students (as measured by SAT scores).

Finally, they investigated whether (among the tenured faculty) the most outstanding researchers are better or worse teachers, and:
...find no difference in teaching outcomes compared to tenured faculty who have not received the recognition [for research excellence]...
Based on my experience I find that last bit a little surprising (or not - they are comparing good researchers with truly excellent researchers), but the rest makes a lot of sense. Teaching and research both require investment in terms of time and effort. While some may argue that teaching and research are complementary, I'm not convinced. I think they're substitutes for most (but not all) faculty (I'll cover more on that in a post in the next week). Faculty who do teaching but little or no research can be expected to put more effort into teaching than faculty who do a lot of (particularly high quality, time intensive) research. So, contingent teaching faculty should do a better job of teaching on average. These results seem to support that.

Does this mean that we should replace a bunch of the most experienced tenured professors with fixed-term teaching fellows? Not necessarily. The research only looked at teaching at the introductory level, and doesn't tell us much about the quality of teaching at higher (especially graduate) levels, where you might expect research-oriented faculty to have an advantage. It does tell us that choosing the best teachers for introductory courses is important in attracting the right students into future courses, though.

[HT: Marginal Revolution]

Thursday, 24 September 2015

The gains from trade... Chicken sandwich edition

Let's say you wanted to make a chicken sandwich, yourself, from scratch. I mean really from scratch - making your own bread, your own butter, your own cheese, growing your own salad, killing and cooking the chicken, etc. How long do you think it would take you? And how much do you think it would cost?

This guy has done it, and it took him only six months, at the very reasonable cost of $1500:


Of course, we don't pay $1500 and wait six months for chicken sandwiches, and the fact that we don't have to nicely illustrates the gains from trade (between individuals and firms). I can teach students and get paid for that, and trade that money to someone else for a chicken sandwich. That person hasn't done all the work to prepare the sandwich themselves either. The sandwich seller trades some of my money with others who make the bread, make the butter, make the cheese, and so on. And so, instead of paying $1500 for a chicken sandwich, I can get one for easily less than $10. I could buy 150 chicken sandwiches for less than it would cost me to make one myself (not to mention all of the time savings as well). And all because of trade between individuals and firms.

Wednesday, 23 September 2015

Try this: Can you maximise total surplus?

The latest issue of the Journal of Economic Education has a short one-pager (gated, but only one page long so the 'first page preview' should allow you to read all of it) by Adalbert Mayer (Washington College), pointing to this online supply-and-demand visualisation. It allows you to run a simulation model of supply and demand for some good, and can be used to demonstrate how the market allocates goods efficiently (in the sense that it maximises total surplus, or total welfare - the sum of consumer surplus and producer surplus).

See how well you can do. I suggest that after you start the simulation, click on "Select Mode" until it says "Market" and run that simulation (by repeatedly clicking "New Random Proposal"). That will give you a sense of how well the market allocates goods, and how high total surplus can be. Then use "Select Mode" to get back to "Social Planner Knowing" and try allocating the goods between buyers and sellers yourself, and see how close you can get to the market's total surplus.

Finally, for an extra challenge, try the mode "Social Planner Blind". It's pretty scary - it will show you that if social planners don't know the willingness-to-pay of buyers, or the costs of sellers, then allocating goods in an efficient (welfare-maximising) way is next to impossible.

Enjoy!

Monday, 21 September 2015

Students should work less, but not stop working entirely

Lecturers often complain that their students don't spend enough time on readings, homework, assignments, etc. Of course, students are just optimising their time between study, work and leisure. And students are working more now than ever before (if they can find work), which makes it difficult for them to complete their studies. Would it be better if students worked less? Probably. But would it be better if students didn't work at all?

A paper from 2013 published in the Journal of Education and Work (sorry, I don't see an ungated version anywhere), by Stephane Moulin (Université de Montréal), Pierre Doray (Université du Québec à Montréal), and Benoit Laplante and Maria Constanza Street (both Institut national de la recherche scientifique) provides some answers. The authors use Canadian panel data from 2000 to 2007, and investigate whether the number of hours of paid work affects the tendency for students to drop out of a first university programme. They find:
that there is a critical threshold of number of hours worked, beyond which negative effects in terms of non-completion start to appear... More specifically, working 25 [hours] or more per week increases the hazard of dropping out.
This fits well with the prior literature, which tends to suggest that there is a U-shaped relationship between work intensity and completing university. That is, the students who work the most, and the students who work the least (or not at all), are the most likely not to complete their degree programme. The explanation for this might be that students who work the most are substituting work time for study time and perform worse academically as a result. Students who work the least might be spending too much time on leisure, or maybe they are just not hard workers (which affects both their ability to gain work as well as their ability to perform academically). Authors of other studies have also suggested that working might be complementary to study (if work skills or general work ethic are transferable to university), which in combination with substitution would lead to a U-shaped relationship between work and study completion.

I have one gripe with the paper, and that is the causal interpretation that they attach to their findings. They use panel data, and they compare work in prior periods with current study outcomes. However, students who anticipate that they are struggling to pass may increase their work intensity, and this effect might be long-lasting. Also, students who are facing financial difficulty may do worse at university (because of stress, etc.) as well as working more. So, I don't think that causality is as straightforward as the authors suggest.

Having said that, 25 hours appears to be a critical value that is reasonably consistent across a number of studies. If you are working more than that, you might want to reconsider your options (I mean your work options, not your study options!).

Sunday, 20 September 2015

Two books on military and economic history

Earlier in the year I read two books that related to military history, and I've been meaning to write a little bit about them. I've always enjoyed reading about historical events, especially ancient and medieval history. So, I was particularly excited about the first book, Castles, Battles and Bombs: How Economics Explains Military History, by Jurgen Brauer and Hubert van Tuyll. I think I bought this book through a random Amazon recommendation. The second book was Blackett's War: The Men Who Defeated the Nazi U-Boats and Brought Science to the Art of Warfare, by Stephen Budiansky. I forget who suggested this book to me, but I'm glad they did.

I've found a lot of history books to be pretty dry. On the other hand, being able to apply economic theory to history is usually pretty cool. Castles, Battles and Bombs belongs to the dry category, unfortunately. The authors set out to focus on a number of different economic theories, and use one historical case study to illustrate each one: castle building in the High Middle Ages to illustrate opportunity costs; the Condottieri in the Renaissance to illustrate principal-agent problems; the decision to offer battle in the 17th and 18th Centuries to illustrate marginal analysis; the American Civil War to illustrate information asymmetry; strategic bombing in World War II to illustrate diminishing marginal returns; and nuclear armament in France to illustrate substitution. Don't get me wrong - there are lots of really interesting bits, and I got a lot of inspiration for examples to use in next year's ECON110 class. However, the writing probably isn't for a generalist audience - it kind of feels like six journal articles pulled together into a book, with an introduction and concluding chapter to weave it all together.

One part in particular is interesting though, in the chapter on diminishing marginal returns in World War II bombing. Diminishing returns occur when each additional unit of input leads to a smaller amount of additional output. Using data from the USSBS (United States Strategic Bombing Survey), Brauer and van Tuyll show that (to take some selected data from their Table 6.3), in September 1944, 17,615 tons of bombs dropped on Germany led to a reduction in railroad movements of 3,519 ton-kms, while in December 1944, 61,392 tons of bombs reduced railroad movements by 6,377 ton-kms. So, a 250% increase in bombing led to an 81% increase in damage. Their data are more extensive than that, but that gives you a flavour of how quickly the returns from bombing diminished.
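Those percentages are easy to verify from the quoted Table 6.3 figures:

```python
# Selected USSBS figures quoted above (Brauer and van Tuyll, Table 6.3)
bombs_sep, damage_sep = 17615, 3519   # Sept 1944: tons of bombs, ton-km reduction
bombs_dec, damage_dec = 61392, 6377   # Dec 1944

bomb_increase = (bombs_dec - bombs_sep) / bombs_sep * 100      # about 250%
damage_increase = (damage_dec - damage_sep) / damage_sep * 100 # about 81%
print(f"{bomb_increase:.0f}% more bombs, {damage_increase:.0f}% more damage")

# Equivalently, the average damage per ton of bombs roughly halved
# between the two months - diminishing returns in action:
print(damage_sep / bombs_sep, damage_dec / bombs_dec)
```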

Blackett's War also covers the bombing campaign in World War II, but from an operations research point of view. Operations research is the application of advanced analytical methods (i.e. mathematical and statistical analysis) to decision making. Obviously this is similar to economics, and a lot of economists are involved in operations research today. In the book, Budiansky extensively covers the genesis of operations research, and does so in a way that is both interesting and accessible to ordinary readers. The story follows the physicist Patrick Blackett and a cast of (mostly) British and American scientists, and their contributions to World War II, particularly focusing on how they contributed to the U-Boat war in the North Atlantic.

The book also covers the ineffectiveness of strategic bombing. Budiansky writes:
Blackett talked to Bernal around the same time and did a few pages of penciled arithmetic. In the ten months from August 1940 to June 1941, the Luftwaffe had dropped 50,000 tons of bombs, an average of 5,000 a month. The number of civilian deaths was 40,000, or 4,000 a month; that worked out to 0.8 persons killed per ton of bombs...
He also calculated that the total decline in industrial production from the air attacks on Britain was less than 1 percent: factory output for the country in April 1941 had been affected more by the Easter holiday than by the German bombs that fell during the month.
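Blackett's penciled arithmetic is simple enough to reproduce from the figures quoted above:

```python
# Luftwaffe bombing of Britain, August 1940 to June 1941 (figures quoted above)
months = 10
tons_of_bombs = 50_000
civilian_deaths = 40_000

print(tons_of_bombs / months)           # 5000.0 tons of bombs a month
print(civilian_deaths / months)         # 4000.0 civilian deaths a month
print(civilian_deaths / tons_of_bombs)  # 0.8 persons killed per ton of bombs
```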
Blackett argued against strategic bombing, but his arguments fell on deaf ears. However, operations research eventually prevailed, and air attacks on U-boats in the North Atlantic became much more effective as a result.

Overall, Budiansky's narrative style is easy to follow and the choice of anecdotes brings the time period to life. I highly recommend Blackett's War, and I'll certainly be looking to populate my bookshelves with some of Stephen Budiansky's other books, to read in the future.

Saturday, 19 September 2015

Not all countries have a negative view of Chinese real estate investment

I saw this in Lisbon Airport the other week, while travelling back to New Zealand:

I can't imagine a similar sign in the arrivals area at Auckland Airport going down too well. But I guess when your real estate market has gone through this:


... any source of increased demand for housing, which is going to push house prices back up, is a good thing. This is especially important when you have combined government, corporate, and household debt that is about 360% of GDP.

Are Chinese buyers affecting the Portuguese market? It's hard to say, but Property Wire reports prices increasing for the first time since at least 2010, so maybe they are. Although Property Wire puts the increase down to economic recovery and increased confidence, rather than any effect of foreign buyers.

Friday, 18 September 2015

Why is a university like a night club?

The New York Times Magazine last week had an excellent report by Adam Davidson on college tuition in the U.S. There are lots of good parts to it (I recommend reading it all), but I want to focus on a few bits in particular, starting with this:
But probably the single most important factor behind the rise in tuition is one that few other businesses share: Students are not just customers; they are also an integral part of the core product. When considering a school, potential students and their parents often look first at the characteristics of past classes: test scores, grade-point averages, post-college earnings, as well as ethnic and gender mixes. School admissions officers call the process through which they put together their classes the ‘‘shaping’’ of the student body. Kevin Crockett is a consultant with Ruffalo Noel Levitz, a firm that helps colleges and universities set prices. He says that the higher the prices that schools charge, the more options they have in recruiting exactly the students they want.
"I’ve got to have enough room under the top-line sticker price," he says. A school that charges $50,000 is able to offer a huge range of inducements to different sorts of students: some could pay $10,000, others $30,000 or $40,000. And a handful can pay the full price.
What's going on here? This is a not-so-subtle form of price discrimination  - where different consumers (or groups of consumers) are charged different prices for the same good or service, and where the difference in price does not arise because of a difference in cost (and which I've previously written on here and here). Typically in price discrimination, the firm charges a higher price to groups of consumers that are willing to pay more for the good or service. However, for universities it isn't that simple because the make-up of the student body is going to affect the willingness-to-pay of each student. So, universities face a difficult optimisation problem.

How does that make them like night clubs? Davidson explains:
There’s a provocative analogy to be made with how another industry does its pricing. I called Paul Norman, who owns a company that promotes high-end dance clubs in London, and he agrees that his clubs have much the same challenge as colleges and universities. Their appeal to new customers is based, in large part, on the mix of customers who are already there. The biggest spenders are wealthy men from Russia and the Middle East. But they won’t spend a lot of money in a club filled with people just like themselves. Women who have the right look — posh in Chelsea, a bit more flash in Mayfair — are admitted free and are offered free drinks, but only if they arrive early in the evening and happily mingle and dance. He said that clubs do their own version of enrollment shaping. "It’s good for the crowds if you have a mix of ethnicities," Norman told me. On any given night, he said, about a quarter of the clubs’ guests get in free. It’s an odd model: giving your most valuable product away to some and charging a lot to others. But, Norman said, if everybody paid the same price, nobody would want to come, and in a few weeks clubs wouldn’t be able to charge anyone anything.
Similarly, if an elite school like Harvard or Princeton insisted on admitting only students willing to pay the full freight, they would soon find they weren’t so elite. Many of the best teachers would rather go elsewhere than stay in a gated, rich community. The most accomplished rich kids could be lured away to other schools by the prospect of studying with the best students and teachers. So, a school with the same high sticker price for everyone would be unlikely to have the attributes — high test scores, Nobel Prize-winning faculty, a lively culture — that draw national or international attention.
So coming back to price discrimination, in order for this pricing strategy to be effective, there must be different groups with different willingness-to-pay for the good or service, the firm must be able to determine which consumers belong to which group, and there must be no transfers between the sub-markets. The first condition is met (students from higher income families are willing to pay more for elite education), and the third as well (you can't on-sell your education to someone else). The second condition is met by the increasing use of data on prospective students:
The pricing of college and university tuition used to be based on gut feelings, Crockett told me. Until around 1992, administrators would glance at what their peers were charging and come up with a number. Today, the process involves a level of mathematical and statistical rigor that few other industries could match. Crockett uses a team of statisticians and data analysts, the latest in software and data with hundreds of variables on students’ ability and willingness to pay, academic accomplishments, most likely choices of majors, ethnicity and gender, and other attributes. To the public, one number is released: the cost of tuition. But internally the school likely has dozens of price points, each set for a different group of potential students. The tools can determine how valuable a potential student is to the school’s overall reputation: more points for sports and scholarly accomplishment, fewer for the telltale signs of a likely dropout.
So, universities in the U.S. use your data to determine fairly closely how much you are willing to pay, and can price accordingly. In other words, they are using dynamic pricing like many other firms, but recognising that they want to encourage some particular students to join their student body, and will offer them lower prices even if the student would be willing to pay more. Because every student pays a different (personal) price, we refer to this as personalised pricing (or first-degree price discrimination).

In case you're wondering, New Zealand universities can't follow suit, at least not completely. Apparently we aren't allowed to attract students with discounted fees, although we can offer scholarships or even discounted transport. However, this isn't exactly the same as providing partial fee concessions, since it is more difficult to target scholarships to those with lower willingness-to-pay.



Thursday, 17 September 2015

This couldn't backfire, could it?... Ragweed edition

One of the most famous (possibly apocryphal) stories of unintended consequences took place in British colonial India. The government was concerned about the number of snakes running wild (er... slithering wild) in the streets of Delhi. So, they struck on a plan to rid the city of snakes: by paying a bounty for every cobra killed, they would encourage ordinary people to kill the cobras, and the rampant snakes would be less of a problem. And so it proved. Except, some enterprising locals realised that it was pretty dangerous to catch and kill wild cobras, and a lot safer and more profitable to simply breed their own cobras and kill those more docile ones to claim the bounty. Naturally, the government eventually became aware of this practice, and stopped paying the bounty. The local cobra breeders, now without a reason to keep their cobras, released them. Which made the problem of wild cobras even worse.

Fast-forward to this story about the Canadian town of Hudson, Quebec, from a few weeks ago:
The last few weeks of August don’t just mean bidding goodbye to summer; for many allergy sufferers, it also means the agony of ragweed season.
But the town of Hudson, Que., is turning to a new idea to give its residents some relief: setting a bounty on the nasty weed.
Hudson has begun paying residents to pull out ragweed, offering five cents a pound for the weed, or 10 cents a kilo...
Some residents are taking on the challenge with gusto. Seven-year-old Kyle Secours has become known as Hudson’s own “Ragweed Terminator.” He’s managed to slice and dice nearly three times his own weight in less than two days.
Kyle sounds suitably smart. How long before he (or indeed, any enterprising resident of Hudson) realises that running around town looking for ragweed to dig up is a lot of hard work, which he could save by simply growing lots of it in his backyard? Although to be fair, 10 cents a kilogram isn't a huge incentive (though, surprisingly, it's not terribly far from the international price of wheat).

[HT: Marginal Revolution]

Tuesday, 15 September 2015

Try this: Rockonomix

For the last several years, ECON100 students at the University of Waikato have had the choice of completing a written assignment, or a group video project. You can see the last two years of project winners here and here. Which is why I read with interest this short piece (gated, but only one page long, so the 'first page preview' should allow you to read all of it) in the latest issue of the Journal of Economic Education, about Rockonomix.

In Rockonomix, students create a music video, replacing the lyrics of a popular song with new lyrics based on an economics theme. Over the years, several of our ECON100 videos, including some of the winning ones, have done this. Rockonomix runs a national competition every semester - the Fall 2015 competition is just underway.

You can see the winner of the previous (Spring 2015) contest here, from Nyack College:

Perhaps we should introduce this model for future video projects at Waikato?

Monday, 14 September 2015

Why you should avoid getting hit by a car in China

OK, so it's probably not a good idea to let yourself get hit by a car anywhere. But you would be especially ill-advised to do so in China, at least according to this report in Slate last week:
It seems like a crazy urban legend: In China, drivers who have injured pedestrians will sometimes then try to kill them. And yet not only is it true, it’s fairly common; security cameras have regularly captured drivers driving back and forth on top of victims to make sure that they are dead. The Chinese language even has an adage for the phenomenon: “It is better to hit to kill than to hit and injure.”
What is going on? It turns out that the well-meaning government has rules in place to compensate the victims of an accident. Sounds fair, right? However, as the Slate article explains:
In China the compensation for killing a victim in a traffic accident is relatively small — amounts typically range from $30,000 to $50,000 — and once payment is made, the matter is over. By contrast, paying for lifetime care for a disabled survivor can run into the millions.
And so, the cost of killing a traffic accident victim is probably much less than the cost of injuring them, even when you consider the risk of prison:
In each of these cases, despite video and photographs showing that the driver hit the victim a second, and often even a third, fourth, and fifth time, the drivers ended up paying the same or less in compensation and jail time than they would have if they had merely injured the victim...
And last month the unlicensed woman who had killed the 2-year-old in the fruit market with her BMW—and then offered to bribe the family—was brought to court. She claimed the killing was an accident. Prosecutors accepted her assertion, and recommended that the court reduce her sentence to two to four years in prison.
A sentence of 2-4 years for deliberately killing a child is an incomprehensibly light punishment (this woman ran over the child not once, but three times, to ensure she was dead). Weak punishments and high costs of injury compensation in turn lead to the perverse outcome of some drivers having a preference for killing, rather than injuring, pedestrians in accidents. Rational (and quasi-rational) drivers will seek to lower their costs, and if avoiding the monetary cost is very important to them (relative to the increased moral cost), then driving back over the pedestrian they just hit with their car to ensure they are dead is a preferred choice.
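The driver's perverse incentive boils down to a simple expected-cost comparison. A sketch, using the top of the article's quoted compensation range but an entirely assumed lifetime-care figure and prison-risk cost:

```python
# Back-of-the-envelope cost comparison facing the driver. The $50,000 death
# compensation is the top of the range quoted in the Slate article; the
# lifetime-care and prison-risk figures are assumed for illustration only.
death_compensation = 50_000       # one-off payment, top of the quoted range
lifetime_care_cost = 2_000_000    # assumed cost of lifetime care for a survivor
prison_risk_cost = 500_000        # assumed monetary equivalent of the jail risk

cost_of_killing = death_compensation + prison_risk_cost
cost_of_injuring = lifetime_care_cost

# Even with a large allowance for prison risk, killing is "cheaper".
print(cost_of_killing, cost_of_injuring)  # 550000 2000000
```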

The Chinese laws also have other perverse consequences. Strangers may be reluctant to help people who have been injured, lest they be accused of causing the injury (see this article as one example - this point was also made in Levitt and Dubner's book Think Like a Freak, which I reviewed here). The risk of being found to have caused the injury due to the whims (or corruption) of a judge is enough to dissuade Chinese good samaritans from helping their neighbours.

All of which means that you should make doubly sure you look left and right before crossing that Chinese road.

Sunday, 13 September 2015

Rational onion theft

Most people wouldn't consider criminals to be particularly rational thinkers. However, one of Gary Becker's many contributions to economics was the development of an economic theory of crime (see the first chapter in this pdf).

Rational decision-makers (including criminals) weigh up the costs and benefits of an action, and will take the action which offers the greatest net benefits. That doesn't mean that every decision-maker is a calculating machine, but at least we can usually say that if the costs or benefits of an action change, then people may make different decisions. In other words, economists recognise that people (including criminals) respond to incentives.
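Becker's logic can be sketched as an expected-net-benefit calculation (all the numbers here are illustrative assumptions, not estimates):

```python
# A minimal sketch of Becker's rational-crime condition: commit the crime
# when the expected benefit exceeds the expected punishment cost.
def expected_net_benefit(loot_value, p_caught, punishment_cost):
    """Expected gain from the crime minus expected punishment cost."""
    return loot_value - p_caught * punishment_cost

p_caught = 0.1      # chance of being caught (assumed, and unchanged)
punishment = 2000   # cost of punishment if caught (assumed, and unchanged)

before = expected_net_benefit(150, p_caught, punishment)  # loot cheap
after = expected_net_benefit(300, p_caught, punishment)   # loot price doubles

print(before, after)  # -50.0 100.0
```

Holding the risk and severity of punishment fixed, a rise in the value of the loot can flip the expected net benefit from negative to positive, which is exactly the "people respond to incentives" point.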

So, when the price of onions increases, we might expect to see more onion thefts. Why? The benefits of onion theft have increased, while the costs (in terms of the risk of punishment) probably haven't much changed. We can describe two mechanisms for why this would increase onion thefts. First, career vegetable burglars (or maybe just the generally criminally-inclined) recognise that there are larger profits to be had by stealing onions for resale. So, they steal more onions (or maybe they start stealing onions). Second, ordinary people now face higher costs of purchasing onions. So, perhaps stealing onions becomes a lower-cost alternative for them, and they steal rather than purchase. Either way, onion thefts increase. According to the article:
Prices of the vegetable -- a staple in Indian diets -- have almost doubled since July, leading to a series of widely reported heists in recent weeks. One victim, Anand Naik, had 750 kilos (1,653 pounds) of onions snatched from underneath a tarp by his roadside stall in Mumbai last month.
How can the farmers of onions fight back? The most effective way must be to increase the costs of onion theft to the thieves (by at least enough to offset the greater benefits the thieves obtain). This can be achieved by making the punishments higher, or by making the risk of being caught higher. With little chance of increased punishment from the authorities, onion farmers have resorted to trying to catch the onion thieves themselves:
In Mumbai, Naik estimates that it will take him six months to recover from the 38,000-rupee loss. To prevent further thefts, he asked his nephew to sleep on the street each night beside sacks of onions.
Which demonstrates that the incentives facing farmers have changed too (and they are also responding rationally). Onions are more valuable, so the farmers are willing to incur higher costs to protect them.

Saturday, 12 September 2015

Traffic congestion, pollution and the prisoners' dilemma

Earlier this week, Matt Heath wrote an interesting column in the New Zealand Herald, talking about his transport preferences:
I've been flirting with different ways of getting around Auckland and nothing beats the door-to-door comfort, convenience and in-house entertainment of your own vehicle.
Heath does a good job of identifying the main culprit when it comes to traffic congestion and pollution:
Whatever anyone says, driving to work is easier and way more fun [than taking the bus].
As for the pollution there's a simple equation. If most people stopped driving there could be a significant difference. But if only I stop driving it won't make any difference. I can't save the world on my own. If everyone stops, it's logical for me to continue driving because the roads would be clear and the air fresh anyway. If everyone is driving, I'd drive too because one person isn't going to be the difference that saves the world. Selfish but logical.
He's describing an example of the prisoners' dilemma, which I've also described earlier here. Consider the decisions of two commuters only (A and B), who can decide to drive, or take the bus. For simplicity, let's also assume this is a non-repeated game (which it clearly isn't of course). If both commuters choose to drive, then both face traffic congestion and pollution. If one drives and the other takes the bus, then the driver has a free and clear commute to work, while the other has the loss of fun associated with bus travel (as explained by Matt Heath). If both take the bus, then both face the loss of fun. One last assumption - the loss of fun from taking the bus is worse than suffering the congestion and pollution associated with everyone driving (which is why Matt Heath decided to drive rather than take the bus). The game in normal (payoff table) form is as follows:


What happens in this game? Consider Commuter A first. They have a dominant strategy to drive. This is because the payoff is always better than taking the bus. If Commuter B drives, Commuter A is better off driving (because suffering the congestion and pollution is better than the loss of fun associated with taking the bus, at least according to Matt Heath!). If Commuter B takes the bus, Commuter A is better off driving (because having a free and clear trip to work is better than the loss of fun associated with taking the bus). So Commuter A should choose to drive, because driving always results in a higher payoff.

Now consider Commuter B. They also have a dominant strategy to drive, because the payoff is always better than taking the bus. If Commuter A drives, Commuter B is better off driving (because suffering the congestion and pollution is better than the loss of fun associated with taking the bus). If Commuter A takes the bus, Commuter B is better off driving (because having a free and clear trip to work is better than the loss of fun associated with taking the bus). So Commuter B should choose to drive, because driving always results in a higher payoff.

So, there is a unique Nash equilibrium (and dominant strategy equilibrium) in this game, where both commuters choose to drive to work. Even though pollution and congestion would be less if both took the bus. Unfortunately, this result will hold even if we extend the game to many players, or if we treat it as a repeated game. Which is why we suffer our way through traffic jams every day on the way to work.
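The dominance reasoning above can be checked mechanically. A sketch with assumed payoff numbers, chosen only to match the rankings in the post (a free and clear drive is best, congested driving next, and the bus rider's loss of fun worst):

```python
# Payoff-table sketch of the commuting game. The numbers are assumptions
# encoding the post's ordering: clear drive (4) > congested drive (2) >
# loss of fun on the bus (1). payoffs[(a, b)] = (payoff to A, payoff to B).
payoffs = {
    ("drive", "drive"): (2, 2),  # both suffer congestion and pollution
    ("drive", "bus"):   (4, 1),  # A has a clear run; B loses the fun
    ("bus", "drive"):   (1, 4),
    ("bus", "bus"):     (1, 1),  # both lose the fun
}
strategies = ["drive", "bus"]

def payoff(player, own, opp):
    """Payoff to player (0 = Commuter A, 1 = Commuter B) from playing own
    while the opponent plays opp."""
    return payoffs[(own, opp)][0] if player == 0 else payoffs[(opp, own)][1]

def dominant_strategy(player):
    """Return a strategy that is strictly better than every alternative
    against every opponent choice, or None if there isn't one."""
    for s in strategies:
        if all(payoff(player, s, opp) > payoff(player, t, opp)
               for t in strategies if t != s
               for opp in strategies):
            return s
    return None

print(dominant_strategy(0), dominant_strategy(1))  # drive drive
```

Both commuters come out with "drive" as their dominant strategy, so (drive, drive) is the unique dominant strategy (and Nash) equilibrium, just as the post argues.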

Thursday, 10 September 2015

The last two centuries of global inequality

Last week I wrote a post on ancient inequality, following this paper by Branko Milanovic and others, and promised to follow up by writing on some of their more recent work. This 2009 paper by Branko Milanovic takes a more detailed look at global inequality over the last two centuries.

Before the Industrial Revolution, global inequality was fairly low. Sure, some countries were richer than others and each country had their own privileged elite, but because the economies were largely based on agriculture and to a lesser extent trade and natural resource extraction, differences in average incomes between countries were relatively small (certainly compared with today). Following the Industrial Revolution, the newly industrialised countries rapidly increased in wealth and income, leading to an increase in global inequality that persists to today.

That all seems reasonable. However, Milanovic's paper demonstrates some substantial changes in the structure of global inequality:
...inequality between individuals is much higher today than 200 years ago, but – more dramatically – its composition has totally reversed: from being predominantly driven by within-national inequalities (that is, by what could be called “class” inequality), it is today overwhelmingly determined by the differences in mean country incomes (what could be called “location” or citizenship-based inequality). This latter, “locational”, element was “worth” 15 Gini points in the early 19th century; it is “worth” 60-63 Gini points today. 
In other words, most (65 percent) of global inequality two centuries ago was within-country (within-country inequality refers to inequality between the citizens within each country, not considering the incomes of citizens in other countries), rather than between-country (between-country inequality is essentially differences in average incomes between different countries). In contrast, most global inequality today is between-country (only about 10-20 percent of overall inequality is within-country). What does that mean? Milanovic suggests that:
The implication of (a) changing composition of global inequality, and (b) stable inequality extraction ratio is that the main “inequality extractors” today are citizens of rich countries rather than individual national elites as was the case 200 years ago.
That's right. It isn't the fat cat capitalists bent on global domination, but the relatively (in global terms) high incomes of the ordinary citizens of rich countries that have been the main drivers of global inequality over time. However, looking forward there is some hope, because of the increasing middle classes in populous poor and middle-income countries. Milanovic notes that, if China (and India) grow faster in absolute terms (not just in terms of growth rates) than rich countries such as the U.S., then global inequality is set to decline. Which means China's current slowdown is likely to be bad news for reducing global inequality.
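The within-country versus between-country distinction can be illustrated with a toy Gini calculation (two hypothetical countries and made-up incomes, not Milanovic's data):

```python
# Toy illustration of between-country vs. overall inequality, using made-up
# incomes. The Gini coefficient here is the mean absolute difference between
# all pairs of incomes, divided by twice the mean income.
def gini(incomes):
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(x - y) for x in incomes for y in incomes) / (n * n)
    return mad / (2 * mean)

rich_country = [40, 50, 60]   # hypothetical incomes (thousands)
poor_country = [4, 5, 6]

overall = gini(rich_country + poor_country)
# "Between-country" component: replace each person's income with their
# country's mean, so only locational (citizenship) differences remain.
between = gini([50, 50, 50, 5, 5, 5])

print(round(overall, 3), round(between, 3))  # 0.454 0.409
```

In this toy world the between-country component makes up nearly all of overall inequality, which is the flavour of the modern pattern Milanovic describes.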

Milanovic has written a lot more on inequality. I look forward to blogging on that subsequent work sometime in the future.

Wednesday, 9 September 2015

One of the funniest requests I've seen by a journal reviewer

Journal reviewers can be overly pedantic. They can be nasty. Often they can be really helpful. But sometimes you wonder if they are just taking the piss. One example of the latter struck me in a footnote to a paper I read yesterday:
At the request of an anonymous referee, we perform a check on our nominal masculinity measure by examining the score for Bacon Magazine’s “Top 10 Stripper Names.” In theory, female exotic dancers choose hyper-feminized stage names. Only two of those names, Candy and Porsche, had a nominal masculinity of 0. Three other names had nominal masculinity names below the mean female voter. Two other names on the list actually scored quite high in nominal masculinity; Angel had a nominal masculinity of 0.15 (due to its popularity among Spanish speakers as a boy’s name) and Houston had a nominal masculinity of 0.98. These findings suggest the potential for further research, which is beyond the scope of this paper.
Asking the authors of a paper to evaluate the masculinity of stripper names is a brilliant suggestion, not to mention hilarious (though one might wonder how the reviewer knew there was even such a thing as Bacon Magazine's Top 10 Stripper Names).

The footnote comes from this 2009 paper (ungated version here) by Bentley Coffey (Clemson University) and Patrick McLaughlin (George Mason University). In the paper the authors investigate whether women with more masculine names (i.e. names more often bestowed on male children, like Bobby or Jerry) are more likely than those with less masculine names (like Carol or Robin) to be judges in South Carolina. They compare judges with members of the South Carolina bar, and with an enormous database comprised of the names and genders of all registered South Carolina voters.

They find that, indeed, women with more masculine names are more likely to be judges, providing support for what they termed the Portia Hypothesis. There are a number of different mechanisms that might lead to this finding, not limited to outright discrimination. The authors note:
The explanation favored by many who have reviewed our work is that there is a common cause: wealthier families give their female children “stronger” (i.e., gender-neutral) names and a daughter of a wealthier family is more likely to become a judge.
This seems plausible, and testable as well. A clear possibility for some follow-up work. Although, perusing the list of New Zealand High Court judges, there aren't a lot of women (just 11 from the 38 judges, and just one of seven associate judges), and fewer still with masculine names (Patricia, Pamela, Jillian, Ailsa, Rebecca (x2), Mary, Sarah, Rachel, Susan, Anne, and Hannah). So perhaps New Zealand has a gender bias in judges that is not mediated by the masculinity of names?

Sunday, 6 September 2015

What Candy Crush can teach us about teaching

Candy Crush Saga is a ridiculously addictive puzzle game. People literally spend hours swiping candies into lines of three or more. Its success is linked to its simplicity, but also the challenges in mastering it (see here for a good discussion).

In teaching, we often want students to master skills that range from the simple to the complex. So, does Candy Crush have anything to offer us in terms of improving teaching practice? According to a new paper by Evangeline Marlos Varonis and Maria Evangeline Varonis (both from University of Akron), it does. The authors look at Candy Crush through the lens of Universal Design for Learning (UDL). I am quite partial to UDL, and students in my classes may recognise it as I use a certain flavour of UDL in my teaching (it is one of three major strategies I employ, alongside interactive teaching, and contingent scaffolding).

The authors offer a number of course design takeaways from their investigation of Candy Crush. Most apply mainly to online courses, but some have wider application (and sound very much like things I do in my classes), including:
Do not let learners get stuck. Individualize instruction by providing hints, automatic or manual, when activities are challenging...
Provide students with regular feedback on performance and the opportunity to earn easy, small bonuses on activities or assessments. It might give them the perk they need to continually do their best and remain active in the class...
Rather than creating “one-and-done” assessments, individualize the learning experience by creating multiple opportunities for students to succeed. Learners who do poorly on a once-only assessment may predict their final grade and give up. Giving them the opportunity to take an assessment again, or to drop a lowest grade, can motivate them to continue despite an initial failure. This encourages students to focus on mastery rather than grades.
Provide additional, optional resources or activities that are available to learners even if the next unit is not. Those who are motivated can continue while those who are not are not penalized for not engaging, and once again you are affording learners some control over their environment...
Real-life examples are rarely as clean as those in a textbook, and if learners are truly preparing themselves for the workforce and lifelong learning, then problem-solving that requires analysis of the situation and synthesis is great preparation for what is to follow. Instructors can introduce additional complexity by incorporating problem-based learning in the form of case studies or projects that require learners to apply theoretical concepts to practical real-world situations...
Present new concepts or skills sequentially, but constantly build on what learners have accomplished by presenting activities and assessments that require not only that they demonstrate knowledge but also that they can apply this knowledge creatively in novel situations.
Offer learners options (e.g. choose between these three essays; complete two of the five modules; select a film to review from this list) so they maintain a choice in the curriculum they are pursuing, ushering it forward as they personalize it. Having choices other than “do this and pass” or “don’t do this and fail” will motivate learners to actively participate in the learning process...
Provide surprises that delight and compel further exploration. Easy bonus questions on an assessment or extra credit points for completing all work up to a certain point on time can be a reward, as can be a one-off content topic that is of inherent interest even though it is not strictly part of the curriculum of the course.
Of course, none of this is startlingly new, but it does suggest that at least some of the teaching that we do in class can be effectively 'gamified' for online teaching and learning. Although, I don't see myself being replaced by Tiffi anytime soon.

[HT: Ruth Taylor]

Saturday, 5 September 2015

Why girls should have fewer male friends in high school

There is plenty of research that suggests that having a higher share of female classmates increases academic performance, both for boys and girls (see for example this paper). However, the gender composition of networks of friends hasn't been considered as often. A paper in the latest issue of the American Economic Journal: Applied Economics (it's complimentary access at the moment, but here is an ungated earlier version just in case) by Andrew Hill (University of South Carolina), entitled "The girl next door: The effect of opposite gender friends on high school achievement" goes some way towards addressing this gap.

In the paper, Hill investigated the effect of the share of opposite gender friends within students' friendship networks (rather than the share of girls only). One problem with this sort of analysis is that students choose their friends, and parents choose the schools that their children attend. So, there is likely to be some selection bias involved - if you simply compared students with more opposite gender friends with students with fewer opposite gender friends, you can't tell if any differences you observe are because of the share of opposite gender friends, or because of the selection bias. Hill overcomes this selection problem using instrumental variables analysis (which I have earlier discussed here): he essentially finds some variable that is expected to be related to the proportion of opposite gender friends, but shouldn’t plausibly have a direct effect on academic outcomes, and uses that variable in the analysis. In this case, he uses as an instrument the proportion of opposite gender schoolmates in their close neighbourhoods. As he says:
Students with more opposite gender schoolmates in their close neighborhoods have more opposite gender school friends, and, given that the gender composition of schoolmates in a student’s neighborhood is essentially random, provides plausibly exogenous variation in the share of opposite gender friends from which a causal effect on the outcomes of interest can be estimated.
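The instrumental variables logic can be sketched with simulated data (a toy Wald/IV estimator, not Hill's actual specification or data; all variable names and parameter values are illustrative):

```python
# IV sketch with simulated data: z is the instrument (neighbourhood gender
# mix), x the share of opposite gender friends, y academic achievement, and
# u an unobserved confounder that biases the naive OLS estimate.
import random

random.seed(1)
n = 20000
true_effect = -0.5  # assumed causal effect of x on y

z = [random.random() for _ in range(n)]            # instrument
u = [random.gauss(0, 1) for _ in range(n)]         # unobserved confounder
x = [0.5 * z[i] + 0.5 * u[i] + random.gauss(0, 0.1) for i in range(n)]
y = [true_effect * x[i] + u[i] + random.gauss(0, 0.1) for i in range(n)]

def slope(a, b):
    """OLS slope of b on a: cov(a, b) / var(a)."""
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((a[i] - ma) * (b[i] - mb) for i in range(n)) / n
    var = sum((v - ma) ** 2 for v in a) / n
    return cov / var

ols = slope(x, y)                # contaminated by the confounder u
iv = slope(z, y) / slope(z, x)   # Wald/IV: reduced form / first stage

print(round(ols, 2), round(iv, 2))
```

Because the instrument z shifts x but is unrelated to the confounder, the IV estimate recovers something close to the true (negative) effect, while naive OLS gets even the sign wrong in this simulation.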
The hypothesis being tested is that having more opposite gender friends decreases academic performance, through two potential mechanisms:
First... individuals in class may distract or be distracted by opposite gender friends more than same gender friends, reducing the quality of classroom inputs for individuals with a greater share of opposite gender friends...
Second... higher shares of opposite gender friends may increase the returns to leisure and therefore increase the time spent socializing at the expense of studying.
So, opposite gender friends are hypothesised to be a distraction to both in-class learning and out-of-class studying time. Hill finds that:
...an increase in the share of opposite gender school friends reduces academic achievement... A standard deviation increase in the share of opposite gender friends causes a half standard deviation reduction in GPA scores.
That effect is relatively large - the standard deviation of GPA in his sample is 0.8 (on a four-point GPA scale). However, when disaggregating by gender he finds that the effect is only statistically significant for girls. In other words, having a higher share of opposite gender friends is bad for girls, but possibly not so for boys. However, there is one part of his results that I do not fully agree with:
Results also indicate that opposite gender friends increase the probability of the student being in a romantic relationship, which may have adverse effects on achievement.
He uses the same instrumental variable approach to investigate the effects of the share of opposite gender friends on the probability of having been in a relationship in the past 18 months. Unfortunately, the instrument is not as valid in this case, as there is a plausible direct relationship between the share of opposite gender classmates living in the close neighbourhood and the probability of being in a relationship. Increased opportunity for interaction with opposite gender classmates (as would occur if there are more of them in the neighbourhood) increases the chances of starting a relationship (for at least some students!). So, while the results are plausible they are not necessarily causal.

Anyway, that doesn't take away from the overall result of the paper, which is that it is better for girls to have a higher share of female friends (and a smaller share of male friends), in terms of academic performance at high school. Which adds to the argument for the advantage of single gender schools (at least for girls), or for introducing single gender classes within coed high schools.

Thursday, 3 September 2015

Why study economics? AEA video edition...

A few weeks ago, the American Economic Association (AEA) released a new video entitled "A career in Economics... it's much more than you think", that shows the value of studying economics. From their press release:
The 9-minute film is aimed at prospective or first-year students who may be investigating economics as a career option but are unclear how broadly a degree in economics can be applied. 
The film makes effort to dispel entrenched misconceptions about who economists are and what they do. Economics can be broadly defined as the study of human behaviors aimed at finding solutions to help improve peoples' lives. Viewers are reminded that a degree in economics doesn't have to be about finance, banking, business, or government, . . . it can be useful to all individuals and can lead to many interesting and fulfilling career choices. 
The video features four individuals offering insights on how economics can be a tool for solving very human problems and they provide some interesting perspectives on how they chose economics as a career path. The film also helps raise awareness about the need for more diverse voices in the field of economics.
  • Marcella Alsan, a physician of infectious disease, discusses why she needed to pursue a degree in economics to improve the lives of her patients.
  • Randall Lewis, a research scientist at Google, uses economics and "big data" as tools to improve the functioning of markets.
  • Britni Wilcher, a PhD student of economics, offers insight on some misconceptions about economists and factors influencing her career path decision.
  • Peter Henry, dean at the NYU Stern School of Business, points to the true nature of economics and the importance of diverse voices informing the field.
Access the video at this link. Enjoy!

The only thing I would add is that there is value to studying economics even if you are not strong mathematically - the intuition that underlies the economic way of thinking is itself valuable, and important. Understanding economics helps you to evaluate choices and make better decisions. Which is of course why economics is in the core of business and commerce degrees, but I would argue economics is also highly valuable for students in other disciplines as well (as the story of Marcella in the video should demonstrate).
