Saturday, 30 November 2013

When is an academic like a drug dealer?

Apparently, often. See this blog post. It's not because the teaching and research we generate are addictive, although I suppose it's possible. It's because of the nature of the labour market. From the blog post:
The academic job market is structured in many respects like a drug gang, with an expanding mass of outsiders and a shrinking core of insiders. Even if the probability that you might get shot in academia is relatively small (unless you mark student papers very harshly), one can observe similar dynamics. Academia is only a somewhat extreme example of this trend, but it affects labour markets virtually everywhere. One of the hot topics in labour market research at the moment is what we call “dualisation”. Dualisation is the strengthening of this divide between insiders in secure, stable employment and outsiders in fixed-term, precarious employment. Academic systems more or less everywhere rely at least to some extent on the existence of a supply of “outsiders” ready to forgo wages and employment security in exchange for the prospect of uncertain security, prestige, freedom and reasonably high salaries that tenured positions entail...
In ECON110 we talk about tournament effects, and this is exactly an example of tournament effects at work. A small group of highly successful workers (insiders) get paid high salaries, while many others (outsiders) accept low salaries in exchange for the chance to become one of the highly successful few in the future. For a highly educated person to take a low-paid entry-level job isn't as irrational as it may seem at first. It's a simple benefit-cost decision for the outsiders - the cost is foregone income now (the premium they could have earned outside academia); the benefit is an expected future gain in the form of a cosy academic position with higher salary (maybe?). So, if the returns to becoming one of the successful insiders are high enough, then even a low probability of becoming an insider will induce many recent or nearly-completed PhDs to join the outsider part of the market.
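
To see how a low probability of success can still make the gamble worthwhile, here is a rough back-of-the-envelope sketch of that benefit-cost calculation in Python. All of the salaries, probabilities and time horizons are made up purely for illustration, and discounting is ignored:

```python
# Back-of-the-envelope comparison of an "outsider" academic path against a
# non-academic outside option. All numbers are illustrative, not estimates.

def academic_path(p_insider, outsider_salary, insider_salary, fallback_salary,
                  years_outsider, years_after):
    # Earn a low outsider salary first; then, with probability p_insider,
    # land a secure insider position, otherwise fall back to a non-academic job.
    expected_later_salary = (p_insider * insider_salary
                             + (1 - p_insider) * fallback_salary)
    return years_outsider * outsider_salary + years_after * expected_later_salary

def outside_path(outside_salary, years_outsider, years_after):
    # Skip academia entirely and earn the outside option for the whole horizon.
    return (years_outsider + years_after) * outside_salary

academia = academic_path(p_insider=0.15, outsider_salary=45_000,
                         insider_salary=120_000, fallback_salary=70_000,
                         years_outsider=5, years_after=25)
outside = outside_path(outside_salary=70_000, years_outsider=5, years_after=25)
print(f"Expected earnings: academia {academia:,.0f} vs outside {outside:,.0f}")
```

With these (made-up) numbers, even a 15 percent chance of landing an insider position is enough to make the academic gamble pay off in expectation - and that is before counting any non-monetary enjoyment of academic work.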

The New Zealand academic labour market is somewhat dissimilar to the U.S. or European markets, in that relatively secure employment is possible even though there is no tenure track. This is similar to the situation in Britain described in the blog post. However, that doesn't mean that there aren't a number of New Zealand beginning academics in precarious work (I had thought our academic union, the TEU, was looking into this, but I can't find anything on their website). My first few years were in rolling fixed-term teaching fellowships.

Now I'm feeling like I'm missing a trick here. I should definitely be exploiting PhD students for more low-paid work.

[HT: Eric Crampton at Offsetting Behaviour]

Friday, 22 November 2013

Zombies, vampires and population modelling

I've been involved in a few climate change-related projects over the last several years. Despite being an economist, my contribution has been in the area of population modelling. These projects have given me numerous opportunities to engage with climate scientists and ecologists, and one of the interesting aspects of these interactions is the doomsday scenarios for future population that many of them hold dear. I have often likened their projections for future population to modelling the zombie apocalypse.

That's why I really enjoyed reading this 2009 paper recently: "When zombies attack! Mathematical modelling of an outbreak of zombie infection" by Munz et al., published in the book Infectious Disease Modelling Research Progress. The authors used an extended SIR epidemiological model to investigate the dynamics of a zombie apocalypse. Their conclusion:
In summary, a zombie outbreak is likely to lead to the collapse of civilisation, unless it is dealt with quickly. While aggressive quarantine may contain the epidemic, or a cure may lead to coexistence of humans and zombies, the most effective way to contain the rise of the undead is to hit hard and hit often. As seen in the movies, it is imperative that zombies are dealt with quickly, or else we are all in a great deal of trouble.
I liked the paper, but I felt there was one flaw. In their model, all dead humans or zombies were able to re-animate. This left me wondering: what would happen in the model if you could permanently kill zombies, through shots to the head or incineration? Maybe the authors will address that in future papers.
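
For the curious, here is a minimal sketch, in Python, of a basic SZR-style compartment model in the spirit of Munz et al. (susceptibles S, zombies Z, removed R). The structure is a simplified version of their basic outbreak model, and the parameter values are chosen purely for illustration rather than taken from the paper:

```python
# Simple Euler simulation of an SZR-style zombie outbreak model.
# Simplifications: no births and no natural (non-zombie) deaths.
# Parameter values are illustrative only.
beta = 0.0095   # transmission: rate at which zombies turn susceptibles
zeta = 0.0001   # resurrection: rate at which the removed re-animate
alpha = 0.005   # defeat: rate at which humans 'remove' zombies in encounters
dt, T = 0.01, 20.0

S, Z, R = 500.0, 1.0, 0.0
for _ in range(int(T / dt)):
    dS = -beta * S * Z                             # susceptibles bitten
    dZ = beta * S * Z + zeta * R - alpha * S * Z   # new + resurrected - defeated
    dR = alpha * S * Z - zeta * R                  # defeated, minus re-animations
    S, Z, R = S + dS * dt, Z + dZ * dt, R + dR * dt

print(f"After {T:.0f} time units: S = {S:.1f}, Z = {Z:.1f}, R = {R:.1f}")
```

Exploring my "permanent kill" question would be straightforward in this framework: route some fraction of the defeated zombies into a separate compartment that never re-animates, and see how large that fraction needs to be before the humans survive.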

Closer to home, Daniel Farhat from the University of Otago has a recent paper on "The economics of vampires: An agent-based perspective". In his paper, Farhat uses agent-based computational models with heterogeneous agents to investigate the dynamics of human-vampire population interaction. There are a number of interesting results from the simulations, from which I quote selectively:
...where vampires are highly visible, the human population suffers terribly for a short period (building up defences which results in starvation) until the vampire population is driven to extinction. Once they have been eradicated, the human community and their corresponding economy proceeds to grow exponentially. Therefore, one reason why we may not come across vampires in modern times is because they have already died out...
...where vampires are observable but somewhat hidden, they may flourish provided they are easy to destroy in a confrontation. Cycles of fear then emerge. Therefore, if we do not see vampires today it may be because spotting them is rare...
...where vampires are unobservable, their existence persists. Whether they flourish or stagnate depends on their hunger for blood and the speed of human reproduction. If vampires live on the brink of starvation, both vampire and human populations persistently grow despite mild cycles of fear. If humans reproduce easily, both communities languish with extreme cycles of fear keeping both populations in check. The former is more likely given the persistent growth of our planet’s population. If this is the case, we would never encounter vampires (and may even doubt their existence) in reality.
Agent-based models are an excellent tool for modelling population dynamics at very disaggregated levels. The vampire model described above is theoretical, but there are more practical applications of these models as well. For instance, I have a Master's student who is developing an agent-based model of local population dynamics for the Hamilton sub-region, to better project small-area population movements over the next 10-15 years. So these models are not just useful for cool applications like modelling the undead - they have plenty of real-world applicability too.
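
To give a flavour of how agent-based simulation works, here is a deliberately crude toy in Python. It is not Farhat's model (his agents are far richer and more heterogeneous), and every probability below is invented for illustration:

```python
import random

# Toy agent-based human-vampire simulation. Each period, humans reproduce with
# some probability; each vampire hunts one human, and is either spotted and
# destroyed or feeds (occasionally creating a new vampire). Illustrative only.
random.seed(42)

P_BIRTH = 0.03    # probability a human reproduces in a period
P_DETECT = 0.20   # probability a hunted human spots and destroys the vampire
P_TURN = 0.05     # probability a drained victim rises as a new vampire

humans, vampires = 1000, 5
for period in range(200):
    births = sum(random.random() < P_BIRTH for _ in range(humans))
    destroyed = 0
    for _ in range(vampires):
        if humans == 0:
            break
        if random.random() < P_DETECT:
            destroyed += 1        # the vampire is spotted and destroyed
        else:
            humans -= 1           # the vampire feeds and the victim dies...
            if random.random() < P_TURN:
                vampires += 1     # ...and sometimes rises as a new vampire
    vampires -= destroyed
    humans += births

print(f"After 200 periods: {humans} humans, {vampires} vampires")
```

Swap the vampires for births, deaths and migration flows between small areas, and you have the bare skeleton of the kind of local population model described above.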

Monday, 18 November 2013

Compensating differentials, psychic payoffs, and the Oswald Hypothesis

In my ECON110 Economics and Society class we cover labour markets. One of the key aspects is that, based on the basic theory, the wages in different labour markets should be the same. The intuition that leads to that conclusion is simple: if labour is mobile and there are no costs of moving, then wherever two labour markets had different wages, workers would move from the low-wage market to the high-wage market. This would increase the supply of workers in the high-wage labour market, lowering wages there, and decrease the supply of workers in the low-wage labour market, raising wages there. This would continue until the two markets had the same wage.
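
A toy illustration of that adjustment process, with made-up numbers (a simple linear labour demand curve in each market and a fixed trickle of movers each period), might look like this:

```python
# Two labour markets; workers migrate towards the higher wage each period
# until the wage gap (almost) disappears. All numbers are illustrative.

def wage(workers, intercept=100.0, slope=0.01):
    """Inverse labour demand: the wage a market pays given its workforce."""
    return intercept - slope * workers

workers_a, workers_b = 3000.0, 5000.0    # market A starts with the higher wage
for period in range(1000):
    gap = wage(workers_a) - wage(workers_b)
    if abs(gap) < 0.2:                   # stop once wages are (almost) equal
        break
    movers = 10.0 if gap > 0 else -10.0  # a trickle of workers chases the higher wage
    workers_a += movers
    workers_b -= movers

print(f"Wage in A = {wage(workers_a):.1f}, wage in B = {wage(workers_b):.1f}")
```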

Using that as a starting point, we then discuss why that doesn't happen in the real world. There are a number of fairly intuitive and fairly obvious reasons, usually relating to differences in supply and/or demand in the markets. In this post though, I only want to concentrate on one explanation: compensating differentials. Some jobs have desirable characteristics, while other jobs have undesirable characteristics. Jobs with desirable characteristics attract more workers, increasing labour supply and lowering wages relative to jobs with undesirable characteristics. In other words, workers are compensated (with higher wages) for undertaking jobs that have undesirable characteristics. The standard example I use is welders - on oil rigs, welders can earn several times the salary they could earn onshore. They are essentially being compensated for a number of undesirable characteristics of the job - long hours, a high-stress environment, long periods away from family, and a higher risk of death or injury.

The textbook treatment of compensating differentials usually tends to focus more on negative characteristics (as I have above). Positive job characteristics are typically explained as the inverse of negative job characteristics. However, jobs may provide non-monetary or emotional rewards (so-called "psychic payoffs"). These arise where the worker receives utility or satisfaction from doing the job, over and above any monetary wages they receive. A 2012 paper in the journal Kyklos (gated) by William Baumol and David Throsby does a good job of explaining how these psychic payoffs work and their implications in the context of sports and the arts.* In their basic model, Baumol and Throsby describe how the existence of psychic payoffs leads to systematic underpayment of workers (a classic compensating differential as described above, but in this case a positive one).

However, I thought the more interesting implication of their paper occurred when they relaxed the assumptions so that labour was no longer completely mobile. When sports stars (or actors, dancers, etc. in the case of the arts) become emotionally attached to a team, they are less willing to move elsewhere. This gives the team some additional monopsony power in salary negotiations, and lowers the resulting wage (leading to 'underpaid' superstars). Again, this is a compensating differential, but the mechanism through which it arises is different (it arises indirectly because of the reduced mobility, rather than directly through the job characteristics themselves). Because this effect arises indirectly, it can be caused not only by positive job characteristics (such as a superstar's attachment to their team), but also through characteristics of the labour market (such as contract terms, etc.). There is evidence to support this, with superstars being paid much less than their contributions to team revenues, such as here (ungated version here).

Reduced mobility of labour has other effects on the labour market as well. My colleagues Bill Cochrane and Jacques Poot presented their research on the Oswald Hypothesis at the recent Pathways conference. This hypothesis argues that areas of high home ownership have higher unemployment, because home owners face high transaction costs associated with selling their home, which makes it difficult to move and take up available jobs elsewhere (or alternatively, they may be reluctant to move because they are emotionally attached to the neighbourhood they live in, leading to an additional psychic cost of moving). As with compensating differentials, this suggests that the supply curve for labour would not fully adjust to bring different labour markets into equilibrium. Bill and Jacques showed that the hypothesis appears to hold for New Zealand, with areas of higher home ownership having higher unemployment (to the extent that a 10 percent higher home ownership rate was associated with 2.8 percent higher unemployment). Of course, their analysis does not demonstrate causality, despite media reports that purport otherwise. They are not alone in continuing to investigate the hypothesis and demonstrate that home ownership and unemployment are correlated - Andrew Oswald himself (along with David Blanchflower) has a new working paper based on U.S. data. Their new paper was recently discussed in some detail by Ken Gibb.

So, compensating differentials, psychic payoffs, and the Oswald Hypothesis all lead us to conclude that labour markets are somewhat inflexible.

-----

* The Baumol and Throsby paper is interesting as well because it addresses what happens when there are psychic payoffs to capital (such as to team owners) as well as to labour.

Saturday, 16 November 2013

Models. Behaving. Badly.

One of the best aspects of being an academic is all of the cool and interesting research papers that I get to read. One of the worst aspects of being an academic is not having enough time to read all of the cool and interesting research papers that I want to. Which is why I really love getting away from the distractions of the office. This week and next I am in Bangkok, for the 11th International Congress on AIDS in Asia and the Pacific, where I'm co-presenting an e-poster on microbicide acceptability.

Being away from the office gives me the chance to catch up on some reading (and usually a few revise-and-resubmit journal articles). I have an enormous pile of articles I've gathered over the last several years (my "collection"), which I've always meant to read. This was also a side reason why I started this blog - I'll start blogging about some of them this week.

Anyway, I've finally finished reading Emanuel Derman's "Models. Behaving. Badly.". It's been extensively reviewed elsewhere (see here for the review in Forbes, or here for the review in the WSJ) - after all, it's a 2011 title - so I won't bother with a long review of it. I will say that it is a good read on the problems of financial modelling in the real world, and on why economists in general should not get too overconfident about our models. I haven't done any work in financial economics myself, and haven't really touched on any finance since my undergraduate study, so I liked the book because it gave me some added value. I especially liked this bit on p.144:
If you open up the prestigious Journal of Finance, one of the select number of journals in which finance professors must publish in order to get tenure, many of the papers resemble those in a mathematics journal. Replete with axioms, theorems, and lemmas, they have a degree of rigor that is inversely proportional to their minimal usefulness.
It made me laugh, anyway. But the problem is not limited to finance or financial economics. Most of economics is buried in mathematics. Deirdre McCloskey gives a very thorough treatment of the problems of mathematics and statistics in economics here. Redstate covers a similar issue here, while Justin Campbell argues that economics teaching hasn't become too mathematical here. For my part, I think we as a discipline teach too much of economics using mathematics, at least at the principles level. Using too much mathematics in the first year disengages students who are not strong at mathematics, and removes the opportunity to show them the real-world relevance of the discipline.

In contrast, here at the University of Waikato Steven Lim and I teach our first-year business economics paper (ECON100) while barely using any numerical mathematics at all (to the extent that students cannot (and do not need to) use calculators in the tests or final examination). For the most part we've successfully decoupled mathematics from teaching the basic principles and insights that are useful for business decision making. As a result, our paper gets past a lot of student resistance to economics ("economics aversion" maybe?) and gets very good student evaluations. We're not alone in doing this of course, as this book demonstrates.

Greg Mankiw argues that aspiring economists need mathematics. I wouldn't necessarily disagree with that, but mathematics should not be (and is not) necessary in order to share insights with students majoring in other disciplines, or when sharing our research with the general public.

Thursday, 7 November 2013

Why study economics?

Matt Nolan at TVHE posted a couple of times in the last few days on topics which broadly relate to why students should study economics. This is of interest to me because, as part of my role as Undergraduate Convenor in the Department of Economics at Waikato, I present at Open Day, on school visits, and so on. The topic of those talks, unsurprisingly, is why students should study economics, and why they should study economics at Waikato.

The first post, "Economics sucks, it is just the study of 'common sense'", is interesting because it relates to a line I have heard many times - that economics is essentially "applied common sense". Of course, if that were true, then nobody (with common sense, at least) would need to study economics. And if it were true, then I'd be much less likely to see the weirdest of the exam answers I get in first-year economics, and we wouldn't see stuff like this in the media. Maybe, as Voltaire wrote in Dictionnaire Philosophique, "common sense is not so common". As for "applied common sense", Matt has the right idea:
economics describes situations where common sense beliefs are right, and when they are wrong.
Which is exactly how we teach first year economics at Waikato (and pretty much everywhere else). 'Here's some things that economics can tell you, that seem to make a lot of sense', plus 'here's some things that economics can tell you, that you might not have guessed'. The latter can be the most interesting things to teach - one of my favourite questions, borrowed from Robert Frank, is "why do female models earn more than male models?".

The second TVHE post is "On studying economics". I love the quote from Joan Robinson at the beginning:
The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.
I may have to use that when students who are not intending to major in economics ask why they should have to do two economics papers in their business management degree. Of course, it sits well with that famous John Maynard Keynes quote:
The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually slaves of some defunct economist.
So, to save yourself from being the slave to some defunct economist's thinking, it's probably best to understand what they are talking about. That's as good a reason as any to study at least some economics, I think.

Friday, 1 November 2013

Police crackdowns on speeders - incentive effects?

Last Monday was Labour Day in New Zealand, meaning that we had the first holiday weekend for a few months. As you may expect, given this opportunity Kiwis take to the roads to get away for the weekend and take advantage of the break.

More cars on the roads means, almost inevitably, more motor vehicle accidents and more injuries. In most years. This year, thankfully, there was only one fatality - the lowest absolute number since records of holiday road tolls started being collected in 1956. Last year there were six deaths, 22 serious injuries and 90 minor injuries (injury statistics for 2013 aren't available as yet), and "travelling too fast for the conditions" was cited as a factor in 20 percent of the crashes.

It's not surprising, then, that the police want to target speeding in order to reduce the number of fatal accidents. What is interesting, though, is how the police implement this - by taking a tough line on all speeders going more than 4 km/h over the posted limit. Don't get me wrong - if you are travelling faster than the posted limit, you are breaking the law and you shouldn't complain if you are ticketed. I have never complained the few times I have been caught speeding.

There are a limited number of police cars travelling the roads, on the lookout for speeding (and other traffic offences, but let's ignore those for now). Every time a police car stops a speeder, it is effectively out of action for the time it takes to write out the ticket. In the meantime, a number of other speeding drivers are driving past, and not being stopped. I want to consider the incentive effects here. As far as I know, the police have never admitted as much, but typically the tolerance is more than 4 km/h (else why would they bother highlighting that it is lower on the holiday weekend?). So, what happens when you lower the tolerance? Could doing so actually increase the number of speeding drivers?

First, let's think about the drivers' incentives. If they are rational or quasi-rational decision-makers (as I discuss with my ECON110 class), they weigh up the costs and benefits of the decision to speed. The benefits include less time wasted on the roads (an opportunity cost - you give up some time you could spend with family, friends, on the beach, whatever). The costs include the increased risk of a serious car accident, and the increased risk of a fine for speeding. Neither of these costs is certain - both are based on risk. So, drivers need to consider both the probability of the event (crash or speeding ticket, respectively) occurring, and the cost to them if it does. If the benefits outweigh the expected costs, then the driver will choose to speed.

Now let's assume for the moment that, among drivers, there is a distribution of open road driving speeds between 70 km/h and 120 km/h, with 75 percent of drivers driving below 100 km/h, about 10 percent between 100 km/h and 104 km/h, about 10 percent between 105 km/h and 109 km/h, and about 5 percent at 110 km/h or above (this is similar to the unimpeded open road speed distribution from the 2012 National Speed Survey). That means that about 25 percent of all drivers are speeding, compared to the posted limit of 100 km/h.

If the police apply a tolerance of 9 km/h, they will only ticket speeding drivers going 10 km/h or more over the posted limit. This means that they are targeting the top 5 percent of drivers, and will regularly pull over a speeding driver. All of the drivers the police stop and ticket are the fastest (110+ km/h) drivers, and those drivers face some probability (let's call it P1) of being stopped and ticketed. This probability is what the rational or quasi-rational drivers take into account in deciding whether to speed or not.

Now, assume instead that the police apply a tolerance of 4 km/h. Now they are targeting the top 15 percent of drivers, and still regularly pull over speeding drivers. However, now two thirds of the drivers that are stopped are those driving between 105 and 109 km/h, and only one third are those driving 110 km/h or greater. So, now the fastest drivers face only a probability of (1/3 * P1) of being stopped and ticketed.
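
A quick arithmetic check of that claim, assuming (purely hypothetically) that the police can stop a fixed number of drivers per hour regardless of the tolerance, and that stops are spread evenly across all targeted drivers. The traffic volume and police capacity below are made up:

```python
# Probability of being ticketed for the fastest drivers under two tolerances,
# using the illustrative speed distribution above. Volumes are made up.
share = {
    "100-104": 0.10,   # within tolerance under either policy
    "105-109": 0.10,   # targeted only under the 4 km/h tolerance
    "110+":    0.05,   # targeted under both tolerances
}
drivers_per_hour = 10_000   # hypothetical traffic volume
stops_per_hour = 30         # hypothetical police capacity

def p_ticket(targeted_bands):
    """Chance a driver in a targeted band is stopped, with stops spread evenly
    across all targeted drivers."""
    targeted = drivers_per_hour * sum(share[b] for b in targeted_bands)
    return min(1.0, stops_per_hour / targeted)

p1 = p_ticket(["110+"])               # 9 km/h tolerance: only the fastest targeted
p2 = p_ticket(["105-109", "110+"])    # 4 km/h tolerance: everyone at 105+ targeted
print(f"Fastest drivers: {p1:.3f} -> {p2:.3f} (ratio {p2 / p1:.2f})")
```

The ratio comes out at one third, matching the argument above: the fastest drivers' chance of a ticket falls to a third of what it was, while the 105-109 km/h drivers' chance rises from zero.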

It is entirely possible, then, that lowering the tolerance actually weakens the disincentive effect (the threat of a speeding ticket) for the fastest drivers. It lowers their expected cost of speeding and moves the cost-benefit calculation in favour of more speeding. At the same time, it increases the expected cost of speeding for the slower speeding drivers, who now face a positive probability of being ticketed (where before it was zero). So, overall there are likely to be at least as many fast speeding drivers on the roads, but fewer slow speeding drivers. I wish there were data available to test this hypothesised change in speeding behaviour.

Of course, the police could counteract this weakened disincentive by increasing the number of police cars patrolling the open road. But, in order to completely offset the effect in our example above, they would need to triple the number of patrol cars, and ensure that speeding drivers knew that was happening (which, of course, is exactly what the police do via the media).

Finally, having more cars on the roads naturally slows all drivers down anyway (the National Speed Survey results are based on unimpeded speeds, i.e. speeds when other cars are not slowing each other down, which is least likely on a holiday weekend). So, maybe lowering the tolerance is a good move for the police not because it lowers speeds (which would happen anyway), but because it ensures that each patrolling police car is kept busy on holiday weekends, even though the number of speeding drivers on the roads is actually lower than on non-holiday weekends. Remember that each minute spent not pulling over another driver entails an opportunity cost for the police. Food for thought.