Tuesday, 24 December 2013

Happy Holidays!

One of my all-time favourite blog posts, which I regularly use in the readings for my ECON110 class, is the 2010 Dr Seuss parody by Art Carden titled "How economics saved Christmas". I use it to teach alternative policy options for dealing with externalities. Enjoy!

Christmas bonus: Tyler Cowen at Marginal Revolution talks about the inefficiency (or otherwise) of Christmas.

Happy holidays!

Thursday, 12 December 2013

What to do when your cities are stuck in the wrong place

The persistence of the location of cities and towns is well recognised in economic geography. People tend to locate where jobs are. New industries (and hence jobs) tend to locate close to where customers are, which, unsurprisingly, is where people are. And so, the location of cities in the future is likely to be where cities were in the past.

In order to get substantial change in the location of towns and cities, it looks like you need to generate a collapse in civilization. At least, that might be one tongue-in-cheek take-away from a recent paper by Guy Michaels (London School of Economics) and Ferdinand Rauch (University of Oxford). Michaels and Rauch studied a cool natural experiment - the effect of the fall of the Roman Empire on the location of towns in Britain and France. The key point that makes this natural experiment useful is that the effect of the fall of Rome was much bigger in Britain than in France:
Roman Britain suffered invasions, usurpations, and reprisals against its elite. Around 410CE, when Rome itself was first sacked, Roman Britain's last remaining legions, which had maintained order and security, departed permanently. Consequently, Roman Britain's political, social, and economic order collapsed. From 450-600CE, its towns no longer functioned. The Roman towns in France also suffered when the western Roman Empire fell, but many of them survived and were taken over by the Franks.
  • In short, the urban network in Britain effectively ended with the fall of the western Roman Empire; French towns experienced greater continuity.
  • The divergent paths of British and French urban networks allow us to study the spatial consequences of the resetting of an urban network, as towns across Western Europe re-emerged and grew during the Middle Ages.
They find that the location of towns changed in Britain, but remained the same in France. But did that even matter? It turns out it did:
The conclusion we draw is that many French towns were stuck in the wrong places for many centuries. They could not take advantage of the new transportation technologies since they had poor coastal access; they were in locations that were designed to fit with the demands of Roman times and not the considerations of the Middle Ages.
So, towns and cities can be stuck in the 'wrong' (from a productivity perspective) location for centuries or longer. There are no barbarian invasions in our near-term future, so we are to a large extent stuck with the urban locations we have now. This has interesting implications for adaptation to climate change. Many cities currently sit in extremely vulnerable locations, in terms of surface flooding, sea level rise, desertification, water stress, and so on. The implication of this paper is that there is substantial inertia that will prevent large-scale relocation of people and industries to areas that are more resilient or less vulnerable. In other words, adaptation to climate change in situ is going to be very important - we can't simply rely on moving away from where the problems occur. On a related note, we shouldn't expect large masses of migrants trying to get away from vulnerable cities and countries to suddenly end up on our doorstep. It simply isn't that easy for them to move.

For the full paper (gated), see here.

[HT: Paul Krugman's NY times blog]

Monday, 9 December 2013

How to raise the price AND increase sales

Business Insider Australia reports on the unusual case of Cards Against Humanity:
The people behind card game Cards Against Humanity wanted to get noticed on Black Friday, but they didn’t want to discount their game below $US25. 
So they came up with a strange, perverse offer. For a limited time only, you could buy Cards Against Humanity for…$5 more. 
The plan worked. The absurd offer got a lot of attention and sales spiked.
According to the chart below, sales increased on Black Friday (the day after Thanksgiving, traditionally a day of big sales in the U.S.) by around the same amount as in the previous year, in spite of the price increase.

[Chart: Cards Against Humanity sales around Black Friday, via Business Insider]

Or was it because of the price increase? Traditional economic theory, as I teach in ECON100 or ECON110, maintains that when the price increases, the quantity demanded (and sales) decrease. But in this case, quantity and price have both increased. Does this mean that the demand curve is upward sloping (as Tyler Cowen cheekily implies here)?

Probably not. As we find out later in the Business Insider piece, quoting Max Temkin (the creator of Cards Against Humanity):
This is a difficult time of year for us because we spend almost no money on marketing, and it’s easy for us to get lost in the noise and money of the holiday season...
 The sale made people laugh, it was widely shared on Twitter and Tumblr, and it was the top post on Reddit. The press picked it up, and it was reported in The Guardian, USA Today, Polygon, BuzzFeed, All Things D, Chicagoist, and AdWeek. It was even the top comment on The Wirecutter’s front page AMA, which had nothing to do with us.
In other words, Cards Against Humanity's publicity stunt of increasing price had the effect of greatly increasing their marketing exposure. So, the observed change in quantity demanded wasn't the result of an upward sloping demand curve, but instead was the result of a shift in the demand curve to the right (an increase in demand).

This raises a more general point about observed price and quantity combinations in the real world. When we see two price-quantity combinations, it is tempting to connect them with a line and think that we have observed the demand curve. However, we face an identification problem - we can't tell for sure whether what we have observed is a movement along a given demand curve, or a shift from one demand curve to another. In other words, we can't identify which portion of the movement from one point to another occurs because the demand curve has shifted.

Graphically, using the Cards Against Humanity example, we have observed the two price-quantity combinations (Q1,P1) and (Q2,P2), and if we assumed they were on the same demand curve we would guess that they were on the dotted demand curve (Da). However, based on the other details we know about the case, we know that what actually occurred was a shift from demand curve Db to demand curve Dc.

[Figure: the observed points (Q1,P1) and (Q2,P2), the apparent demand curve Da, and the actual shift from Db to Dc]
The identification problem is a serious issue for real-world data. I've had a number of students complete applied projects for me, using, for example, supermarket data to investigate price elasticities. To get a useful price elasticity estimate, though, you must be able to separate the part of a price-quantity change that relates to a movement along a given demand curve from the part that results from a shift in the demand curve. Usually we try to achieve this by including in our regression models other factors that we know affect demand, but of course we cannot include everything (some things, like consumer tastes and preferences, are not easily measured). So we will always have an imperfect estimate of price elasticity when using real-world data. This is worth keeping in mind any time you are presented with elasticities.
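To see both the problem and the usual (partial) fix in action, here is a minimal simulation sketch - every number in it is an illustrative assumption, not an estimate from any real dataset. Demand shifts from week to week, price responds to those shifts, and a naive regression of quantity on price gets the elasticity badly wrong until the demand shifter is controlled for:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200  # pretend we have 200 weeks of supermarket scanner data

# True demand: ln(Q) = 5 - 1.5*ln(P) + 0.8*shift + noise, so the true
# price elasticity is -1.5. 'shift' stands in for demand shifters
# like marketing exposure or seasonal tastes.
shift = rng.normal(size=n)

# Suppose the retailer tends to set higher prices when demand is high,
# so price is positively correlated with the demand shifter.
ln_p = 0.5 + 0.4 * shift + rng.normal(scale=0.2, size=n)
ln_q = 5.0 - 1.5 * ln_p + 0.8 * shift + rng.normal(scale=0.3, size=n)

# Naive regression: quantity on price only.
naive = sm.OLS(ln_q, sm.add_constant(ln_p)).fit()

# Controlled regression: include the (measured) demand shifter.
X = sm.add_constant(np.column_stack([ln_p, shift]))
controlled = sm.OLS(ln_q, X).fit()

print(f"naive elasticity estimate:      {naive.params[1]:.2f}")
print(f"controlled elasticity estimate: {controlled.params[1]:.2f}")
# The naive estimate lands near zero (or even positive) - the
# 'upward-sloping demand curve' illusion. Only the controlled
# regression recovers something close to -1.5.
```

The naive estimate only looks sensible once the shifter is controlled for, and that works here only because we (conveniently) measured it. Unmeasured shifters like tastes stay in the error term, which is why real-world elasticity estimates are always imperfect.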

For more on the Cards Against Humanity example, read Max Temkin's Tumblr blog.

[HT: Tyler Cowen at Marginal Revolution]

Saturday, 30 November 2013

When is an academic like a drug dealer?

Apparently, often. See this blog post. It's not because the teaching and research we generate is addictive, although I suppose it is possible. It's because of the nature of the labour market. From the blog post:
The academic job market is structured in many respects like a drug gang, with an expanding mass of outsiders and a shrinking core of insiders. Even if the probability that you might get shot in academia is relatively small (unless you mark student papers very harshly), one can observe similar dynamics. Academia is only a somewhat extreme example of this trend, but it affects labour markets virtually everywhere. One of the hot topics in labour market research at the moment is what we call “dualisation”. Dualisation is the strengthening of this divide between insiders in secure, stable employment and outsiders in fixed-term, precarious employment. Academic systems more or less everywhere rely at least to some extent on the existence of a supply of “outsiders” ready to forgo wages and employment security in exchange for the prospect of uncertain security, prestige, freedom and reasonably high salaries that tenured positions entail...
In ECON110 we talk about tournament effects, and this is exactly an example of tournament effects at work. A small group of highly successful workers (insiders) get paid high salaries, while many others (outsiders) accept low salaries in exchange for the chance to become one of the highly successful few in the future. For a highly educated person to take a low-paid entry-level job isn't as irrational as it may seem at first. It's a simple benefit-cost decision for the outsiders - the cost is foregone income now (the premium they could have earned outside academia); the benefit is an expected future gain in the form of a cosy academic position with higher salary (maybe?). So, if the returns to becoming one of the successful insiders are high enough, then even a low probability of becoming an insider will induce many recent or nearly-completed PhDs to join the outsider part of the market.
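As a back-of-the-envelope illustration of that benefit-cost decision, here is a sketch where every number is invented for the example (and everything is left undiscounted, to keep it simple):

```python
# All figures are hypothetical assumptions, undiscounted for simplicity.
outside_salary = 90_000     # what the new PhD could earn outside academia
outsider_salary = 55_000    # wage in precarious fixed-term positions
years_as_outsider = 5       # assumed length of the 'tournament'
p_insider = 0.15            # assumed chance of winning a secure post
insider_premium = 40_000    # assumed annual payoff of an insider post
years_as_insider = 25       # assumed length of the insider career

# Cost: income foregone while competing as an outsider.
cost = (outside_salary - outsider_salary) * years_as_outsider

# Benefit: the insider premium, weighted by the chance of getting it.
expected_benefit = p_insider * insider_premium * years_as_insider

print(f"cost of entering the tournament: ${cost:,}")
print(f"expected benefit:                ${expected_benefit:,}")
# With these numbers: $175,000 versus $150,000. Close enough that a
# little optimism bias, or a taste for prestige and freedom, tips
# many entrants into the outsider pool.
```

Play with the numbers and you can see why a bigger insider premium, or rosier beliefs about the probability of winning, keeps the supply of outsiders flowing.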

The New Zealand academic labour market is somewhat dissimilar to the U.S. or European markets, in that relatively secure employment is possible even though there is no tenure track. This is similar to the situation in Britain described in the blog post. However, that doesn't mean that there aren't a number of New Zealand beginning academics in precarious work (I had thought our academic union, the TEU, was looking into this, but I can't find anything on their website). My first few years were in rolling fixed-term teaching fellowships.

Now I'm feeling like I'm missing a trick here. I should definitely be exploiting PhD students for more low-paid work.

[HT: Eric Crampton at Offsetting Behaviour]

Friday, 22 November 2013

Zombies, vampires and population modelling

I've been involved in a few climate change-related projects over the last several years. Despite being an economist, my contribution has been in the area of population modelling. These projects have given me numerous opportunities to engage with climate scientists and ecologists, and one of the interesting aspects of these interactions is the doomsday scenarios for future population that many of them hold dear. I have often likened their projections for future population to modelling the zombie apocalypse.

That's why I really enjoyed reading this 2009 paper recently: "When zombies attack! Mathematical modelling of an outbreak of zombie infection" by Munz et al., published in the book Infectious Disease Modelling Research Progress. The authors used an extended SIR epidemiological model to investigate the dynamics of a zombie apocalypse. Their conclusion:
In summary, a zombie outbreak is likely to lead to the collapse of civilisation, unless it is dealt with quickly. While aggressive quarantine may contain the epidemic, or a cure may lead to coexistence of humans and zombies, the most effective way to contain the rise of the undead is to hit hard and hit often. As seen in the movies, it is imperative that zombies are dealt with quickly, or else we are all in a great deal of trouble.
I liked the paper, but I felt there was one flaw. In their model, all dead humans or zombies were able to re-animate. This left me wondering - what would happen in the model if you could permanently kill zombies, through shots to the head or incineration? Maybe the authors will address that in future papers.
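For anyone curious what the model looks like, here is a minimal sketch of basic SZR ('susceptible-zombie-removed') dynamics along the lines of Munz et al. - the equations follow my reading of their basic model, and the parameter values are purely illustrative assumptions:

```python
import numpy as np
from scipy.integrate import odeint

def szr(y, t, pi, beta, delta, zeta, alpha):
    """Basic SZR dynamics: susceptibles S, zombies Z, removed R."""
    S, Z, R = y
    dS = pi - beta * S * Z - delta * S            # births, infections, natural deaths
    dZ = beta * S * Z + zeta * R - alpha * S * Z  # new zombies, re-animations, defeats
    dR = delta * S + alpha * S * Z - zeta * R     # deaths and defeated zombies
    return [dS, dZ, dR]

# Illustrative parameter values (my assumptions, not the paper's).
params = (0.0,     # pi: background birth rate (zero over a short outbreak)
          0.0095,  # beta: transmission on human-zombie contact
          0.0001,  # delta: natural (non-zombie) death rate
          0.0001,  # zeta: re-animation rate of the removed
          0.005)   # alpha: rate at which humans defeat zombies
t = np.linspace(0, 10, 1000)
y0 = [500.0, 1.0, 0.0]  # 500 humans, one zombie, nobody removed yet

S, Z, R = odeint(szr, y0, t, args=params).T
print(f"after t=10: {S[-1]:.1f} humans, {Z[-1]:.1f} zombies")
```

Note the zeta * R term: everyone in the removed class can re-animate, which is exactly the flaw I mention above. Permanently destroying zombies would mean routing the alpha * S * Z flow into a separate, absorbing compartment rather than back into R.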

Closer to home, Daniel Farhat from the University of Otago has a recent paper on "The economics of vampires: An agent-based perspective". In his paper, Farhat uses agent-based computational models with heterogeneous agents to investigate the dynamics of human-vampire population interaction. There are a number of interesting results from the simulations, from which I quote selectively:
...where vampires are highly visible, the human population suffers terribly for a short period (building up defences which results in starvation) until the vampire population is driven to extinction. Once they have been eradicated, the human community and their corresponding economy proceeds to grow exponentially. Therefore, one reason why we may not come across vampires in modern times is because they have already died out...
...where vampires are observable but somewhat hidden, they may flourish provided they are easy to destroy in a confrontation. Cycles of fear then emerge. Therefore, if we do not see vampires today it may be because spotting them is rare...
 ...where vampires are unobservable, their existence persists. Whether they flourish or stagnate depends on their hunger for blood and the speed of human reproduction. If vampires live on the brink of starvation, both vampire and human populations persistently grow despite mild cycles of fear. If humans reproduce easily, both communities languish with extreme cycles of fear keeping both populations in check. The former is more likely given the persistent growth of our planet’s population. If this is the case, we would never encounter vampires (and may even doubt their existence) in reality.
Agent-based models are an excellent tool for modelling population dynamics at very disaggregated levels. The vampire model described above is theoretical, but there are more practical applications of these models as well. For instance, I have a Masters student who is developing an agent-based model of local population dynamics for the Hamilton sub-region, to better project small-area population movements over the next 10-15 years. So these mathematical models are not just good for cool applications in modelling the undead - they have plenty of real-world applicability too.
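To give a flavour of how simple the core of an agent-based model can be, here is a toy population sketch. It is entirely my own construction, far simpler than either Farhat's model or the Hamilton work, and all of the rates in it are made-up assumptions:

```python
import random

random.seed(1)

class Person:
    def __init__(self, age):
        self.age = age

def step(people, birth_rate=0.020, move_in_rate=0.010):
    """Advance the population one year, one agent at a time."""
    survivors = []
    for p in people:
        p.age += 1
        death_prob = 0.005 + 0.0001 * p.age ** 1.5  # toy mortality curve
        if random.random() > death_prob:
            survivors.append(p)
    # Births and in-migration as simple aggregate-rate draws.
    newborns = [Person(0) for _ in range(int(birth_rate * len(survivors)))]
    migrants = [Person(random.randint(18, 40))
                for _ in range(int(move_in_rate * len(survivors)))]
    return survivors + newborns + migrants

people = [Person(random.randint(0, 80)) for _ in range(10_000)]
for year in range(15):
    people = step(people)
print(f"population after 15 years: {len(people):,}")
```

The aggregate dynamics emerge from individual-level random draws, which is what lets these models capture heterogeneity (age structure here; employment, household type, or location in richer models) that aggregate projection models smooth over.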

Monday, 18 November 2013

Compensating differentials, psychic payoffs, and the Oswald Hypothesis

In my ECON110 Economics and Society class we cover labour markets. One of the key aspects is that, based on the basic theory, the wages in different labour markets should be the same. The intuition that leads to that conclusion is simple: If labour is mobile and there are no costs of moving, and if there were labour markets with different wages, then workers would move from the low wage market to the high wage market. This would increase the supply of workers in the high-wage labour market, lowering wages there, and decrease the supply of workers in the low-wage labour market, raising wages there. This would continue until the two markets have the same wage.
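Here is that equilibration story as a minimal sketch - the linear labour demand curve and the migration rule are toy assumptions of mine:

```python
# Two labour markets sharing the same downward-sloping labour demand:
# the wage in a market falls as more workers supply labour there.
def wage(workers, intercept=100.0, slope=0.01):
    return intercept - slope * workers

workers_a, workers_b = 6000.0, 4000.0   # initial allocation of workers
for period in range(1, 101):
    gap = wage(workers_a) - wage(workers_b)
    if abs(gap) < 0.01:                 # wages (near enough) equalised
        break
    # Costless mobility: the flow of movers rises with the wage gap,
    # always from the low-wage market to the high-wage market.
    movers = 20 * abs(gap)
    if gap < 0:                         # market A pays less
        workers_a -= movers
        workers_b += movers
    else:                               # market B pays less
        workers_a += movers
        workers_b -= movers

print(f"after {period} periods: wages {wage(workers_a):.2f} "
      f"and {wage(workers_b):.2f}")
```

The rest of this post is about the frictions - compensating differentials, psychic payoffs, and moving costs - that stop this convergence from completing in real labour markets.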

Using that as a starting point, we then discuss why that doesn't happen in the real world. There are a number of fairly intuitive and fairly obvious reasons, usually relating to differences in supply and/or demand in the markets. In this post though, I only want to concentrate on one explanation: compensating differentials. Some jobs have desirable characteristics, while other jobs have undesirable characteristics. Jobs with desirable characteristics attract more workers, increasing labour supply and lowering wages relative to jobs with undesirable characteristics. In other words, workers are compensated (with higher wages) for undertaking jobs that have undesirable characteristics. The standard example I use is welders - on oil rigs, welders can earn several times higher salaries than they can onshore. They are essentially being compensated for a number of undesirable characteristics of the job though - long hours, high stress environment, long periods away from family, and higher risk of death or injury.

The textbook treatment of compensating differentials usually tends to focus more on negative characteristics (as I have above). Positive job characteristics are typically explained as the inverse of negative job characteristics. However, jobs may provide non-monetary or emotional rewards (so-called "psychic payoffs"). These arise where the worker receives utility or satisfaction from doing the job, over and above any monetary wages they receive. A 2012 paper in the journal Kyklos (gated) by William Baumol and David Throsby does a good job of explaining how these psychic payoffs work and their implications in the context of sports and the arts.* In their basic model, Baumol and Throsby describe how the existence of psychic payoffs leads to systematic underpayment of workers (a classic compensating differential as described above, but in this case a positive one).

However, I thought the more interesting implication of their paper occurred when they relaxed the assumptions so that labour was no longer completely mobile. When sports stars (or actors, dancers, etc. in the case of the arts) become emotionally attached to a team, they are less willing to move elsewhere. This gives the team some additional monopsony power in salary negotiations, and lowers the resulting wage (leading to 'underpaid' superstars). Again, this is a compensating differential, but the mechanism through which it arises is different (it arises indirectly because of the reduced mobility, rather than directly through the job characteristics themselves). Because this effect arises indirectly, it can be caused not only by positive job characteristics (such as a superstar's attachment to their team), but also through characteristics of the labour market (such as contract terms, etc.). There is evidence to support this, with superstars being paid much less than their contributions to team revenues, such as here (ungated version here).

Reduced mobility of labour has other effects on the labour market as well. My colleagues Bill Cochrane and Jacques Poot presented their research on the Oswald Hypothesis at the recent Pathways conference. This hypothesis argues that areas of high home ownership have higher unemployment, because home owners face high transaction costs associated with selling their home, which makes it difficult to move and take up available jobs elsewhere (or alternatively, they could be reluctant to move because they are emotionally attached to the neighbourhood they live in, leading to an additional psychic cost of moving). As with compensating differentials, this suggests that the supply curve for labour would not fully adjust to bring different labour markets into equilibrium. Bill and Jacques showed that the hypothesis appears to hold for New Zealand, with areas of higher unemployment having higher home ownership rates (to the extent that a 10 percent higher home ownership rate was associated with 2.8 percent higher unemployment). Of course, their analysis does not demonstrate causality, despite media reports that purport otherwise. They are not alone in continuing to investigate the hypothesis and demonstrate that home ownership and unemployment are correlated - Andrew Oswald himself (along with David Blanchflower) has a new working paper based on U.S. data. Their new paper was recently discussed in some detail by Ken Gibb.

So, compensating differentials, psychic payoffs, and the Oswald Hypothesis all lead us to conclude that labour markets are somewhat inflexible.

-----

* The Baumol and Throsby paper is interesting as well because it addresses what happens when there are psychic payoffs to capital (such as to team owners) as well as to labour.

Saturday, 16 November 2013

Models. Behaving. Badly.

One of the best aspects of being an academic is all of the cool and interesting research papers that I get to read. One of the worst aspects of being an academic is not having enough time to read all of the cool and interesting research papers that I want to. Which is why I really love getting away from the distractions of the office. This week and next I am in Bangkok, for the 11th International Congress on AIDS in Asia and the Pacific, where I'm co-presenting an e-poster on microbicide acceptability.

Being away from the office gives me the chance to catch up on some reading (and usually a few revise-and-resubmit journal articles). I have an enormous pile of articles I've gathered over the last several years (my "collection"), which I've always meant to read. This was also a side reason why I started this blog - I'll start blogging about some of them this week.

Anyway, I've finally finished reading Emanuel Derman's "Models. Behaving. Badly.". It's been extensively reviewed elsewhere (see here for the review on Forbes, or here for the review on WSJ) - after all, it's a 2011 title - so I won't bother with a long review of it. I will say that it's a good read on the problems of financial modelling in the real world, and on why economists in general should not get too overconfident about our models. I haven't done any work in financial economics myself, and haven't really touched on any finance since my undergraduate study, so I liked the book because it gave me some added value. I especially liked this bit on p.144:
If you open up the prestigious Journal of Finance, one of the select number of journals in which finance professors must publish in order to get tenure, many of the papers resemble those in a mathematics journal. Replete with axioms, theorems, and lemmas, they have a degree of rigor that is inversely proportional to their minimal usefulness.
It made me laugh, anyway. But the problem is not limited to finance or financial economics. Most of economics is buried in mathematics. Deirdre McCloskey gives a very thorough treatment of the problems of mathematics and statistics in economics here. Redstate covers a similar issue here, while Justin Campbell argues that economics teaching hasn't become too mathematical here. For my part, I think we as a discipline teach too much of economics using mathematics at the principles level. Using too much mathematics at first year disengages students who are not strong at mathematics, and removes the opportunity to show them the real-world relevance of the discipline.

In contrast, here at the University of Waikato Steven Lim and I teach our first-year business economics paper (ECON100) while barely using any numerical mathematics at all (to the extent that students cannot (and do not need to) use calculators in the tests or final examination). For the most part we've successfully decoupled mathematics from teaching the basic principles and insights that are useful for business decision making. As a result, our paper gets past a lot of student resistance to economics ("economics aversion" maybe?) and gets very good student evaluations. We're not alone in doing this of course, as this book demonstrates.

Greg Mankiw argues that aspiring economists need mathematics. I wouldn't necessarily disagree with that, but mathematics should not be (and is not) necessary in order to share insights with students majoring in other disciplines, or when sharing our research with the general public.

Thursday, 7 November 2013

Why study economics?

Matt Nolan at TVHE posted a couple of times in the last few days on topics which broadly relate to why students should study economics. This is of interest to me because, as part of my role as Undergraduate Convenor in the Department of Economics at Waikato, I present at Open Day, on school visits, and so on. The topic of those talks, unsurprisingly, is why students should study economics, and why they should study economics at Waikato.

The first post "Economics sucks, it is just the study of 'common sense'" is interesting because it relates to a line I have heard many times - that economics is essentially "applied common sense". Of course if that were true, then nobody (with common sense, at least) would need to study economics. And if it were true, then I'd see far fewer of the weird exam answers I get in first-year economics, and we wouldn't see stuff like this in the media. Maybe, as Voltaire wrote in Dictionnaire Philosophique, "common sense is not so common". As for "applied common sense", Matt has the right idea:
economics describes situations where common sense beliefs are right, and when they are wrong.
Which is exactly how we teach first year economics at Waikato (and pretty much everywhere else). 'Here's some things that economics can tell you, that seem to make a lot of sense', plus 'here's some things that economics can tell you, that you might not have guessed'. The latter can be the most interesting things to teach - one of my favourite questions, borrowed from Robert Frank, is "why do female models earn more than male models?".

The second TVHE post is "On studying economics". I love the quote from Joan Robinson at the beginning:
The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.
I may have to use that when students who are not intending to major in economics ask why they should have to do two economics papers in their business management degree. Of course, it sits well with that famous John Maynard Keynes quote:
The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually slaves of some defunct economist.
So, to save yourself from being the slave to some defunct economist's thinking, it's probably best to understand what they are talking about. That's as good a reason as any to study at least some economics, I think.

Friday, 1 November 2013

Police crackdowns on speeders - incentive effects?

Last Monday was Labour Day in New Zealand, meaning that we had the first holiday weekend for a few months. As you might expect, given this opportunity Kiwis take to the roads to get away for the weekend and make the most of the break.

More cars on the roads means, almost inevitably, more motor vehicle accidents and more injuries. In most years. This year, thankfully, there was only one fatality - the lowest absolute number since records of holiday road tolls started being collected in 1956. Last year there were six deaths, 22 serious injuries and 90 minor injuries (injury statistics for 2013 aren't available as yet), and "travelling too fast for the conditions" was cited as a factor in 20 percent of the crashes.

Not surprising then, that the police want to target speeding, in order to reduce the number of fatal accidents. What is interesting though, is how the police implement this - by taking a tough line on all speeders going more than 4 km/h over the posted limit. Don't get me wrong - if you are travelling faster than the posted limit, you are breaking the law and you shouldn't complain if you are ticketed. I have never complained the few times I have been caught speeding.

There are a limited number of police cars travelling the roads, on the lookout for speeding (and other traffic offences, but let's ignore those for now). Every time a police car stops a speeder, it is effectively out of action for the time it takes to write out the ticket. In the meantime, a number of other speeding drivers are driving past, and not being stopped. I want to consider the incentive effects here. As far as I know, the police have never admitted as much, but typically the tolerance is more than 4 km/h (else why would they bother highlighting that it is lower on the holiday weekend?). So, what happens when you lower the tolerance? Could doing so actually increase the number of speeding drivers?

First, let's think about the drivers' incentives. If they are rational or quasi-rational decision-makers (as I discuss with my ECON110 class), they weigh up the costs and benefits of the decision to speed. The benefits include less time wasted on the roads (an opportunity cost - you give up some time you could spend with family, friends, on the beach, whatever). The costs include the increased risk of a serious car accident, and the increased risk of a fine for speeding. For both of these costs, the cost is not absolute - it is based on risk. So, they need to consider both the probability of the event (crash or speeding ticket, respectively) occurring, and the absolute cost to them if it does. If the benefits outweigh the costs, then the driver will choose to speed.

Now let's assume for the moment that, among drivers, there is a distribution of open road driving speeds between 70 km/h and 120 km/h, with 75 percent of drivers below 100 km/h, about 10 percent between 100 km/h and 104 km/h, about 10 percent between 105 km/h and 109 km/h, and about 5 percent at 110 km/h or above (this is similar to the unimpeded open road speed distribution from the 2012 National Speed Survey). That means that about 25 percent of all drivers are speeding, compared to the posted limit of 100 km/h.

If the police apply a tolerance of 9 km/h, they will only ticket speeding drivers going 10 km/h or more over the posted limit. This means that they are targeting the top 5 percent of drivers, and will regularly pull over a speeding driver. All of the drivers the police stop and ticket are the fastest (110+ km/h) drivers, and those drivers face some probability (let's call it P1) of being stopped and ticketed. This probability is what the rational or quasi-rational drivers take into account in deciding whether to speed or not.

Now, assume instead that the police apply a tolerance of 4 km/h. Now they are targeting the top 15 percent of drivers, and still regularly pull over speeding drivers. However, now two thirds of the drivers that are stopped are those driving between 105 and 109 km/h, and only one third are those driving 110 km/h or greater. So, now the fastest drivers face only a probability of (1/3 * P1) of being stopped and ticketed.
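To make that arithmetic concrete, here is the calculation as a sketch. The speed distribution is the stylised one above; the assumption that police stop a fixed share of passing drivers, drawn at random from everyone over the threshold, is mine:

```python
# Stylised shares of all drivers in each speed band (from the post).
share_105_109 = 0.10
share_110_plus = 0.05

# Assume the police can stop a fixed share of all passing drivers,
# drawn at random from everyone above the enforcement threshold.
stop_capacity = 0.01   # police stop 1% of all drivers (an assumption)

def p_ticket(tolerance_kmh):
    """Per-driver probability of being ticketed, by speed band."""
    if tolerance_kmh >= 9:
        pool = share_110_plus                  # only 110+ km/h targeted
    else:
        pool = share_105_109 + share_110_plus  # everyone 105+ km/h targeted
    p = stop_capacity / pool
    return {"105-109 km/h": 0.0 if tolerance_kmh >= 9 else p,
            "110+ km/h": p}

for tolerance in (9, 4):
    print(f"tolerance {tolerance} km/h: {p_ticket(tolerance)}")
# tolerance 9 km/h: 110+ drivers face p = 0.20; 105-109 drivers face zero
# tolerance 4 km/h: everyone over 104 faces p = 0.067 - the fastest
# drivers' ticket risk falls to one third of what it was (P1/3).
```

Restoring the fastest drivers' original ticket risk under the lower tolerance would require three times the stopping capacity, which leads to the point about patrol numbers below.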

It is entirely possible, then, that lowering the tolerance actually weakens the disincentive (the potential speeding ticket) for the fastest drivers: it lowers their expected cost of speeding and moves the cost-benefit calculation in favour of more speeding. At the same time, it increases the costs of speeding for the slower speeding drivers, who now face a positive probability of being ticketed (it was zero before). So, overall there are likely to be at least as many fast speeding drivers on the roads, but fewer slow speeding drivers. I wish there were data available to test this hypothesised change in speeding behaviour.

Of course, the police could counteract the disincentive effect by increasing the number of police cars patrolling the open road. But, in order to completely offset the effect in our example above, they would need to triple the number of cars, and ensure that speeding drivers knew that was happening (which, of course, is exactly what the police do via the media).

Finally, having more cars on the roads naturally slows all drivers down anyway (the National Speed Survey results are based on unimpeded speeds, i.e. speeds when other cars are not slowing each other down, which is least likely on a holiday weekend). So, maybe lowering the tolerance is a good move for the police not because it lowers speeds (which would happen anyway), but because it ensures that each patrolling police car is kept busy, even though there are actually fewer speeding drivers on holiday weekends than on non-holiday weekends. Remember that each minute spent not pulling over another driver entails an opportunity cost for the police. Food for thought.

Tuesday, 22 October 2013

Market competition and firm costs in the taxi industry

I've been in Wellington the last couple of days, presenting at the Pathways, Circuits and Crossroads Conference (more on that later). Anyway, I had a quite interesting conversation with my taxi driver on the way into the city from the airport, which I thought I should share.

This semester, my ECON110 students showed some scepticism over the mechanism/s through which costs increase when a market becomes more competitive. The standard explanation relates to economies of scale (smaller firms have higher average costs because the fixed costs such as administration, etc. are spread over a smaller quantity of output). However, my conversation with the taxi driver raised another potential explanation.

Some four or five years ago, the taxi industry in Wellington was deregulated. Prior to that, my driver says, special licences were required in order to operate a taxi (possibly similar to taxi medallions in New York, but without the resale market). After deregulation, the number of taxi operators exploded - he wasn't sure of the exact numbers, but they may have doubled or more. The result was a large increase in competition in the market.

How does that increase in competition raise costs for taxi drivers? Of course, it doesn't materially affect direct out-of-pocket costs like fuel, taxi maintenance, etc. However, it does affect the cost-per-fare, through time (opportunity) costs. The explanation goes like this. Before de-regulation, there were fewer taxis. When a driver arrived at the airport with a fare, he could expect to wait a relatively short time before picking up a new fare, so faced only a small time cost for each fare. With a large increase in the number of taxis competing for fares, taxi drivers now wait much longer at the airport before collecting a new fare (my driver said that he had waited over 90 minutes before picking me up. This isn't unusual, or specific to the airport - he said there were often similar waiting times at the city taxi stands). This increases the opportunity costs for each taxi driver, since they could be doing something else with that time (like this, perhaps?), raising the costs per fare.
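A rough worked example of the cost-per-fare mechanism, with all figures invented for illustration:

```python
# Hypothetical figures for a Wellington airport run.
fare_revenue = 40.0    # dollars earned per airport fare
value_of_time = 20.0   # assumed opportunity cost of time, $/hour
trip_minutes = 25      # driving time for the fare itself

def time_cost_per_fare(wait_minutes):
    """Opportunity cost of a fare: driving time plus waiting time."""
    return value_of_time * (trip_minutes + wait_minutes) / 60

for wait in (15, 90):  # assumed pre- vs post-deregulation waits
    cost = time_cost_per_fare(wait)
    print(f"wait {wait:>2} min: time cost ${cost:.2f}, "
          f"margin over time cost ${fare_revenue - cost:.2f}")
# wait 15 min: time cost $13.33, margin $26.67
# wait 90 min: time cost $38.33, margin $1.67
```

Fuel and maintenance haven't moved at all; the whole squeeze on the driver comes through waiting time.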

Anyway, that provides another possible explanation for why costs might rise as competition increases - competitive firms must work much harder to attract customers, and working hard is costly.
Coming back to the Pathways conference: If anyone is interested, a video of my presentation on Subnational Stochastic Population Projections in New Zealand will soon be available on the Nga Tangata Oho Mairangi research project website. From the feedback I have received, my presentation was most memorable for my opening statement that "All population projections are wrong" (paraphrasing a quote from the British statistician George E.P. Box). For recent context on the wrongness of Auckland population projections, see here for example.

Thursday, 17 October 2013

Newsflash! Schools with more problem bullying are more likely to implement anti-bullying programs

OK, so that headline isn't really an attention grabber. At least, not as much as "Youth more likely to be bullied at schools with anti-bullying programs".

So, let's say you are interested in whether anti-bullying programs are effective or not. You find yourself a dataset that includes individual-level data on students (including data on whether they have ever been physically or emotionally bullied at school), and school-level data on security climate (like whether they have uniformed police, metal detectors, random bag/locker checks, etc.) and whether the school runs an anti-bullying program. Importantly, the dataset is collected at a single point in time (a cross-sectional dataset). You run a regression analysis using whatever fancy method is your flavour of the month - in this case, because the data are at two levels (individual and school) and the dependent variable is binary (bullied - yes or no), a multi-level logit model. You find that students' experience of bullying is positively associated with anti-bullying programs. In other words, students at schools that have anti-bullying programs are more likely to have experienced bullying.

You could reasonably conclude that anti-bullying programs somehow create more problem bullying, right? Wrong. What you've done is confused correlation with causation, as the study cited in the article above (recently published in the open-access Journal of Criminology) has done (Quote: "Surprisingly, bullying prevention had a negative effect on peer victimization" - by negative, they mean undesirable).

A positive association between anti-bullying programs and bullying could be because "students who are victimizing their peers have learned the language from these anti-bullying campaigns and programs" (quote from the UTA news release). Alternatively, and possibly more likely, it could be because schools that have more problem bullying are more likely to implement anti-bullying programs in the first place. In the latter case, the observed positive association between anti-bullying programs and problem bullying is in spite of the anti-bullying program, not because of it. A third possibility (and equally plausible) is that students who are at a school that has anti-bullying programs are more aware of what bullying is, are less afraid and more secure in themselves, and consequently are more likely to report bullying.
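The reverse-causality story is easy to demonstrate with simulated data. In the sketch below (all numbers are assumptions I made up), the programs genuinely work - they cut bullying by 20 percent in every school that runs one - yet because schools with worse underlying bullying are more likely to adopt a program, the cross-section still shows more bullying where programs exist:

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools = 500

# Each school has an underlying bullying propensity.
propensity = rng.uniform(0.1, 0.5, n_schools)

# Schools with worse underlying bullying are more likely to adopt
# a program (adoption probability = 2 * propensity).
has_program = rng.random(n_schools) < 2 * propensity

# The program genuinely works: it cuts bullying by 20 percent.
bullying = propensity * np.where(has_program, 0.8, 1.0)

print(f"mean bullying, schools with a program:    "
      f"{bullying[has_program].mean():.3f}")
print(f"mean bullying, schools without a program: "
      f"{bullying[~has_program].mean():.3f}")
# The cross-section shows MORE bullying where programs exist, even
# though every program reduced bullying in the school that ran it.
```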

So, while technically correct, the headline is a little misleading. Youth are more likely to be bullied at schools with anti-bullying programs, but not necessarily as a result of those programs. To be fair, the authors note in the limitations to their study that "the cross-sectional nature of the study limits one from making a causal inference about the relationship between individual and school-level facros [sic] and likelihood of peer victimization". However, their loose language in the conclusions and in the media release does just that.

The main problem here is that we have no idea of how much worse (or indeed, how much better) bullying would have been if those programs had not been in place. In order to tease that out, you would ideally need to have longitudinal data (data for schools both before, and after, their anti-bullying programs were implemented, and comparable data for schools that never implemented a program). Then you could see whether the anti-bullying programs have an effect or not (you would have a quasi-experimental design, and there are problems with this but I won't go into them here).

You could possibly argue that, because the results show a positive association, the anti-bullying programs are not effective (because they don't eliminate problem bullying in schools that have them). That's the angle taken by Time magazine columnist Christopher Ferguson ("Anti-Bullying Programs Could Be a Waste of Time"). Again the headline is technically correct but partly misses the point.

Why might schools with effective anti-bullying programs still show greater levels of bullying than schools without such programs? Even if a program is effective in reducing bullying, it may not reduce bullying all the way down to the levels of schools that never needed a program in the first place. In that case, schools with anti-bullying programs could still have higher observed levels of bullying, even though their programs are effective.

So, the study was poorly designed to determine the effectiveness of anti-bullying programs - it essentially tells us nothing about how effective (or ineffective) they are. I am much happier to believe meta-analyses of results from many experimental and quasi-experimental studies, such as that by Ferguson and others reported here. They found that anti-bullying programs show a small significant effect. However, they also noted it is likely that their estimated effect was largely due to publication bias, and as a result they concluded that these programs "produce little discernible effect on youth participants". So, the question of whether these programs are effective or not remains somewhat open.

I guess the overall point here is that as researchers, we need to be careful about interpreting and not over-stating our results, and where possible we also need to be careful about how the media interpret our results. It is far too easy for the general public to misinterpret our results if they are not clearly stated.

Tuesday, 15 October 2013

An un-recognised additional cost of higher education

In my ECON110 Economics and Society class, one of the topics we cover is the economics of education. Specifically, part of the topic looks at the private education decision - under human capital theory, we would choose another year of education (or to study a course, certificate, degree, etc.) provided the incremental benefits outweigh the incremental costs. The incremental benefits include higher lifetime earnings, which may arise from productivity gains, but also from signalling to employers that you are a better hire, as well as social capital gains from interacting with a cohort of like-minded students who will each go on to future careers. The incremental costs include direct costs such as tuition, textbooks, accommodation (provided it costs more than accommodation would if not studying), and so on. The incremental costs also include opportunity costs, such as foregone income while studying, foregone leisure time, and so on.
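In class we formalise that comparison as a present-value calculation. Here is a minimal sketch with invented numbers - a three-year degree, with hypothetical costs and a hypothetical graduate earnings premium:

```python
# Hypothetical figures for a three-year degree (all assumptions).
tuition_per_year = 7_000
foregone_income = 35_000    # earnings given up while studying
graduate_premium = 12_000   # assumed extra annual earnings afterwards
working_years = 40
r = 0.05                    # discount rate

def pv(amount, year):
    """Present value of an amount received 'year' years from now."""
    return amount / (1 + r) ** year

costs = sum(pv(tuition_per_year + foregone_income, t) for t in range(3))
benefits = sum(pv(graduate_premium, t)
               for t in range(3, 3 + working_years))

print(f"PV of incremental costs:    ${costs:,.0f}")
print(f"PV of incremental benefits: ${benefits:,.0f}")
# Study if the PV of benefits exceeds the PV of costs; the answer is
# highly sensitive to the discount rate and the assumed premium.
```

Note how the decision flips if the discount rate rises or the premium falls - which is one reason the increasing returns to education discussed below matter so much.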

The Becker-Posner blog this week discussed the increasing costs of education (see here for Becker's post, and here for Posner's). These increasing costs are happening in New Zealand, not just the U.S. However, Becker and Posner also note that the returns to higher education are also increasing (though, New Zealand doesn't do so well on this front compared with the rest of the OECD). So, in spite of the increasing tuition costs, the cost-benefit calculation will still often come out in favour of university study for most students.

However, there may be other costs of higher education that are less well recognised. A paper published in the journal Kyklos last year (gated) by Helmut Rainer (University of Munich) and Ian Smith (University of St Andrews) raises the prospect of an additional (and unrecognised) cost of higher education - lower sexual satisfaction. Rainer and Smith showed that education has two effects on sexual satisfaction. On the one hand, education improves communication between partners, which makes them more likely to coordinate their sexual preferences and leads to higher sexual satisfaction. On the other hand, education increases earnings and therefore increases the opportunity costs of leisure activities (including sex). Their empirical findings based on German data show that the opportunity cost effects dominate, leading higher education to be associated with lower sexual satisfaction overall.

The Rainer and Smith paper sits alongside an earlier paper in Economic Inquiry by Hugo Mialon of Emory University (ungated version here), on the economics of faking orgasm. Mialon found that both men and women with higher education were more likely to fake orgasms. His argument was along similar lines - higher earnings for more educated people led to higher opportunity costs of time, making them "more likely to fake just to get it over with".

So, when considering the private decision on education based on human capital grounds, there is an additional cost to consider!

Tuesday, 8 October 2013

Why Sex, Drugs and Economics?

Welcome to my new blog, "Sex, Drugs and Economics". You might rightfully ask, why call your blog "Sex, Drugs and Economics"? Well, some time ago I was asked to describe my research interests in a single (short) sentence. I have always had a pretty wide portfolio of research interests (if you are interested, my Google Scholar profile is here), but at the time my main research interests were in the economics of HIV/AIDS (my PhD thesis was on the relationships between poverty and HIV/AIDS in the Northeast of Thailand), and I was embarking on a new project on the spatial economics and impacts of liquor outlet density (research which has since generated a significant amount of publicity as liquor licensing laws have been reviewed and subsequently changed in New Zealand). Thus: sex, drugs and economics.

Why blog at all?
Until a few years ago, students in my first-year elective economics paper (ECON110 Economics and Society - still my favourite paper to teach) had to complete a blog as part of their assessment for the paper. Students found this to be one of the most challenging, and most rewarding, aspects of the paper (I've published on this innovative assessment here, with a much longer ungated version here).

It was also a great way to launch discussions of how even basic economics applies in the real world. So, in part this blog is a way for me to create a discussion space for my students, and help them to recognise the value in the economic approach to looking at real-world situations.

I'm not entirely unselfish though. There is research to suggest that blogging boosts the reputation of the blogger above that of economists with similar publication records, and it would be silly not to try and milk some of those benefits. And it's a good way to talk about some of the quirky research that others are doing, using economics.

What can you expect from this blog?
It's unlikely that I'll become a daily (or more frequent) blogger in the way that some of my favourite bloggers are, like Jodi Beggs at Economists Do It With Models. But I'll aim to post once or twice a week - sometimes on contemporary issues, and sometimes on quirky applied economics papers from the recent past. I'm hoping that most of my posts will apply some fairly basic economic principles, to make them accessible to my students.

Enjoy!