Tuesday 31 May 2016

The toilet seat game

When I blogged my review of William Nicolson's "The Romantic Economist - A Story of Love and Market Forces" last year, I mentioned that I would definitely use the toilet seat game as an example in ECON100 this year. However, the game theory topic is already pretty full, so I didn't manage to squeeze it in. So instead, I thought I would talk about it here.

First, a little bit of background. William tells us about one of his relationships, with Sarah, which hit a bit of a rocky patch with respect to toilet seats, and whether they should be left up or down. You may laugh, but there have been several papers exploring the game-theoretic implications of the toilet seat game (see here and here for two examples).

In the case of Will and Sarah, the interaction can be thought of as a sequential game with three decision nodes. At the first node, Will decides whether to leave the seat up or down. If he leaves the seat down, the game ends (because Sarah is happy). But if he leaves the seat up, Sarah gets angry. She then has the choice of whether to forgive Will, or punish him by nagging or throwing a minor tantrum. If Sarah forgives Will, then the game ends (and Will is pretty happy, because he got to leave the seat up and avoid the nagging). If instead Sarah punishes Will, then Will has the choice of agreeing to change his behaviour in the future, or ending the relationship. The game as a decision tree (extensive form) is presented below. The payoffs to Will and Sarah are simply ranked from 1 (best) to 4 (worst). Note that in this version of the game, ending the relationship is a worse outcome for Sarah than for Will. [*]

We can solve for the (subgame perfect) Nash equilibrium by using backward induction - essentially we start at the end of the game and work our way backwards, eliminating strategy choices that would not be chosen by each player. In this case, the last play is by Will. He can choose to end the relationship (and receive his 3rd best payoff), or change his behaviour (and receive his worst payoff). So, clearly he would choose to end the relationship. Now, working our way back to Sarah's choice, if she punishes Will then we know that Will is going to end the relationship. So Sarah can choose to forgive Will (and receive her 3rd best payoff), or punish him (which results in Will ending the relationship and Sarah receiving her worst payoff). Given that choice, Sarah will choose to forgive Will. Working our way back to the first node then, Will can choose to leave the seat down (and receive his 2nd best payoff) or leave the seat up (knowing that Sarah will forgive him, and he will receive his best payoff). Given that choice, Will is going to leave the seat up. So, the subgame perfect Nash equilibrium in this case is that Will leaves the seat up, and Sarah forgives him.

William then goes on to point out that the outcome of this game crucially depends on how Will feels about Sarah. If he really wants to be with Sarah, then the game changes. Now say that ending the relationship is the worst outcome for both Sarah and Will, as shown in the decision tree below.


Solving for equilibrium in this case (again using backward induction), we find that Will would agree to change his behaviour (because that provides him with his 3rd best payoff, compared with the worst payoff if he instead chose to end the relationship). Knowing that Will is going to change his behaviour, Sarah would choose to punish him if he leaves the seat up (because the second best payoff she receives when he agrees to change is better than the third best payoff she would receive by forgiving him). And knowing that Sarah will punish him, Will decides to leave the seat down (his second best payoff, which is better than the third best payoff he would receive by leaving the seat up, being punished, and agreeing to change his behaviour). So, the subgame perfect Nash equilibrium in this case is that Will leaves the seat down.
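For readers who want to see the mechanics, here is a minimal backward induction solver in Python (my own encoding of the two trees described above, not code from the book). Payoffs are the ranks used in the diagrams, so each player prefers lower numbers:

```python
# Backward induction on the toilet seat game. Payoff ranks: 1 = best, 4 = worst.

def solve(node):
    """Return (will_rank, sarah_rank, path of play) for the subgame at node."""
    if "payoffs" in node:                        # terminal node
        return node["payoffs"] + ([],)
    player = node["player"]                      # 0 = Will, 1 = Sarah
    results = {action: solve(child) for action, child in node["moves"].items()}
    # The mover picks the action whose continuation payoff ranks best (lowest)
    best = min(results, key=lambda action: results[action][player])
    will, sarah, path = results[best]
    return will, sarah, [best] + path

def game(ending_is_worst_for_will):
    # (will_rank, sarah_rank) at each terminal node, as ranked in the post
    change = (3, 2) if ending_is_worst_for_will else (4, 2)
    end = (4, 4) if ending_is_worst_for_will else (3, 4)
    return {"player": 0, "moves": {
        "down": {"payoffs": (2, 1)},
        "up": {"player": 1, "moves": {
            "forgive": {"payoffs": (1, 3)},
            "punish": {"player": 0, "moves": {
                "change": {"payoffs": change},
                "end": {"payoffs": end}}}}}}}

print(solve(game(ending_is_worst_for_will=False)))  # (1, 3, ['up', 'forgive'])
print(solve(game(ending_is_worst_for_will=True)))   # (2, 1, ['down'])
```

The equilibrium path of play flips from (up, forgive) to (down), exactly as in the two walk-throughs above.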

All in all, this is a great example of a sequential game in action. Of course, the game as presented above is necessarily simplified - in fact this game is a repeated game, so the Nash equilibrium is more complex (as shown in the two papers linked at the beginning). But a great example nonetheless.

*****

[*] Careful readers of Nicolson's book will notice that I have altered the payoffs in these games from those that appear in the book. Specifically, the payoff for Sarah in the case of the Up/Punish/Change outcome is that this is Sarah's second best payoff (whereas in the book it is her third, which is equal to the Up/Forgive outcome). I expect this was a slight error in the book.

Friday 27 May 2016

Uber says they're not price discriminating - should we believe them?

James from my ECON100 class pointed me to this article about Uber, surge pricing and price discrimination:
The car-hailing service Uber can detect when a user’s smartphone is low on battery, and therefore willing to pay more to book a ride.  
Uber, which has faced the ire of London’s tax drivers since launching in the capital in 2012, can tell when its app is preparing to go into power-saving mode, although the firm says it does not use this information to pump up the price.
Keith Chen, head of economic research at Uber, told NPR that users are willing to accept a “surge price” up to 9.9 times the normal rate, particularly if their phone is about to die.
“One of the strongest predictors of whether or not you’re going to be sensitive to surge… is how much battery you have left on your cellphone,” he said.
“We absolutely don’t use that to push you a higher surge price, but it’s an interesting psychological fact of human behaviour.”
So, Uber has discovered that people are more willing to pay a higher surge price when their phone battery is about to die (see my previous post on surge pricing). This suggests that Uber passengers have less elastic demand when their phone is running low on charge. This makes sense for a couple of reasons, both relating to time horizons. First, if you need to get home (or to work, or to a meeting, etc.) and your phone is running low on battery, then you need to get a ride soon. If you wait too long, your phone might run flat and then you might find it more difficult to find a ride home (you'd need to resort to hailing a cab on the street, using public transport, or walking). Second, we don't want to be without our phones for too long, lest we miss Kim Kardashian's latest tweet. So, we want to get somewhere where we can recharge our phones, and fast. We know that when customers have short time horizons, their demand is relatively less elastic. And when demand is relatively less elastic, firms can mark up the price higher and the customer will still be willing to pay.

So, should we believe that Uber is not price discriminating? Price discrimination increases profits when firms can do it effectively, and it only requires three conditions to be met:
  1. Different groups of customers (a group could be made up of one individual) who have different price elasticities of demand (different sensitivity to price changes);
  2. You need to be able to deduce which customers belong to which groups (so that they get charged the correct price); and
  3. No transfers between the groups (since you don't want the low-price group re-selling to the high-price group).
The first condition is clearly met, and Uber's app presumably knows when the phone's battery is low, satisfying the second (permission for this is probably buried in the terms and conditions for the app, which almost no one reads). Since rides can't easily be transferred between customers (and customers don't know the battery status of other Uber users anyway), the third condition is likely to be met too. So, if Uber isn't price discriminating on the basis of battery level, they are leaving potential profits on the table. Uber shareholders probably wouldn't be too happy to learn this. So, I think it's hard to believe that Uber doesn't take a lot of information about their passengers (including potentially the remaining battery life of their phone) into account at some level - perhaps they are not price discriminating via the surge price (i.e. the multiple by which they increase prices), but via the underlying base price?
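As a rough illustration of what is at stake, a profit-maximising firm facing a group of customers with price elasticity of demand e sets its price using the inverse-elasticity (Lerner) rule: (P - MC)/P = -1/e, which rearranges to P = MC × e/(1 + e). Here is a hypothetical sketch - the marginal cost and the elasticities below are invented for illustration, and are certainly not Uber's numbers:

```python
# Hypothetical third-degree price discrimination via the Lerner rule.
# The elasticities below are invented for illustration, not Uber's data.

def optimal_price(marginal_cost, elasticity):
    # Lerner rule: (P - MC)/P = -1/e  =>  P = MC * e / (1 + e)
    assert elasticity < -1, "needs elastic demand for a finite markup"
    return marginal_cost * elasticity / (1 + elasticity)

MC = 10.0  # assumed marginal cost of providing a ride

print(optimal_price(MC, elasticity=-5.0))   # full battery, more elastic: 12.50
print(optimal_price(MC, elasticity=-1.5))   # low battery, less elastic: 30.00
```

The less elastic the group's demand, the bigger the profit-maximising markup - which is exactly why a battery-level signal would be so tempting.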

As an interesting side-note, later in the article we find out a bit more about the price elasticity of demand of Uber customers, both when surge pricing is first introduced to a city and once it is established:
When Uber first introduces surge pricing to a city, even a small jump in price to 1.2 times the normal rate is enough to discourage 27pc of potential customers from booking.
However, cities with more established Uber services see a smaller 7pc reduction when this price rise kicks in, showing customers get used to the idea, Chen said. 
This suggests that demand is relatively elastic when Uber first introduces surge pricing to a city (since the percentage change in quantity (27%) is greater than the percentage change in price (20%)), but relatively inelastic once customers are more used to it (since the percentage change in quantity (7%) is less than the percentage change in price (still 20%)). It seems that people are initially resistant to temporary increases in price (through surge pricing), but once they realise the benefits, that initial resistance fades.
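For completeness, here is the quick arithmetic behind those labels (using the simple percentage-change formula for elasticity, not the midpoint method):

```python
# Back-of-the-envelope price elasticities implied by Chen's numbers.
# A 1.2x surge multiplier is a 20% price increase.
price_change = 20  # percent

print(27 / price_change)  # new market: 1.35 > 1, relatively elastic demand
print(7 / price_change)   # established market: 0.35 < 1, relatively inelastic
```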

I also found this bit interesting:
Mr Chen also noted the quirks of psychology that make passengers more likely to ride when the price is 2.1 times normal, rather than 2.0 times, which chimes in their consciousness as being twice as expensive.
“Whereas if you say your trip is going to be 2.1 times more than it normally was, wow, there must be some smart algorithm in the background here at work, it doesn’t seem as unfair,” he said.
I've seen other research that supports this (but off the top of my head I can't recall where) - it showed that people are more likely to be confident in a forecast share price of, say, $2.02 per share, than a forecast of $2.00. People believe there must be some sophisticated underlying model for a forecast of $2.02, whereas $2.00 seems more like a guess. Aren't cognitive biases wonderful?

Thursday 26 May 2016

Why study economics? Law and economics edition...

A recent paper (ungated here) in the Journal of Economic Education by John Winters (Oklahoma State University) looks at the earnings of lawyers in the U.S., by their undergraduate major (note for New Zealand readers: in the U.S., law requires graduate-level study, so all law students will have completed an undergraduate degree first). John explains:
I build on previous literature by presenting new data on lawyer earnings by undergraduate college major. Specifically, the data are from the pooled 2009–13 American Community Survey (ACS), which provides a 5-percent sample of the U.S. population. I report both mean and median earnings, and find that economics majors have especially high earnings among practicing lawyers.
Interestingly, economics is the fourth most common undergraduate major (studied by 6.5% of lawyers in the sample), behind political science and government (21.6%), history (9.9%), and English language and literature (8.1%). However, economics outperforms all of those more common majors in terms of earnings:
Lawyers with undergraduate majors in electrical engineering have the highest earnings according to both medians ($179,744) and means ($219,383)...
Accounting majors have the second highest median ($135,044) and third highest mean ($180,507) earnings. Economics majors have the third highest median ($130,723) and the second highest mean ($182,359) earnings.
Political science and government ranked ninth in median earnings, history ranked eighth, and English language and literature ranked 15th. So of the four most common undergraduate majors among lawyers, economics came out on top.

Law and economics are very complementary - for instance, Wikipedia has a page devoted to law and economics, and the late Nobel prize-winner Ronald Coase was famously Professor of Economics at the University of Chicago Law School. Top economists Gary Becker (the 1992 Nobel prize winner) and Andrei Shleifer have made seminal contributions to research in law and economics.

At Waikato we have taught undergraduate and graduate papers in law and economics for many years, and some of our top graduates have completed conjoint degrees that combined a law degree with either a management or social science degree majoring in economics. Knowing the great jobs that those graduates have gone on to, I wouldn't be surprised if the earnings of our law and economics graduates were higher than those of many other degree combinations as well.

Add this to the list of reasons to consider including economics in your degree programme (if you're a current or future law student).


Tuesday 24 May 2016

Why block pricing doesn't work for heterogeneous demand

On Sunday I covered two-part pricing. Today it's the turn of block pricing. A firm uses block pricing when it charges a relatively high price until the consumer reaches some threshold, then a lower price for units after the threshold. In reality, there could be multiple thresholds. Buy-one-get-one-half-price is an example of block pricing. As with two-part pricing, we can think about block pricing first by contrasting it with a monopoly firm pricing at a single price-per-unit, as in the diagram below (for simplicity, I'll again use a constant-cost firm).


The monopoly firm using a single price-per-unit selects the price that maximises profits. This occurs where marginal revenue is equal to marginal cost, i.e. at the quantity Q1 with price P1. The producer surplus (profit) the firm earns is the rectangular area CBDF.

However, if the firm switches to block pricing, then the firm can choose an initially higher price (P0), at which point the consumers purchase Q0 of the good. If the firm lowers the price to P1 for every unit the consumer buys after Q0, then the consumer will buy Q1 of the good (and pay P0 for the first Q0 units, then P1 for the rest). The firm's profits would increase by the area HGJC.

The firm could add another block, offering a lower price than P1 for units purchased beyond Q1, and capture even more profit (I haven't shown this on the diagram though). Again, this demonstrates that firm profitability is all about creating and capturing value.
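To put some rough numbers on this, here is a sketch with an assumed linear demand curve of P = 20 - Q and a constant marginal cost of 4 (these values are invented for illustration; they are not taken from the diagram):

```python
# Block pricing vs. a single price, with assumed demand P = 20 - Q and MC = 4.

MC = 4.0

def quantity_demanded(price):
    return max(0.0, 20.0 - price)

# Single price: MR = MC gives Q1 = 8 and P1 = 12
single_price_profit = (12 - MC) * quantity_demanded(12)      # 64

# Two blocks: P0 = 16 for the first 4 units, then P1 = 12 out to 8 units
two_block_profit = (16 - MC) * 4 + (12 - MC) * (8 - 4)       # 48 + 32 = 80

# Adding a third block at a price of 8 (out to 12 units) captures even more
three_block_profit = two_block_profit + (8 - MC) * (12 - 8)  # 80 + 16 = 96

print(single_price_profit, two_block_profit, three_block_profit)
```

Each extra block converts another slice of consumer surplus (and some deadweight loss) into profit, which is the 'creating and capturing value' point in action.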

We can also show the effect of block pricing using the consumer choice model. This is illustrated in the diagram below. The black budget constraint represents the most the consumer can afford to buy with their income, when there is a single price-per-unit for Good X (with 'All Other Goods' [AOG] on the y-axis). The consumer purchases the bundle of goods E, which includes X0 of Good X, and A0 of All Other Goods.


With block pricing, the firm charges the standard price Px up to the quantity X0, and the consumer pays a lower price for units beyond that quantity. This causes the budget constraint to pivot outwards (and become flatter) from the point E. So this consumer can now reach a higher indifference curve, by buying the bundle of goods D (their new best affordable choice). This bundle includes more of Good X (X1), and less of All Other Goods (A1). Because they are buying less of All Other Goods, they must be spending more on Good X.

As with two-part pricing, block pricing also works well when the firm faces homogeneous demand for its product (i.e. when all consumers have similar demand for the product). We can also use the consumer choice model to demonstrate why block pricing doesn't work so well when there is heterogeneous demand.

Consider two consumers - one with low demand (shown on the diagram below with the blue indifference curve), and one with high demand (red indifference curves). With a single price-per-unit, the low demand consumer buys Bundle G, which includes X1 of Good X, and A1 of All Other Goods, and the high demand consumer buys Bundle J, which includes X3 of Good X, and A3 of All Other Goods.


When the firm moves to block pricing instead, the low demand consumer is not affected. The highest indifference curve they can get to is still I0, so they continue to buy Bundle G. 

The high demand consumer would be better off moving to Bundle K, which is on the highest indifference curve they can now reach. Bundle K contains more of Good X (X4), so block pricing does induce these consumers to buy more. However, Bundle K also includes more of All Other Goods (A4), which means that even though these high demand consumers are buying more of Good X with block pricing, they are actually spending less on Good X.

So, block pricing doesn't work so well for heterogeneous demand, because the lowest demand consumers will not be affected, while the highest demand consumers will buy more of the good, but spend less on it. The combination of these two effects is likely to reduce the firm's profit, but notice that it isn't as bad as for two-part pricing that I described on Sunday.

Again, this is why you don't often see block pricing (or two-part pricing) alone 'in the wild'. Firms must first ensure they have relatively homogeneous demand, and they achieve this through price discrimination (e.g. through menu pricing - offering different options to different consumers, knowing that each option will appeal to a different 'type' of consumer). Then within each homogeneous group they can use block pricing or two-part pricing.


Monday 23 May 2016

Pricing strategy and behavioural economics

I've been holding onto this one for a while, until my ECON100 class covered pricing strategy. We know that firms use pricing strategy to try to extract additional profits from consumers (yesterday I blogged about two-part pricing). But firms also take advantage of consumers' quasi-rationality (consumers' departures from purely rational decision making). Back in January, Tim Harford wrote a great post which highlighted some examples:
The most egregious example — popular in the US, blessedly less so in the UK — is the mail-in rebate, where a manufacturer or retailer will offer a discount but only after the customer fills in a form, attaches a receipt and mails the paperwork off with fingers crossed. There are a number of advantages to this — it may allow the manufacturer to gain information about customers, and produces a flattering cash flow — but surely the main reason companies use mail-in rebates is that they know some people aren’t as disciplined and organised as they think.
Scott Adams, as so often, nailed the issue in a Dilbert comic strip. Dogbert offers a product for sale for $1,000,029 with a $1,000,000 rebate. And since “all we need is one person to forget to mail in the rebate forms”, Dogbert suggests targeting “the lazy rich”.
The benign argument for mail-in rebates is that the relatively wealthy are less likely to mail in the rebate, because they prefer to spend the time it takes to do so on other things (this is also the argument for why the wealthy are less likely to clip coupons). This leads the wealthy to pay a relatively higher price for the good, which is essentially what the firm wants, since rebates (and coupons) are a subtle form of price discrimination. Because the wealthy have relatively less elastic demand for goods (they are less responsive to higher prices, because goods of whatever type take up a smaller proportion of their income), the firm wants to (and can get away with) charging the wealthy a higher price.

However, Harford hits on another point with rebates. If consumers trick themselves into thinking they will mail in the rebate (so at the time they make the purchase they are thinking about the sticker price minus the rebate), but then are too disorganised to do so, the consumers are paying a higher price than they initially were willing to. Or perhaps people are just time inconsistent - the decision they make at the time of purchase (that they will mail in the rebate form) isn't the decision that they make later (that it's too much trouble to fill out the form, find a postbox, etc.). Either way, that extra money goes into the back pockets of the firm.

Harford also gives another example, which will be familiar to my ECON100 students:
More subtle still are pricing schemes that exploit consumers. In a recent analysis of overconfident consumers, economist Michael D Grubb highlights the “three-part tariff”. A one-part tariff would be, for example, 2p per minute to make phone calls. A two-part tariff might be £10 a month, plus 1p a minute to make phone calls. And a three-part tariff? A tenner a month with 200 minutes of free calls, plus 10p a minute to make phone calls after the 200 minutes have been used.
The three-part tariff will be reassuringly familiar to anyone with a mobile phone contract. But look at it there on the page. It’s ridiculous, is it not? It is hard to imagine any company deploying such a convoluted offering for a product whose consumption was obvious, such as petrol — “£10 a month to use our petrol stations, the first 50 litres of petrol to be supplied at cost price, and then £5 a litre thereafter.” There are legitimate business justifications for a three-part tariff but the likeliest story is that phone companies think we are fallible. Most of us don’t have a firm grasp either of how much we talk on average or of how variable that average is. As a result, many of us pay these punitive charges more often than we expect.
I demonstrated yesterday how the two-part tariff allows consumers to reach a higher indifference curve (thus making them better off). On a benign reading, the three-part tariff does something similar. However, this relies on consumers being aware of how much they are spending, and if they are largely unaware then all bets are off. One aspect of behavioural economics is the illusion of knowledge - we think we know more than we do. So, when we choose a mobile phone plan, we are confident that we know how many minutes we will use, and that we will be able to keep track so that we avoid the expensive minutes after our free minutes have run out. But it turns out we aren't as aware of our mobile phone usage as we thought.
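To make Grubb's example concrete, here is a quick sketch of the monthly bill under each tariff, using the numbers from the quoted passage (the calling patterns I plug in at the end are my own assumptions):

```python
# Monthly phone bills (in pence) under Harford's three example tariffs.

def one_part(minutes):
    return 2 * minutes                        # 2p per minute

def two_part(minutes):
    return 1000 + 1 * minutes                 # 10 pounds, plus 1p per minute

def three_part(minutes):
    # 10 pounds, 200 free minutes, then 10p per minute thereafter
    return 1000 + 10 * max(0, minutes - 200)

for minutes in (150, 200, 300):
    print(minutes, one_part(minutes), two_part(minutes), three_part(minutes))
```

At 300 minutes the three-part bill jumps from £10 to £20 - exactly the punitive overage charges that overconfident consumers pay more often than they expect.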

I encourage you to read the whole of Harford's post, as it is very good.

Sunday 22 May 2016

Why two-part pricing doesn't work for heterogeneous demand

A firm uses two-part pricing when it splits the price into two parts (the clue is in the name!): (1) an up-front fee for the right to purchase; and (2) a price per unit. If the consumer wants to buy any of the product, they must first pay the up-front fee. We can think about two-part pricing first by contrasting it with a monopoly firm pricing at a single price-per-unit, as in the diagram below (for simplicity, I'll use a constant-cost firm).


The monopoly firm using a single price-per-unit selects the price that maximises profits. This occurs where marginal revenue is equal to marginal cost, i.e. at the quantity QM with price PM. The producer surplus (profit) the firm earns is the rectangular area CBDF.

However, if the firm switches to two-part pricing, then we first recognise that the firm can charge an up-front fee equal to the consumer surplus, and the consumer would still be willing to purchase the same quantity (QM). So, the up-front fee could be as large as the area ABC, in which case profits would be the combined area ABDF.

The firm can do even better than that. Profitability is all about creating and capturing value. So, if the firm can create more value (by increasing the consumer surplus), they can capture more profit (by increasing the size of the up-front fee). So, by lowering the price to PS, the consumers would be willing to buy the quantity QS, and would receive consumer surplus equal to the area AEF. By setting the up-front fee equal to AEF and the per-unit price at PS, the firm then increases their profit to be all of the area AEF.
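To put some rough numbers on this, suppose demand is P = 20 - Q and marginal cost is constant at 4 (invented values for illustration, not taken from the diagram). With linear demand, pricing at marginal cost and collecting the whole consumer surplus as the up-front fee doubles the single-price monopoly profit:

```python
# Two-part pricing vs. a single price, with assumed demand P = 20 - Q and MC = 4.

MC = 4.0

def consumer_surplus(price):
    q = 20.0 - price                  # quantity demanded at this price
    return 0.5 * q * q                # triangle below demand, above price

# Single monopoly price (MR = MC): P = 12, Q = 8
monopoly_profit = (12 - MC) * 8                                        # 64

# Keep the per-unit price at 12, but add a fee equal to consumer surplus
profit_fee_at_monopoly_price = monopoly_profit + consumer_surplus(12)  # 96

# Price at marginal cost, and collect the (larger) surplus as the fee
profit_fee_at_marginal_cost = consumer_surplus(MC)                     # 128

print(monopoly_profit, profit_fee_at_monopoly_price, profit_fee_at_marginal_cost)
```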

We can also show the effect of two-part pricing using the consumer choice model. This is illustrated in the diagram below. The black budget constraint represents the most the consumer can afford to buy with their income, when there is a single price-per-unit for Good X (with 'All Other Goods' [AOG] on the y-axis). The consumer purchases the bundle of goods E, which includes X0 of Good X, and A0 of All Other Goods.


With two-part pricing, the firm charges an up-front fee (so the budget constraint starts at a lower point on the y-axis, since paying the fee is like giving up income for the consumer), and a lower per-unit price. So the budget constraint for two-part pricing (the red budget constraint) is flatter. Let's assume it passes through the point E (so the consumer could still purchase that bundle of goods if they wanted to). There is one other point that we need to recognise - if the consumer buys none of Good X, then they do not need to pay the fee. So Bundle C is also an option for the consumer.

With two-part pricing, this consumer can now reach a higher indifference curve, by buying the bundle of goods D (their new best affordable choice). This bundle includes more of Good X (X1), and less of All Other Goods (A1). Because they are buying less of All Other Goods, they must be spending more on Good X.

Two-part pricing works well when the firm faces homogeneous demand for its product (i.e. when all consumers have similar demand for the product). We can also use the consumer choice model to demonstrate why two-part pricing doesn't work so well when there is heterogeneous demand.

Consider two consumers - one with low demand (shown on the diagram below with the blue indifference curves), and one with high demand (red indifference curves). With a single price-per-unit, the low demand consumer buys Bundle G, which includes X1 of Good X, and A1 of All Other Goods, and the high demand consumer buys Bundle J, which includes X3 of Good X, and A3 of All Other Goods.


When the firm moves to two-part pricing instead, the low demand consumer can no longer afford bundle G (it is outside the new budget constraint). The highest indifference curve they can get to is I0, where they buy Bundle C. This bundle includes none of Good X. These low demand consumers find themselves better off by not buying any of Good X at all, because then they don't have to pay the up-front fee.

The high demand consumer would be better off moving to Bundle K, which is on the highest indifference curve they can now reach. Bundle K contains more of Good X (X4), so two-part pricing does induce these consumers to buy more. However, Bundle K also includes more of All Other Goods (A4), which means that even though these high demand consumers are buying more of Good X with two-part pricing, they are actually spending less on Good X.

So, two-part pricing doesn't work so well for heterogeneous demand, because the lowest demand consumers will stop buying the good entirely, while the highest demand consumers will buy more of the good, but spend less on it. The combination of these two effects is likely to reduce the firm's profit.
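A simple numerical sketch shows the bind the firm is in. Suppose (with invented numbers, not the diagram's) a low-demand consumer with demand P = 10 - Q, a high-demand consumer with demand P = 20 - Q, marginal cost constant at 4, and the per-unit price set at marginal cost. Whatever single up-front fee the firm picks, it either drives away the low-demand consumer or leaves most of the high-demand consumer's surplus uncaptured:

```python
# Why one up-front fee can't fit heterogeneous consumers (invented numbers).

MC = 4.0

def surplus(intercept, price=MC):
    q = max(0.0, intercept - price)
    return 0.5 * q * q            # triangle under a slope -1 demand curve

low, high = surplus(10), surplus(20)    # 18 and 128

def profit(fee):
    # A consumer pays the fee only if it leaves them non-negative surplus.
    return sum(fee for cs in (low, high) if cs >= fee)

print(profit(18))    # 36: both join, but most of the high type's surplus is left
print(profit(128))   # 128: the low-demand consumer stops buying entirely
print(profit(60))    # 60: an in-between fee still loses the low-demand consumer
```

This is exactly the menu pricing logic in the next paragraph: split consumers into more homogeneous groups first, then set a separate fee for each group.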

This is why you don't often see two-part pricing alone 'in the wild'. Most often, firms will price discriminate first (often through menu pricing - offering different options to different consumers, knowing that each option will appeal to a different 'type' of consumer), then within each subgroup use two-part pricing.


Saturday 21 May 2016

Online vs. blended vs. traditional classes

I've been a bit resistant to the increased trend towards 'flipped classrooms' and blended learning. For the most part, my resistance was due to a lack of robust research demonstrating that flipped classrooms were more effective (or even equally effective) when compared with 'traditional' face-to-face classes. My classes aren't the traditional chalk-and-talk in any case - my students know that they will get a lot of exercises to do in class, especially in ECON110. So, I remain to be convinced that flipping my classroom would benefit my students (moreover, I worry that there would be distributional consequences - more on that at the end of this post).

Anyway, two papers in the new AER Papers and Proceedings issue caught my eye on this topic. The first paper (ungated version here), by Aaron Swoboda and Lauren Feiler (both Carleton College) compares a blended learning approach with a traditional face-to-face class in introductory microeconomics. They describe the two approaches they employ as follows:
The blended sections required students to read textbook chapters, watch videos of lecture material created by the authors, and answer basic comprehension questions online before coming to a class session on a topic. During the class session, the professors began by answering questions about the out-of-class materials and giving mini-lectures targeted to troublesome items, but students spent much of their time engaged in group problem-solving, simulations, and discussion activities. After the session, students were assigned a more challenging set of online questions covering that day’s topic and preparatory materials for the next topic. Online homework assignments were completed using Sapling Learning and provided instant grading and feedback. The control courses primarily used “chalk-and-talk” and traditional written homework assignments.
In other words, this paper doesn't simply compare the flipped classroom with a traditional approach. Both approaches had the same number of contact hours between teachers and students. So, the blended approach essentially involved students engaging in more learning hours, because in addition to classroom time they were watching video material. On top of that, the blended learning treatment group had online questions with feedback. So, it's not possible to isolate whether any impact on learning arises from the flipped classroom approach, the additional materials, or some combination of the two. Having said that, Swoboda and Feiler find:
The mean student scored 20.5 points on the TUCE [Test of Understanding of College Economics] at the end of the term compared to 15.0 on the pretest. The difference in TUCE scores is significantly different from zero at conventional levels.
...students in the traditional courses improved by roughly four points out of 30, while students in the blended courses improved by 6 points.
So with the blended learning approach students improved their economics knowledge by more. The additional increase was about half a standard deviation in the initial scores, which is reasonably substantial. However, it isn't possible to isolate whether this was because of the blended approach itself, or the additional learning hours devoted to the paper, or the online tests that provided ongoing feedback. Another issue with the paper is that students could choose whether they were in the blended learning class or the face-to-face class. You'd expect students who thought they would do better in the blended learning class to choose that option (though that is by no means certain - students might well convince themselves that blended learning is a good thing, but then not follow through on watching the videos, thereby reducing their learning opportunities).

Which brings me to the second paper (sorry I don't see an ungated version anywhere), by William Alpert, Kenneth Couch, and Oskar Harmon (all University of Connecticut). They take what I consider to be a more robust approach (also in introductory microeconomics), where students are randomly assigned to a treatment (or control), and they compare face-to-face with both a blended learning approach and a fully online approach. They explain:
The experimental design randomly assigned students to one of three delivery modalities: classroom instruction, blended instruction with some online content and reduced instructor contact, and purely online instruction. The traditional section met weekly for two 75-minute sessions, alternating between a lecture and discussion period. The blended section met weekly with the instructor for a 75-minute discussion period. As a substitute for the lecture period, students were given access to online lecture materials. The online section had class discussion in an online asynchronous forum, and for the lecture period students were given access to the same online lecture materials as students in the blended section. The online materials were developed using best practice standards from the Higher Ed Program Rubric for online education as described on the website of the Quality Matters (2014) organization. For the three arms of the experiment, lectures, discussions, and other instructional content were prepared and delivered by the same instructor.
So, in this case the blended learning approach had reduced contact hours, so any observed effect would likely not be because of enforced additional learning hours for students. Their outcome measure is performance on a common final examination. They find:
There, students are still found to score 4.2 (t-statistic = 2.68) points lower in the online section than the face-to-face variant. The sign of the impact of participating in the blended course is negative but the parameters are not significantly different than zero at conventional levels across the three columns.
In other words, the students in the online only treatment did demonstrably worse than students in the face-to-face class (by about half a letter grade). Students in the blended learning treatment performed similarly to those in the face-to-face class.

Alpert et al. also look at differential attrition between the three classes. They find:
From the point students received permission to enroll to course completion, potential participation in the face-to-face section declined from 175 to 120 students (30 percent). For the blended course section, the decline was from 172 randomized students to 110 completers (36 percent). The largest decline was observed for the online arm where 172 students were assigned to the course and 93 completed (46 percent).
In other words, students were most likely to drop out of the online class, and a greater proportion of students dropped out of the blended learning class than the face-to-face class. Once they control for this attrition, the negative effects of the online only class are even larger (about a full letter grade), but there is still no significant difference between the face-to-face and blended learning class.

These two papers give quite complementary findings. The Swoboda and Feiler paper found a significant positive effect of their blended learning approach (compared with face-to-face), while Alpert et al. find no significant effect of blended learning. However, since Swoboda and Feiler had additional learning hours (i.e. the same number of classroom hours plus video watching beforehand), it is plausible that their significant effects were due to the enforced additional learning hours. The higher attrition in the blended learning class for Alpert et al. is a bit of a concern though.

Now I would really like to know what the distributional effects of blended learning are. My expectation is that it will probably work well for keen, motivated students, who are almost certain to watch the lectures before class. These are the students who currently read the textbook, and additional resources (like the lecturer's blog!). These students will likely benefit most from the change in approach, and gain significantly from the interactive learning in class, which is what the blended learning approach is designed to facilitate. The less-motivated students, who may watch some of the videos before class but often don't, will not benefit as much, or may actually be made worse off by the switch to blended learning. So I'd expect to see a similar mean (or median) result in the class (or higher if the number of classroom hours remained the same), but a wider distribution around that mean.

At the least, some more research work is needed to convince me to move to a blended learning approach (having said that, it is essentially the approach we use in teaching ECON100 in summer school!).

Friday 20 May 2016

Homer Economicus - The Simpsons and Economics

I've just finished reading this book, which was edited by Joshua Hall (West Virginia University) and included contributions from many authors, including Jodi Beggs (of Economists Do It With Models fame) and Art Carden (famous to me at least for this poem on How Economics Saved Christmas).

Each chapter uses examples from The Simpsons to illustrate economic concepts and how they apply to the real world. Overall the book is pitched at readers who have some passing familiarity with 100-level economics - if you have no understanding of economics at all, this probably isn't the place to start. However, for those who can at least recognise the concepts of supply and demand, opportunity cost, and monopoly, and especially for those who are also fans of the show, this book is a good read. You may be surprised how many economic lessons you can draw from the show.

The highlights for me were the chapters on unintended consequences (by Art Carden), labour markets (by David T. Mitchell of the University of Central Arkansas), and behavioural economics (by Jodi Beggs). I especially enjoyed being reminded of a lot of the most amusing parts of episodes from the early seasons, before I started on my economics journey.

Joshua Hall also has two earlier research papers on using The Simpsons to teach economics (first in the Journal of Private Enterprise; and second in The American Economist - I don't see an ungated version of the second paper anywhere, sorry). See also this paper from the Journal of Economic Education by Andrew Luccasen and Kathleen Thomas (both Mississippi State University). They are all well worth reading. You can also hear Joshua Hall talking to Tom Woods about economics and The Simpsons here. Enjoy!

Thursday 19 May 2016

We probably are paying more for better movies, but it's not obvious

Two years ago I wrote a post about uniform pricing at the movies. It's interesting that in the two years since then, little has changed. The price of a movie ticket is still nominally the same, regardless of whether you are seeing a blockbuster new release, or a B-grade horror movie. In my earlier post though, I pointed out why nominally uniform pricing probably doesn't actually lead all movies to be priced the same when you consider the average ticket price:
First, consumers are typically unable to use complimentary passes at new movies. However, the length of time a new movie remains "no complimentaries" varies by movie. A summer blockbuster might remain "no complimentaries" for a few weeks, whereas a B-grade horror flick might only be "no complimentaries" on opening night or not at all. You might argue though that the price is the same, whether the consumer is using a ticket purchased on the day or a complimentary ticket which was purchased earlier. However, complimentary passes are usually distributed in bulk to third parties and for less than the full ticket price. The effect of this is that the average price paid by consumers for a movie varies by movie. It's probably not a large difference (although if the movie theatres didn't enforce this rule, along with many others I would certainly be saving any free passes for the high-demand movies), but it will have some effect on the average price received by the movie theatre even though the posted price doesn't change.
Second, movie theatres are very deliberate in their selection of session times for movies. The low-demand movies are more likely to be playing at low-demand times (Monday afternoon, etc.), when ticket prices are lower. So again, this will affect the average price the movie theatres receive for each ticket sold, with low-demand movies (being more likely to be in low-price session times) having a lower average price than high-demand movies.
So, overall while variable pricing is not observed explicitly in the market, I would argue that there is at least some variable pricing at play through the non-price strategies undertaken by the movie theatres.
A couple of additional points on pricing at movie theatres came up in class discussion in ECON100 this week. Movie theatres are increasingly providing premium seating options (I made this point in the earlier post), but not all of their theatres have the same configuration. Some have more premium seating than others. So, ensuring that high demand movies are shown more often in the theatres that have more premium seats will also lead to higher average ticket prices for high demand movies when compared with low demand movies.

On a more speculative note, consumers at high demand movies probably have less elastic demand for the movie experience (as a whole) than consumers at a low demand movie. Faced with a 'low' ticket price relative to their demand, the consumers with low elasticity for the whole experience might spend more on concessions (drinks, popcorn) and end up spending more overall as a result, leading to higher profits for the movie theatres (the profit margin on concessions is quite high). I don't know how robust this latter argument is, but on the surface it seems plausible.

Either way, it is clear that movie theatres' pricing still largely defies logic. Marginal Revolution offers some additional thoughts here.



Wednesday 18 May 2016

Anti-trust laws catch up with Google

In ECON100 (and ECON110) we recognise that monopolies use their market power to raise the price above the price that would prevail in a more competitive market. This higher price causes fewer consumers to purchase the product, and reduces economic welfare compared with the more competitive market. This is one of the reasons for considering anti-trust legislation - to prevent the formation of monopolies in the first place, and thus increase total welfare.

However, one aspect of market power I didn't discuss in class was the case where firms leverage their market power in one market to increase their profitability in other markets. We'll talk about strategic aspects of pricing between markets in class this week, but leveraging market power in one market to achieve increased profits in another market is something else. For instance, the anti-trust case against Microsoft in the late 1990s/early 2000s was essentially about whether Microsoft used its dominant position in the market for operating systems to force rivals out of the market for internet browsers, by automatically bundling the Internet Explorer browser with the Windows operating system. This leaves consumers with fewer choices, making them worse off.

The Europeans have taken similar issue with some of the activities of Google, and Google may now face a €3 billion fine as a result of their activities. MSN.com explains:
It is understood that the European Commission is aiming to hit Google with a fine in the region of €3bn, a figure that would easily surpass its toughest anti-trust punishment to date, a €1.1bn fine levied on the microchip giant Intel...
It will mark a watershed moment in Silicon Valley’s competition battle with Brussels. Google has already been formally charged with unlawfully promoting its own price comparison service in general search results while simultaneously relegating those of smaller rivals, denying them traffic...
Margrethe Vestager, the Competition Commissioner, on Friday raised the possibility of further charges in other specialised web search markets such as travel information and maps.
So Google has been using its dominance of the search engine market to increase profits from its price comparison service (and likely travel, and maps). It's probably even worse than the Europeans believe - I've written before about the market power of price comparison websites, especially if one firm came to dominate the 'price comparison market' (which was no doubt Google's intention). We shouldn't be surprised to see firms taking advantage of their market power in this way. This is exactly the reason why we have organisations like the European Competition Commission, to ensure the activities of firms with market power are kept in check.


[HT: James from my ECON100 class]

Tuesday 17 May 2016

Amazon, the copycat seller

Last week in ECON100 we covered market power, and one aspect I spent a little bit of time on was the idea of platform markets (or two-sided markets) - where the firm brings together the buyer and the seller, and charges a fee to one or both parties for linking them up. Platform markets are a special case of network externalities, where the size of the network conveys benefits to those who are part of it.

So, TradeMe provides a platform for buyers and sellers to come together to exchange goods. The platform is valuable to buyers because they know there are many sellers offering goods there. The platform is valuable to sellers because they know there are many buyers looking for goods there. TradeMe pockets a commission on all sales, and because it is the go-to place for buyers and sellers, this creates a barrier to entry for other potential online auction sites - it would be difficult for them to compete with TradeMe, because they would have to somehow entice buyers and sellers away from TradeMe. That provides TradeMe with some market power, so the auction fees are likely a little bit higher than they would be if there were many competing auction sites.

Now, it turns out that platform markets can be profitable in other ways too, in ways not related to market power. Bloomberg reports:
Rain Design has been selling an aluminum laptop stand on Amazon.com Inc. for more than a decade. A best-seller in its category, the $43 product has a 5-star rating and 2,460 customer reviews.
In July, a similar stand appeared at about half the price. The brand: AmazonBasics. Since then, sales of the Rain Design original have slipped. “We don’t feel good about it,” says Harvey Tai, the company’s general manager. “But there’s nothing we can do because they didn’t violate the patent.”
Rain Design’s experience shows how Amazon is using insights gleaned from its vast Web store to build a private-label juggernaut that now includes more than 3,000 products -- from women’s blouses and men’s khakis to fire pits and camera tripods. The strategy is a digital twist on one used for years by department stores and big-box chains to edge out middlemen and go direct to consumers -- boosting loyalty and profits.
In other words, Amazon has been mining the data on which items sell most profitably on its platform, and using that to develop profitable product lines of its own. Of course, consumers are happy to buy the goods from the cheapest source (AmazonBasics). And there's more:
...not only can Amazon track what shoppers are buying; it can also tell what merchandise they’re searching for but can’t find, says Rachel Greer, who worked on the private label team until 2014. Then, she says, “Amazon can just make it themselves.”
The idea of using data on consumers' online browsing, purchasing behaviour, and preferences also underlies a lot of personalised pricing (offering different prices to every customer, based on what firms think their willingness-to-pay is) - a form of price discrimination which I have discussed before. No one should be surprised that Amazon is using the data and tools available to increase its profitability at the expense of other sellers.

[HT: Marginal Revolution]

Thursday 12 May 2016

Why you shouldn't believe all the 'science' you hear about in the media

Jodi Beggs (at Economists Do It With Models) points to this John Oliver segment on scientific studies, which is well worth watching:


Oliver makes some great points. If you want to know the real science, it pays to read the study rather than the media article, or even the media release from the researchers or their institution.

If you want more on this topic, I encourage you to read Thomas Lumley at StatsChat, who is constantly making similar points about media coverage of scientific studies.

Wednesday 11 May 2016

Ransomware is all about creating and capturing value

This week and next in ECON100 we are essentially discussing creating and capturing value (this week market power and monopolies, and next week pricing strategy). Most of the firms that we consider are selling fairly standard products that consumers want to buy. The lack of close substitutes (usually because the product is differentiated, branded, or because there are barriers to entry into the market) means that firms can sometimes derive a substantial profit from these activities.

However, not all 'business' activity is quite as benign in its effects as selling branded skateboards or pharmaceuticals. This week's Economist has an interesting article on ransomware, which can be interpreted in similar ways to what we are discussing in class:
Cybercrooks are changing their modus operandi and widening their nets for snagging the unwary... The most pernicious malware today immobilises an infected computer, encrypts its files and then demands a ransom to release them. If not paid within 12 hours or so, the computer’s content gets obliterated. To make sure the hapless victim gets the message, a bright red clock begins the count down...
No fewer than 4m incidents of ransomware were reported in the second quarter of 2015 alone. Millions more are thought to have gone unreported.
If someone bricks your computer or phone, then someone who un-bricks computers and phones is going to generate a lot of value for you (and for other hapless victims of the malware). Capturing that value simply requires pricing the 'un-bricking service' at less than the value it creates.

Creating demand for your own services is referred to as supplier-induced demand. We usually associate it with the seller having more information about the need for services than the buyer. Think about mechanics, who inspect your vehicle, and then undertake the repairs that they have advised that you need as a result of their inspection. The mechanic has an incentive to overstate the necessity for repairs. You can solve the problem of supplier-induced demand by having one firm do the inspections, and a different firm do the repairs.

However, in this case there is no information problem - the criminals are directly generating demand for their un-bricking services. Even better for the criminals, infecting computers and then un-bricking them is a low-cost activity:
Hacking into online retailers and financial institutions to steal credit-card and bank details may offer larger financial returns eventually, but selling the stolen data on the black market can be burdensome. By contrast, ransomware allows cybercrooks to get paid directly by their victims—with little effort, no special hacking skills, and negligible chance of being caught.
And to make things even better (for the criminals), demand for un-bricking services is likely to be relatively inelastic (unresponsive to price). We know that demand is less elastic when time horizons are short, and most affected people (and especially firms) want access to their data quickly (reinforced by the pressure of the visible countdown timer mentioned above):
Ransomware is especially effective because many of its victims do not have time on their side. Hospitals are particularly vulnerable, since they cannot afford to wait to access medical histories of patients requiring urgent treatment. Likewise, without the continual availability of data from suppliers, distributors and customers, modern manufacturing grinds quickly to a halt. Airlines closing flights prior to departure need to tally the “no-shows” with their “over-sold” seats. Disrupt any such mission-critical activity and costs—in human as well as financial terms—quickly get out of hand.
This relatively inelastic demand means that the criminals can charge a relatively high price for their 'services'. All of which adds up to significant profits for the criminals - potentially hundreds of millions of dollars. You'd better keep your computer and phone security up-to-date, unless you want to contribute to these profits.

Tuesday 10 May 2016

Creating and capturing value in pharmaceutical markets

In managerial economics, we recognise that business profitability is all about creating and capturing value. Creating value for customers attracts them to buy the product from your firm, and capturing some of that value for your firm is what leads to business profits. The better at creating and capturing value your firm is, the more profitable it will be.

This is essentially what we will be covering in ECON100 this week. Which made this New Zealand Herald article from last week particularly timely. The article talks about pricing in the market for pharmaceuticals, and tells the story of Mick Kolassa:
The Kolassa theory seems straightforward enough. Basically, it holds that drug companies should charge handsomely for products that will benefit not only the patient but the economy, by keeping people out of hospitals or allowing them to live productive lives. The price should factor in a profit that can finance more crucial discoveries.
A case study displayed on MME's website sheds light on what the firm calls its "value-based strategies." A client wondered whether it should cut a price to reverse slowing sales. After discovering some doctors really liked the drug, MME recommended "a set of aggressive price increases immediately." The client obliged, and "revenue has increased substantially."...
In his book "The Strategic Pricing of Pharmaceuticals," Kolassa wrote about drug-price elasticity. "It is theoretically possible to set a price that is too high," he said. "We have yet to identify such a situation in the U.S. market."
Pharmaceuticals create a lot of value for patients (the difference in value between life and death is pretty stark!). Pharmaceutical companies therefore seek to capture much of that value for themselves, by charging high prices (I've written on the economics of drug development and pricing before). They can get away with the high prices because they have market power, and this market power arises because there are barriers to entry into the market - the drugs can be patented, and patents keep competitors out of the market. Even after a drug is off-patent though, there are still barriers to entry because it is costly and takes time for firms to divert resources to drug development, testing, and accreditation, even if it only involves creating a generic version of an existing drug. This is one of the reasons that Turing Pharmaceuticals was able to radically increase the price of Daraprim last year.

So, even though a full course of the hepatitis C drug Sovaldi sells for US$84,000, because it requires a shorter treatment period and has fewer side effects than alternative treatments, it generates a lot of value for patients. Patients (and by extension, health insurers and providers) are willing to pay the high price because of the value it generates for them in terms of improvements in health and wellbeing. All of this makes Sovaldi enormously profitable for Gilead.



Sunday 8 May 2016

Why study economics? More on jobs in the tech sector edition...

Earlier in the year I wrote a post about the increasing number of jobs for economists in the tech sector. Julia Chen of Investor's Business Daily had an interesting article last month on the same topic. Chen writes:
Established giants and newer tech firms alike are enlisting economists to help with many crucial tasks. Companies that employ economists include Amazon.com (AMZN), Airbnb, IBM (IBM), Facebook (FB), Microsoft (MSFT), eBay (EBAY), Yahoo (YHOO) and Uber...
Tech company economists must combine theory with practical application.
“We use economic principles and economic theory, but we also use experiments, statistical data and other aspects of the real world to build systems that work and will stand the test of time,” said Preston McAfee, chief economist at Microsoft.
These systems can address fundamental business questions, such as setting prices, as well as challenges brought about or exacerbated by the rapid rate of innovation in the tech industry. As firms expand, economists are increasingly working on public policy issues, including privacy issues and intellectual property topics...
Economists have to analyze large amounts of data. Much of their value to tech firms is in helping to connect the engineering side with the business side.
As I've noted before, the availability of jobs in the tech sector is just one more reason why studying economics is a good idea. Not only are the big tech firms hiring economists (and not just the PhD-qualified economists that the IBD article talks about), there are also opportunities for economics graduates at small start-ups. Steven Lim even teaches a paper at Waikato on the New Economics of Business, substantially related to how economics is useful for new, tech-oriented firms. The skills that economics graduates learn have wide applicability to business decision-making.

Thursday 5 May 2016

Time to end the e-cigarette ban paradox

New Zealand and Australia both prohibit the sale of e-cigarette products containing nicotine. This creates a weird paradox where cigarettes containing nicotine are legal, but e-cigarettes containing nicotine are not. The argument from some public health advocates is that e-cigarettes may be a gateway to smoking, that they increase the social acceptability of smoking, and that the long-term health impacts of e-cigarettes are unknown, so it is better to be safe than sorry. An interesting piece in The Conversation this week by Colin Mendelsohn (University of Sydney) takes up the issue:
A new report by the Royal College of Physicians in the United Kingdom says electronic cigarettes (e-cigarettes) are much safer than smoking and encourages their widespread use by smokers. It concludes that e-cigarettes have huge potential to prevent death and disease from tobacco use.
The review identifies e-cigarettes as a valuable tool to help smokers quit. For those who are unable to quit with currently available methods, e-cigarettes can substitute for smoking by providing the nicotine to which smokers are addicted without the smoke that causes almost all of the harm. This approach is supported by the scientific and public health community in the UK and is consistent with a previous review by Public Health England, the government health agency...
In the UK, there is no evidence e-cigarettes are a gateway to smoking. E-cigarette use is almost entirely restricted to current or past smokers. Use by children who would not otherwise have smoked appears to be minimal.
The report found no evidence to suspect the use of e-cigarettes renormalises smoking. On the contrary, smoking rates in the UK have been falling as e-cigarette use rises.
E-cigarette vapour contains some toxins and the report acknowledges some harm from long-term use cannot be dismissed. However, it supports the widely held view that the hazard to health is unlikely to exceed 5% of the risk of smoking, and may well be substantially lower.
The irony here is that a ban on e-cigarettes, which (as noted above) are substantially lower risk than cigarettes, probably leads at least some people who would have quit smoking if e-cigarettes were available to continue smoking instead. Cigarettes and e-cigarettes are substitutes - the unavailability of one (e-cigarettes) makes consumers more likely to consume the other (cigarettes).

Not only that, the unavailability of e-cigarettes may lead to more new smokers taking up the habit. Recent research by Abigail Friedman (Yale School of Public Health), reported here (but with the original paper here, and ungated earlier version here), shows that:
...state bans on e-cigarette sales to minors find that such bans yield a positive and statistically significant 0.7 percentage point increase in recent smoking rates among 12 to 17 year olds, relative to the rate in states that had not implemented such bans.
Of course, that doesn't answer the question about the effect of a ban on e-cigarette sales on smoking rates for the population as a whole. However, it is probably not a stretch to believe that similar effects would be apparent for New Zealand teens, and so a relaxation of the ban on e-cigarette sales is likely to reduce smoking among the older population, and reduce rates of smoking uptake among the younger population.

Clearly, having more smokers as a result of banning e-cigarettes is an unintended consequence of the policy, and one that leads to greater harm, not less. As many others have argued, it is time to relax the ban on e-cigarettes containing nicotine, to reduce this unintended consequence.

[HT for the Yale research: Marginal Revolution]

Monday 2 May 2016

The 'efficient' allocation of refugees

Some time back I promised one of my students I would write about the refugee crisis in Europe. So here goes: If we had free movement of people, then refugees would simply move to their preferred location (which may or may not be a Western country). From an overall global welfare perspective, this should be the preferred solution (if you want an explanation why, Michael Clemens argues persuasively that there are trillion dollar bills being left on the sidewalk as a result of restrictive immigration policies in Western countries).

However, there isn't free movement of people, which means that from an economic perspective one of the interesting aspects of the crisis is how 'best' to allocate refugees between countries. Thinking about the European countries that are facing the brunt of the wave of refugees, the current solutions are clearly not working. Open Europe has good coverage of the problems here. In short though, the 'Dublin regulation system', whereby refugees apply in the country where they first arrive and are returned there if they move elsewhere, has failed, with peripheral European countries like Greece simply shepherding migrants through to the next country along the route.

An alternative solution was developed in the form of a €3bn deal with Turkey, whereby migrants are returned to Turkey, but it appears to cover only migrants in the thousands (compared with the 1.25 million refugees who entered Europe last year). This was essentially a Coasean bargain between the European countries and Turkey. The Coase Theorem tells us that, if private parties can bargain without cost over the allocation of resources, they can solve the problem of externalities on their own (i.e. without government intervention). In this case the private parties are the EU governments and the Turkish government. This bargaining solution would work provided the payment to Turkey is more than enough to compensate for the cost of hosting the refugees, and provided the payment is less than the alternative cost (of dealing with the refugees) for the EU governments. It would also only work provided the transaction costs (the costs of arranging the agreement) are low, the costs of monitoring and enforcing the terms of the agreement are low, and there are no free riders (EU countries that would benefit from the agreement, but refuse to contribute their share of the €3bn). In theory the Coasean solution would work, but its failure was already being discussed when it came into force, mainly because it still fails to address the allocation of refugees between countries.
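
Those bargaining conditions can be written down quite simply. Here is a minimal sketch in Python; only the €3bn payment is from the post, and all of the other figures are hypothetical, since we don't observe the true costs:

```python
# A minimal sketch of the Coasean bargaining conditions described above.
# Only the 3bn euro payment is from the post; other figures are hypothetical.

def bargain_is_feasible(hosting_cost, eu_alternative_cost, payment, transaction_costs):
    """The bargain works if the payment at least covers Turkey's hosting cost,
    and the payment plus transaction costs is still cheaper for the EU than
    dealing with the refugees itself."""
    turkey_gains = payment >= hosting_cost
    eu_gains = payment + transaction_costs <= eu_alternative_cost
    return turkey_gains and eu_gains

# Hypothetical figures, in billions of euros:
print(bargain_is_feasible(hosting_cost=2.0, eu_alternative_cost=5.0,
                          payment=3.0, transaction_costs=0.5))  # True: both sides gain
```

Free riding undermines the EU side of this calculation: if some countries benefit without contributing, the countries that do pay face a higher effective cost, and the deal may no longer be worthwhile for them.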

Earlier, at the end of January, Dalibor Rohac wrote an interesting op-ed on the allocation issue in the New York Times. Rohac writes:
Europe’s current refugee crisis is often presented as a quantity problem: There are simply too many migrants for the European Union to absorb. But this situation is not without historical precedent. Europe has accepted large numbers of immigrants before. The issue this time is political. It has little to do with the absolute numbers of asylum seekers. The problem lies with the European Union’s dysfunctional asylum system, which encourages countries to pass refugees on like hot potatoes, and places the burden of registering and processing asylum seekers on a small number of countries on the Union’s border...
But there are ways out of this seemingly desperate situation.
For one, the quota system, proposed by the European Commission, could be made flexible. In 1997, the Yale University legal scholar Peter Schuck proposed a system of tradable refugee quotas. The European Union would still have to agree on the total number of migrants to whom it is willing to grant asylum, and on how they would be distributed among the member states. But the quota market would allow countries such as Slovakia or Hungary, whose leaders refuse to accept any refugees, to “bribe” others to carry their obligations on their behalf, putting a concrete price tag on the unwillingness of Central Europeans to help.
Essentially the allocation issue can be solved with some sort of quota. The quota system creates a set of 'obligations' for EU countries - the obligation to accept a given number of refugees each year. In ECON110 we talk about four criteria of an efficient property rights system, and effectively these obligations are a form of property rights (albeit negative property rights) [*]. An efficient system should have obligations (or rights) that are: (1) universal; (2) exclusive; (3) transferable; and (4) enforceable.

Universality in this context means that all refugee flows would need to be covered by the obligations system, and all EU countries would be obligated to take refugees. The system would start to break down if there were additional flows of refugees that were not covered, or if countries were able to opt out of the system, for instance. Exclusivity in this context means that all of the costs of a given refugee flow should be borne by the country that is accepting that group of refugees. This means that there can be no free riders. Transferability means that the obligations can be freely traded between countries. If the Netherlands wants to accept fewer refugees than its quota, it might be able to trade the obligation to Sweden, presumably in exchange for something that Sweden wants (see the sketch below). Finally, enforceability means that there must be some form of penalties (presumably from the EU) for countries that refuse to comply with their obligations. A system of obligations meeting these four criteria would be an efficient way of allocating refugees among European countries.
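
To illustrate the transferability criterion, here is a minimal sketch in Python. The countries are from the example above, but the quota numbers and the size of the trade are entirely hypothetical:

```python
# A minimal sketch of tradeable obligations. All quota numbers are hypothetical.

quotas = {"Netherlands": 10_000, "Sweden": 15_000}

def trade_obligation(quotas, seller, buyer, refugees):
    """Transfer the obligation to accept `refugees` people from seller to buyer.
    The total obligation across all countries is unchanged, so the EU-wide
    number of refugees accepted stays the same."""
    if refugees > quotas[seller]:
        raise ValueError(f"{seller} cannot transfer more than its remaining quota")
    quotas[seller] -= refugees
    quotas[buyer] += refugees

trade_obligation(quotas, "Netherlands", "Sweden", 4_000)
print(quotas)  # {'Netherlands': 6000, 'Sweden': 19000} - total still 25,000
```

In a real quota market the trade would happen at some negotiated price, with the Netherlands compensating Sweden for taking on the extra obligation; the sketch only tracks the obligations themselves.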

However, Rohac also notes an alternative solution:
...an explicit market in refugee quotas is not the only possible fix to the current crisis, according to two researchers at the University of Oxford, Alex Teytelboym and Will Jones. To bring the chaotic influx of refugees under control, the European Union could also create a centralized “matching system,” which would involve none of the cash payments that are often seen as repugnant.
[In the matching system] [a]pplicants would rank European Union countries by order of preference and submit that ordering to a central clearinghouse.
Some countries, such as Germany or Sweden, would likely remain oversubscribed. But because applicants would be submitting a complete ordering of European Union countries they are applying for, they could still be matched with, say, their second, or third choice, instead of being rejected outright.
The European Union member states would in turn specify how many and what refugees they are willing to accept. 
The problem with the proposed matching system is that no country would be obligated to take refugees, so the system would lack enforceability, and countries could easily opt out, leaving us back where we started. So, while matching might appeal to those who are squeamish about the transferability of obligations between countries, a quota system that obeys the four criteria above would work much better in practice. That is, provided an initial allocation of obligations could be agreed - and of course, all participating countries would have an incentive to ensure that their initial allocation was as low as possible, if only so that they could obtain concessions from other countries after the system comes into force. The joys of politics!
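
For what it's worth, the clearinghouse idea is easy to sketch. The following is a minimal Python illustration of one simple way to run such a match (a serial dictatorship, where applicants are processed in some order and each gets their highest-ranked country with space remaining); it is not the specific mechanism Teytelboym and Jones propose, and all the names, capacities, and preferences are hypothetical:

```python
# A toy clearinghouse: applicants rank countries, countries set capacities,
# and each applicant (in turn) gets their best-ranked country with space left.
# All data below are hypothetical.

capacities = {"Germany": 2, "Sweden": 1, "Slovakia": 3}

preferences = {
    "applicant_1": ["Germany", "Sweden", "Slovakia"],
    "applicant_2": ["Germany", "Sweden", "Slovakia"],
    "applicant_3": ["Sweden", "Germany", "Slovakia"],
    "applicant_4": ["Germany", "Slovakia", "Sweden"],
}

assignment = {}
for applicant, ranking in preferences.items():
    for country in ranking:          # try countries in order of preference
        if capacities[country] > 0:  # match to the best country with space left
            assignment[applicant] = country
            capacities[country] -= 1
            break

print(assignment)
# Applicants 1 and 2 fill Germany, applicant 3 takes Sweden's one place, and
# applicant 4 falls through to Slovakia, their second choice, not rejected outright.
```

Notice what the sketch does and doesn't do: it matches refugees to countries given the capacities, but nothing in it determines the capacities themselves, which is exactly the gap the update below points out.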

[Update]: Reflecting on this overnight, I realised I wrote this post as if the matching system and a system of tradeable obligations were mutually exclusive. Of course, they are quite complementary. Once a country knows how many refugees it is obligated to accept, there needs to be some mechanism to select which refugees it takes, which is where matching could contribute. Similarly, a matching system will tell us which refugees are most compatible with each country, but not how many each country should take (e.g. at what 'level' of compatibility should the cut-off for acceptance be?), which is where the system of tradeable obligations becomes helpful.

*****

[*] Please note that I am explicitly not referring to refugees as property here. I am simply linking the concept of a system of obligations to that of a system of property rights, because the efficiency of both systems relies on the same four criteria.

[HT for the Open Europe blog: Marginal Revolution, which incidentally has been following the refugee issue over the past few months (see here)]