Wednesday, 31 December 2014

Cellphones and customer lock-in (literally)

In ECON100 we spend a whole topic looking at pricing strategy, which is a substantial departure from most microeconomics principles courses. Pricing strategy is important because it helps to explain why firms don't typically price at the point where marginal revenue is exactly equal to marginal cost (the point that would maximise profits for a firm selling a single product at a single price). We also look at non-price business strategies that are related to creating and capturing value (in other words, ways of increasing profits).

One of the areas we look at is switching costs and customer lock-in. Switching costs are the costs of switching from one good or service to another. Switching costs might include contract termination fees, but also include other costs such as the cost of searching for an alternative good, and learning how it works, etc. Customer lock-in occurs when customers find it difficult (costly) to change once they have started purchasing a particular good or service. Switching costs typically generate customer lock-in, because a high cost of switching can prevent customers from changing to substitute products.

Which brings me to this opinion piece by Juha Saarinen last month, which makes the case against telcos locking their customers in:
With that in mind, it seems a shame that SIM locking has reared its ugly head again.
This is a feature in GSM networks that allows providers to restrict or lock phones bought from them so that only their SIM cards can be used in them. The idea here is that telcos will "subsidise" phones - or on some plans, include it for no money upfront - but you have to stay on their networks.
Having to pay less upfront for phones is an attractive proposition for many, especially when it comes to expensive smartphones.
However, telcos are not charities and they make up for the cost of the device in usage charges over the time it is locked to their networks. You're not going to save money in other words, and likely lose out over time as better deals and plans come online.
Juha is describing multi-period pricing, a common counterpart to switching costs and customer lock-in (and the reason why we discuss these things in a topic on pricing strategy). Multi-period pricing occurs when the initial price is set low (to attract customers) and the price is then raised once the customers are locked in. Multi-period pricing only increases profits if the customer is locked in - if the customer is free to move to other providers, then when the price is increased they are likely to do so. There are lots of examples of multi-period pricing - one of my favourites is that it is a good explanation for why drug dealers give away free samples of their highest-quality (and most addictive) product.

In the case of locked mobile phones, the customer is literally locked in because the phone they are given for free (or heavily discounted) is unable to be used with any other network. The telcos are not dummies - they're doing this because it increases their long-run profits. They take a hit by giving away the handset at below cost, and make up for it through monthly plan charges from a long-term locked-in customer.

If that sounds a bit unfair or anti-competitive to you, then according to Juha you may be right:
You'd think that the Commerce Commission would frown upon telcos again trying to lock in customers with SIM locking to prevent "churn", or moving to other providers with more competitive deals, but no. The regulator has done a one-eighty on SIM locking lately.
"We don't believe SIM locking is anti-competitive. It's analogous to early termination provisions in post-pay contracts," a commission spokesperson told me.
"Customers have choices of buying handsets directly or honour some sort of undertaking if they accept a handset subsidy," the spokesperson added.
Those arguments are both flimsy though. First, a contract between a telco and a person or company doesn't stop customers from moving to other providers.
Sure, you'll have to honour the contract or pay termination fees (which can be exorbitant), but you can use your phone on another network if it's compatible with it.
Let's say Telco A's service in your area is bad but Telco B is good; you need phone service and will bite the bullet and switch providers. With an unlocked phone, you can.
I have to agree with the Commerce Commission here. While on the surface locking customers into a long term relationship sounds anti-competitive or unfair to the customer, the customer is still free to choose not to purchase from the provider offering a locked phone and to go with another provider (even if it means being locked into purchasing from the other provider instead). The only difference is that this mobile phone lock-in is technological rather than contractual. But the customer need not be locked in - they could purchase the phone at full price and not have a locked phone (or be locked into a contract with termination fees) at all. They would then be free to change provider at will.

Another key point is that owning a locked phone doesn't stop customers from moving to other providers - it only stops them taking their phone to another provider. If a customer wants out of their contract badly enough to pay the termination fees, they are probably also willing to pay to get out of a locked phone. The $30 cost to unlock the phone (quoted in the article) hardly seems prohibitive alongside contract termination fees that might run to hundreds of dollars, depending on the phone. I don't see the issue here. As for customers who want to change because of poor service in their area, perhaps they should have looked at the quality of service in their area before purchasing the phone, locked or otherwise.

In this last bit I think Juha misses the point:
Large multinational telcos can use their market power to hammer out exclusive deals with phone makers and offer handsets to customers at low initial cost, provided they agree to be locked in over a period of time.
They can also offer network features and services exclusively to locked-in customers - and refuse to connect customers who have bought handsets directly. Telcos may also be tempted to offer plans with more expensive local and roaming rates to customers who bring their own handsets so as to steer them towards locked ones with better deals for calling, texting and data.
So, customers can get phones cheaper as a result of exclusive deals between the telcos and phone makers? The horror! As for telcos offering their locked-in customers features and services not available to those who aren't locked in, this sounds like a good deal for both the customer and the telco. The customer will be made better off (than going with some other provider without a locked phone) provided that the extra features and services are more valuable to the customer than the freedom to change provider. The telco will be made better off because this incentivises more customers to lock themselves into using their service.

Customer lock-in and multi-period pricing are legitimate tools for increasing profits, and can actually make both the customer (who can get a better phone earlier than if they had saved up for it) and the telco better off. Having said all that, as a customer it pays to think carefully about the total cost of the phone plus the monthly plan charges over the locked-in period - is it worth it?
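To make that comparison concrete, here is a minimal sketch (in Python) using made-up handset prices and plan charges, not any telco's actual offers:

# Hypothetical comparison of a subsidised, SIM-locked deal with buying the handset outright.

def total_cost(upfront, monthly_charge, months):
    """Total out-of-pocket cost over the contract period."""
    return upfront + monthly_charge * months

months = 24  # assumed lock-in period

# Option A: 'free' locked handset, higher monthly plan charge
locked = total_cost(upfront=0, monthly_charge=99, months=months)

# Option B: pay full price for the handset, cheaper open-term plan
unlocked = total_cost(upfront=1000, monthly_charge=49, months=months)

print(f"Locked deal:   ${locked}")    # $2376
print(f"Unlocked deal: ${unlocked}")  # $2176

With these (invented) numbers the 'free' phone ends up costing more over the two years - which is exactly the kind of arithmetic worth doing before signing up.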

Sunday, 21 December 2014

Could technology eliminate moral hazard in car insurance?

A couple of weeks ago, Bloomberg reported that car insurers are offering discounts to insured drivers who agree to have the equivalent of aircraft black boxes installed in their cars:
Smartphone applications and devices that record trip and vehicle data are set to infiltrate auto insurance at a rapid pace, bolstered by discounts of as much as 30 percent. Consultancy Oliver Wyman forecasts that car insurance using driver data to set prices will grow 40 percent a year to become a $3.6 billion market by 2020.
Why would car insurers do this? You can be sure it isn't out of the goodness of their hearts, so there must be something in it for them.

One thing that the insurers are trying to do is to overcome moral hazard - the tendency for someone who is imperfectly monitored to take advantage of the terms of a contract (a problem of post-contractual opportunism). Drivers who are uninsured have a large financial incentive to drive carefully and avoid accidents, because if they have an accident they must cover the full repair cost themselves (not to mention the risk to life and limb). Once a car is insured, the driver has less financial incentive to drive carefully because they have transferred part or all of the financial cost of any accident onto the insurer (though the risk of injury remains, of course). The insurance contract creates a problem of moral hazard - the driver's behaviour could change after the contract is signed.

Now, car insurers aren't stupid and insurance markets have developed in order to reduce moral hazard problems. This is why we have excesses (deductibles) and no-claims bonuses - paying an excess or losing a no-claims bonus puts some of the financial burden of any accident back on the driver and increases the financial incentive for driving safely. This is also why driving illegally usually voids an insurance policy.

However, despite these contract 'enhancements', moral hazard remains a problem for car insurers. The problem remains because the insured drivers' behaviour can't be perfectly monitored by the insurance company - they don't know whether you're driving safely or not (that is, the asymmetric information about your driving behaviour remains).

This is where new technology comes in. If a black box is installed and the insurance company has ready access to the collected data, then there is little information asymmetry remaining, as drivers won't be able to hide their misbehaviour from the insurance company. Now, the black box doesn't let the insurance company know who is driving the car, but since the insurance company is really insuring the car and not the driver, that matters little - they can simply price the insurance policy on the way the car is driven. If your car turns out to be consistently driven in a risky manner, then you can expect a higher insurance premium to compensate the insurance company for the higher risk. So, the moral hazard problem will be reduced (but not eliminated - there is still an incentive for insurance fraud, and now a new incentive for tampering with the black box).

What's to stop the risky drivers from simply opting out of having a black box? That way, the insurance company wouldn't be able to tell they are driving unsafely, right? Wrong. Since the black box comes along with a premium discount for those who install it, low-risk drivers have an incentive to have the black box installed - they needn't be worried that the insurance company will find out that they are low risk (but they might be worried about the security of their driving data being held by insurance companies!). High-risk drivers want to avoid the insurance company knowing they are high risk, so are less likely to agree to having the black box installed. So, the low-risk and high-risk drivers sort themselves in a way that is advantageous to the insurance company - it helps the insurance company overcome the adverse selection problem.

The adverse selection problem arises in car insurance because the uninformed party (the insurer) cannot tell those with 'good' attributes (low-risk drivers) from those with 'bad' attributes (high-risk drivers). To minimise the risk to themselves, it makes sense for the insurer to assume that everyone is a high-risk driver, and price their premiums accordingly. This leads to a pooling equilibrium - low-risk and high-risk drivers are grouped together because they can't easily differentiate themselves. However, the black boxes solve this problem by causing the low-risk and high-risk drivers to separate themselves in terms of who agrees to have a black box installed (a separating equilibrium) - the low-risk drivers will choose to install the black box, while the high-risk drivers will not.

The insurance companies have also chosen an interesting way of framing this option for consumers. They could have described it as higher premiums for high-risk drivers, but instead they frame it as a discount for those who install the black box. On the surface, this makes it sound a lot more attractive to consumers, since a 30% discount for installing the black box probably seems a whole lot better than a 43% penalty for not installing it (even though the two are mathematically equivalent). However, it would be interesting to see how drivers would respond to framing it the other way (as a penalty for not installing the black box). We know that people are loss averse - more motivated to avoid losses than to obtain equivalent gains - so framing it as a penalty to be avoided might actually encourage more consumers to install the black box. On the other hand, insurance companies prefer to insure low-risk drivers, so attracting new insurance contracts with low-risk drivers by enticing them with a discount is probably a better move overall. Either way though, moral hazard is likely to be reduced.
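For the arithmetic behind that equivalence, here is a quick check using a purely hypothetical premium:

# A 30% discount off the standard premium is the same dollar amount as a roughly 43%
# surcharge on the discounted premium. The premium figure is illustrative only.

standard_premium = 1000.0                      # hypothetical annual premium without a black box
discounted = standard_premium * (1 - 0.30)     # premium with the black box installed

# Framed the other way: how big a 'penalty' is the standard premium,
# relative to the black-box price?
penalty = (standard_premium - discounted) / discounted

print(f"Discounted premium: ${discounted:.0f}")   # $700
print(f"Equivalent penalty: {penalty:.1%}")       # 42.9%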

Wednesday, 10 December 2014

Dealing with squealing children at least cost

Paul Little wrote an interesting Herald on Sunday column the week before last, about squealing children:
Spare a thought in your charity for the residents of Stonefields, an "urban village" at Mt Wellington where, among other things, the "planting of pohutukawa trees along the boulevards, mimics the original lava flows", a market includes "substantive family restaurant and other dining/takeaway options" and parks provide "for a range of passive and active recreational spaces".
The planning and design of the joint appears exemplary. Unfortunately, it didn't allow for the people.
Such as those who have been complaining because those parks' recreational spaces are just a little too active.
As resident Alan Gilder says: "The park is awesome but they haven't put a lot of thought into it - the flying fox generates a lot of squealing.
Squealing. How awful, but how true. Where there are children there will likely be squealing.
And where there are flying foxes there will almost certainly be a lot of squealing.
If there is a sound more aggravating than that of children enjoying themselves then I don't know what it is.
Now, the noise of squealing children is a classic negative externality - an uncompensated impact of the actions of one party on a bystander. The poor residents of Stonefields face a cost that is imposed on them by the unscrupulous actions of the children. Since the children have no incentive to take into account the costs that they are imposing on the residents of Stonefields, they generate too much noise compared to the socially efficient optimum.

How can the externality problem be solved? One option is government intervention, as Paul explains:
What to do? Perhaps the residents could crowdfund a shush monitor - someone in attendance with a decibel reader who could hiss "shush" at the children when the squealing reached a certain level.
A "shush monitor" is an example of a command-and-control policy. The local government puts in place a limit on the allowable amount of noise, and when that noise is exceeded the nasty noisemakers can be sanctioned - perhaps by fines, or sending them to bed without dessert. If the noise level consistently exceeds the limit, the playground could be closed. No more negative externality.

Now, this solution follows from what is called the "polluter pays principle". Under this principle, the party that is responsible for the pollution is solely responsible for making restitution for the damage they cause. Since the children are causing the noise pollution, they have to pay the cost of making things right. Even if that means closing the playground. So, the cost of reducing the externality in terms of foregone fun could be pretty high.

There is an alternative to the polluter pays principle. Instead of making the polluter pay, we could try to solve the problem of the externality at the least cost (maybe we call this the 'least cost principle'). Instead of closing the playground at the cost of lots of fun times (which would be an ongoing cost, since fun would be lost every year that the playground is not there), perhaps the government could soundproof the houses that are next to the park? That would be a one-off cost, and likely a lower cost in total than the lost fun.
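As a rough illustration of why the 'least cost' solution might be soundproofing rather than closing the playground, here is a back-of-the-envelope present value comparison. All figures and the discount rate are assumptions, purely for illustration:

# One-off soundproofing cost versus an ongoing annual loss of playground fun,
# discounted to present value over a fixed horizon.

def present_value(annual_cost, discount_rate, years):
    """Present value of a constant annual cost over a fixed horizon."""
    return sum(annual_cost / (1 + discount_rate) ** t for t in range(1, years + 1))

soundproofing = 200_000          # assumed one-off cost of soundproofing nearby houses
lost_fun_per_year = 50_000       # assumed annual value of fun lost if the playground closes
pv_lost_fun = present_value(lost_fun_per_year, discount_rate=0.05, years=30)

print(f"One-off soundproofing cost: ${soundproofing:,.0f}")
print(f"PV of 30 years of lost fun: ${pv_lost_fun:,.0f}")  # roughly $769,000

With numbers like these, soundproofing solves the externality problem at a much lower total cost than closing the playground - which is the whole point of the least cost principle.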

Of course, maybe no government-based solution is required at all. The Coase Theorem tells us that, if private parties can bargain without cost over the allocation of resources, they can solve the problem of externalities on their own (i.e. without government intervention). The outcome of a bargaining solution under the Coase Theorem depends crucially on the distribution of entitlements (property rights and liability rules). Do children have the right to play and make noise? If so, then the residents would have liability to pay the children to be quiet - maybe buy them a bunch of Playstations and send them indoors to be quiet. Either that, or the children can just keep having fun in the playground and making as much noise as they like. On the other hand, do the residents have the right to peace and quiet? If so, then the children would have liability to compensate the residents for the noise of their playing. Either that, or they have to give up the playground.

Probably the right to peace and quiet prevails - in New Zealand homeowners have the right to quiet enjoyment of their property. So, the children will have to compensate the Stonefields residents for their excessive squealing. Or will they? The residents of Stonefields chose to live close to a park, and the cost of the negative externality will be factored into the price of the houses (if squealing children makes houses in Stonefields less desirable, then houses there will consequently be cheaper). So, you could argue that the residents of Stonefields have already been compensated for the negative externality, which has been incorporated into the price of housing (at no additional cost to the children). In which case, the residents should just suck it up or move somewhere quieter.

Monday, 8 December 2014

Are sex services in Russia a Veblen good?

The Moscow Times reports (emphasis added):
In the Urals, sex workers have raised prices by between 50 and 100 percent, Uralpolit.ru said Wednesday, citing unnamed clients of prostitutes.
In addition to the falling ruble, the sex tariff inflation may have been boosted by an influx of sex workers fleeing war-torn Ukraine, the website said. The new competition is forcing local sex workers to hike their rates in order to pay their bills, the report said.
So, there is an increase in the number of people supplying sex services (because of the influx of Ukrainian sex workers), and that leads to an increase in the price of sex services? Only if the demand curve is upward sloping. Otherwise, an increase in competition should lead to a decrease in the price (after all, this is one of the reasons that competition is argued to be good for consumers).

Could the demand curve for sex services be upward sloping? It seems unlikely, but there are some types of goods where the demand curve is upward sloping. One of these types of goods is Veblen goods - luxury goods where the price is a signal of the high status of the purchaser. In this case, when the price goes up, the good becomes an even more powerful signal of high status, and so consumers who are seeking status demand more of the good. To show their high status, the purchasers then want to broadcast their purchase to many people (especially those who are close to them in actual social status) - this is conspicuous consumption, and without it the signal is worthless. That doesn't seem a particularly likely scenario for sex services. Neither are sex services consistent with other types of goods that have upward-sloping demand (Giffen goods, goods with network effects, goods with bandwagon effects).

More likely, Uralpolit.ru and the Moscow Times have demonstrated temporary economic illiteracy. Increased supply doesn't increase prices. On the other hand, inflation does increase prices and that is what is being observed.

[HT: Marginal Revolution]

Sunday, 23 November 2014

The chocolate deficit - will we run out of chocolate?

I'm back from conference leave in the U.S. and after a month off blogging to finish exam marking and travel, it's time to post some content relevant to my summer school ECON100 class. The market for chocolate has been in the news while I was away - Roberto Ferdman on Wonkblog has an interesting piece outlining the problem:
Chocolate deficits, whereby farmers produce less cocoa than the world eats, are becoming the norm. Already, we are in the midst of what could be the longest streak of consecutive chocolate deficits in more than 50 years. It also looks like deficits aren't just carrying over from year-to-year—the industry expects them to grow. Last year, the world ate roughly 70,000 metric tons more cocoa than it produced. By 2020, the two chocolate-makers warn that that number could swell to 1 million metric tons, a more than 14-fold increase; by 2030, they think the deficit could reach 2 million metric tons.
How can we (the world, I mean) be consuming more cocoa than we produce? By running down stockpiles of cocoa built up over past growing seasons. I'm not sure that I believe the projected deficits the chocolate-makers are suggesting above, because according to the latest data from the International Cocoa Organization, the total stock of cocoa in September 2013 was just over 1.6 million tonnes. That stock would run out long before the projected deficits arise.

The issue of the size-of-deficits vs. available-stock aside, cocoa and chocolate markets can be easily analysed using the simple supply and demand diagrams that we use in ECON100 and ECON110. Let's start with the market for chocolate where there has been an increase in demand, particularly from China. Ferdman writes:
China's growing love for the stuff is of particular concern. The Chinese are buying more and more chocolate each year. Still, they only consume per capita about 5 percent of what the average Western European eats. There's also the rising popularity of dark chocolate, which contains a good deal more cocoa by volume than traditional chocolate bars (the average chocolate bar contains about 10 percent, while dark chocolate often contains upwards of 70 percent).
Ceteris paribus (all else being equal), increased demand for chocolate (from D0 to D1 in the diagram of the market for chocolate below) increases the price of chocolate (from P0 to P1), and increases the quantity of chocolate traded (from Q0 to Q1).


Since more chocolate needs to be made to satisfy this (Q1 instead of Q0), and cocoa is an input into the production of chocolate, more cocoa will be needed. This means an increased demand for cocoa, as shown in the diagram below of the market for cocoa (increased demand from DA to DB). However, there's also been a reduction in the supply of cocoa (from SA to SB), because as Ferdman writes:
Dry weather in West Africa (specifically in the Ivory Coast and Ghana, where more than 70 percent of the world's cocoa is produced) has greatly decreased production in the region. A nasty fungal disease known as frosty pod hasn't helped either. The International Cocoa Organization estimates it has wiped out between 30 percent and 40 percent of global cocoa production. Because of all this, cocoa farming has proven a particularly tough business, and many farmers have shifted to more profitable crops, like corn, as a result.


So, the price of cocoa rises (from PA to PB) - Ferdman notes that the price of cocoa has increased by more than 60 percent since 2012. The change in the quantity of cocoa is ambiguous - it depends on the relative size of the shifts in demand and supply. As drawn in the diagram above (a supply decrease that's smaller than the demand increase), the quantity traded increases from QA to QB. That seems consistent with recent production data from the International Cocoa Organization. However, it's also possible that the quantity traded decreases (if the supply decrease was larger than the demand increase) or stays the same (if the supply decrease exactly offsets the demand increase).
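If you want to see that logic with numbers, here is a minimal sketch using linear demand and supply curves (all parameters invented for illustration): a demand increase combined with a smaller supply decrease raises the price, and in this particular case the quantity traded also rises.

# Linear demand P = a - b*Q against linear supply P = c + d*Q.

def equilibrium(a, b, c, d):
    """Solve for the equilibrium quantity and price."""
    q = (a - c) / (b + d)
    return q, a - b * q

# Initial market: demand P = 100 - Q, supply P = 20 + Q
q0, p0 = equilibrium(a=100, b=1, c=20, d=1)

# Demand increase (a: 100 -> 130) combined with a smaller supply decrease (c: 20 -> 30)
q1, p1 = equilibrium(a=130, b=1, c=30, d=1)

print(f"Before: Q = {q0:.0f}, P = {p0:.0f}")   # Q = 40, P = 60
print(f"After:  Q = {q1:.0f}, P = {p1:.0f}")   # Q = 50, P = 80 (price up, quantity up)

Make the supply decrease larger than the demand increase and the price still rises, but the equilibrium quantity falls instead - which is why the quantity change is ambiguous while the price change is not.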

It's also worth noting that the supply curve for cocoa is upward sloping, even though supply is essentially fixed at the amount of cocoa produced in any given season. This is because higher prices will induce some of those who have stocks of cocoa to sell them on the market - so higher prices induce a greater quantity of cocoa supplied to the market. Similarly, low prices induce some suppliers to hold back cocoa from the market and stockpile it to sell in the future.

Now, we can see that the price of cocoa has increased, and we know that cocoa is an input in the production of chocolate. So, we need to go back to the market for chocolate to see how this affects things. Higher input prices increase the costs of chocolate production, which reduces supply (from S0 to S1 in the diagram below). Combined with the demand increase we showed above, this leads to an increase in the price of chocolate (from P0 to P2; P1 would have been the price if there wasn't an increase in the cost of cocoa), and an ambiguous change in the quantity of chocolate traded (though the diagram below shows a small increase in quantity traded from Q0 to Q2; Q1 would have been the quantity if there wasn't an increase in the cost of cocoa).


Finally, it seems unlikely that we will run out of chocolate - unless, by 'running out', you mean being unable to meet demand at current prices. The market adjusts to ensure that demand and supply are brought into balance - so instead of running out of chocolate, we are likely to simply face higher prices to get a chocolate fix in the future.


Sunday, 19 October 2014

Pricing strategy in practice: Cocktail menu edition

One of my favourite topics to teach in ECON100 is pricing strategy. In part, it's because this topic is a bit less about microeconomic theory, and a bit more about practical things that real managers do. That's why I enjoyed reading this article about cocktail pricing, which talks about how real bar managers set prices for drinks.

In the article, there's no pricing where marginal revenue is equal to marginal cost (the theoretical profit-maximising point for the firm with market power). Instead, the managers are making judgement calls about pricing based on their industry experience. For instance, consider this quote:
"I can't say that there's any way to be 100 percent certain that a certain drink will sell better than others," Morgenthaler tells me. "I'm constantly surprised by what is less or more popular on our menus. But with as much experience as I have, I would say I've got a pretty good idea of what's going to sell and what's going to appeal to a more connoisseur crowd."
In other words, the manager is using their market knowledge to set the price. This might involve heuristics (rules-of-thumb, such as the price of a glass of wine being the same as the wholesale price of the bottle - this used to be a common heuristic in the restaurant trade here, but I'm not sure whether that is still the case), or it might just involve expert judgment. Cost is an important factor:
A cocktail by nature is a combination, in differing ratios, of a set of ingredients that each have costs, so many cocktail bars spend a lot of time and effort crunching the numbers behind their drinks...
Pour cost is pretty much what it sounds like: the cost a bar incurs by pouring a given cocktail... a bar might decide upon an acceptable range in which its pour costs must fall, given how other aspects of the business factor in, and then calculate the price of drinks based on that range. Between two drinks sold for the same price, the one with the higher pour cost earns the bar a smaller profit...
No matter its size, Cannon points out that "a restaurant will be successful over the long haul if it can pocket"—meaning earn in net profits—"10 cents on the dollar." In other words, for an establishment pulling in $1 million a year in revenue, the owner is fortunate to have $100,000 to show for it after expenses. "That's a tough order," Cannon adds. "Robust liquor sales at solid cost of goods are one of the reasons you can get to that 10 cents on a dollar." Astute cocktail pricing (say, pour costs around 21 percent or less, on average) can be a critical component of a restaurant's overall business strategy and health.
But there can't be any explicit determination of the point where marginal revenue is equal to marginal cost. In order to determine marginal revenue you must know what your demand curve is, which seems unlikely (see the earlier Morgenthaler quote), and if you don't know marginal revenue you certainly can't determine the point where marginal revenue is equal to marginal cost, as we do in the textbook examples.

An alternative way of determining the profit maximising price is to use the price elasticity of demand directly (though this only works where the product has elastic demand, i.e. a price elasticity of demand that is greater than one in absolute value). The formula for the optimal price in terms of the price elasticity of demand (which you can find here, or for a more lengthy explanation see here or the mathematical derivation here) is:

P* = MC[ε/(ε+1)], where ε is the price elasticity of demand (and remember that price elasticity of demand is negative, because as price increases the quantity demanded decreases due to the law of demand - the price elasticity of demand is the percentage change in quantity demanded divided by the percentage change in price, and when one of these is negative the other is positive).

For goods which have more elastic demand (i.e. where customers are more responsive to a change in the price), ε is larger (more negative) and [ε/(ε+1)] will be smaller and the price will be a smaller markup over marginal cost. For goods which have more inelastic demand (i.e. where customers are less responsive to a change in the price), ε is smaller (less negative) and [ε/(ε+1)] will be larger and the price will be a larger markup over marginal cost. So, you should charge a lower price (lower markup) if your customers are more price sensitive, and charge a higher price (higher markup) if your customers are less price sensitive.
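Here is a small sketch of what that formula implies in practice, using a hypothetical marginal (pour) cost and a few illustrative elasticities:

# Optimal markup pricing: P* = MC * e/(e+1), with e the (negative) price elasticity of demand.

def optimal_price(mc, elasticity):
    """Profit-maximising price for a given marginal cost (requires elastic demand, e < -1)."""
    if elasticity >= -1:
        raise ValueError("Formula only applies to elastic demand (elasticity < -1)")
    return mc * elasticity / (elasticity + 1)

mc = 5.0  # marginal (pour) cost of a hypothetical cocktail, in dollars

for e in (-4.0, -2.0, -1.5):
    print(f"elasticity {e}: price = ${optimal_price(mc, e):.2f}")

# elasticity -4.0: price = $6.67   (price-sensitive customers, small markup)
# elasticity -2.0: price = $10.00
# elasticity -1.5: price = $15.00  (less price-sensitive customers, large markup)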

Of course, this assumes that the price elasticity of demand is constant (which isn't true for a straight line demand curve), but putting that aside it doesn't appear that the bar managers are using an explicit calculation of price elasticity of demand in their pricing decisions either (again, see the Morgenthaler quote above). So, are they getting their pricing decisions wrong?

I would argue (as I do in class on this topic) that the long-term managers we observe in the market are not systematically getting their pricing decisions wrong. The reason is Darwinian. The cocktail bar market is pretty cutthroat - there isn't a lot of margin for error, and a manager who systematically gets their pricing decisions wrong will lower bar profits, either by pricing too high (and having customers go to the competition) or pricing too low (and lowering the per-customer profit, making it more difficult to cover rent and staff costs, etc.). A manager who consistently lowers bar profits won't be a manager for very long, so the managers we see (who have been managers for a while) must be the ones who generally price close to the profit maximising point. So, even though these managers are not explicitly using marginal-revenue-equals-marginal-cost or the optimal-price-as-a-function-of-elasticity to determine prices, they must be internalising that through their expert knowledge of the market. And if you talk to bar managers, you can see that they have an understanding of price elasticities (or how their customers respond to changes in price in relative terms), even though they don't use the language of economics.

However, that's not the end of the story, because the pricing of each drink is not undertaken in isolation:
When Cannon and his team revise their cocktail menus, he says they try to price drinks destined for the greatest popularity so that they have the lowest percentage pour costs. For a prospective top-selling drink, "we need to make sure that that one is in a very solid cost of goods range, maybe a point or two below our target, because if a number-one mover that is refreshing and easy to [drink] is priced right, it allows you some wiggle room on some other esoteric things, where the ingredients are more expensive." He adds that, "We'll take a few lumps on this really cool drink that [the bartenders have] created, and it will be great conversation. Meanwhile, the gin sour...this is going to do the heavy lifting for us."
In other words, there are strategic aspects to pricing (as we discuss in ECON100). Sometimes it makes sense to lower the price of a drink, if that drink is going to attract customers who would buy other (higher markup) drinks as well, or who would bring other customers with them who purchase higher markup drinks. The former is the justification for loss-leading (where some products are sold below marginal cost in the hopes of increasing revenue and profits from other products - a common strategy for supermarkets, for instance). The latter is the justification for 'ladies nights' at bars. Again, good managers can be expected to take advantage of opportunities for strategic pricing across the range of product offerings.

And then there's the effect of competition. More bars in the local area will increase competition and lower prices. Customers have more alternatives, so if a bar increases prices (or markups) there will be a greater shift of customers to the competition. This means that customers become more price sensitive when there is more competition, which increases the price elasticity of demand (ε) and lowers the optimal price of drinks. So when there is more local competition, bars should be offering lower priced cocktails.

Finally, bars aren't only offering drinks to customers. They also offer amenity - the atmosphere, music, etc. which customers value. Bars that have more attractive characteristics than their competition will be able to charge a premium for cocktails - again, because their customers are less price sensitive (a smaller ε, so a higher optimal price and markup).

So the next time you are drinking a cocktail, spare a thought for the pricing decision-making prowess of the bar manager. They're balancing cost considerations and the price elasticity of demand, as well as strategic pricing and amenity considerations, in determining the price you pay for that Long Island iced tea, whiskey sour, or special creation. And hope they've got the pricing right - otherwise they might not be around the next time you're out on the town.

See also: Fancy a margarita: Why it'll cost you more

[HT: Marginal revolution, back in July]

Thursday, 16 October 2014

The living wage is good for employers; unless lots of employers pay a living wage

The living wage is back in the news this week, with The Warehouse Group being held up as an example for other (especially retail) employers in terms of looking after the wellbeing of their workers. From this Bernard Hickey piece in the New Zealand Herald:
The Warehouse is one of a growing number of companies paying a "Living Wage". From August 1, it started paying 4100 of its workers a "Career Retailer Wage" of at least $18.50 an hour. To qualify, they must have full training and 5000 hours' experience. It represents a pay increase of 10-20 per cent.
Warehouse CEO Mark Powell estimated it would cost almost $6 million in extra wages, but it was an investment worth making...
This week, union researchers Eileen Blair, Annabel Newman and Sophia Blair delivered a paper to the Population Health Congress in Auckland on the experience of employers and workers who have adopted the Living Wage, currently $18.80 an hour - 32 per cent above the $14.25 minimum wage.
They interviewed four employers and found a variety of reasons for adopting the Living Wage, including that it was the right thing to do.
But there were more practical reasons, including wanting employees paid enough to buy their products, reducing staff turnover and having staff motivated to produce a great product or service.
You can read the research paper by Brown, Newman and Blair here (pdf), and read more about the living wage campaign in New Zealand here.

I thought a blog post on the living wage was timely given that my ECON110 class has just covered the economics of social security, poverty and inequality, and related policy, so this research provides an interesting kick-off point. As Bernard Hickey points out in his article, Henry Ford introduced a $5-a-day wage at Ford factories in 1914 (although Hickey makes the mistake of buying into the story that this was done so that Ford's workers could afford to buy cars - Tim Worstall and others have already thoroughly debunked that story). The $5-a-day wage might not seem like much, but it was about double the ‘normal’ factory wage at the time. Ford had a huge number of job applications (not surprising - they were the highest paying employer around at the time). Staff turnover fell, absenteeism fell, and productivity rose so much that Ford’s production costs decreased even though they were paying much higher wages.

What Ford had introduced was what we term an efficiency wage, a wage that is voluntarily offered by an employer and is above the equilibrium wage in the labour market. Employers offer these efficiency wages because they know they have positive effects - they attract and retain higher quality employees who work harder for the firm, raising productivity and lowering absenteeism and staff turnover. Why do all these good effects happen? In the simplest sense, having lots of job applicants and being the first-choice employer for most available workers means you get to choose the best (most productive) workers.

But the good effects go beyond the selection of job applicants, because of the incentives that the efficiency wage creates. If an employee is working for you for a wage that is well above equilibrium, then they have a strong incentive not to shirk, not to take too many dodgy sick days, and generally to work hard for you. Why? Because if they don't and they lose their job, then the best possible outcome for them is that they go back to working somewhere else for a much lower wage. Alternatively, maybe the employees just work harder because they feel goodwill towards an employer who is paying them very well. There is plenty of support for the idea of efficiency wages, including research in Thailand by Steven Lim, myself, and others, and there are some good quotes from employers in the Blair et al. research report, like this one:
When you spend a lot of money training someone up you don’t want them to just leave three months later, or six months later; you kind of want them to stick around for a year or two. If they feel like they can earn more money and save up more and then go travel for longer, they’ll stick around a lot longer and the productivity will go up...
Now, the living wage is a good example of an efficiency wage. If you pay your semi-skilled (say, retail) employees $18.80 per hour, you are paying above the minimum wage and well above the equilibrium wage. So I'm not surprised that The Warehouse, and the four employers that Blair et al. interviewed for their study, have seen positive gains from paying a living wage. The alternative for their employees is to work somewhere else for (probably much) less, so working hard for more pay might be an attractive option to them.

What's good for a few employers (and their employees) must be great if all employers follow suit, right? If every employer paid a living wage much higher than the mandated minimum wage, won't everyone be better off? Not so fast. The gains from paying an efficiency wage arise because the alternative jobs for employees pay much less. If every other employer is also paying a high wage, then the employees don't need to work so hard because if they lose their job they can go somewhere else that is also paying a high wage. Same goes for absenteeism, staff turnover, etc. The benefits of the efficiency wage evaporate if lots of employers pay efficiency wages.

So, it's likely that the observed gains for employers from paying a living wage of $18.80 (rather than the minimum wage of $14.25) are only sustainable so long as the living wage isn't mandatory for all employers. As Bob Jones rightly points out, forcing employers to pay much higher wages is just going to force those with slender margins (including a lot of small-scale retailers) out of business. This would reduce the number of available jobs for semi-skilled workers. According to the Treasury (quoting an MBIE estimate), raising the minimum wage to the living wage would cost 25,000 jobs. Most of these lost jobs would be in accommodation and food services, and retail trade.

Overall, the living wage might have some positive effects for those employers who offer it. But the idea that it should be rolled out by all employers is clearly being oversold if the gains to employers are essentially those that arise from paying an efficiency wage.

[HT: Tracey from my ECON110(NET) class, for pointing me to the Tim Worstall piece on the Ford $5 workday]

[Update: Fixed broken link]

Tuesday, 14 October 2014

Try this: The economics of "The Office"

Dirk Mateer and friends (Daniel Kuester at Kansas State University and Christopher Youderian at Pareto Software; Dirk is now at the University of Arizona) have done it again. Their latest contribution to using pop culture to illustrate economics concepts is The Economics of The Office, which I was alerted to by a description of the site in the most recent issue of the Journal of Economics Education. The videos are generally short, easy to use to start a discussion or illustrate a concept in class or lectures, and the variety of concepts (which you can browse through) is broad. Macroeconomics or microeconomics - there's something for everyone there.

On a related note, Dirk Mateer's website is highly recommended for teachers (and students) of economics, especially the media library. See also this earlier post from me on cornering the market for Christmas toys (also based on a video from The Office).

Thursday, 9 October 2014

University rankings and signalling

On Tuesday I wrote a post on market-based pricing and the impact on university rankings. But, to what extent do university rankings matter for our graduates? From the NZ Herald on Monday:
Kiwi employers working through a pile of CVs are unlikely to care how applicants' place of study compares on international rankings.
They may have noted last week's media reports on the latest university rankings that showed institutions here losing ground or stagnating.
But the University of Auckland's fall of 11 places on the annual Times Higher Education (THE) rankings won't work against the vast majority of its job-seeking alumni. "Generally speaking, the conversation that we have around university degrees with clients is around a demonstrated ability to commit and complete a degree," said Vanesha Din, a manager at recruitment firm Michael Page Finance.
I've written before on the value of tertiary education as a signal to employers of a student's quality (specifically for economics, see here and here). The article agrees - the value of a degree is a "demonstrated ability to commit and complete a degree". This is a signal of the student's quality as an employee (committed, hard-working, etc.), because a potential employee without a degree can't easily demonstrate those same qualities of commitment and hard work. The ranking of the university doesn't necessarily add much to the quality of the signal. That is, unless everyone that goes for a position has a degree. From the article:
However, the rankings do matter to some employers with senior technical roles including in law, medicine, specialised engineering and financial services.
Every lawyer has to have a law degree, and every doctor has to have a medical degree. So there is no signalling benefit from the degree itself - a student can't signal their quality as an employee with the degree, because all other applicants will have a degree too. The quality of the student then has to be signalled by the quality of the institution they studied at, rather than the degree itself. An effective signal has to be costly (degrees at top-ranked institutions are costly) and more costly to lower quality students (which seems likely in this case, because lower quality students would find it much more difficult to get into a top-ranked institution).

Of course, top students are weighing up the benefits of the higher quality signal provided by graduating from a top-ranked university (rather than a lower-ranked university), against the higher costs of attending the top-ranked university. Sometimes the higher-ranked university won't win out in this evaluation, but at the margin the university rankings will make a difference to this decision, and they should especially make a difference in areas like law and medicine (as in the example in the Herald article).

Tuesday, 7 October 2014

Market-based pricing of university education and university rankings

How New Zealand's university education sector is funded has been in the news lately. From the NZ Herald last Thursday:
Tertiary students should be charged much more if the Government is unwilling to invest enough to keep universities competitive, the country's largest university says.
University of Auckland vice-chancellor Stuart McCutcheon believes there is a strong case for following Britain and Australia's lead and raising the cost of study.
Fee deregulation or a similar system would allow universities to set their own fees and would likely lead to increases well above the current annual maximum 4 per cent...
...The latest international rankings released today show New Zealand universities losing ground or stagnating...
...Professor McCutcheon said such league tables were the main way international students judged quality, and the downward trend put the funding of universities at risk.
International students - and the high fees they pay - have become increasingly important, with all institutions attempting to increase them after domestic numbers and attached funding were effectively capped.
The funding available to NZ universities on a per-student basis was comparatively very low, Professor McCutcheon said. If the Government would not increase it substantially, then another way needed to be found.
This is timely given that my ECON110 class has just covered the economics of education. There are a couple of things I want to focus on in this post: (1) how university education is funded and the implications of moving to a purely market-based pricing model; and (2) the impact this has on university rankings.

University education in New Zealand for domestic students is part-funded by the government (each institution is subsidised on the basis of the number of full-time equivalent (FTE) students), and part-funded by the students themselves (through tuition fees). Alongside this, the tuition fees that universities are allowed to charge domestic students are capped (as noted above, they're only allowed to increase by 4 percent per year) - this is a price control, albeit a control that moves over time. And to complicate things further, the amount of FTE subsidy that the government will provide is negotiated with each university each year. In other words, universities are essentially limited in the number of students that they are allowed to enrol - a quantity control.

What would happen if the restrictions (on price and quantity) were removed? In ECON110, we describe the optimal level of subsidy for tertiary education, which is described in the figure below. The optimal subsidy would be the subsidy that ensured that marginal social benefit is exactly equal to marginal social cost. Any more subsidy than that and the extra cost (to society) of the additional students would outweigh the extra benefit (to society), and society would be worse off. Any less subsidy than that and the extra benefit of one additional student would be greater than the additional cost, and you should provide more subsidy.

The tricky bit is that education provides positive consumption externalities (higher productivity, lower crime, better outcomes for children of the educated, etc.) so the marginal social benefit (MSB) is greater than the demand (D) for tertiary education. The effect of the subsidy (which is paid to the universities - to keep things simple we will ignore demand-side subsidies such as student allowances, interest-free student loans, etc.) is to lower the effective cost of providing university education (we show this with the new curve S-subsidy, which is below the supply curve S). If the size of the subsidy is optimal, then it will ensure that the market provides Q0 university places - the quantity where marginal social benefit (MSB) is exactly equal to marginal social cost (MSC). This is ensured because the students pay the low price PC and demand Q0 places at university, and the universities receive PC from the students, which is topped up to PP by the subsidy, leading them to offer Q0 places for students.



Now consider the effect of price and quantity controls. It's not possible for both price and quantity controls to be simultaneously binding on the market. If the quantity control was binding, the price would adjust - the price that universities were willing to accept for each student would fall to below the level of the price control. On the other hand, if the price control was binding, the quantity would adjust - the quantity of university places made available to students would likely fall to below the level of the quantity control, making the quantity control less likely to also be binding. The lack of a binding quantity control is probably the situation we currently observe in most degree programmes other than those that have strictly limited places, such as medicine or law. [*]

Let's assume that the price control is binding, but the quantity control is not. This is suggested by Professor McCutcheon's comments. Also, we've recently had a period where the quantity control was binding (we had severely limited spaces a few years ago), and that isn't the case now - the sorts of things we observed then (e.g. long waiting lists for all degree programmes) aren't happening now. So, if the price control (in terms of the price paid by students) is binding at PMAX, the quantity of places available at university would be QS. However, at this artificially low price, there is excess demand for places at university - the number of places demanded is QD, but there are only QS places available. Some students, who would like to study at the subsidised price, are excluded. And because the market is operating at less than Q0 (where MSB = MSC), there is a loss of welfare - society is worse off overall.

On the flip side, removing the price (and quantity) controls would allow the market to move to Q0, maximising welfare. More students would go to university, and although they would pay a higher (out-of-pocket) cost for doing so and the government would pay more in subsidies (because there are more students), society would be better off overall.
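To put some (entirely hypothetical) numbers on that story, here is a minimal linear sketch of the optimal subsidy, and of what happens when a binding cap is placed on the fee students can be charged. All curve parameters and dollar figures are assumptions for illustration only:

# Demand: P = a - b*Q (private willingness to pay); MSB adds a constant external
# benefit 'ext' per student; supply (MSC): P = c + d*Q.

a, b = 20_000, 100      # demand intercept and slope (dollars per student)
c, d = 4_000, 100       # supply (MSC) intercept and slope
ext = 6_000             # assumed external benefit per student

# Socially optimal quantity: MSB = MSC  ->  a + ext - b*Q = c + d*Q
q_opt = (a + ext - c) / (b + d)
p_provider = c + d * q_opt          # what providers must receive (PP)
p_student = a - b * q_opt           # what students pay (PC)
subsidy = p_provider - p_student    # optimal per-student subsidy (equals ext here)

print(f"Q0 = {q_opt:.0f}, PP = {p_provider:.0f}, PC = {p_student:.0f}, subsidy = {subsidy:.0f}")

# Now impose a binding cap on the fee students can be charged (PMAX below PC)
p_max = 6_000
q_supplied = (p_max + subsidy - c) / d   # places offered when providers receive PMAX + subsidy
q_demanded = (a - p_max) / b             # places students want at the capped fee
print(f"QS = {q_supplied:.0f}, QD = {q_demanded:.0f}, excess demand = {q_demanded - q_supplied:.0f}")

With these made-up numbers the optimal outcome has 110 places, but the fee cap shrinks the number of places offered to 80 while 140 students want to study - excess demand of 60 places, and a market operating below Q0.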

Where do international students fit into this though? International students aren't subject to the quantity controls, so they don't directly affect the number of places available at universities. However, they do increase the demand for universities' scarce education resources. This should push the price of education upwards, but it can't because of the binding price control. This should manifest in additional excess demand - more turned-away students. However, it's not clear to me that this (turning students away) is happening in large numbers. So, maybe the degree of excess demand is relatively small.

However, think about the related effects. If the universities aren't bringing in as much income as they would be under market pricing (an effect of the price controls), then they can't afford to employ as many staff (or as high quality staff). Fewer staff means fewer courses (or courses with a higher student-to-staff ratio), and offering fewer courses means fewer places for students. So, perhaps the excess demand is absorbed by offering a lower quality education to all students than would have been obtained with market pricing.

There is a further flow-on effect of this, which should by now be becoming obvious. Lower quality staff means lower quality research output, which flows through into lower international rankings. Higher student-to-staff ratios also lead to lower international rankings (this is one of the dimensions that is taken into account in the Times Higher Education (THE) rankings, for instance - it is their crude measure of 'teaching quality'). So, as Professor McCutcheon notes, there is probably a close link between the lack of market pricing of education and falling rankings of New Zealand universities.

Lower international rankings reduce the demand for a New Zealand-based education by international students. Which lowers the excess demand and probably raises the quality of education for students. So, perhaps there is some dynamic equilibrium between below-market pricing, university rankings, and international student demand. That doesn't mean we shouldn't be aiming instead for a new equilibrium with market pricing, optimal subsidies, and higher university rankings.

*****

[*] Update: Of course, the quantity and price controls are able to be simultaneously binding. A profit-maximising university will price on the basis of the willingness-to-pay of students, not their willingness-to-accept. So, if the quantity control is binding and the number of student places is restricted, students are willing to pay more for their study and this makes the price control more likely to be binding as well. This seems likely for degrees where places are strictly limited, like law or medicine, but less likely for commerce, management, or arts degrees.

Sunday, 5 October 2014

Does working make you happy when you're older?

It turns out that maybe it really doesn't. At least, not according to research that I am completing with Matt Roskruge at NIDEA (the National Institute of Demographic and Economic Analysis, at the University of Waikato). In the journal Policy Quarterly in August (PDF), we wrote about some of our preliminary results. Our key research question was essentially: "Does working make older New Zealanders better off?".

We were interested in this question because labour force participation at older ages has been increasing substantially over time (and because I was funded by MBIE to investigate the effects of labour force participation among older people). To give you a sense of the scale of increase in participation, see Figure 1 below, which shows the labour force participation rate of each five-year age group 55 years and over, across the last five Censuses. This figure is taken from a working paper I wrote (PDF) earlier in the year, which explores a lot of different aspects of labour force participation among older workers in New Zealand.

Figure 1: Labour Force Participation Rate by Age, 1991-2013

Obviously, this is also a really important question, because the New Zealand population is ageing rapidly (see here or here for the national level, or see some pretty graphic pictures (pun intended!) of the changing age structure at the sub-national level in this report by Bill Cochrane and me for the Waikato Region).

To look at the question of wellbeing and working among older people, Matt and I used data from three waves (2008, 2010, and 2012) of the New Zealand General Social Survey (GSS), which is a nationally-representative survey that collects data on a range of social and economic indicators of well-being. The key question is on life satisfaction (a proxy for overall wellbeing): "How do you feel about your life as a whole right now?" Responses are measured on a five-point Likert scale (1 = very satisfied; 2 = satisfied; 3 = no feeling either way; 4 = dissatisfied; and 5 = very dissatisfied). To keep things simple (and avoid having to run ordinal models), we reduced this into a variable that was equal to one if the respondent was very satisfied, and zero otherwise.
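For concreteness, here is a minimal sketch of that recoding step in Python (the data frame and column names are hypothetical, not the actual GSS variable names):

```python
import pandas as pd

# Hypothetical GSS extract: life_satisfaction coded 1 (very satisfied) to 5 (very dissatisfied)
gss = pd.DataFrame({"life_satisfaction": [1, 2, 1, 3, 5, 2, 1, 4]})

# Collapse the five-point Likert scale to a binary outcome:
# 1 if the respondent is 'very satisfied', 0 otherwise
gss["very_satisfied"] = (gss["life_satisfaction"] == 1).astype(int)

print(gss["very_satisfied"].mean())  # share of respondents reporting 'very satisfied'
```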

Now, there are two key problems to overcome when trying to analyse the relationship between working and wellbeing. First, there is self-selection - people choose whether or not to work, and those who choose to work are likely to be those who believe that it will increase their wellbeing. This would lead to a bias in any attempt to evaluate the effect of working on wellbeing. Second, there is an endogeneity problem, because health status affects whether an individual is able to work or not, and also directly affects the individual's wellbeing. We use instrumental variables regression to overcome both problems, using gender as an instrument for full-time work status. Now, I'm not the biggest fan of instrumental variables (see here for example), but we think that gender is a good instrument here because it is closely correlated with full-time work (men have higher labour force participation than women), and it meets the exclusion restriction because there is no theoretical reason why men should have higher (or lower) wellbeing than women (indeed, from having looked through the literature as part of the Enhancing Wellbeing in an Ageing Society (EWAS) project, there is little consistency in the effect of gender on wellbeing). Incidentally, if you are interested in what correlates with wellbeing among older people in New Zealand, read this monograph (PDF) from the EWAS project (or this one (PDF) on people aged 40-64).
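To make the estimation strategy a little more concrete, here is a rough sketch of the two-stage least squares setup using the linearmodels package in Python. The file name, column names, and controls are purely illustrative - they are not the actual GSS variables or our exact specification:

```python
import pandas as pd
import statsmodels.api as sm
from linearmodels.iv import IV2SLS

# Hypothetical extract of the pooled GSS waves (illustrative file and column names)
df = pd.read_csv("gss_older_workers.csv")  # very_satisfied, fulltime, male, age, health, ...

# Exogenous controls: a constant plus whatever covariates belong in the model
exog = sm.add_constant(df[["age", "health"]])

# Two-stage least squares: instrument full-time work status with gender
results = IV2SLS(dependent=df["very_satisfied"],
                 exog=exog,
                 endog=df["fulltime"],
                 instruments=df["male"]).fit()
print(results.summary)
```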

What Matt and I found is interesting. We had three groups of labour force status - full-time employed, part-time employed, and not working (which combines the small number of people who reported as unemployed, as well as those who reported as being retired). Figure 2 shows the raw results in terms of life satisfaction, and it looks like those not working have the lowest life satisfaction (smallest proportion very satisfied, and largest proportion not satisfied).

Figure 2: Life Satisfaction, by Labour Force Status

However, once we ran our instrumental variables model, we found that (after controlling for health status) full-time work is associated with significantly lower life satisfaction than either part-time work or not working. Working full-time is associated with about a 49% lower probability of reporting being very satisfied. Those are the results we report in Policy Quarterly. We've been doing some follow-up work since that article, looking at some of the mechanisms through which this might be working.

Is it because wealthier older people don't need to work, so those who are working are doing so because they have to in order to get by? It doesn't seem so - the results are robust to the inclusion of area deprivation (as a proxy for wealth - the GSS doesn't have a direct measure of wealth). Including the deprivation measure makes part-time work marginally statistically significant and negative (working part-time is associated with about a 7% lower probability of reporting being very satisfied).

Is it because older workers who are working full-time are dissatisfied with their jobs? If we restrict the sample to only those who are working, job satisfaction is significantly positively related to life satisfaction, but it doesn't make full-time work status any less statistically significant (or any less negative). There's no difference in the job satisfaction-life satisfaction relationship between full-time workers and part-time workers.

Is it because older workers who are working full-time want to work less but for some reason can't? The GSS asked people if they wanted to work more hours, or fewer hours. Wanting to work more hours is associated with lower life satisfaction. Wanting to work fewer hours is not. These results don't differ between full-time and part-time workers, and they also don't make full-time work status any less statistically significant (or any less negative).

So, it appears to be a fairly robust result - working full-time when you're older makes you less happy. Or maybe less happy older people prefer to work (reverse causality is still a possibility). I know I'd rather be working when I'm older than doing this.

Matt and I are completing the write-up of a working paper on this at the moment. Keep an eye out for it here (it should be up by the end of October). We'll also be presenting on this at the Labour, Employment and Work conference in Wellington in November.

Thursday, 2 October 2014

Why study economics? Sheepskin effects edition

Over at Offsetting Behaviour, Eric Crampton highlighted that economists have higher lifetime earnings than other graduates. The source of this claim is this Jordan Weissmann article. I've previously written on the reasons why students should study economics (see here and here), and for a lot of prospective students deciding on their majors it really does come down to how much they can expect to earn with different majors. On this metric, economics does pretty well for the median graduate. However, the key addition from the Weissmann article is that, at the top end of the income distribution for each major, economics graduates earn more than graduates of all other majors. Here's why:
So why are econ grads so good at making it rain? Part of it is that the finance and consulting industries like recruiting them, not necessarily for their specific skills, but because they consider the major a basic intelligence test. Granted, we're probably not seeing the effect of Goldman Sachs or Private Equity salaries in these charts, since they only stop at the 95th percentile of earners—but banking is a big industry, and it pays well.
Eric adds a good point which I want to expand on:
Grade-seeking students of lesser abilities drop economics for other majors; those who are left earn their grades.
One of the key characteristics of a degree or diploma is the signal that it provides to prospective employers about the quality of the applicant for positions they have available. Employers don't know up front whether any particular applicant is good (intelligent, hard working, etc.) or not - there is asymmetric information, since each applicant knows their own quality. One way to overcome this problem is for the applicant to credibly reveal their quality to the prospective employer - that is, to provide a signal of their quality. In order for a signal to be effective, it must be costly (otherwise everyone, even those who are lower quality applicants, would provide the signal), and it must be more costly for the lower quality applicants. Qualifications (degrees, diplomas, etc.) provide an effective signal (costly, and more costly for lower quality applicants who may have to sit papers multiple times in order to pass, or work much harder in order to pass). Qualifications confer what we call a sheepskin effect - they have value to the graduate over and above the explicit learning and the skills that the student has developed during their study.
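A tiny numerical illustration of that separating condition may help (the numbers are invented, not from any of the linked articles): the qualification works as a signal only if it is worth acquiring for high-ability applicants but not for low-ability applicants.

```python
# Illustrative numbers only: the wage premium employers pay to applicants who hold
# the qualification, and the cost of earning it for each type of applicant.
wage_premium = 30_000        # extra earnings from holding the signal
cost_high_ability = 20_000   # cost (time, effort, fees) for a high-ability student
cost_low_ability = 45_000    # higher cost for a low-ability student (repeated papers, more effort)

# The signal separates the two types only if high-ability applicants find it
# worthwhile and low-ability applicants do not.
separating = (wage_premium >= cost_high_ability) and (wage_premium < cost_low_ability)
print("Separating equilibrium:", separating)  # True with these numbers
```

If the cost gap between the two types shrinks, the condition fails and the signal stops being informative - which is exactly why a harder major can carry a bigger sheepskin effect.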

Now, economics may provide a stronger signal of quality (a more valuable sheepskin effect) than other majors. Why would that be? Economics is by no means an easy major for most students. It involves learning calculus and statistics, and developing important critical thinking and logical reasoning skills. All of these things are hard (but ultimately worthwhile, given the returns to an economics major in terms of higher earnings) so, as Eric argues, lower-ability students tend to select themselves into majors other than economics.

When it comes to graduates, employers can't easily tell the difference in quality between economics and knitting [*] graduates. However, if economics is known (by employers) to be more difficult (i.e. more costly in terms of time and effort) for students, then an economics major would provide an additional signal (over and above that of the degree itself) of the higher-than-average quality of the student.

We could make a similar argument for econometrics (regarded by most students as the most difficult part of an economics major). Top grades in econometrics (or even slightly-above-average grades in econometrics) may provide an additional signal of quality, over and above the signal provided by the economics major (and the degree). Which is why I always strongly recommend to our economics students that they include econometrics in their programme of study. That allows them to take advantage of multiple layers of sheepskin effects.

So, sheepskin effects provide another reason why studying economics is a great idea. If you want to convince employers that you are a top-quality student, it's hard to beat receiving top grades in a more difficult major.

*****

[*] OK, I made that up - we don't have knitting graduates. I don't doubt there are some potential students who would be keen on a Bachelor of Knitting though.

Saturday, 27 September 2014

Do single-sex schools make girls more competitive?

It is often argued that single-sex schools are good in the sense that they reduce gender gaps (see here for a rundown of recent evidence). This recent paper in the journal Economics Letters (ungated here) by Soohyung Lee (University of Maryland), Muriel Niederle (Stanford University), and Namwook Kang (Hoseo University, Korea) caught my attention because it looks at whether the gender gap in competitiveness is narrowed by single-sex schooling.

The general problem with trying to estimate the effects of single-sex schooling on any outcome is that students (or rather, their parents) self-select into single-sex or coed schools. So, it's not generally possible to separate the effect of single-sex schooling from the unobserved student or family characteristics that are related to the choice of school. On top of that, single-sex schools in many countries (like New Zealand) are more likely to be private schools that can be more selective about the students they admit.

Lee et al. exploit a unique feature of the South Korean education system - that students are randomly assigned to middle schools. From the study:
The key challenges to estimating the effect of single-sex schooling are two-fold: first, coeducational and single-sex schools often have different qualities, and second, students often select which type of school they attend. We address these challenges by examining middle school students (grades 7 to 9) in Seoul, South Korea. This experimental group is well-suited for the purpose of our study because a student is randomly assigned to a single-sex or coeducational school within a school district and all school districts have both single-sex and coeducational schools...
Therefore,  we identify the causal effect of single-sex schooling on competitiveness by estimating simple regression models controlling for school-district fixed effects and individual characteristics.
Participants in the study were asked to solve as many simple addition problems as they could in three minutes. They could then choose to participate in a tournament where they would be paid only if they were the top performer in a randomly-selected group of four students. Those who are more competitive are more likely to choose the tournament (the study also includes controls for risk aversion, and for students who want to avoid denying others the chance to win the tournament). The experiment was run twice - first at the beginning of the second term of the 2011-12 academic year (August 2011), and second near the end of the academic year (February 2012).
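To see why the tournament choice is informative about competitiveness (and confidence), a back-of-the-envelope expected payoff comparison helps. The payment amounts here are illustrative only, not the amounts Lee et al. actually used:

```python
# Illustrative payoffs: a piece rate per correct answer, versus a tournament that pays
# four times the piece rate per correct answer but only to the top performer in a
# random group of four students.
correct_answers = 10
piece_rate = 1.0
tournament_rate = 4.0

expected_piece_rate = correct_answers * piece_rate

# A risk-neutral student who believes they will win with probability p prefers the
# tournament when p * correct_answers * tournament_rate exceeds the piece-rate payoff,
# i.e. when p > 0.25 in this example.
for p_win in (0.20, 0.25, 0.40):
    expected_tournament = p_win * correct_answers * tournament_rate
    print(p_win, "prefers tournament:", expected_tournament > expected_piece_rate)
```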

The authors find:
...girls are less likely than boys to choose tournament: 29.9% of boys select tournament in Task 3, while 22.3 girls do (p-value of testing no gender gap: 0.032). This difference remains even after we control for students’ characteristics.
The results contrast with earlier and widely cited work in the U.K. (earlier ungated version here) by Alison Booth and Patrick Nolen (both at the University of Essex and Australian National University). However, Booth and Nolen's sample were not randomised by school type.

There are a couple of reasons that make me worry about the robustness of the results in Lee et al.'s paper. First, it is essentially an impact evaluation - what is the impact of school type on competitiveness? Given that there are three variables of interest (gender, school type, and before/after), I would have expected them to use difference-in-difference-in-differences (aka DDD - see here for a quick, but somewhat technical, description of DDD). Their simple regression specification lacks the appropriate controls for the direct effect of gender (although this might have been included in the student characteristics, which weren't reported), school type interacted with before/after (in case different school types have different general effects over time), gender interacted with before/after, and the triple interaction (which is the variable of interest in DDD). While this doesn't necessarily invalidate their results, it would be interesting to see what their results look like in a DDD analysis.
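For illustration, here is a rough sketch (using synthetic data and hypothetical variable names) of what that triple-difference specification might look like:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data standing in for the experimental responses (hypothetical variable names):
# 'girl', 'single_sex' (school type), 'post' (second round), 'tournament' (entry choice)
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "girl": rng.integers(0, 2, n),
    "single_sex": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})
# Build in a baseline gender gap in tournament entry, but no true triple-difference effect
df["tournament"] = rng.binomial(1, 0.30 - 0.08 * df["girl"])

# The coefficient on girl:single_sex:post is the DDD term of interest: does single-sex
# schooling change the gender gap in tournament entry over the school year?
ddd = smf.logit("tournament ~ girl * single_sex * post", data=df).fit()
print(ddd.params["girl:single_sex:post"])
```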

Second, the timing of the two rounds of data collection is an issue. Given that the first round occurred after the students had already commenced middle school, the results likely underestimate any impact of single-sex schooling on competitiveness. So, demonstrating a statistically insignificant effect of single-sex schooling on narrowing the gender gap doesn't demonstrate that there is no effect, because perhaps most of the effect occurs in the first term of middle school. We don't know.

I have to agree with the authors when they conclude:
...whether policies expanding single-sex schools will promote gender equality is a question that requires more thorough empirical investigation.
For me, this paper just doesn't answer the question on whether single-sex schooling narrows gender gaps or not.

Tuesday, 23 September 2014

Why KiwiRail losses might be a good thing

Last week in ECON110 we covered natural monopoly. One of the interesting aspects of natural monopoly is what might happen when the government owns one. Such is the case with KiwiRail, which the government purchased back from Toll Holdings in 2008, after it was originally privatised in 1993.

KiwiRail was in the news again last month, having made a loss of $248 million in the year to June 30, 2014. That follows a loss of nearly $175 million in the previous year (PDF). Now, some of those losses are writedowns and impairments, but that aside, should we really be worried about big losses from a government-owned natural monopoly?

I previously blogged about natural monopolies earlier in the year, but didn't talk specifically about government-owned natural monopolies. First some background theory - a natural monopoly arises where one producer of a product is so much more efficient (by efficient I mean they produce at lower cost) than many suppliers that new entrants into the market would find it difficult, if not impossible, to compete with them. It is this cost advantage that creates a barrier to entry for other firms, and leads to a monopoly. Natural monopolies typically arise where there are large economies of scale (when, as a firm produces more of a product, their average costs of production fall). Economies of scale are common when there is a very large up-front (fixed) cost of production, and the marginal costs (the cost of supplying an additional unit of the product) are small (the cost structure is shown in the figure below, with a simplifying assumption that the marginal cost of production is low and constant). The markets for utilities, where the up-front cost includes the cost of having all of the infrastructure in place, are good examples. Rail is another example, since you need the tracks, the rolling stock, and the associated stations and other buildings in place before you can start to provide rail services.
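A quick numerical illustration of that cost structure (with made-up numbers) shows why average cost keeps falling as output rises:

```python
# Stylised natural-monopoly cost structure (illustrative numbers): a large fixed cost
# and a low, constant marginal cost
fixed_cost = 1_000_000   # e.g. tracks, rolling stock, stations
marginal_cost = 5        # cost of carrying one more unit of freight

for quantity in (1_000, 10_000, 100_000, 1_000_000):
    average_cost = (fixed_cost + marginal_cost * quantity) / quantity
    print(quantity, round(average_cost, 2))
# Average cost falls from 1005 to 6 as output rises - these are economies of scale,
# and they make it very hard for a second firm to enter and compete.
```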



Now natural monopolies, like other firms, are assumed to be profit maximisers. That is, they will operate at the point where marginal revenue is equal to marginal cost - at the price PM and the quantity QM in the diagram above. At that point, the producer surplus is the area PMBHPS, while the firm's profit is the area PMBKL (the difference between profit and producer surplus arises because of the large up-front fixed costs, which are subtracted from profits, but not from producer surplus). However, consumer surplus in this market is GBPM, and total welfare is GBHPS. This leaves a deadweight loss equal to the area BEH.

Now, if the government owns the natural monopoly, it doesn't necessarily have to profit maximise if it doesn't want to. The government could choose to maximise total welfare instead. It would do this by setting the price at the point where marginal social benefit is equal to marginal social cost. That is, the market would operate at the price PS and the quantity QS. At that point, producer surplus is zero (since every unit is sold at marginal cost), but profit is negative (JDEPS) because price is below average cost. On the other hand, consumer surplus is GEPS, and total welfare is maximised at GEPS.
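A worked numerical example makes the comparison between the two pricing rules clearer. The demand and cost numbers below are purely illustrative - they are not KiwiRail's actual figures:

```python
# Assumed inverse demand P = 100 - 0.01*Q, constant marginal cost of 20, fixed cost of 100,000
fixed_cost = 100_000
mc = 20

def price(q):
    return 100 - 0.01 * q   # inverse demand

# Profit maximisation: set marginal revenue (100 - 0.02*Q) equal to marginal cost
q_m = (100 - mc) / 0.02                   # 4,000 units
p_m = price(q_m)                          # price of 60
profit_m = (p_m - mc) * q_m - fixed_cost  # profit of 60,000
q_s = (100 - mc) / 0.01                   # welfare-maximising quantity: 8,000 units (P = MC)
deadweight_loss = 0.5 * (p_m - mc) * (q_s - q_m)  # 80,000 lost under profit maximisation

# Welfare maximisation (marginal-cost pricing): price equals marginal cost, so the firm
# earns nothing towards its fixed cost and makes a loss
profit_s = (mc - mc) * q_s - fixed_cost   # a loss of 100,000

print(profit_m, deadweight_loss, profit_s)
```

With these illustrative numbers the government-owned monopoly runs a loss of 100,000, but total welfare is higher by the 80,000 of deadweight loss that profit maximisation would have created.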

Having the natural monopoly make a loss (and this is an economic loss, so it includes opportunity costs, and would be greater than any accounting loss) may be a good thing because it increases total welfare. However, relative to profit maximisation, it entails a transfer of welfare from taxpayers (who ultimately end up paying the loss) to consumers of rail services (and ultimately, to consumers of stuff that is transported by rail).

Sunday, 21 September 2014

Is there adverse selection in the life insurance market?

I blogged yesterday about adverse selection in the home insurance market. But does adverse selection apply in all insurance markets? What about life insurance? Adverse selection requires private information, and it requires that the informed party must be able to benefit from keeping the private information secret.

In the case of life insurance, if you have a terminal illness or have made lifestyle choices that increase your mortality risk, then that is likely to be private information. Because you are higher risk, you should pay a higher premium. However, because the life insurance company can't tell the high-risk and low-risk people apart, that leads to a pooling equilibrium. The life insurance company must assume that everyone is high risk, and raise premiums as a result.

So, if there is adverse selection in the life insurance market, we should expect to see that people with life insurance are more likely to die than people without life insurance. Which leads me to this recent paper in the journal Economics Letters (ungated here) by Timothy Harris and Aaron Yelowitz (both of the University of Kentucky). Using data from the 1990 and 1991 panels of the Survey of Income and Program Participation in the U.S., combined with mortality data from the Social Security Administration's Master Beneficiary Record, Harris and Yelowitz find:
...no significant evidence of adverse selection. In virtually all specifications, those who have higher mortality are no more likely to hold life insurance.
In fact, the authors find some evidence of advantageous selection (the opposite of adverse selection - in this case, where lower-risk individuals are more likely to have life insurance). But before you conclude that this proves there is no adverse selection, consider this. Markets, particularly insurance markets (including life insurance), can be pretty adept (and often sophisticated) at mitigating the problems of adverse selection. In the case of life insurance, simply comparing those with and without a life insurance policy in terms of mortality doesn't tell the full story about adverse selection. Insurers spend some effort in screening applicants for life insurance, including questions about medical history, incidence of disease in your parents, etc. before they make a decision about offering insurance (and what the premium will be). The most risky applicants will be eliminated during this screening phase. Indeed, the authors note this themselves:
Although the empirical findings are consistent with the concept of advantageous selection, it is important to recognize the importance of underwriting in the life insurance market. All existing empirical analyses examine life insurance holdings, not applications. Insurers ask extensive questions and require medical exams prior to approval of an application. These institutional features suggest caution before claiming that applicants are advantageously selected; rather the underwriting process potentially screens out high-risk applicants who would otherwise obtain life insurance.
So, if we had no underwriting or screening processes, maybe we would observe adverse selection in the life insurance market. Or maybe not. Simply looking at mortality after an insurance contract is negotiated in the absence of screening would not be enough, because of potential moral hazard problems. Moral hazard arises when, after an agreement is made, one of the parties has an incentive to change their behaviour (usually to take advantage of the terms of the agreement) in a way that harms the other party. In the case of life insurance, once a person has life insurance their incentives change slightly - they may engage in more risky behaviour safe in the knowledge that their family will be provided for in the case of a skydiving accident, for instance. So, we might expect to see higher mortality among the insured than the non-insured not because of adverse selection, but because of moral hazard.

The authors are correct in asserting that we should look at applications for life insurance. Adverse selection is a problem of pre-contractual opportunism after all. To assess whether adverse selection exists in this market, the best approach would be to look at applications and medical histories and risk factors for life-threatening diseases of applicants, and compare with non-applicants. While looking at actual outcomes (in terms of mortality data) is somewhat appealing, it runs into issues of whether the observed difference arises because of adverse selection (the applicant was at higher risk before they obtained insurance), moral hazard (the applicant became more risky to insure after they obtained insurance), or some combination of the two.

Saturday, 20 September 2014

Big trees, home insurance and adverse selection

One of the most difficult concepts we cover in ECON100 and ECON110 each semester is the problem of adverse selection. Adverse selection arises when there is information asymmetry - specifically, there is private information about some characteristics or attributes that are relevant to an agreement, that is known to one party to an agreement but not to others.

However, information asymmetry by itself is not enough for an adverse selection problem (e.g. I know whether I like the colour yellow or not, but that private information doesn't affect many market transactions I engage in - at least not to a large enough extent to cause market failure). The informed party must be able to benefit from keeping the private information secret - this is an example of pre-contractual opportunism on the part of the informed party.

An adverse selection problem arises because the uninformed party cannot tell those with 'good' attributes from those with 'bad' attributes. To minimise the risk to themselves of engaging in an unfavourable market transaction, it makes sense for the uninformed party to assume that everyone has 'bad' attributes. This leads to a pooling equilibrium - those with 'good' and 'bad' attributes are grouped together because they can't easily differentiate themselves. This creates a problem if it causes the market to fail.

In the case of insurance, the market failure may arise as follows (this explanation follows Stephen Landsburg's excellent book The Armchair Economist). Let's say you could rank every person from 1 to 10 in terms of risk (the least risky are 1's, and the most risky are 10's). The insurance company doesn't know who is high-risk or low-risk. Say that they price the premiums based on the 'average' risk ('5' perhaps). The low risk people (1's and 2's) would be paying too much for insurance relative to their risk, so they choose not to buy insurance. This raises the average risk of those who do buy insurance (to '6' perhaps). So, the insurance company has to raise premiums to compensate. This causes some of the medium risk people (3's and 4's) to drop out of the market. The average risk has gone up again, and so do the premiums. Eventually, either only high risk people (10's) buy insurance, or no one buys it at all. This is why we call the problem adverse selection - the insurance company would prefer to sell insurance to low risk people, but it's the high risk people who are most likely to buy.
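That unravelling process is easy to simulate. Here is a minimal sketch, assuming (purely for illustration) that each person values cover at exactly their own expected claim cost, and that the insurer always prices at the average cost of whoever remains in the market:

```python
# Risk types 1 (least risky) to 10 (most risky), each with an expected claim cost
# proportional to their type (illustrative numbers)
types = list(range(1, 11))
claim_cost = {t: 100 * t for t in types}

market = list(types)
while market:
    # The insurer prices at the average expected cost of those still buying
    premium = sum(claim_cost[t] for t in market) / len(market)
    # Buyers stay only if the premium is no more than cover is worth to them
    stayers = [t for t in market if claim_cost[t] >= premium]
    if stayers == market:
        break   # no one else drops out - the market has stopped unravelling
    market = stayers

print(market)  # only the highest-risk type is left buying insurance
```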

Which brings me to the example of big trees and home insurance. One of my extramural ECON110 students asked me about this video for Youi insurance. It provides a good example of potential adverse selection, but one that is easily solved.

The insurance company doesn't know whether or not you have 'enormous trees that will fall and crush your house' (having big trees next to your house is private information). So, maybe they make an assumption that you do, and in order to compensate for the higher risk, they charge a higher insurance premium.

Of course, markets have developed ways of solving the adverse selection problem. If the informed party can find some way to credibly reveal the private information to the uninformed party, we call this signalling (I've previously written on signalling, in the context of wedding costs). If the uninformed party can find some way of revealing the private information, we call this screening.

In the case of the video, the screening solution to the big trees adverse selection problem is pretty simple. Just ask the person if they have big trees! [Of course, if they do have enormous trees next to their house, there is some incentive to misrepresent themselves and say 'no', but that's why you have a clause in the insurance contract that voids the contract if the insured person provides false information.]

Alternatively there is a signalling solution to the big trees adverse selection problem. The homeowner could take a photo of their house, demonstrating that there are no enormous trees next to it. Easy and credible.

[HT: Tracey from my ECON110(NET) class]