Tuesday 27 May 2014

Is there a simple trick to reduce over-eating at buffet restaurants?

I recently read an interesting 2013 paper from the journal Applied Economics, by Erez Siniver of COMAS (in Israel), Yosef Mealem of Netanya Academic College (Israel) and Gideon Yaniv of COMAS (ungated version here). The authors are attempting to explain why people overeat at buffets. It's interesting because, as the authors state:
Traditional economics suggests that the entry price constitutes a sunk cost that should be ignored when deciding how much to eat; only incremental costs and benefits matter. Hence, since the marginal cost is zero, eating should rationally cease at the point of fullness, where the marginal utility from eating falls to zero.
In other words, if people are rational they will eat at the buffet until they are full. It shouldn't matter how much they have paid to enter the buffet, because that cost is sunk (it cannot be recovered) and so it shouldn't come into the decision about how much to eat. Of course, most people aren't fully rational and one of my favourite field experiments of all time, conducted by a student of Richard Thaler and described briefly here, showed that when people at a pizza restaurant were given a full refund before ordering they ate less.

The Siniver et al. paper investigates a related question: whether the time of paying (before or after the meal) affects the amount that buffet patrons eat. Again, the time of paying should not affect a rational person's decision on how much to eat at the buffet. Specifically, they:
...report on the results of two experiments destined to test this conclusion, where the time of paying for the buffet serves as a proxy for the quality of treatment. In the experiments, conducted in collaboration with a sushi restaurant on campus, we offered students and staff an all-you-can-eat sushi buffet during four hours at lunch time. Half of the participants were asked to pay for the buffet before eating, and half were told that they would pay after they finished eating.
The argument is that customers who are asked to pay before the meal feel less trusted by the restaurant (it's as if the restaurant is worried they will eat and run without paying). Discontented customers (those who are asked to pay beforehand) will eat more because the more contented customers (those who pay afterwards) reach the point where they have 'gotten their money's worth' at a lower quantity of consumption. The authors find:
Controlling for other explanatory variables, such as gender, food quality, drink consumption and body mass index (BMI), sushi consumption exhibits a statistically significant decline of about 4.5 units from the paying-before to the paying-after group, thereby supporting our hypothesis.
The reduction by 4.5 units of sushi is about a 20% reduction in consumption, which is quite substantial. What this shows clearly once again is that people are not fully rational - the time of paying shouldn't affect the amount that they eat, but it does. So a simple policy nudge might be able to affect eating behaviour.

The policy implication that the authors draw is that if governments want to reduce the contribution of buffets to over-eating and obesity, they should implement a policy that buffet restaurants must have their customers pay after the meal rather than beforehand. Restaurants receive an additional benefit from having their customers pay after the meal - it reduces the number of required transactions and therefore reduces costs for the restaurants. Indeed, most buffet restaurants I have been to charge separately for drinks, which means having the customers pay after the meal rather than before is required anyway.

How far can the idea of restaurants trusting customers, and this resulting in reduced food consumption, be extended? If customers have a self-payment option on exit from the restaurant (where they swipe their credit card and pay without needing to deal with restaurant staff, similar to the self-service lanes at supermarkets), would that reduce consumption even further? Or, does that create a backlash because it reduces the perceived quality of service? It could make for some interesting follow-up research.

Saturday 24 May 2014

Is the marginal cost of electricity falling? Some evidence from recent two-part pricing changes

There are some interesting things going on in electricity prices at the moment. On the one hand, we have Statistics New Zealand showing that the costs of electricity generation are rising, and that these costs are being passed on to commercial customers in the form of higher spot prices of electricity (see for example here: "...higher costs to generate electricity because of low hydro-lake levels and more expensive thermal generation...").

On the other hand, back in March my electricity retailer changed their pricing plans. Since these pricing plans are an example of two-part pricing and we have recently covered pricing strategy in ECON100, I thought this was a timely opportunity to blog about this. It also gives us a clue as to what is happening to marginal cost for the retailer.

Two-part pricing occurs when the supplier splits the price into two parts (no surprises there!): (1) an up-front fee that gives the consumer the right to purchase; and (2) a separate price per unit of the good or service. Many goods and services are sold with two-part pricing, including telephone services (monthly fee plus cost per call), some theme parks (cost for entry, plus cost for each ride), golf (club membership, plus green fees), and electricity. The two parts of the electricity price are the Daily Fixed Charge (which is a fixed amount per day connected to the electricity network), and the price per unit of electricity (measured in kWh).

Suppliers use two-part pricing in order to increase profits. This is how it works in theory. In the diagram below, the 'traditional' profit-maximising firm will maximise its profits by selecting the quantity where marginal revenue is exactly equal to marginal cost. This occurs at the quantity QM, with the per-unit price PM. At this price-quantity combination, consumers receive a surplus (the difference between what they were willing to pay and what they actually pay) equal to the area ABC. Producer surplus (profit contribution) is equal to the area CBDF.



Using two-part pricing, the firm could instead charge a fixed fee equal to exactly ABC for the consumers to have the right to purchase, and a per-unit price of PM, and the consumers would be still willing to buy QM of the product. In this case, there would no longer be a consumer surplus (because this is offset by the fixed fee), and producer surplus would increase to ABDF.

However, the firm could do even better than this. If they instead reduced the per-unit price to PS and increased the fixed fee to be equal to the area AEF, the consumer would be willing to purchase the quantity QS. There would still be no consumer surplus, but producer surplus would be the entire area AEF. Importantly, the optimal per-unit price is equal to marginal cost.

Now, think about what happens if the marginal costs of production fall. Consumer surplus increases, so the optimal size of the fixed fee increases, while the optimal per-unit price decreases.
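That relationship can be sketched with a minimal numeric example in Python, using a made-up linear demand curve and made-up marginal costs (illustrative numbers only, not any retailer's actual figures). With an optimal two-part tariff, the per-unit price is set at marginal cost and the fixed fee captures the entire consumer surplus at that price:

```python
def optimal_two_part_tariff(mc, intercept=20.0, slope=1.0):
    """Optimal two-part tariff for a consumer with linear demand
    P = intercept - slope*Q (hypothetical numbers for illustration).

    The per-unit price is set at marginal cost, and the fixed fee
    captures the entire consumer surplus at that price."""
    price = mc                                # per-unit price equals MC
    quantity = (intercept - price) / slope    # quantity demanded at that price
    fixed_fee = 0.5 * (intercept - price) * quantity  # consumer surplus triangle
    return price, quantity, fixed_fee

# Marginal cost falls from 8 to 5...
p_hi, q_hi, fee_hi = optimal_two_part_tariff(mc=8.0)
p_lo, q_lo, fee_lo = optimal_two_part_tariff(mc=5.0)

print(p_hi, fee_hi)  # per-unit price 8.0, fixed fee 72.0
print(p_lo, fee_lo)  # per-unit price 5.0, fixed fee 112.5
```

When marginal cost falls from 8 to 5, the optimal per-unit price falls with it, and the optimal fixed fee rises from 72 to 112.5 - exactly the pattern of a higher fixed charge paired with a lower per-unit charge.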

Now, back to the change in electricity prices. From the letter I received from Genesis in March:
...Your bill reflects a variety of costs, including generation and transmission costs, metering charges and our own business costs... We have recently reviewed our electricity prices in your area and from 6 April 2014 your electricity prices... will change. The weighting of the two components of your pricing plan has also changed. The daily fixed charge (the fixed cost of supplying energy to your property) and the variable charge (the cost of electricity you use) have been aligned to better reflect the cost structure used by your network company...
The pricing changes in the letter refer to a 76.9% increase in the Daily Fixed Charge, and decreases in the per-unit charge of between 7.7% and 15.3% (depending on pricing plan). Assuming that Genesis is pricing effectively, then the increase in fixed fee and reduction in the per-unit charge are consistent with a reducing marginal cost of electricity.

Of course, there are alternative explanations. First, perhaps Genesis weren't pricing in isolation but were engaged in a strategic pricing battle with other firms. There is competition among the different retail electricity suppliers, after all. Where firms set similar per-unit charges, they might compete in terms of the fixed fee, driving the fee downwards (until it barely covers the fixed costs of transmission of electricity, maintenance of lines, etc.). This lowers profits. However, competitive pressure seems unlikely as an explanation. For the fixed fee to rise, that would suggest that price competition between the firms has decreased. I would argue that competition is no less now than it was before (in fact the market might be more competitive now due to the likes of PowerSwitch) - so why raise the Daily Fixed Charge now?

Second, perhaps Genesis deliberately under-priced the potential fixed fee (but not for competitive reasons). One potential problem with two-part pricing is that if you have heterogeneous demand and you set the fixed fee too high, low-demand customers opt out of purchasing from you. So, maybe Genesis intentionally under-estimated the fixed fee in order not to drive customers away. However, again this seems unlikely. They've suddenly raised the Daily Fixed Charge now - did they hugely misjudge the potential fixed fee before, and have they only now realised that customers wouldn't leave in droves if they raised the fee?

Third, electricity transmission prices are regulated in New Zealand, and electricity transmission charges are fixed costs that make up part of the Daily Fixed Charge. So maybe the Commerce Commission allowed a substantial increase in transmission charges recently? I can't see anything on the Commerce Commission website to suggest they have made a recent change.

So, perhaps that just leaves lower marginal costs of production as an explanation?

One last point: Why aren't the Greens jumping on this change in pricing plans? Surely Genesis isn't alone in making this change (see the point on competitive pressures above)? This change in pricing strategy may have a negative environmental impact. With lower marginal costs, the per-unit price falls and consumers purchase more. New Zealand's marginal electricity production is thermal generation (as I understand it, hydro, geothermal, wind, etc. are "always on", so if additional peak load generation is needed it is coal/gas generation). So, if retail electricity consumers are purchasing more electricity, doesn't that mean more carbon emissions? Having said that, residential consumers make up only about a third of electricity demand in New Zealand (see here - a really useful primer on the electricity sector in New Zealand), so maybe it's not a big deal?

Sunday 11 May 2014

Pricing at movie theatres still defies logic. Or does it?

This week in ECON100 we are covering pricing strategy. As part of preparing for lectures, I've been going back over some material I have collected over the years, including this article from the International Review of Law and Economics by Barak Orbach (University of Arizona) and Liran Einav (Stanford) (ungated earlier version here).

The Orbach and Einav article is interesting because it lays out two interesting puzzles in the pricing of movies at movie theatres. Both puzzles relate to why movie theatres charge the same price for different movies (price uniformity), but they relate to different dimensions of the puzzle:
1. The movie puzzle refers to price uniformity across movies that run at the same time. Namely, the situation of two movies that are playing simultaneously at the same theater and are priced uniformly, even when one movie has just been released, is much more popular, or occupies the screen for more time.
2. The show-time puzzle refers to the lack of price differentiation between weekdays and weekends or across seasons. That is, price uniformity across show times (with the prime exception of matinees). 
Orbach and Einav identify and dismiss a number of possible explanations of the puzzle from behavioural economics and transaction cost economics, and I think it's worth exploring these briefly:

Perceived fairness: If consumers believe that price changes are in some way unfair, they will react negatively towards the good. Changes in price that relate to changes in cost are often perceived as fair, whereas changes in price that relate to changes in demand are more likely to be perceived as unfair profit taking on the part of the firm. The counter-argument recognises that consumers are not fully rational, and value gains more than losses. If movie theatres framed the price differential as a discount for low-demand movies, rather than a surcharge on high-demand movies, it is unlikely that consumers would perceive it as unfair.

Unstable demand: Consumers might perceive variable pricing as a signal of quality. So, lowering price might actually decrease the number of tickets sold rather than increase them. However, while this might be the case when first introduced, consumers would soon adapt to movies becoming cheaper as they age and not take this as a signal of quality.

Demand uncertainty: Because the appeal of certain movies is unknown, it is not possible to price them prior to their release. Of course, the obvious counterargument is that demand is somewhat known - movie theatres know whether or not new releases should be shown across multiple screens and a large number of screening times, for instance.

Menu and monitoring costs: There are explicit costs associated with changing prices (menu costs), but with the advent of large LCD screens that show movie prices these costs are minimal. There is also a chance that variable pricing might confuse consumers and deter them from going to the movies (lest they arrive and find that the movie they wanted to see is much more expensive than they anticipated). However, as most consumers plan ahead by checking the screening time online, showing price information alongside screening time would reduce the confusion. There are also costs associated with making sure that consumers who purchased tickets for low-price movies don't sneak into a high-price movie instead (monitoring costs). However, the additional profit from variable pricing would likely more than offset the added monitoring cost (since multiplex theatres must already monitor their moviegoers to ensure that they don't leave one movie just finishing and sneak into another just starting).

Overall, the paper concludes that:
The practice [uniform pricing] seems to persist partially due to misconceptions of exhibitors and partially due to distributors' enforcement of uniform pricing. While distributors are not allowed to intervene in box-office pricing, occasionally they enforce uniform pricing by refusing to deal with exhibitors that wish to switch to variable pricing.
So, essentially it comes down to the market power of movie distributors to force the exhibitors to price in a certain way. The distributors have market power because there are few substitutes for the summer blockbuster movie, and if the movie theatres choose not to price as the distributors want them to for the low-demand movies, they might miss out on the high-demand blockbusters.

Anyway, the paper was originally written in 2001 (and finally published in 2007!), but surprisingly little has changed in the meantime. I took a look at a couple of random multiplex cinemas in the U.S. (here and here) to double-check what I was talking about. There is some move towards variable pricing across time slots (weekdays and matinees are cheaper than weekends and evening showtimes), and price discrimination still reigns (lower prices for students and seniors who have more elastic demand for movie tickets), but no variation across movies. This is the same situation we see in New Zealand (see here or here). Premium seating options are increasingly common, and there are price differences between 2D and 3D movies, but within seat and movie types there is uniform pricing.

Or is there? I'm not convinced that there is no variable pricing at play here. It might be limited by the market power of the distributors, but consider these two points.

First, consumers are typically unable to use complimentary passes at new movies. However, the length of time a new movie remains "no complimentaries" varies by movie. A summer blockbuster might remain "no complimentaries" for a few weeks, whereas a B-grade horror flick might only be "no complimentaries" on opening night, or not at all. You might argue though that the price is the same, whether the consumer is using a ticket purchased on the day or a complimentary ticket which was purchased earlier. However, complimentary passes are usually distributed in bulk to third parties for less than the full ticket price. The effect of this is that the average price paid by consumers for a movie varies by movie. It's probably not a large difference (although if the movie theatres didn't enforce this rule, I, along with many others, would certainly be saving any free passes for the high-demand movies), but it will have some effect on the average price received by the movie theatre, even though their posted price doesn't change.
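A quick back-of-the-envelope calculation illustrates the point (all the numbers here are invented for illustration). If free passes are sold in bulk to third parties at a discount, the average price the theatre receives per admission is lower for movies that accept them:

```python
def average_price_received(posted_price, admissions, comp_share, comp_price):
    """Average revenue per admission when a share of admissions use
    passes that were sold in bulk at comp_price (hypothetical numbers)."""
    full_price_tickets = admissions * (1 - comp_share)
    comp_tickets = admissions * comp_share
    revenue = full_price_tickets * posted_price + comp_tickets * comp_price
    return revenue / admissions

# Same $15 posted price for both movies, but the blockbuster accepts no passes
blockbuster = average_price_received(15.0, 1000, comp_share=0.0, comp_price=9.0)
b_grade = average_price_received(15.0, 1000, comp_share=0.2, comp_price=9.0)
print(blockbuster, b_grade)  # 15.0 vs 13.8
```

The posted price is identical, but the theatre effectively receives a lower average price for the low-demand movie - variable pricing by stealth.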

Second, movie theatres are very deliberate in their selection of session times for movies. The low-demand movies are more likely to be playing at low-demand times (Monday afternoon, etc.), when ticket prices are lower. So again, this will affect the average price the movie theatres receive for each ticket sold: low-demand movies (being more likely to be in low-price session times) will have a lower average price than high-demand movies.

So, overall while variable pricing is not observed explicitly in the market, I would argue that there is at least some variable pricing at play through the non-price strategies undertaken by the movie theatres.

Thursday 8 May 2014

Drinking behaviour, drink driving, and more on the drinking age

I found this article on texting yourself to moderate your drinking behaviour interesting. From the article:
A University of Auckland researcher is about to begin a full research trial where the participants will get a text message like this, written by themselves, to remind them not to drink too much...
"My premise is most of us are reasonably intelligent people. So why don't we tap into that and allow people to create their own messages."
The premise is that people compose text messages to themselves that they will receive later in the night, reminding them not to over-indulge, and that messages from themselves are more likely to be successful than messages from others. Karen Renner (a PhD candidate at the University of Auckland) claimed that her initial study showed a 23 percent reduction in alcohol-related harms. I can't find any published study thus far, but if you're interested her research protocol is recorded here. It appears she is using the YAAPST (Young Adult Alcohol Problem Severity Test), which "is a sensitive measure for mild alcohol-related consequences, such as hangover, feeling sick, being late for work/school, etc." Reducing hangovers by 23 percent is a useful outcome. I look forward to the results of the wider study, which you can join as a participant here.

However, while thinking about this study it is worth noting that even as "reasonably intelligent" people we are notoriously bad at self-control. If we were rational decision-makers, we would realise that drinking too much increases the risk of getting ourselves (or others) into trouble. That's why behavioural economists advocate for pre-commitments - prior (and irreversible) actions that commit us to a certain course of action. Dan Ariely discusses a few pre-commitments here (wearing the granniest pair of granny underwear to ensure you won't bed a guy on the first date, priceless).

I wonder whether a text message to yourself constitutes a large enough pre-commitment to modify your behaviour. There is little cost to you of ignoring a text message, no matter how coarse the language you use. In other words, it's not binding (see here for other examples). An interesting alternative might be: if you're not home by some self-imposed curfew, your home computer gives money to charity (unless you're home to stop it!), or to your ex or some other person you'd rather not give money to. There's got to be an opportunity for a new app in there (and now I've posted the suggestion, if you develop one you ought to cut me in!).

Speaking of drinking behaviour, there is a recent paper in the Journal of Health Economics by Frank Sloan, Lindsey Eldred and Yanzhi Xu (all of Duke University), which looks at the behavioural economics of drink driving (gated, and I can't see an ungated version anywhere online). Using survey data from the U.S., they investigated a number of questions about the behaviour of drink drivers, including:

Question: "Does the cognitive ability of persons who report they drank and drove in the past year differ from those who did not? Perhaps drinking and driving is a byproduct of cognitive deficits."


Answer: Possibly not. Drink drivers differed in cognitive ability from non-drink-drivers in only one of their three measures of cognitive ability (self-reported memory).

Question: "Is such behavior attributable to lack of knowledge of DWI laws? One reason for lack of knowledge is that the cost of acquiring the requisite information may be higher for some individuals in part because of lower cognitive ability."

Answer: Drink drivers actually have higher knowledge of the DWI laws than non-drink-drivers, so lack of knowledge doesn't explain drink driving.

Question: "Do drinker drivers lack self-control, as indicated by a lower propensity to plan for the future and by greater overall impulsivity?"

Answer: Yes. Drink drivers are more impulsive, and less prone to plan events involving drinking (such as selecting a designated driver in advance). They also find that drink drivers have higher rates of time preference (they place higher values on the present relative to the future than others), and some evidence of time inconsistency and hyperbolic discounting.

One last bit of interest I noted from the article was this:
According to our survey findings, the probability of arrest for DWI, conditional on driving after having had too much to drink is 0.008. Considering the probability of prosecution and conviction for DWI, the probability of a DWI conviction given a drinking and driving episode is about 0.006.
Given that low probability of receiving a penalty, even a rational decision-maker might find that the expected costs of drink-driving (the penalty for drink-driving multiplied by the low probability of being caught and penalised) are lower than the benefits. So we need not assume that drink-drivers are irrational in order to explain their behaviour.
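The expected-cost arithmetic is simple enough to sketch (only the 0.006 conviction probability comes from the paper; the fine below is a hypothetical figure):

```python
def expected_penalty(fine, p_conviction):
    """Expected cost of one drink-driving episode: the penalty if
    convicted, weighted by the probability of conviction."""
    return fine * p_conviction

# The paper reports P(conviction | drink-driving episode) of about 0.006.
# A hypothetical $1,000 fine then carries an expected cost of only
# about $6 per episode.
print(expected_penalty(fine=1000.0, p_conviction=0.006))
```

If the perceived benefit of driving home (avoiding a taxi fare, not having to retrieve the car the next day, and so on) exceeds that small expected cost, even a purely rational decision-maker would drive.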

In terms of reducing drink driving, while pre-commitment might seem attractive as a solution (since drink drivers are more likely to lack self-control), it might not work well since drink drivers are less likely to plan ahead and create a pre-commitment not to drink-drive. Forcing the pre-commitment onto recidivist drink drivers (like ignition interlocks, and related driver licensing changes) would seem like an appropriate intervention based on these results.

On a somewhat related note, the week before last I posted a comparison of two studies examining the effects of the change in the drinking age on hospitalisations in New Zealand. Just days after that post, a new paper by Taisia Huckle and Karl Parker (both of Massey University) looking at the effect of the change in the drinking age on alcohol-involved crashes was released by the American Journal of Public Health (ungated PDF version here). Here's the abstract:
Objectives. We assessed the long-term effect of lowering the minimum purchase age for alcohol from age 20 to age 18 years on alcohol-involved crashes in New Zealand.
Methods. We modeled ratios of drivers in alcohol-involved crashes to drivers in non-alcohol-involved crashes by age group in 3 time periods using logistic regression, controlling for gender and adjusting for multiple comparisons.
Results. Before the law change, drivers aged 18 to 19 and 20 to 24 years had similar odds of an alcohol-involved crash (P = .1). Directly following the law change, drivers aged 18 to 19 years had a 15% higher odds of being in an alcohol-involved crash than did drivers aged 20 to 24 years (P = .038). In the long term, drivers aged 18 to 19 years had 21% higher odds of an alcohol-involved crash than did the age control group (P ≤ .001). We found no effects for fatal alcohol-involved crashes alone and no trickle-down effects for the youngest group.
Conclusions. Lowering the purchase age for alcohol was associated with a long-term impact on alcohol-involved crashes among drivers aged 18 to 19 years. Raising the minimum purchase age for alcohol would be appropriate.
The paper was covered by the NZ Herald here. Some of the problems with the paper have been discussed by Thomas Lumley at StatsChat and by Eric Crampton at Offsetting Behaviour, so I won't bother going over the same ground again.

Monday 5 May 2014

Gary Becker, 1930-2014

Sad news this week with the passing of Nobel laureate Gary Becker (aged 83). Becker's work has influenced generations of economists, and while I don't necessarily agree with his libertarian views on all topics, I have very much enjoyed reading his work, especially "A Treatise on the Family". Becker was one of the pioneers of applying economics to a range of social issues, and those of you familiar with my research will recognise that in a lot of senses it follows the pathway blazed by Becker and others.

His Nobel lecture is highly recommended reading for my ECON110 class in the first week, and underpins a lot of the work we do in that class.

The University of Chicago website has a wonderful obituary, full of tributes from his colleagues and friends. I especially like this quote from Kevin Murphy:
Gary was an inspiration to several generations of Chicago students - instilling in them the love for economics that he lived and breathed.
Economics teachers and university researchers should all aspire to this. The world of economics has lost one of its greatest contributors, and we are all a little poorer for that.

You can read more about Gary Becker and his contributions to economics here [HT: Greg Mankiw]. Marginal Revolution has links to "Some neglected Gary Becker open access pieces", which are well worth a read too.

Sunday 4 May 2014

Getting around the problems of natural monopoly

This week in ECON100 we are covering monopoly markets and pricing with market power. As part of this, we talk about natural monopoly, which can be quite tricky so I thought I would blog on it.

Natural monopoly doesn't have anything to do with nature. A natural monopoly arises where a single producer of a product is so much more efficient (by efficient I mean they produce at lower cost) than multiple suppliers would be that new entrants into the market would find it difficult, if not impossible, to compete. It is this cost advantage that creates a barrier to entry for other firms, and leads to a monopoly.

Natural monopolies typically arise where there are large economies of scale. Economies of scale occur when, as a firm produces more of a product, their average costs of production fall. Economies of scale aren't uncommon (here's a bunch of examples), but for natural monopoly to arise, the economies have to be large. This happens when there is a very large up-front (fixed) cost of production, and the marginal costs (the cost of supplying an additional unit of the product) are small. When this is the case, the average cost (AC) and marginal cost (MC) curves look something like this:


Examples of industries with this type of cost structure include anything with a large up-front infrastructure cost (water supply, electricity generation and supply, telecommunications, rail, roads, mail, etc.), but potentially lots of IT-based industries as well (search engines, instant messaging, social networks, etc.). This Ars Technica article from last month provides a good example from the U.S., in the form of Internet Service Providers (ISPs). According to the article:
A new fiber provider needs a slew of government permits and construction crews to bring fiber to homes and businesses. It needs to buy Internet capacity from transit providers to connect customers to the rest of the Internet. It probably needs investors who are willing to wait years for a profit because the up-front capital costs are huge. If the new entrant can't take a sizable chunk of customers away from the area's incumbent Internet provider, it may never recover the initial costs. And if the newcomer is a real threat to the incumbent, it might need an army of lawyers to fend off frivolous lawsuits designed to put it out of business.
So, here is the problem. There is a large up-front cost associated with setting up an ISP (or expanding your ISP into a new location), because the would-be ISP needs government permits (expensive) and needs to lay the cables necessary to connect their customers (very expensive - the article estimates that Google spent US$84 million to build a fibre network that went past 149,000 homes in Kansas City, without including the costs of actually connecting any of the homes, an average cost of around $560 per home if all of them connected). All of these costs need to be paid before even one customer can be connected to the ISP. Add onto that the cost of defending lawsuits, and that is a pretty big up-front (fixed) cost.

If the new entrant ISP has only a small subscriber base, this fixed cost is spread over only a small number of customers, meaning that their average costs will be very high. Compare that with an incumbent ISP which already has many subscribers, spreading their fixed cost over a large number of customers and leading to lower average costs. The incumbent ISP could easily keep prices at a point where the new entrant ISP would be making a loss, eventually running out of money and closing down.
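The cost arithmetic behind that argument can be sketched quickly (the $84 million figure echoes the Google example above, but the subscriber counts and marginal cost are invented for illustration):

```python
def average_cost(fixed_cost, marginal_cost, subscribers):
    """Average cost per subscriber when a large fixed (infrastructure)
    cost is spread over the subscriber base, assuming constant marginal cost."""
    return fixed_cost / subscribers + marginal_cost

# Hypothetical numbers: an $84m network build, $10 marginal cost per subscriber
incumbent = average_cost(84e6, 10.0, subscribers=100_000)
entrant = average_cost(84e6, 10.0, subscribers=5_000)
print(incumbent)  # 850.0 per subscriber
print(entrant)    # 16810.0 per subscriber
```

The incumbent can set any price above its own (much lower) average cost and still leave the small entrant making a loss on every subscriber.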

Of course, you might note that predatory pricing is illegal (including in New Zealand). However, the incumbent ISP wouldn't need to engage in predatory pricing here. The price they set would not need to be artificially low (i.e. below cost), in order to force the new entrant out. The incumbent ISP could continue to make a profit (albeit a smaller profit than before) while the new entrant ISP makes losses.

So, we are probably left with a single (natural monopoly) firm serving the market. The main problem with monopoly providers is that they charge a high price (they are profit maximising, so they produce the quantity where MR=MC, i.e. QM on the diagram below, and sell it at the price PM), which limits quantity below the socially-efficient quantity of the product (where AR=MC, i.e. at QS on the diagram below, with the price PS).



How can government solve the problem of natural monopoly then?

One potential solution is to regulate price. By introducing a price ceiling (a legal maximum price), you can force the monopoly to charge a lower price than they otherwise would (ultimately you could set the price ceiling at PS), which increases welfare. The first problem is that if you set the price ceiling at the price that maximises welfare (PS), then the natural monopoly will make a loss (since PS is less than the firm's average costs CS) and may choose to shut down (leaving you with no market for the good at all). An alternative is to set the price ceiling high enough that the firm makes no loss (price ceiling of at least CS) - this won't maximise welfare, but it will increase welfare above the monopoly pricing situation. However, even if the natural monopoly makes no profit or a small profit, there are still problems with regulating price. The main problem is that it is pretty inflexible and won't readily adjust to changes in the market. For instance, if demand increases substantially, we would expect price to increase but in this case it is held low by the price ceiling. This would lead to under-investment in new capacity by the monopoly, and decreases in service quality, etc.
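A small numeric example (with made-up demand and cost figures) shows the problem with setting the ceiling at the welfare-maximising price: because average cost is always above marginal cost for a natural monopoly, marginal-cost pricing guarantees a loss equal to the fixed cost.

```python
def profit_at_price(price, fixed_cost, marginal_cost, intercept, slope):
    """Profit for a natural monopoly facing linear demand
    Q = (intercept - price) / slope, with a large fixed cost and a
    constant (low) marginal cost (hypothetical figures)."""
    quantity = (intercept - price) / slope
    revenue = price * quantity
    total_cost = fixed_cost + marginal_cost * quantity
    return revenue - total_cost

# Ceiling at marginal cost (the welfare-maximising price PS): a loss
print(profit_at_price(price=5.0, fixed_cost=1000.0, marginal_cost=5.0,
                      intercept=105.0, slope=1.0))   # -1000.0

# A higher ceiling (at or above average cost) lets the firm break even or profit
print(profit_at_price(price=20.0, fixed_cost=1000.0, marginal_cost=5.0,
                      intercept=105.0, slope=1.0))   # 275.0
```

At the marginal-cost price, the per-unit margin is zero, so the fixed cost is never recovered; only a ceiling at or above average cost keeps the firm in business.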

A second solution is for the government to own the natural monopoly itself. Government ownership of natural monopolies is common in many countries (think about rail, telecommunications, water supply, electricity, etc.). That way, the government can charge whatever price it wants, and can ensure that economic welfare is increased (again, at the expense of profits). An advantage of government ownership is that the government can usually borrow more cheaply than private firms, which means that the costs of paying for the large infrastructure investment to set up the natural monopoly firm (in the case of utilities, for example) are lower. The government also doesn't need to worry about collateral - consider this bit from the Ars Technica article:
"One of the really terrible things about being in this business is the infrastructure you're building is very expensive, but it has no collateral value for the bank," Montgomery also said. "If I put $1 million of fiber in the ground and go to the bank, it'll say, 'ok I'm going to need a million dollars' worth of collateral. The fiber isn't worth anything to us, so you're going to have to cough up something else, gold bars, cash, something.'"
However, government ownership is generally less efficient than private ownership. Private firms must answer to shareholders, who expect profits. So private firms have incentives to seek gains in efficiency. Government-owned firms have weaker incentives to seek gains in efficiency (this is a type of X-inefficiency). So, in the long run government ownership may actually make society worse off, and the best option may be for government to set up the natural monopoly (and take advantage of the low borrowing costs), then privatise it.

A third solution is to find some way of ensuring your market has the benefits of competition (so firms will compete on price, service quality, etc.), without the necessity of firms incurring multiple instances of the fixed costs. One way of achieving this (in the case of utilities) is to share the infrastructure between many firms. Essentially, you separate the competitive parts of the business (usually retail) from the natural monopoly parts (usually the infrastructure itself). For instance, with ISPs you have one set of fibre linking to all houses in an area but the fibre is not owned by any of the competing firms. Instead, the fibre network may be owned by the government, or by a single firm regulated by the government, and access to the network is provided to all ISPs on the same cost basis. This gives none of the firms a cost advantage over any other, and all contribute to the costs of network maintenance, etc.

Of course, this last solution is not without its problems. The natural monopoly still exists (in the form of the firm that owns the network infrastructure itself), and will need to be regulated using one of the previous two options. However, consumers are to some extent insulated from the activities of the natural monopoly, and benefit from competition in the marketplace for the product - lower prices, better service quality, and so on. This last solution is essentially what we have in place in the electricity market in New Zealand, and in the roll-out of ultra-fast broadband.