Sunday, 30 July 2023

What Google Maps thinks of the value of my time

For some reason, driving to Tauranga seems to be fairly inspirational for my blog of late (see here). This week, while driving to Tauranga, I noted that Google Maps was recommending that the fastest route to my destination was to take the Takitimu Drive toll road. It would save four minutes. But wait! It would cost me a toll of $2.10. Is it worth it to pay $2.10 in order to save four minutes?

There is a trade-off here - time vs. money. I should have taken the toll road if the value of the time savings was at least as great as the monetary cost of $2.10. How high does the value of my time have to be in order for the time to be more valuable than that? If four minutes were worth exactly $2.10, then that implies a per-hour value of time of $31.50 ($2.10/4*60). If the value of my time is more than that (which it likely is, based on the hourly rate I get paid in my day job and a 40-hour workweek), then I should have taken the toll road.

I didn't take the toll road. Was I being irrational? First, it turns out that my regular route didn't have as much traffic as Google Maps thought, so it's unlikely that there were four minutes of savings to be had by taking the toll road (I had guessed as much, which is why I didn't go that way). Second, paying the toll requires me to actually remember. If a driver forgets to pay their toll (which is pretty normal for me), then Waka Kotahi sends them a demand for payment of $7.50. If the cost of the toll road was $7.50 instead of $2.10, then my time would have to be worth at least $112.50 ($7.50/4*60) per hour for the toll road to be worthwhile. That's far more than the pay rate in my day job. Taking into account my regular failure to pay the toll immediately, I probably made the right decision.
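
The break-even arithmetic above is easy to check with a few lines of Python (the toll amounts and the four-minute saving are as quoted in the post):

```python
# Break-even hourly value of time: the hourly rate at which a toll
# exactly pays for itself, given the minutes of driving time it saves
def break_even_hourly_value(toll: float, minutes_saved: float) -> float:
    return toll / minutes_saved * 60

print(break_even_hourly_value(2.10, 4))   # toll paid on time: $31.50 per hour
print(break_even_hourly_value(7.50, 4))   # forgotten toll demand: $112.50 per hour
```

Anyone whose time is worth more than the first number (but less than the second) should take the toll road only if they are confident they will remember to pay.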

However, coming back to the point of this post, is Google Maps aware of the trade-offs in its route suggestions? Saving four minutes at a cost of $2.10 would have been a good deal for me, but not for someone who earns the minimum wage ($22.70 per hour, before tax). When the trade-off is minutes for minutes, it is a simple matter of choosing the shortest route. But when the trade-off involves an actual monetary cost, we should probably be careful about what Google Maps is recommending.

Saturday, 29 July 2023

The economic costs and benefits of a 130kph speed limit in Germany

Germany currently has no speed limit on the autobahns. It is the only country in Europe not to have a speed limit on highways, which makes it a significant outlier. There is a recommended maximum speed of 130 kilometres per hour, but it is not a legal limit. Should there be a legal limit? This is an empirical question that economics can help with - would the benefits of a speed limit outweigh the costs?

That is the question addressed in this recent article by Stefan Gössling, Jessica Kees (both Linnaeus University), Todd Litman (Victoria Transport Policy Institute, Canada), and Andreas Humpe (Munich University of Applied Sciences), published in the journal Ecological Economics (ungated earlier version here). The particular speed limit they evaluate is 130 kilometres per hour, and the paper outlines in some detail the costs and benefits that are included (bearing in mind these are the costs and benefits of introducing a speed limit, where one does not already exist).

On the costs side, they consider time losses due to slower speeds, estimating a total private cost to German drivers of €1052.6 million. On the benefits side, they estimate fuel savings (€765.7 million) and fewer traffic jams (€0.1 million), for a total private benefit of €765.8 million. However, there are also social benefits, including supply chain costs avoided (€284.8 million), lower maintenance and infrastructure costs for roads (€247.6 million), decreased land fragmentation (€30.1 million), decreased air pollution from tire wear (€62.4 million), lower CO2 emissions (€292.5 million), decreased fuel subsidies (€37.6 million), and decreased vehicle accident costs (€283.0 million), for a total of €1238.0 million. So, while the private benefits of the introduction of the speed limit are lower than the private costs by €286.8 million, when the social benefits are included, the overall benefits outweigh the costs by €951.1 million.
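
Summing the itemised figures provides a quick check of the arithmetic (a sketch in Python; all values are the EUR millions quoted from the paper):

```python
# Cost-benefit arithmetic for a 130kph limit, using the itemised figures
# quoted from the paper (all values in EUR millions)
time_losses = 1052.6  # private cost to German drivers

private_benefits = {"fuel savings": 765.7, "fewer traffic jams": 0.1}
social_benefits = {
    "supply chain costs avoided": 284.8,
    "road maintenance and infrastructure": 247.6,
    "less land fragmentation": 30.1,
    "less air pollution from tire wear": 62.4,
    "lower CO2 emissions": 292.5,
    "lower fuel subsidies": 37.6,
    "fewer vehicle accidents": 283.0,
}

total_private = sum(private_benefits.values())  # 765.8
total_social = sum(social_benefits.values())    # 1238.0
net_benefit = total_private + total_social - time_losses
print(round(total_private, 1), round(total_social, 1), round(net_benefit, 1))
```

The net benefit works out to roughly €951 million (951.2 from these rounded items, against the paper's 951.1, a difference attributable to rounding of the underlying figures).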

However, it doesn't seem to me to be so clear-cut. A couple of the social benefits strike me as arguable. The supply chain costs (which are the costs associated with the supply chain for infrastructure and vehicles) may be double-counted with the value of the lower CO2 emissions, while the cost of land fragmentation may be included within the infrastructure costs for roads. It isn't clear. Nevertheless, excluding the potentially double-counted benefits would still leave the benefits greater than the costs. Even without a sensitivity analysis (which would test how sensitive the results are to changes in the underlying assumptions), the clear conclusion is that Germany should implement a speed limit on the autobahns.

The real question, though, is whether that speed limit should be 130kph, or some other speed. Unfortunately, Gössling et al. don't address that question at all (perhaps they have some follow-up work coming?). Earlier work (which I blogged about here) suggested that the optimal speed limit might be close to 55mph (or 89kph) in California, Oregon and Washington states in the US. The analysis there was similar (in terms of the costs and benefits considered), but the analysis was better because it considered the marginal impact of speed limit changes.

As I note in my ECONS102 class, the optimal quantity of something is best determined by thinking about marginal benefits and marginal costs. Hopefully we'll see some analysis along those lines for Germany in the future.

Thursday, 27 July 2023

Book review: The Narrow Corridor

My last book review was of Daron Acemoglu and James Robinson's 2012 book Why Nations Fail. It's taken me a while (with many interruptions), but I recently finished reading their 2019 book The Narrow Corridor. I described it briefly in that earlier review as a follow-up to Why Nations Fail, but that may be somewhat unfair. The Narrow Corridor takes a slightly different direction, asking the question of how and why human societies have achieved liberty, which Acemoglu and Robinson define by quoting John Locke:

...perfect freedom to order their actions and dispose of their possessions and persons, as they think fit... without asking leave, or depending upon the will of any other man.

In Acemoglu and Robinson's view, societies are able to achieve liberty for their population when they operate within the 'narrow corridor', which is:

Squeezed between the fear and repression wrought by despotic states and the violence and lawlessness that emerge in their absence...

The book rests on several metaphors, of which the narrow corridor is one. Another key metaphor that Acemoglu and Robinson employ dates back to Thomas Hobbes: Leviathan. Hobbes used the metaphor of Leviathan (which comes from the Book of Job in the Bible) to represent a social contract between the people and an absolute sovereign, who would protect the population from lives that would otherwise be "solitary, poor, nasty, brutish, and short". This was one of the origins of modern social contract theory, which we now use to explain the existence of government more generally, rather than the population's acquiescence to an absolute sovereign per se.

Acemoglu and Robinson adopt the metaphor of Leviathan, but give it even greater nuance. They distinguish between a 'Despotic Leviathan', where the power of the state dominates society, and an 'Absent Leviathan', where the power of society dominates a weak state. In between these two, within the narrow corridor, lies the 'Shackled Leviathan', where the power of the state, and the power of society, are more balanced. It is within the narrow corridor where the greatest liberty for the population is to be found. Later in the book, they introduce a fourth category, the 'Paper Leviathan', where both state power and societal power are weak, leading to a state that is "despotic, repressive, and arbitrary" - the worst characteristics of both the Despotic Leviathan and the Absent Leviathan.

The third metaphor that Acemoglu and Robinson employ is what they term the 'Red Queen Effect', which occurs when society and the state each push the other along, each balancing out the worst tendencies of the other, as the society accelerates along the narrow corridor. I found this metaphor somewhat strained and not especially helpful, even though it is broadly similar to the 'red queen hypothesis' in evolutionary biology (which I blogged about here). Moreover, later in the book Acemoglu and Robinson talk about Red Queen Effects getting out of control, which I think breaks the metaphor.

Much of the book is given over to numerous examples of states in each category, as well as states moving between categories. This material is heavy on description, with narratives that seem to fit the overall framework well, as you would expect. However, as in their earlier book, I was left with questions of 'how'. How can a society shackle a Despotic Leviathan, or encourage the Absent Leviathan to return, and move itself into the narrow corridor? This is where the book is lightest, and where future work is most desperately needed.

Given the focus of Why Nations Fail on the role of institutions, it was somewhat surprising that institutions did not feature more strongly in this book. However, the biggest lacuna to me was the absence of social capital. Having recently read Robert Putnam's Bowling Alone (which I reviewed here), it is difficult to ignore the key role that social capital must play in ensuring that society and the state are both kept in balance. It is not until nearly the end of the final chapter that Acemoglu and Robinson mention the mobilisation of society and the "ability and willingness of the population to organise and form associations outside the government", which is one expression of social capital. A broader consideration of the role of social capital in the processes that Acemoglu and Robinson discuss would be a welcome addition.

Nevertheless, despite this omission, I was glad to have read the book, especially having just read Why Nations Fail last month. While I feel like we still don't have a fully formed picture here, I think that Acemoglu and Robinson have helped us a little way along our own narrow corridors.

Wednesday, 26 July 2023

Unnatural deaths and apartment prices

If you were looking to buy a house, and you found out that someone had recently died an unnatural death in the house, would that affect what you are willing to pay for the house? I suspect it would for many people, being a negative characteristic of the house. Based on hedonic demand theory, which my ECONS102 class briefly covered this week, negative characteristics reduce the overall price of the good. 

How does that work? Hedonic demand theory recognises that when you buy some (or most?) goods, you aren't so much buying a single item as buying a bundle of characteristics, each of which has value. The value of the whole product is the sum of the value of the characteristics that make it up. For example, when you buy a house, you are buying its characteristics (number of bedrooms, number of bathrooms, floor area, land area, location, etc.). You are also buying the characteristic of whether the house has had a recent unnatural death, or not. When you bundle all of the characteristics' values together, you get the value of the house.
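
To make the idea concrete, a hedonic price function can be sketched as a simple linear sum of characteristic values. All of the dollar figures below are invented for illustration; they are not estimates from the article:

```python
# Hypothetical linear hedonic price function: the house price is the sum of
# the values of its characteristics (all dollar values invented for illustration)
characteristic_values = {
    "bedroom": 90_000,                    # value per bedroom
    "bathroom": 40_000,                   # value per bathroom
    "floor_area_m2": 1_500,               # value per square metre
    "recent_unnatural_death": -150_000,   # a negative characteristic
}

def hedonic_price(house: dict) -> float:
    # Sum the value of each characteristic the house has
    return sum(characteristic_values[k] * qty for k, qty in house.items())

ordinary_house = {"bedroom": 3, "bathroom": 2, "floor_area_m2": 120}
haunted_house = {**ordinary_house, "recent_unnatural_death": 1}
print(hedonic_price(ordinary_house) - hedonic_price(haunted_house))  # 150000
```

The negative characteristic simply enters the sum with a negative value, lowering the overall price of an otherwise identical house.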

So, does an unnatural death really reduce house prices? That is the research question addressed in this 2018 article by Zheng Chang (City University of Hong Kong) and Jing Li (Singapore Management University), published in the journal Regional Science and Urban Economics (ungated earlier version here). They use housing unit (mostly apartments, I guess) sales data from Hong Kong housing estates between 2001 and 2015, and first note that:

Influenced by Taoism, traditional Chinese believe that people who died as a result of violence or unnatural events can become “ghosts” who can disturb successive occupants through various means... Housing units in which unnatural deaths have occurred are called “haunted units,” which are regarded as bad Feng Shui, and are unsuitable for habitation...

So, the expectation is that a recent unnatural death will reduce house prices. Comparing houses with and without a recent unnatural death (and controlling for housing unit characteristics), Chang and Li find that:

...housing values drop about 25% for units with deaths, 4.5% for other units on the same floor, 2.6% for other floor units in the same building, and 1% for units in other buildings of the same estate. However, the average house price of units in other estates within 300m increases 0.5%. For units with deaths, the price decline is sustained across the whole study period. The price impact on units in other geographic scopes follows a U shape and starts to reverse after 4–5 years of a death.

In other words, there is evidence for a sustained negative impact of an unnatural death on housing unit prices, as well as a shorter-term impact on surrounding housing units. Even having a housing unit on the same floor, or in the same building, as the unit where there was a recent unnatural death, is enough to lower the housing unit's price. Sometimes, superstition matters for consumer preferences.

Tuesday, 25 July 2023

Tornadoes and the markets for medicines

The New Zealand Herald reported last week:

The fallout from a Pfizer factory being damaged by a tornado could put even more pressure on already-strained drug supplies at US hospitals, experts say.

The tornado touched down near Rocky Mount, North Carolina, and ripped up the roof of a Pfizer factory that makes nearly 25 per cent of Pfizer’s sterile injectable medicines used in US hospitals, according to the drugmaker...

HOW WILL THIS AFFECT HOSPITAL DRUG SUPPLIES?

It will likely lead to some long-term shortages while Pfizer shifts production to other locations or rebuilds, said Erin Fox, senior pharmacy director at the University of Utah Health...

Hospitals also may switch to different forms of a drug by giving a patient an antibiotic pill instead of an IV if that person can handle it. If a larger vial size of a drug is more readily available, they may order that and then fill several syringes with smaller doses ready for use.

Since my ECONS102 class covered the model of supply and demand last week, this seems like a timely example to look at. First, let's consider the market for injectable medicines, as shown in the diagram below. Injectable medicine production has reduced due to the unavailability of the large Pfizer factory, so there has been a decrease in supply, shown by the supply curve shifting up and to the left, from S0 to S1. If injectable medicine prices were to remain at the original equilibrium price (P0), then the quantity of medicines demanded (Q0) would exceed the quantity of medicines supplied (QS) at that price, because injectable medicine producers are only willing to produce QS medicines at the price of P0, after the supply curve shifts. There would be a shortage of injectable medicines, as the article notes as a likely consequence.

However, the problem of the shortage could be solved, if the market was able to adjust. How would that work? Consider what happens when there is a shortage. Some buyers (hospitals, for example), who are willing to pay the market price (P0), are missing out on medicines. Some of them will find a willing seller, and offer the seller a little bit more, in order to avoid missing out. In other words, buyers bid up the price. The result is that the price increases, until the price is restored to equilibrium, at the new (higher) equilibrium price of P1. At the new equilibrium price of P1, the quantity of injectable medicines demanded is exactly equal to the quantity of injectable medicines supplied (both are equal to Q1). We can say that the market clears. There is no longer a shortage.
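
The same adjustment story can be traced through a pair of hypothetical linear demand and supply curves (the functional forms and numbers below are invented purely for illustration):

```python
# Hypothetical linear market for injectable medicines:
# demand Qd = 100 - 2P; supply Qs = -20 + 4P before the tornado,
# shifting left to Qs = -50 + 4P after the factory is knocked out
def demand(p): return 100 - 2 * p
def supply_before(p): return -20 + 4 * p
def supply_after(p): return -50 + 4 * p

# Original equilibrium P0: 100 - 2P = -20 + 4P  =>  P0 = 20, Q0 = 60
p0 = 120 / 6
q0 = demand(p0)

# At the old price P0, after the supply shift, quantity demanded
# exceeds quantity supplied (QS), so there is a shortage
shortage = demand(p0) - supply_after(p0)

# New equilibrium P1: 100 - 2P = -50 + 4P  =>  P1 = 25, Q1 = 50
p1 = 150 / 6
q1 = demand(p1)
print(p0, q0, shortage, p1, q1)  # 20.0 60.0 30.0 25.0 50.0
```

At the old equilibrium price there is a shortage of 30 units; once buyers bid the price up to the new equilibrium, the market clears again at a higher price and a lower quantity traded.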

However, that's not the only solution. Perhaps there are substitute goods that buyers can switch to. The shortage of injectable medicines, or an increase in the price of injectable medicines, both create incentives for buyers to switch. The article mentions switching to antibiotic pills instead of IV antibiotics. The effect that will have on the antibiotic pill market is shown in the diagram below. The demand for antibiotic pills increases from DA to DB, the price increases from PA to PB, and the quantity traded increases from QA to QB.

It seems like there is no good solution here. Either injectable medicines increase in price, or they become less available (or both!), and/or substitute medicines become more expensive as well. In a public healthcare system (like New Zealand's), it will be the government (taxpayers) that will foot the bill. In private healthcare systems (like the US), it will be patients (or insurers, which ultimately means patients, through higher insurance premiums) that pay more. The only winners are likely to be the producers of substitute medicines, who might see an increase in profits from the increasing demand for their products.

Monday, 24 July 2023

The Victorian government shows they can avoid the sunk cost fallacy

You may have seen the news last week. The Victorian state government in Australia has cancelled the 2026 Commonwealth Games. As Jack Anderson (University of Melbourne) wrote in The Conversation:

The cancellation of the 2026 Commonwealth Games by Victorian Premier Daniel Andrews took all stakeholders – Commonwealth Games officials, athletes, sports bodies and local government officials – by surprise.

The Andrews administration will likely deal with the political fallout from not honouring its contract to host the games, but there may be legal and reputational damage ahead.

The decision was a surprise, but not for the reason many people think. Once a government has decided to hold a big event, they will usually be loath to change their mind. Behavioural economics suggests that quasi-rational decision-makers are susceptible to the sunk cost fallacy. Sunk costs are costs that have already been incurred and that cannot be recovered, like the millions the Victorian government has already spent on the Commonwealth Games. Sunk costs should not affect decisions because, regardless of what the decision-maker chooses to do, those sunk costs have already been incurred. Any money that the government has already spent on the Games has already been spent, and will have been spent regardless of whether or not the Commonwealth Games goes ahead. So, at this point, the decision about whether to go ahead with the Games should be made on the basis of the costs and benefits that are still to come. Essentially, the Victorian government weighed up the billions of dollars they would face in the future against the benefits from hosting the Games. The costs must have outweighed the benefits.
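
That forward-looking comparison can be sketched in a few lines. All of the dollar figures below are hypothetical, chosen only to illustrate the decision rule, and are not the actual Games budget:

```python
# A forward-looking decision ignores sunk costs: compare only the
# FUTURE costs and benefits of each option (all figures hypothetical, $ millions)
sunk_cost = 380             # already spent; identical under either choice
future_cost_of_hosting = 6000
future_benefit_of_hosting = 3000
cancellation_cost = 400     # compensation, reputational damage, etc.

net_if_host = future_benefit_of_hosting - future_cost_of_hosting  # -3000
net_if_cancel = -cancellation_cost                                # -400
best = "cancel" if net_if_cancel > net_if_host else "host"
print(best)  # note that sunk_cost never enters the comparison
```

The point is that `sunk_cost` appears in neither option's payoff: whichever choice is made, it has already been spent, so it cannot tip the decision either way.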

The sunk cost fallacy typically occurs because of mental accounting, which suggests that we keep 'mental accounts' associated with different activities. We put all of the costs and benefits associated with the activity into that mental account, and when we stop that activity, we close the mental account associated with it. At that point, if the mental account has more costs in it than benefits, it counts as a loss. And because we are loss averse, we try to avoid closing the account. If the Victorian government were affected by mental accounting, they may have still gone ahead with the Games, trying their hardest to avoid banking a loss on the Games. Mental accounting is responsible for keeping us in unpromising projects for too long, as well as in unhappy relationships and bad jobs.

So, the Victorian government were not affected by mental accounting (just like Warner Bros, when they cancelled the release of the Batgirl movie). Even the prospect of bad publicity (of which there has been plenty, and which must have been anticipated) was not enough to dissuade them from the cancellation.

Sunday, 23 July 2023

Why indifference curves don't cross

This post follows up on the previous two (see here and here), by explaining why indifference curves don't cross each other. As noted in those earlier posts, indifference curves are the way that we represent the decision-maker's preferences in the constrained optimisation model. An indifference curve connects all of the bundles of goods that provide the decision-maker with the same amount of utility (satisfaction, or happiness).

We'll stick with using the consumer choice model as our illustrative model, and show what happens when indifference curves cross, as shown in the diagram below. The indifference curve I1 represents some higher level of utility than the indifference curve I0. So, Bundle C is better than Bundle A, because it lies on a higher indifference curve - Bundle C provides the consumer with more utility than Bundle A. Bundle C is also clearly better than Bundle A because Bundle C has the same amount of Good X, but more of Good Y. And, as we assumed earlier, more is always better than less.

The problem comes in when we compare Bundle A and Bundle C with Bundle B. Bundle A and Bundle B are just as good as each other. They are both on the indifference curve I0, so they must provide the consumer with the same amount of utility. Bundle B and Bundle C are just as good as each other. They are both on the indifference curve I1, so they must provide the consumer with the same amount of utility.

So, to summarise, Bundle A is just as good as Bundle B (same utility), and Bundle B is just as good as Bundle C (same utility). That means that Bundle A must be just as good as Bundle C. And yet, we started this example by saying that Bundle C is clearly better than Bundle A. Clearly this makes no sense. Bundle A and Bundle C cannot simultaneously provide the same utility when Bundle C is better than Bundle A.

That is because when indifference curves cross, it violates the assumption of transitivity of preferences. I prefer to say that it simply makes no sense. So, while indifference curves need not necessarily be parallel, they certainly can't cross each other.
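
The contradiction can even be verified by brute force: no assignment of utility numbers can satisfy all three statements in the example at once. A small sketch:

```python
# Search for utility numbers consistent with crossing indifference curves:
# u(A) == u(B), u(B) == u(C), and u(C) > u(A).
# Transitivity means no such assignment exists.
from itertools import product

found = False
for uA, uB, uC in product(range(1, 6), repeat=3):
    if uA == uB and uB == uC and uC > uA:
        found = True
print(found)  # False: the three conditions are mutually inconsistent
```

The search range is arbitrary; the inconsistency holds for any utility numbers, since u(A) = u(B) and u(B) = u(C) force u(A) = u(C).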

Saturday, 22 July 2023

Why indifference curves are curves (and when they are not)

Yesterday's post outlined how to construct an indifference curve. I left the question of why indifference curves are curves, and not straight lines, unanswered. So, I want to cover that off in this post.

As I noted in yesterday's post, indifference curves are the way that we represent the decision-maker's preferences in the constrained optimisation model. We'll stick with using the consumer choice model as our illustrative model. For the consumer, an indifference curve connects all of the bundles of goods that provide the consumer with the same amount of utility (satisfaction, or happiness).

To understand why indifference curves are curves, we first need to recognise that consumers' choices are subject to diminishing marginal utility. Marginal utility is the additional utility that the consumer receives from consuming one more unit of the good. Marginal utility decreases as the consumer consumes more of a good because of satiation (which, taken literally, means that the consumer gets full as they eat more). Consider the example of a consumer that is hungry and has a pizza in front of them. They eat a slice. It gives them some marginal utility (satisfaction). They eat another slice. The second slice gives them some more marginal utility, but not as much as the first slice (because they aren't as hungry anymore). They eat another slice. The third slice gives them some more marginal utility, but not as much as the first or second slice. And so on.

Now, how does that relate to the indifference curve? The slope of the indifference curve is known as the marginal rate of substitution. It is the quantity of one good (Good Y) that the consumer is willing to give up to get one more unit of the other good (Good X), and be just as satisfied (they would have the same utility afterwards). This marginal rate of substitution (MRS) is equal to the ratio of the marginal utilities, [-MUX/MUY], for reasons that I won't go into here (because it requires calculus - but if you are interested, scroll down towards the end of this blog post). The MRS is a negative number, because the indifference curve is downward sloping.

It turns out that it is diminishing marginal utility that leads the indifference curve to be a curve. Consider the indifference curve I0 shown in the diagram below. At the top of the curve, at Bundle A, the consumer has not much of Good X, but lots of Good Y. The marginal utility of Good X will be high (because the consumer doesn't have much, and diminishing marginal utility means that the marginal utility will be high when the consumer doesn't have much of a good). The marginal utility of Good Y will be low (because the consumer has a lot, and diminishing marginal utility means that the marginal utility will be low when the consumer has a lot of a good). So, the ratio [-MUX/MUY] will be a big number in absolute terms (because we are dividing a big number by a small number). The marginal rate of substitution is high, which means that the slope of the indifference curve is steep.

At the bottom of the curve, at Bundle B, the consumer has lots of Good X, but not much of Good Y. The marginal utility of Good X will be low (as explained above, but in reverse). The marginal utility of Good Y will be high. So, the ratio [-MUX/MUY] will be a small number in absolute terms (because we are dividing a small number by a big number). The marginal rate of substitution is low, which means that the slope of the indifference curve is flat.

Now think about moving along the indifference curve. It is a curve, and not a straight line, because as we move along the indifference curve, the marginal utility of Good X decreases, and the marginal utility of Good Y increases. So, the ratio [-MUX/MUY] becomes a smaller number (in absolute terms), which means that the marginal rate of substitution is smaller, and the indifference curve must therefore be a bit flatter. We start at bundles of goods (like Bundle A) where the indifference curve is steep, and end at bundles of goods (like Bundle B) where the indifference curve is flat.
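
The flattening can be verified numerically. The sketch below assumes a Cobb-Douglas utility function, U = sqrt(X*Y), which exhibits diminishing marginal utility in both goods; it is just one convenient example, not the only possibility:

```python
import math

# Assumed utility function with diminishing marginal utility in both goods
def utility(x, y):
    return math.sqrt(x * y)

def mrs(x, y):
    # Marginal rate of substitution, -MUx/MUy,
    # approximated with small finite differences
    h = 1e-6
    mux = (utility(x + h, y) - utility(x, y)) / h
    muy = (utility(x, y + h) - utility(x, y)) / h
    return -mux / muy

# Two bundles on the same indifference curve (U = 6, so X*Y = 36)
steep = mrs(2, 18)   # little X, lots of Y: slope is steep (about -9)
flat = mrs(18, 2)    # lots of X, little Y: slope is flat (about -1/9)
print(steep, flat)
```

Moving down the same indifference curve from (2, 18) to (18, 2), the slope goes from roughly -9 to roughly -1/9: steep where Good X is scarce, flat where it is abundant, exactly as described above.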

However, having established that diminishing marginal utility leads the indifference curve to be a curve and not a straight line, it turns out that the indifference curve is not always a curve. There are some special cases. One special case is perfect substitutes (which I have blogged about before here). Perfect substitutes are goods that, in the eyes of the consumer, are identical. The consumer is indifferent between them. One example is red M&Ms and blue M&Ms. When goods are perfect substitutes, the indifference curves are straight lines, as shown in the diagram below for red M&Ms and blue M&Ms. That's because, for every red M&M the consumer gives up, they would be willing to accept one blue M&M and remain just as satisfied (same utility). In other words, the marginal rate of substitution is constant (and equal to one, in this case). Because the marginal rate of substitution is constant, the indifference curve must always have the same slope, so it is a straight line.

Another special case is perfect complements. Perfect complements are goods that must be consumed together, in a specific ratio. One example is left shoes and right shoes. If the consumer has one pair of shoes, and is given more left shoes, their utility remains the same - they are no better off, because they still only have one pair of shoes. Similarly, if they are given more right shoes, their utility remains the same, because they still only have one pair of shoes. That leads to indifference curves that are right angles, as shown in the diagram below. The consumer's utility is determined by the number of complete pairs of shoes that they have.

Perfect substitutes and perfect complements are opposite extremes of what indifference curves may look like. In most cases, indifference curves are somewhere between the two extremes. And, if you think about it, in-between a right angle and a straight line is a curve, which is how we usually draw indifference curves.
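
Both special cases correspond to simple utility functions, for example U = X + Y for perfect substitutes and U = min(X, Y) for perfect complements. A minimal sketch:

```python
# Perfect substitutes: one-for-one trades leave utility unchanged,
# so indifference curves are straight lines
def u_substitutes(red_mms, blue_mms):
    return red_mms + blue_mms

# Perfect complements: only complete pairs matter,
# so indifference curves are right angles
def u_complements(left_shoes, right_shoes):
    return min(left_shoes, right_shoes)

# Giving up one red M&M for one blue M&M: same utility
print(u_substitutes(5, 3) == u_substitutes(4, 4))    # True

# Extra left shoes without matching right shoes add no utility
print(u_complements(2, 2) == u_complements(5, 2))    # True
```

The constant marginal rate of substitution falls straight out of the first function (its slope is always -1), and the right angle out of the second (extra units of one good alone never change the minimum).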

Friday, 21 July 2023

Constructing an indifference curve

When I was writing my previous post that applied the consumer choice model (otherwise known as the constrained optimisation model for the consumer), I noticed that in this earlier post applying the same model, I had promised a more detailed discussion of indifference curves. Some sixteen months on, it must be time for me to make good on that promise. So, here's the explanation that I have developed over a number of years, that I use to explain indifference curves in my ECONS101 class.

We'll start by limiting ourselves to one application of the constrained optimisation model - the model for consumer choices. Next, we need some assumptions. We'll assume that the goal of the consumer is to maximise their utility (their satisfaction, or happiness). We'll also assume that the consumer is only buying two goods, Good X and Good Y. And finally, we'll assume that more of each good is always better than less (so, having more of a good will always increase the utility of the consumer). [*]

We can represent the possible bundles of goods that the consumer might choose to consume in a diagram, as shown below. Consider one bundle of goods roughly in the centre of the diagram, Bundle B, which includes XB of Good X, and YB of Good Y.

Now, let's compare Bundle B with other bundles of goods that the consumer might choose. That is shown in the diagram below, by separating the diagram into four quadrants using dotted lines. Now, think about the comparison of Bundle B with bundles in those other quadrants. All of the bundles of goods that lie in the grey shaded quadrant up and to the right of Bundle B must be better than Bundle B (in the sense that they provide the consumer with more utility). That's because those bundles of goods either contain more of Good X, more of Good Y, or more of both goods. And more is always better (higher utility) than less. Next, all of the bundles of goods that lie in the grey shaded quadrant down and to the left of Bundle B must be worse than Bundle B (in the sense that they provide the consumer with less utility). That's because those bundles of goods either contain less of Good X, less of Good Y, or less of both goods. And because more is always better, less is always worse.

What about the other two quadrants? The bundles of goods in those quadrants are not obviously always better, or worse, than Bundle B. To get our head around those, let's draw a big circle around Bundle B, as shown in the next diagram below. Now, think about what happens in the comparison of bundles of goods, as we move anticlockwise around the circle, starting with Bundle D. Bundle D must be better than Bundle B (because Bundle D has more of Good X than Bundle B). That is also true of every bundle of goods as we move around the circle to Bundle E, which is also better than Bundle B (because Bundle E has more of Good Y than Bundle B). Then, we continue around the circle to Bundle F, which is worse than Bundle B (because Bundle F has less of Good X than Bundle B). Somewhere along the way between Bundle E and Bundle F, we moved from bundles of goods that are better than Bundle B to bundles of goods that are worse than Bundle B. So, somewhere along that part of the circle is a bundle of goods that is just as good as Bundle B (because it provides exactly the same amount of utility to the consumer as Bundle B does). Let's say that bundle is Bundle A.

Now, let's go back to our circle. Bundle F was worse than Bundle B. That is also true of every bundle of goods as we move around the circle to Bundle G, which is also worse than Bundle B (because Bundle G has less of Good Y than Bundle B). Then, we continue around the circle back to the start at Bundle D, which we recall is better than Bundle B. Somewhere along the way between Bundle G and Bundle D, we moved from bundles of goods that are worse than Bundle B to bundles of goods that are better than Bundle B. So, somewhere along that part of the circle is another bundle of goods that is just as good as Bundle B (because it provides exactly the same amount of utility to the consumer as Bundle B does). Let's say that bundle is Bundle C.

Now, we could repeat this exercise for other circles that are larger, or smaller, than the circle we drew above. And in every case, we would find that there are two bundles of goods on each of our new circles that are just as good as Bundle B - one bundle in the top left quadrant, and one bundle in the bottom right quadrant. If we then draw a curve that joins up all of those bundles of goods that are just as good as Bundle B (because they provide exactly the same amount of utility to the consumer as Bundle B does), we would have a curve that we call the indifference curve. That is shown in the diagram below, and is labelled I0.
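The circle argument can be checked numerically. Here's a minimal sketch, assuming a Cobb-Douglas utility function U(x, y) = x × y (my assumption - the argument only requires that more is always better). Walking anticlockwise around a circle centred on Bundle B, the utility level crosses U(B) exactly twice: once in the top-left quadrant (a bundle like Bundle A) and once in the bottom-right quadrant (a bundle like Bundle C).

```python
import math

def utility(x, y):
    # Assumed Cobb-Douglas utility: more of either good is better
    return x * y

bx, by = 4.0, 4.0   # Bundle B
uB = utility(bx, by)
r = 2.0             # radius of the circle around Bundle B

def u_on_circle(theta):
    return utility(bx + r * math.cos(theta), by + r * math.sin(theta))

# Walk anticlockwise around the circle; each sign change in
# u(theta) - u(B) marks a bundle that is 'just as good as' Bundle B.
crossings = []
steps = 3600
for i in range(steps):
    t0 = 2 * math.pi * i / steps
    t1 = 2 * math.pi * (i + 1) / steps
    if (u_on_circle(t0) - uB) * (u_on_circle(t1) - uB) < 0:
        lo, hi = t0, t1
        for _ in range(50):  # bisect for a precise crossing angle
            mid = (lo + hi) / 2
            if (u_on_circle(lo) - uB) * (u_on_circle(mid) - uB) <= 0:
                hi = mid
            else:
                lo = mid
        t = (lo + hi) / 2
        crossings.append((bx + r * math.cos(t), by + r * math.sin(t)))

print(len(crossings))  # 2: one bundle in the top-left quadrant, one in the bottom-right
for x, y in crossings:
    print(round(x, 3), round(y, 3))
```

Repeating this for circles of different radii, and joining up the crossing points, traces out the indifference curve through Bundle B.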

It's called an indifference curve because the consumer is indifferent between any of the bundles of goods on that curve - all of the bundles of goods on the curve provide the consumer with exactly the same amount of utility. If we gave the consumer the choice between Bundle A and Bundle B, they wouldn't care which one they were given - they are indifferent between those two options. Similarly, if we gave the consumer the choice between Bundle B and Bundle C, they would be indifferent. And if we gave the consumer the choice between Bundle A and Bundle C, they would be indifferent. And the same for any other bundle of goods on that indifference curve I0.

So, now you can see where we get indifference curves from. They are a necessary feature of all constrained optimisation models, not just the constrained optimisation model for the consumer. For example, in my ECONS101 class, we also briefly look at the constrained optimisation model of the worker, and the constrained optimisation model of the saver, and both of those feature indifference curves as well. Now, you may be wondering why we draw indifference curves as curves, rather than straight lines. I'll address that point in my next post.

*****

[*] If this assumption didn't hold, and having more of a good made a consumer worse off, it wouldn't be a good, it would be a bad. We can draw indifference curves for bads. The same principles apply, it's just that higher utility is not up and to the right anymore.

Wednesday, 19 July 2023

Fuel price increases revisited, and the Law of Demand

Last week, I posted about the incentive effects of a fuel price change, noting that:

When the price of petrol went up on 1 July this year, it created an incentive for people to consume less petrol.

Now that I've covered the consumer choice model (or the constrained optimisation model for the consumer) in my ECONS101 class, it's time to revisit what happens when the price of fuel goes up, from the perspective of a fuel consumer. I'm not going to go through the basics of the consumer choice model here though, as I did that in this post about kūmara prices last year, so refer to that for the basic setup of the model.

The consumer choice model for the fuel consumer is shown below. The consumer can choose to buy a bundle of goods that includes some quantity of fuel (measured along the x-axis) and some quantity of 'all other goods' (AOG; measured along the y-axis). The starting point for our consumer is shown in black. The straight line that runs from M/Pa to M/Px0 is the consumer's budget constraint when the price of fuel is low (Px0). The consumer is trying to get to the highest possible indifference curve, which is the indifference curve I0. The consumer buys the bundle of goods E0, which contains X0 fuel.

Once the price of fuel goes up (to Px1), the budget constraint pivots inwards (shown by the red budget constraint, which runs from M/Pa to M/Px1). The consumer can no longer buy the bundle of goods E0, because it lies outside the consumer's budget constraint (it is outside the consumer's feasible set). Now, the highest possible indifference curve that the consumer can reach is the red indifference curve I1. The consumer buys the bundle of goods E1, which contains X1 fuel.

So, this model shows that when the price of fuel increases (from Px0 to Px1), the consumer decreases the amount of fuel that they buy (from X0 to X1). That is the Law of Demand, which is one of the most robust findings in economics. And it is clear that the consumer choice model demonstrates the same incentive effect of the fuel price increase that I discussed in my post about fuel prices last week.
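The same result can be shown with numbers. Here is a minimal sketch, assuming a Cobb-Douglas utility function (my assumption; the diagram doesn't commit to a functional form). With U = fuel^a × AOG^(1-a), the utility-maximising quantity of fuel is a×M/Px, so a higher fuel price mechanically lowers the quantity of fuel bought.

```python
# Consumer choice sketch under an assumed Cobb-Douglas utility function.
# The income and price figures are made up for illustration.

def optimal_fuel(income, fuel_price, a=0.5):
    """Utility-maximising fuel quantity for U = fuel**a * aog**(1-a)."""
    return a * income / fuel_price

M = 100.0              # the consumer's income
px0, px1 = 2.50, 3.00  # fuel price before and after the increase

x0 = optimal_fuel(M, px0)  # fuel bought at the low price
x1 = optimal_fuel(M, px1)  # fuel bought at the high price
print(x0, x1)
assert x1 < x0  # the Law of Demand: a higher price means less fuel bought
```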

Tuesday, 18 July 2023

The market for wheat is back in the news

Russia and Ukraine are two of the world's largest exporters of wheat, accounting for around a quarter of the total global exports of wheat. So the war in Ukraine has potentially had a major effect on the world market for wheat, as I noted in this post last year. However, the impact hasn't been as great as it was early on in the war, because Turkey brokered an agreement to allow Ukrainian exports of wheat. That agreement has now broken down, as CNN reported today:

Russia said Monday it was suspending its participation in a crucial deal that allowed the export of Ukrainian grain, once again raising fears over global food supplies and scuppering a rare diplomatic breakthrough to emerge from Moscow’s war in Ukraine.

The agreement, brokered by Turkey and the United Nations in July 2022, was officially set to expire at 5 p.m. ET on Monday (midnight local time in Istanbul, Kyiv, and Moscow).

Kremlin spokesperson Dmitry Peskov told reporters on Monday that Russia would not renew the pact right now, saying it “has been terminated.”

The impact of Russia withdrawing from the agreement on the global wheat market is shown in the diagram below. Prior to Russia withdrawing from the agreement, the market was at equilibrium, with supply S0 and demand D0. The global price of wheat was P0, with Q0 units of wheat traded. Then, with supply from Ukraine disrupted, supply decreases to S1. The price of wheat goes up to P1, and less wheat (Q1) will now be traded.
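The comparative statics can be illustrated numerically with linear supply and demand curves (the numbers below are made up for illustration; only the direction of the changes matters).

```python
# Illustrative wheat market: demand Qd = 100 - 2P, supply Qs = -20 + 4P,
# with the supply intercept falling to -50 when Ukrainian supply is
# disrupted. All figures are assumptions for illustration.

def equilibrium(d_intercept, d_slope, s_intercept, s_slope):
    """Price and quantity where Qd = Qs, for linear curves.

    Demand: Qd = d_intercept - d_slope * P
    Supply: Qs = s_intercept + s_slope * P
    """
    p = (d_intercept - s_intercept) / (s_slope + d_slope)
    q = d_intercept - d_slope * p
    return p, q

p0, q0 = equilibrium(100, 2, -20, 4)  # before the supply shock
p1, q1 = equilibrium(100, 2, -50, 4)  # after supply decreases to S1

print(p0, q0)
print(p1, q1)
assert p1 > p0 and q1 < q0  # price rises, quantity traded falls
```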

It is the higher price and lower availability of wheat that has many developing countries concerned, as countries like Egypt, Nigeria, and Indonesia are among the major importers of wheat.

The wheat market isn't only in the news because of the Russia-Ukraine conflict though. There has also been concern about the coming El Niño weather pattern, as reported by Reuters last month:

Australia's production of winter crops is set to fall from record highs, with wheat output seen declining more than 30%, the country's agricultural department said, as forecasters predict dryness due to the El Nino weather pattern.

Australia is the world's second largest wheat exporter, supplying mainly to buyers in Asia, including China, Indonesia and Japan.

Total Australian winter crop production is forecast to fall by 34% to 44.9 million tonnes in 2023–24, around 3% below the 10-year average to 2022–23 of 46.4 million tonnes, according to the June crop report from the Department of Agriculture, Fisheries and Forestry.

Again, this is a reduction in global supply, which should lead to an increase in the price of wheat and a decrease in quantity traded. However, not everyone buys the simple economic argument here. David Ubilava (University of Sydney) wrote in The Conversation last week:

The global supply and prices of most food is unlikely to move that much. The evidence from the ten El Niño events in the past five decades suggests relatively modest, and to some extent ambiguous, global price impacts. While reducing crop yield on average, these events have not resulted in a “perfect storm” of the scale to induce global “breadbasket yield shocks”.

How does Ubilava's view reconcile with the simple analysis of decreasing supply shown above? He explains that:

El Niño does induce crop failures, but for food grown around the world the losses tend to be offset by positive changes in production across other key producing regions.

For example, it can bring favourable weather to the conflict-ridden and famine-prone Horn of Africa (Djibouti, Ethiopia, Eritrea and Somalia).

A good example is wheat.

Ubilava then provides some data showing that there is little correlation between El Niño events and global wheat prices. Wheat prices don't always increase when there is an El Niño event, and prices often fall. This is because, even though supply from Australia may decrease, supply from other sources increases and offsets much (if not all) of the effect of decreasing Australian supply on the world market.

However, the combined effect of Russia withdrawing from the grain export deal with Ukraine, and El Niño, is likely to lead to decreasing global supply. Countries that import wheat are right to be concerned.

Monday, 17 July 2023

Not all potholes should be fixed

The National Party announced a policy this week of fixing potholes, if they are elected later this year. As the New Zealand Herald reported yesterday:

National is pledging to pour $500 million over three years into a Pothole Repair Fund to address what it calls the “shocking state of our local roads and state highways”...

The announcement follows a nationwide campaign from the National Party to highlight the state of the roading network, and in particular, encouraging people to send in photos of potholes.

National’s transport spokesperson Simeon Brown, who unveiled the policy in Auckland alongside leader Christopher Luxon today, said there would also be a new directive for Waka Kotahi/NZ Transport Agency to double the current rate of roading renewals and make “fixing the roads” the number one priority.

Do we really need to fix all of the potholes? It's an unpopular question, and may have an unpopular answer, because the answer could well be 'no'.

There are two ways to approach this question using the basic tools of economics I covered in my ECONS102 class last week. The first way is to consider fixing potholes as a yes/no decision for each individual pothole - this is an application of incremental analysis. Using incremental analysis, we weigh up, for each pothole, the benefits of fixing it against the costs of fixing it. If the benefits outweigh the costs, then the pothole should be fixed. If the costs outweigh the benefits, then the pothole should not be fixed.

When would the costs of fixing a pothole outweigh the benefits? The benefits of fixing a pothole will be much lower on a remote rural road that is infrequently used than on a major highway. The costs of fixing a pothole will be much higher in a remote location than somewhere close to urban centres, because of the time cost of getting workers to the pothole to fix it. So, it is possible that potholes in remote rural locations may have costs that are greater than benefits. Repairing those potholes certainly has lower benefits and higher costs than repairing potholes on major highways close to urban centres.

The second way to approach the question is to consider the optimal number of potholes to fix - this is an application of marginal analysis (similar to my post last week about elk). We can use marginal analysis to find the optimal quantity of pothole repairs - if there are more potholes than the optimal quantity of repairs, then it makes sense to leave some potholes unrepaired.

The marginal analysis model is illustrated in the diagram below. Marginal benefit (MB) is the additional benefit of repairing one more pothole. The marginal benefit of pothole repairs is downward sloping. Not all pothole repairs provide the same benefit. As noted above, those on main highways likely provide much higher benefit than those on remote rural roads. If we target resources to the highest benefit potholes first, each additional pothole that is repaired must provide less additional benefit (lower marginal benefit) than the previous one. Marginal cost (MC) is the additional cost of repairing one more pothole. The marginal cost of pothole repairs is upward sloping - the more potholes are repaired, the higher the opportunity costs of repairing one more pothole. Think about the labour involved. The more workers are diverted to pothole repair, the more society is giving up in other production from those workers. Or, to attract more workers to pothole repair, we would have to offer higher wages. Either way, the marginal cost of pothole repairs increases as we do more repairs. The 'optimal quantity' of pothole repairs occurs at the quantity where MB meets MC, at Q* pothole repairs in the diagram.

Now, consider what happens if we conduct more than Q* pothole repairs, such as Q2. For every pothole repair beyond Q*, the extra benefit (MB) of each repair is less than the extra cost (MC) of each repair, making us worse off. So, it is clear that it is possible to repair too many potholes, in which case not all potholes should be fixed. If we are repairing more than Q* potholes, then we are repairing too many.

It is also possible to repair too few potholes. That would be the case if we repaired fewer than Q* potholes, such as Q1. For every pothole repair below Q*, the extra benefit (MB) of each repair is more than the extra cost (MC) of each repair, so we would be better off with one more pothole repair.
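The marginal analysis above can be sketched numerically. Assuming illustrative linear MB and MC curves (the functional forms and numbers are my assumptions, not National's costings), the optimal quantity Q* is where MB equals MC, and total net benefit falls if we repair either more or fewer potholes than Q*.

```python
# Marginal analysis of pothole repairs with illustrative linear curves.

def mb(q):
    return 1000 - 2 * q  # marginal benefit of the q-th repair, in dollars

def mc(q):
    return 100 + 4 * q   # marginal cost of the q-th repair, in dollars

# Q* is where MB = MC: 1000 - 2q = 100 + 4q  =>  q = 150
q_star = (1000 - 100) / (2 + 4)

def net_benefit(q):
    """Total benefit minus total cost of the first q repairs."""
    return sum(mb(i) - mc(i) for i in range(1, int(q) + 1))

print(q_star)
print(net_benefit(q_star))
assert net_benefit(q_star) > net_benefit(q_star + 50)  # too many repairs
assert net_benefit(q_star) > net_benefit(q_star - 50)  # too few repairs
```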

So, both incremental analysis and marginal analysis suggest that not all potholes should necessarily be fixed. Having said that, it is still possible that every pothole should be repaired. Using incremental analysis, if the cost of repairing potholes is sufficiently low, and/or the benefits of repairing potholes are sufficiently high, then it is possible that the benefits outweigh the costs for repairing every pothole. Similarly, using marginal analysis, it is possible that the marginal benefit of repairing the very last pothole is greater than the marginal cost of repairing that pothole. That would only be the case if there were fewer than Q* potholes in total.

Should the National Party be pledging to spend an additional $500 million on pothole repairs? Without knowing more details about the incremental benefits and costs of individual pothole repairs, or the marginal benefits and costs of pothole repairs generally, we can't answer that question directly. However, just as it is possible to spend not enough on pothole repairs, it is possible to spend too much.

Saturday, 15 July 2023

Bounded rationality and driving in holiday traffic

On Friday afternoon, I drove from Hamilton to Tauranga. Friday was the Matariki public holiday, and traffic was heavy, especially on the Waikato Expressway. As anyone who has tried to drive southbound along the Expressway at the start of a holiday weekend has discovered, the end of the Expressway is a bottleneck that backs traffic up quite significantly.

So, as we got closer to Cambridge (and the last off-ramp before the end of the Expressway), I asked my daughter to check Google Maps. Sure enough, there was twenty minutes of congestion at the end of the Expressway. But, she informed me, there was only six minutes of congestion if we exited the Expressway and drove through Cambridge instead. So, that's what we did, and rejoined the highway south of Cambridge and in front of the majority of the congestion.

Why am I writing about my driving experience on an economics blog? Because it illustrates something that I discussed with my ECONS102 class this week - bounded rationality. A purely rational driver would have complete knowledge about their route options and times, and would choose the fastest route (through Cambridge, as I did). However, the more drivers who choose the fastest route, the slower and more congested that route would get, until eventually it was no quicker than the slower route. In the case of the Expressway on Friday, given that there was a fourteen-minute difference between the two options, it is clear that many drivers were not acting in a purely rational manner.

Instead, they may be boundedly rational. Bounded rationality, which was introduced to economics by 1978 Nobel Prize winner Herbert Simon in 1955, suggests that the rationality of decision-makers is limited by the information that they have access to. In the case of drivers, they may act on the information that they had before they left home (when congestion may not have been so bad), or they may base their decision on the 'usual' level of congestion (which is limited information), in which case they believe that the Expressway is the faster option. This is also related to the illusion of knowledge - we think that we know more than we actually do. Drivers think that they know the Expressway is faster than driving through Cambridge, so they don't bother to check. The illusion of knowledge is one of a number of heuristics and biases that affect decision-makers - we can refer to those decision-makers as quasi-rational.

Either way, the end result is that the purely rational drivers (like me! [*]) got to enjoy a 14-minute faster journey than the boundedly rational and quasi-rational drivers, who stayed on the Expressway.

*****

[*] In no way can I claim a general tendency to purely rational behaviour. In fact, in this case I was only able to act in a purely rational way because I had a passenger with immediate access to Google Maps!

Friday, 14 July 2023

The consequences of changes in the relative price of wealth and time

In my ECONS101 class this week, we covered some of the basic concepts in economics, one of which is relative prices. The relative price can be simply thought of as the price of one alternative (or one good) compared with another. If both prices are measured in monetary terms, we can calculate the relative price as the ratio of the two prices (P1/P2). However, it need not be the case that the price of each alternative is measured in monetary terms. Regardless of how they are measured, relative prices are important, because changes in relative prices create incentives for decision-makers to change their behaviour. When the relative price of an alternative increases, decision-makers will be less likely to choose that alternative, or will choose to do less of it. When the relative price of an alternative decreases, decision-makers will be more likely to choose that alternative, or will choose to do more of it.

In class, we discussed the role of the relative price of coal and labour in providing incentives for the adoption of less labour-intensive production methods during and after the Industrial Revolution. However, there are any number of other interesting real-world examples of the effect of relative prices. I was particularly interested in this recent post by Tyler Cowen on the Marginal Revolution blog:

Real GDP per capita has doubled since the early 1980s but there are still only 24 hours in a day. How do consumers respond to all that increased wealth and no additional time? By focusing consumption on goods that are cheap to consume in time. We consume “fast food,” we choose to watch television or movies “on demand,” rather than read books or go to plays or live music performances. We consume multiple goods at the same time as when we eat and watch, talk and drive, and exercise and listen. And we manage, schedule and control our time more carefully with time planners, “to do” lists and calendaring. A search at Amazon for “time management,” for example, leads to over 10,000 hits.

As income (and wealth) have increased, the relative price of time-intensive activities has increased. As a result, as Cowen notes, we tend to do fewer time-intensive activities. Or, we find ways of combining those activities with other activities so that the time cost is lessened. In addition to the examples that Cowen gives, we eat lunch at our desks, we text and drive, and we watch television and scroll through social media. Another way of thinking about this is that the opportunity cost of time-intensive activities has increased - we now give up more income in order to 'consume' time-intensive activities. As with relative prices, when the opportunity cost of an activity increases, we tend to do less of it (in this case, fewer time-intensive activities). Cowen also makes the following point:

By the way, the same theory also explains why life often appears to unfold at a slower, more serene pace in developing nations. It’s not just an illusion of being on holiday. In places where time is less economically valuable, meals stretch more leisurely, conversations delve deeper, and time itself seems to trudge rather than race. In contrast, with economic development comes an increased pace of life–characterized by a proliferation of fast food, accelerated conversation, and even brisker walking...

'Island time' might be a real thing, with an underlying economic cause. Lower income in developing countries lowers the opportunity cost of time-intensive activities, so people in developing countries do more of them. When the opportunity cost of time is low, being late for a meeting is less costly, both to those who are late, and to those who are waiting. Changes in the relative price of time also explain why the pace of life appears to slow down (for some people, at least) at retirement. The opportunity cost of time for retirees is lower than for working people, and so engaging in time-intensive activities becomes less costly when people retire.

Most of us are not recent retirees though, and we face an inexorable increase in the pace of life, arising from the increasing relative price of time-intensive activities. However, this seems in stark contrast to the 'slow movement' that has arisen in recent years, which advocates for slowing down the pace of life. Adherents to the slow movement are deliberately engaging in time-intensive activities and using more time than is necessary to do so. That doesn't seem to make sense in light of the increasing relative price (and opportunity cost) of time.

That is, until you realise that people in the slow movement may be engaging in conspicuous consumption (of time). Conspicuous consumption is a costly signal of social status (as I've noted before here and here). Only those who are truly wealthy or high status can afford to waste a lot of time on slow food, slow travel, or slow gardening. People in the slow movement face the same opportunity costs of time as everyone else, but they are willing to pay those costs in order to demonstrate their high social status. Think about the celebrities or social media influencers who advocate for the slow movement. I may be wrong, but I don't see too many working-class folk among them.

Relative prices really do matter for decision-making, and changes in relative prices create incentives for people to change their behaviour. And surprising as it may seem, we can sometimes explain important social changes as happening as a result of changing relative prices.

Thursday, 13 July 2023

How increasing drug overdose deaths change the optimal drug policy

Charles Fain Lehman wrote an interesting article in National Affairs last month, about the US drug overdose crisis and policy. There is lots of economics in the article (and I encourage you to read it), but I want to focus on this bit:

Today's drug crisis, however, dramatically alters the balance of costs and benefits. Loss of life dwarfs all of the other costs imposed by drug use, on both the individual and society. As a consequence, the level of drug use that can be tolerated drops, particularly for those drugs that are most likely to lead to death. This means the amount of time and energy the government dedicates to the mitigation of drug use and addiction must increase substantially.

As I noted in yesterday's post about elk, we can use a model of marginal analysis to show the optimal quantity of something. We can also use the same model to show what happens to the optimal quantity when the costs or the benefits of the activity change. Lehman's main argument in the article is that the current US drug crisis has rebalanced the costs of drug use towards overdose deaths, massively increasing the cost to society of drug use.

So, consider a model of the quantity of drug use (at the societal level), as shown in the diagram below. There are two ways to think about this model. First, we could consider the marginal costs and marginal benefits of drug use. The marginal benefit (MB) of drug use is the additional benefit a drug user gets from consuming one more dose of the drug. The marginal benefit of drug use decreases as the drug user consumes more, due to satiation (which, literally, means getting full as you eat more). Each drug user benefits from consuming more drugs, but each dose of drugs they consume doesn't provide them with as much benefit as earlier doses. The marginal cost (MC) of drug use is the additional cost a drug user faces from consuming one more dose of the drug. The marginal cost of drug use increases as the drug user consumes more, due to increasing opportunity cost. As the drug user consumes more drugs, they have to divert more resources to drug use, and the amount and/or value of resources they give up for drug using increases as they consume more. On top of that, the drug user faces increasingly negative health consequences as their drug use continues. The optimal quantity of drug use occurs at the quantity where marginal benefit is exactly equal to marginal cost (for more on this, read yesterday's post).

Before the current drug overdose crisis, the marginal costs of drug use are shown by the curve MC0. The optimal quantity of drug use is Q0 (which is the quantity where MB is equal to MC0). Now, during the drug overdose crisis, the marginal costs of drug use are much higher (MC1), and the optimal quantity of drug use is lower (Q1, which is the quantity where MB is equal to MC1). This is the argument that Lehman is making, and later in the article he argues that because the costs (to society) of drug use have increased, and the optimal quantity of drug use is now lower, the US needs to find more effective tools for reducing drug use.
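Lehman's argument can be sketched with illustrative linear curves (the numbers are my assumptions, purely for illustration): holding marginal benefit fixed, shifting the marginal cost curve upwards lowers the optimal quantity.

```python
# Sketch of a marginal cost shift with illustrative linear curves: when
# overdose deaths raise the marginal cost of drug use, the optimal
# quantity of drug use falls. All numbers are assumptions.

def mb(q):
    return 100 - q   # marginal benefit of the q-th dose

def mc0(q):
    return 20 + q    # marginal cost before the overdose crisis

def mc1(q):
    return 60 + q    # marginal cost during the crisis (shifted upwards)

q0 = (100 - 20) / 2  # MB = MC0  =>  Q0
q1 = (100 - 60) / 2  # MB = MC1  =>  Q1
print(q0, q1)
assert q1 < q0  # higher marginal costs lower the optimal quantity
```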

Not everyone will agree with Lehman's policy prescription (he is in favour of compulsory drug treatment, for example), but it seems fairly clear that in order to get down to the new (and lower) optimal level of drug use, some policy prescription (pun intended!) is required.

[HT: Marginal Revolution]

Wednesday, 12 July 2023

How many elk are too many elk?

It's an important question that you've probably never put much thought towards: how many elk are too many elk? As Oregon Public Broadcasting reported last month, it's a question that the town of Warrenton, Oregon, has had to consider:

Over the last 30 years, the elk population along Oregon’s northern coast has ballooned. An elk sighting used to be an unexpected thrill, but now the animals, which can weigh 1,000 pounds, are trampling pets to death, ramming cars and even attacking people.

The population has boomed for several reasons. There is less willingness to shoot elk than back in the day. City limits have expanded into elk habitat. And elk have gained a taste for the plants humans like to cultivate, such as rhododendrons and grass.

Elk [*] are cool, and their babies are so cute:

Look at that little guy. Is it really possible that a town could have too many elk, and be thinking about culling the elk population, like the town of Warrenton is? Marginal analysis, which I covered in my ECONS102 class this week, says that the answer is probably yes. Despite the extreme cuteness of baby elk, it is possible for there to be too many of them.

To see how, consider the diagram below. Marginal benefit (MB) is the additional benefit to a town of one more elk. The marginal benefit of elk is downward sloping - it is very cool for your town to have its very own elk. I wish my town had an elk. The marginal benefit of the first elk is surely very high. But once your town has an elk, each additional elk adds some more benefit, but not as much additional benefit as the previous elk (because your town already has some). By the time that you get to an elk herd where every household can have their own elk, the marginal benefit of adding more elk is likely to be very low. Economists note that this process of diminishing marginal benefit happens due to satiation (which, literally, means getting full as you eat more - even if we aren't eating the elk). Marginal cost (MC) is the additional cost to a town of one more elk. The marginal cost of elk is upward sloping - the more elk a town has, the higher the opportunity costs of adding more elk. Consider it this way: as a town adds more and more to the elk population, it has to give up more valuable resources to support that elk population. Initially, the town can probably keep the elk in reserves and parks, but eventually the number of elk is going to grow to such an extent that household gardens and lawns are covered in elk. The 'optimal quantity' of elk occurs where MB meets MC, at Q* elk.

Now, consider a town that has more than Q* elk, such as Q2. For every elk beyond Q*, the extra benefit (MB) of each elk is less than the extra cost (MC) of each elk, making the town worse off. So, it is clear that it is possible to have too many elk. If a town has more than Q* elk, then it has too many.

It is also possible for a town to have not enough elk. That would be the case for a town that has fewer than Q* elk, such as Q1. For every elk below Q*, the extra benefit (MB) of each elk is more than the extra cost (MC) of each elk, so the town would be better off with one more elk.

Finally, exactly how many elk is Q*? Our model doesn't answer that question directly. It will depend on exactly how large the marginal benefit of elk is, and how quickly a particular town gets satiated by elk (in other words, how steep the marginal benefit curve is). It will also depend on exactly how large the marginal cost of elk is, and how quickly marginal cost increases with more elk (in other words, how steep the marginal cost curve is). Nevertheless, we can be pretty sure that there is an optimal quantity of elk for each town, and that it is possible for a town to have too many of them. Regardless of how cute baby elk are.

*****

[*] Weird as it may be, the plural of elk is also elk. Just in case you were wondering.

Tuesday, 11 July 2023

The incentives associated with an anticipated fuel price change

In my ECONS102 class today, we discussed incentives. Broadly, incentives for decision-makers to change their behaviour arise when there is a change in the costs and/or benefits of the alternatives that they could choose. When the costs of something go up, we tend to do less of it. When the costs of something go down, we tend to do more of it. The reverse is true of benefits.

As a recent example, consider this article from the New Zealand Herald from a couple of weeks ago:

Massive queues are forming outside Auckland petrol stations as motorists try to take advantage of the last days of cheaper petrol.

The price per litre will jump around 29 cents from Saturday, when Government subsidies on petrol excise duty and road user charges end.

After June 30, the 25 cents per litre discount on petrol will be added at the pump - by the time GST is added the reinstated tax will add nearly 29 cents to the litre price.

When the price of petrol went up on 1 July this year, it created an incentive for people to consume less petrol. However, in the days leading up to the change in price, there was an incentive for consumers to fill up their car (and other available containers) before the price increase. And that's what many did.

But, is it worthwhile for consumers to queue for the cheaper fuel, when it may entail a long time spent waiting? That depends on the costs and benefits of waiting, which depend in turn on how big the price change was, how much the consumers were going to save, and how long they had to wait.

Let's consider a car like mine (a Nissan Altima), with a 60-litre petrol tank. Let's say the tank isn't completely empty, so the petrol consumer is filling it with 50 litres of petrol. If the cost saving is 29 cents per litre (from the article), the total cost saving for filling the petrol tank is $14.50. That is the benefit of waiting in the queue for cheaper fuel.

What is the cost of waiting? That depends on the opportunity cost of the consumer's time - what they give up for each minute spent waiting. As a simple way of measuring this, let's take the minimum wage as a starting point. At the minimum wage (currently $22.70 per hour), each minute spent waiting has a cost of about 38 cents. That means that if the consumer waits 39 minutes or more for fuel, the costs of waiting (39/60 x $22.70 = $14.75) would exceed the benefits (of $14.50). Or, if we take the after-tax minimum wage (of about $18.72 per hour), then if the consumer waits more than 46 minutes for fuel, the costs of waiting (47/60 x $18.72 = $14.66) would exceed the benefits. And, for consumers whose opportunity cost is higher than the minimum wage, the number of minutes spent waiting before the costs exceed the benefits would be even lower. For example, the Prime Minister (with a salary of $471,000, or about $226 per hour assuming a 40-hour workweek) would only be able to wait about four minutes for fuel before the costs exceeded the benefits.
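The break-even waiting time can be sketched in a few lines of Python, using the figures from this post (a 50-litre fill at a saving of 29 cents per litre, and the minimum wage of $22.70 per hour):

```python
# Break-even queueing time: the longest wait for which the petrol saving
# still covers the opportunity cost of the consumer's time.

def break_even_minutes(saving_dollars, hourly_wage):
    """Minutes of waiting at which time costs exactly equal the saving."""
    cost_per_minute = hourly_wage / 60.0
    return saving_dollars / cost_per_minute

saving = 50 * 0.29  # $14.50 saving on a 50-litre fill at 29c/litre

print(round(break_even_minutes(saving, 22.70), 1))  # 38.3 minutes at the minimum wage
print(round(break_even_minutes(saving, 18.72), 1))  # 46.5 minutes at the after-tax minimum wage
```

The function makes plain the general point: the break-even wait is inversely proportional to the hourly value of the consumer's time, so higher earners should tolerate much shorter queues.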

Incentives really matter, and they are determined by the costs and the benefits of the alternatives we might choose. And when we think about the costs and benefits, we need to also consider the opportunity costs. Queueing for cheap petrol isn't necessarily a good idea for everyone.

Sunday, 9 July 2023

The demand side of the market for sex services in Britain

The market for sex services is a market like other markets. There is a demand for services (by clients), a supply of services (by sex workers), and a price. However, unlike in other markets, we often don't know much about the demand for, or supply of, sex services. Data isn't widely available, because one or both sides of the market may be illegal, or because participation in the market is stigmatised. Nevertheless, some studies do manage to get data that highlights some features of the market (see here and here, for example).

As another example, this 2018 article by Marilena Locatelli (University of Turin) and Steinar Strøm (University of Oslo), published in the journal Kyklos (ungated version here) looks at the demand side of the market in Britain. Specifically, they used data from the National Survey of Sexual Attitudes and Lifestyles (NATSAL3), undertaken in 2010-2012, which collected data on whether, and how often, men aged 20-74 years had purchased sex services. Locatelli and Strøm looked at the factors associated with purchase of sex services, and found that:

Men travelling abroad, living in London, drug users, religious men and men with middle-class income are more often together with prostitutes than other [men].

There are a couple of surprises in there. Locatelli and Strøm don't have a good explanation for why men who report belonging to a religion are more likely to purchase, and purchase more, sex services than men who do not belong to a religion. Locatelli and Strøm explain their results in terms of income in the following way. Sex services were purchased most by middle-income men. They were purchased less by low-income men, which is likely to be an income effect, suggesting that sex is a normal good (like many other goods, consumers buy more as their income increases). Sex services were also purchased less by high-income men, where the income effect may be outweighed by a potential loss of reputation for high-income consumers of sex services. However, that should also apply to religious men, and yet that doesn't appear to be the case.

Locatelli and Strøm highlight one other result that has policy implications, which is that:

...learning about sex in school has a significant and sizeable negative marginal effect on the expected number of times with a prostitute. To require that sex education at school should be compulsory in all schools could therefore help in reducing prostitution in Britain.

That sex education may lower demand for sex services will come as a surprise to many people. On the other hand, Locatelli and Strøm's results don't answer the question of how sex education would reduce demand for sex services. Perhaps sex education leads to more casual sex (an obvious substitute for paid sex). They don't have data on casual sex, but they do find that engaging in masturbation is associated with less demand for sex services (although, as almost all of the sample reports masturbation, it is difficult to read too much into that result).

This study provides some interesting results, but there are much more important reasons to support sex education in schools than hoping it will reduce demand for sex services.

Monday, 3 July 2023

Have large language models killed online data collection?

Data is the lifeblood of empirical social science research. Whether it be quantitative or qualitative data, or both, you couldn't do empirical research without it. Self-evidently, the quality of data matters. As the saying goes, garbage in, garbage out. You want high-quality data to analyse. So, this new working paper by Veniamin Veselovsky, Manoel Horta Ribeiro, and Robert West (all École Polytechnique Fédérale de Lausanne) should be causing some disquiet, especially among those who use Amazon mTurk and similar sources for generating data, because in the paper the authors:

...quantify the usage of LLMs by crowd workers through a case study on MTurk, based on a novel methodology for detecting synthetic text. In particular, we consider part of the text summarization task from Horta Ribeiro et al. (2019), where crowd workers summarized 16 medical research paper abstracts. By combining keystroke detection and synthetic text classification, we estimate that 33-46% of the summaries submitted by crowd workers were produced with the help of LLMs.

Yikes! Between one-third and almost half of the summaries submitted by mTurk workers were produced with the help of large language models (LLMs) like ChatGPT. It is easy to see that using mTurk for collecting data from experiments, surveys, etc. has just become untenable. At least, it is untenable if researchers want data collected from real humans, rather than from LLMs masquerading as humans.

It gets worse though. It isn't just mTurk where this is likely to be a problem. Any online survey is now vulnerable to being completed by an LLM, rendering most online data collection fraught. Journal editors and reviewers will no doubt become aware of this in the future (if they aren't already), so research based on data collected from humans using online methods is going to become a whole lot harder to get published in future.

It's not going to end there. Since LLMs are now generating a non-trivial proportion of online content, a lot of online data is going to lose credibility. And, to top it all off, if future LLMs are trained on internet-sourced data, they will effectively be trained in part on data generated by today's relatively low-quality LLMs. There doesn't seem to be much of a way around this.

Anyway, getting back to the Veselovsky et al. paper, they aren't as negative in their conclusions as I am above:

All this being said, we do not believe that this will signify the end of crowd work, but it may lead to a radical shift in the value provided by crowd workers.

I guess it depends on what you want the crowd workers to do. As I said above, they won't be contributing much of value to researchers in the future (unless the researchers are researching LLMs). Part of the lifeblood of social science research is bleeding away.

[HT: Marginal Revolution]

Sunday, 2 July 2023

The impact of large language models on job interviews, from both sides

The Financial Times had an interesting article (paywalled) recently about the impact of artificial intelligence on recruitment:

Students applying for graduate jobs this summer can take advantage of a new personal interview coach. If they send over a specific job description they can receive tailored interview questions and answers — and feedback on their own responses — all for free.

The coach, offered by the job search engine Adzuna, is not human but an artificial intelligence bot known as Prepper. It can generate interview questions for more than 1mn live roles at large companies, in industries ranging from technology and financial services to manufacturing and retail.

For a graduate job in PwC’s actuarial practice, the chatbot spits out questions such as: “What skills do you think an actuarial consultant should have?” and “How would you explain actuarial concepts to a client who is not from a finance background?”. When a user answers a question, Prepper generates a score out of 100, and tells them which parts worked well and what was missing.

Prepper is part of a new wave of chatbots powered by generative AI — from ChatGPT to Bard and Claude.

As I noted in my previous post, employers are trying to overcome an adverse selection problem. Job candidates know whether they are high quality or not, but employers do not. High-quality job candidates use signalling to reveal to employers that they are high quality. Employers use screening to try to identify the quality of job candidates. One screening tool is the job interview, through which candidates' likely quality as employees is revealed. However, if AI is prepping job candidates, then that reduces the efficacy of the screening process. If low-quality job candidates can be effectively prepped to seem like high-quality job candidates, the screening process fails to solve the adverse selection problem.

On the other hand:

Grace Lordan, an economist at the London School of Economics and director of The Inclusion Initiative, which studies diversity in corporate settings, says companies, particularly technology groups, are experimenting with generative AI to conduct initial interviews.

“One of the biggest areas of bias is actually the interview,” she says. “This is when people’s affinity bias, or representative bias, which means choosing people who look like others in the organisation, comes in.”

AI-conducted interviews could go some way to removing that bias, she says. “Generative AI is quite convincing as an avatar. Using AI as another serious data point will allow pushback from the machines [against human bias].”

So, perhaps there are offsetting benefits of AI on the job interview process, if the AI is being used by the employer. If AI leads to job interviews that are more effectively able to screen for high-quality job candidates, by reducing bias in the interview process, then that can be a good thing. The AI might also be better able to ask the searching questions that distinguish high-quality and low-quality job candidates.

Of course, that assumes that the AI is interviewing a human job candidate. What happens when a job candidate, being interviewed by an AI job interviewer, is given real-time prompts on how to answer by an AI chatbot? Or when, in a Zoom job interview, the job candidate simply replaces themselves with an avatar or a 'deep fake' video of themselves, generated in real time with an AI-scripted voiceover? If it hasn't happened already, it is going to be happening soon. Will that be the death of the job interview as a screening tool? Time will tell.