Friday, 31 May 2019

British common law may have been good for development, but not for women

For the most part, former colonies adopted the legal systems and processes of their colonisers. Former British colonies adopted a common law system, while former colonies of France, Spain, Portugal, and other continental powers adopted a civil law system. There is plenty of research that strongly suggests that British common law was good for development (for example, see this 2008 review of the literature). One of the underlying reasons for this is that civil law has weaker enforcement of contracts and property rights, which inhibits investment.

However, it appears that all is not rosy in the common law world. A 2018 article by Siwan Anderson (Vancouver School of Economics), published in the American Economic Review (currently ungated, but just in case here is an earlier ungated version), identifies a substantial negative consequence of British common law in sub-Saharan Africa. This arises because of differences in the treatment of marital property between civil law and common law countries. Anderson explains that:
...the community marital property regime of the civil law countries gives equal protection to women in case of divorce, typically an even split of property between spouses, and legally protects widows. This is in stark contrast to the separate marital property regime of the former British colonies who adopted the Married Women’s Property Act of 1882. Very few of these countries have added provisions to these outdated marriage laws and they provide little, or no, protection for women in event of marital dissolution.
I think in this quote, Anderson has 'community marital property' and 'separate marital property' back-to-front (and checking on her reference to this World Bank report, it seems so). Civil law countries have a separate marital property regime (each spouse retains individual ownership over their own assets), while common law countries have a community marital property regime (spouses have joint ownership of assets, but in this case men may have additional rights over this property when a marriage is dissolved). Anderson notes that Muslim women, and those in polygynous marriages (marriages with multiple wives), have a 'separate marital property' regime, even in common law countries:
Separate marital property is the default regime in classical Islamic law... The default marital property regime for polygynous marriages is also separate as, in this case, it protects a wife from having to share her private property with other wives...
These differences in marital property regimes have implications for bargaining power within the household. In a household bargaining model, decisions made within the household are determined by the relative bargaining power of the decision-makers (e.g. the spouses). A spouse who will have greater access to property in the case of divorce has a higher 'threat point' - they have more bargaining power, and can exert more influence over household decision-making.
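To make the threat point idea concrete, here's a minimal sketch of a simple (Nash) bargaining split. The numbers, and the idea that the marital property regime shifts only the wife's threat point, are purely illustrative assumptions, not values from Anderson's paper:

```python
# Minimal sketch: Nash bargaining over household resources.
# All numbers are hypothetical, for illustration only.

def nash_bargain(total, threat_wife, threat_husband):
    """Split 'total' household utility so that each spouse gets their threat
    point plus an equal share of the surplus (the symmetric Nash bargaining
    solution with transferable utility)."""
    surplus = total - threat_wife - threat_husband
    return threat_wife + surplus / 2, threat_husband + surplus / 2

total = 100  # total utility available inside the marriage

# Civil law: an even split of property on divorce -> equal threat points
print(nash_bargain(total, threat_wife=30, threat_husband=30))  # (50.0, 50.0)

# Common law (as characterised in the paper): the wife keeps a smaller share
# of property on divorce -> lower threat point, smaller share of the surplus
print(nash_bargain(total, threat_wife=10, threat_husband=30))  # (40.0, 60.0)
```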

In the case of divorce, non-Muslim women in common law countries (who are not in polygynous marriages) will be much worse off (because they keep a smaller share of the property) than women in civil law countries. This means that women in civil law countries have a higher threat point and therefore greater bargaining power within the marriage. This increases their influence over household decisions, including increasing their ability to negotiate safer sex practices with their husband, and therefore reducing their susceptibility to HIV.

Using data on around 308,000 women and 190,000 men from the Demographic and Health Surveys in 25 sub-Saharan African countries, Anderson tests for differences in HIV rates between common law and civil law countries. About 45 percent of the countries in Anderson's sample have common law, and the rest have civil law. As a starting point, it is worth noting that:
On average, 6.8 percent of women in our sample are HIV positive (this compares to 4.6 percent of men). In common law countries, the average female HIV infection rate is approximately 9.5 percent. It is close to one-half, at 4.6 percent, in civil law countries.
You might legitimately worry that there are ethnic differences between countries that are former British colonies (and therefore common law) and other countries (civil law). However, Anderson notes that national boundaries do not follow the boundaries of ethnic groups - many ethnic groups span borders - and that:
51 percent of the individual sample (a total of roughly 157,000 women) fall into partitioned ethnic groups, and 35 percent of the total sample (approximately 108,000 women) are in partitioned ethnic homelands with different legal origins.
That provides enough variation to separately identify the effects of common law from any confounding by ethnic differences. Anderson finds that:
...female HIV infection rates are at least 25 percent higher in common law countries...
We see that the positive correlation between common law and female HIV only holds for non-Muslim and non-polygynous women. By contrast, there are no significant effects for the sample comprised only of Muslim and/or polygynous women. Therefore, the key correlation only exists for those women who should be affected by the differences in marital property law across civil and common law countries.
There are no significant effects of common law on HIV infection among men. Anderson goes on to show that:
...all else equal, women residing in common law countries are significantly less likely to use a contraception method requiring consent from her male partner, and incidentally protecting her from contracting HIV. The estimated coefficient... reflects about a 30 percent decrease... in women using protective contraception in common law compared to civil law countries.
Again, this relationship is not significant for Muslim women, or women in polygynous marriages. Similar results are found for men, in terms of contraceptive use. Finally, Anderson finds:
...a consistently negative relationship between this index of female autonomy and weaker marital property laws (common law). Once again, the relationship only holds for those women for which the legal variation is relevant (non-Muslim and non-polygynous).
Overall, this paper provides strong evidence that common law was not in all ways good for countries, or more specifically for women. These countries have retained outdated laws, and while Western common law countries have long since moved on and given equal property rights to women, many sub-Saharan African countries lag behind. One consequence has been the feminisation of the HIV epidemic in Africa. Legal reform in these countries is urgently needed.

[HT: Marginal Revolution, last year]

Thursday, 30 May 2019

Transaction utility and the behavioural economics of discounts

Ralph-Christopher Bayer (University of Adelaide) wrote in The Conversation yesterday about the behavioural economics of discounting:
Because consumers are human beings, our actions aren’t necessarily rational. We have strong emotional reactions to price signals. The sheer ubiquity of discounts demonstrates they must work.
Let's review a couple of findings from behavioural (and traditional) economics that help explain why discounting – both real and fake – is such an effective marketing ploy.
When will firms offer discounts on their products? If they are profit-maximising firms (the assumption we usually make in economics), then they will lower prices if it increases profits. When prices are lowered, consumers will buy more. That is the straightforward Law of Demand. However, lower prices don't automatically raise profits. The firm will sell more items, but it now sells those extra items, and all of the items it could have sold at the higher price, at the new lower price. Profits might even go down.
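Here's a quick numerical sketch of that point, with made-up demand and cost numbers:

```python
# Hypothetical numbers only: profit can fall when the price is cut,
# even though the quantity sold rises (the Law of Demand).
marginal_cost = 40

def profit(price, quantity):
    return (price - marginal_cost) * quantity

# Suppose demand is Q = 200 - P (made up for illustration)
def quantity_demanded(price):
    return 200 - price

for price in (120, 100):
    q = quantity_demanded(price)
    print(price, q, profit(price, q))
# At a price of 120: sells 80, profit (120-40)*80 = 6400
# At a price of 100: sells 100, profit (100-40)*100 = 6000 -> lower, despite selling more
```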

So, firms will only lower prices if it is more profitable to do so. However, if firms are better off with lower prices, you would (rightly) wonder why they need to discount - why would they ever offer the higher price, if they already know the lower price is more profitable? They should just start with the lower price.

There can be good reason for discounting. For some retailers, starting with a high price and discounting later has nothing to do with behavioural economics. For instance, consumers who wait until later to buy may have a lower willingness-to-pay (or be more price sensitive) than consumers who buy early. In this situation, it makes sense for the retailer to sell 'new season' items at a high price, but heavily discount those same items at the end of the season (this is what economists refer to as 'temporal price discrimination'). However, this is not what most retailers are doing when they discount items.

Most retailers are trying to take advantage of behavioural economics, as Bayer explains:
The prospect of buying something leads us to compare two different changes: the positive change in perceived value from taking ownership of a good (the gain); and the negative change experienced from handing over money (the loss). We buy if we perceive the gain to outweigh the loss.
Suppose you are looking to buy a toaster. You see one for $99. Another is $110, with a 10% discount – making it $99. Which one would you choose?
Evaluating the first toaster’s value to you is reasonably straightforward. You will consider the item’s attributes against other toasters and how much you like toast versus some other benefit you might attain for $99.
Standard economics says your emotional response involves weighing the loss of $99 against the gain of owning the toaster.
For the second toaster you might do all the same calculations about features and value for money. But behavioural economics tells us the discount will provoke a more complex emotional reaction than the first toaster.
Research shows most of us will tend to “segregate” the price from the discount; we will feel separately the emotion from the loss of spending $99 and the gain of “saving” $11.
Bayer is describing the idea of transaction utility (which I have blogged about before in this context). When we buy an item, we get utility (satisfaction or happiness) from receiving the item (which we call consumption utility), plus we get utility from the transaction itself (transaction utility). If we feel like we are getting a good deal, that makes us happier about our purchase. It doesn't make us any more satisfied with the item itself, but it increases our transaction utility. Higher total utility (consumption utility plus transaction utility) makes us more likely to buy the item.
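Here's a stylised sketch of that decomposition, using the toaster prices from Bayer's example. Treating transaction utility as simply the gap between the reference price and the price paid, and the $105 valuation, are assumptions for illustration only:

```python
# Stylised sketch of total utility = consumption utility + transaction utility.
# The dollar figures come from Bayer's toaster example; the $105 valuation and
# the simple (reference price - price paid) measure of the 'deal' are assumed.

def total_utility(value_of_item, price_paid, reference_price):
    consumption_utility = value_of_item - price_paid
    transaction_utility = reference_price - price_paid  # the perceived deal
    return consumption_utility + transaction_utility

value_of_toaster = 105  # assume the toaster is worth $105 to this buyer

# Toaster 1: $99, no discount (reference price equals price paid)
print(total_utility(value_of_toaster, price_paid=99, reference_price=99))   # 6

# Toaster 2: listed at $110, discounted to $99
print(total_utility(value_of_toaster, price_paid=99, reference_price=110))  # 17

# Same item, same price paid, but the discounted toaster feels like a better buy.
```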

Retailers can exploit this, and often do. By posting a high 'regular price' or 'recommended price' and showing a deep discount, they increase the consumer's perception of getting a good deal, and increase their transaction utility. This makes them more likely to make the purchase, because it increases their total utility.

Transaction utility can wear off pretty quickly though. You know that feeling of buyer's remorse, when you've bought something and you felt really good about it at the time, but soon after you think it wasn't worth it and maybe you want to change your mind? That's the transaction utility wearing off, and you're realising that the consumption utility alone is not enough to make the item worthwhile. But it's too late! Bayer's conclusion is relevant here:
The bottom line: beware the emotional appeal of the discount. Whether real or fake, the human tendency is to overrate them.


Wednesday, 29 May 2019

Higher minimum wages and youth crime

Does a higher minimum wage reduce crime? The White House Council of Economic Advisers thinks so. However, from the perspective of basic economics, it is an open question. On the one hand, we might expect higher minimum wages to increase income for low-income workers. That increases the opportunity costs of property crime - if these low-income workers engage in property crime and are caught, they give up more income as a result (because of jail time, etc.). On the other hand, the evidence strongly suggests that higher minimum wages reduce employment (see my latest post on this topic here). So, even if low-income workers earn more, fewer of them have jobs. Those that are made unemployed have greater incentives to engage in property crime.

A recent NBER Working Paper (ungated version here) by Zachary Fone (University of New Hampshire), Joseph Sabia (San Diego State University), and Resul Cesur (University of Connecticut) addresses this open question. They use U.S. crime data and data from the National Longitudinal Survey of Youth over the period 1998 to 2016, and data on national, state, and local minimum wages. Over that period, there were:
...3 Federal minimum wage increases, 217 state minimum wage increases, 77 local minimum wage increases, and 116 living wage ordinances enacted.
So, there is more than enough variation in the data to identify the relationships between minimum wage increases and crime. Fone et al. focus on youth crime, which is sensible because youths are more likely to be affected by higher minimum wages (either positively or negatively). They also compare the effects for those aged 16-19 years and 20-24 years with those aged over 25 years. They find that:
...minimum wage increases enacted from 1998 to 2016 led to increases in property crime for those between the ages of 16-to-24, with an estimated elasticity of 0.2. This finding is robust to the inclusion of controls for state-specific time trends, survive falsification tests on policy leads, and persist for workers who earn wages such that minimum wage changes bind. Increases in property crime appear to be driven by adverse employment and hours effects of minimum wages. We find little evidence that minimum wage increases affect violent or drug crimes.
The elasticity of 0.2 means that a 10% increase in the minimum wage is associated with a 2% increase in property crime. Interestingly, and importantly, the effects are statistically significant for younger people (aged 16-19 or 20-24), but not statistically significant for those aged over 25 years. This can provide some confidence that what they are finding is related to the minimum wage changes, and not to some underlying overall crime trends (although underlying youth crime trends might still be a problem), because if it was driven by underlying trends there would be a significant effect for the 25+ age group as well. They also find that living wage provisions (which are usually larger than minimum wages, but affect only a subset of those working for low wages) increase property crime by 9.1%.
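As a quick check on what that elasticity implies (using the simple proportional approximation):

```python
# Simple proportional approximation of what an elasticity of 0.2 implies
elasticity = 0.2
min_wage_increase = 0.10  # a 10% increase in the minimum wage
property_crime_change = elasticity * min_wage_increase
print(f"{property_crime_change:.0%}")  # 2% increase in property crime
```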

Of course, this paper is far from the last word on the effects of minimum wages on crime, and is inconsistent with previous evidence (including a paper I blogged about last year on minimum wages, earned income tax credits and female recidivism). However, it does show that we need to take more than just the standard welfare effects of the minimum wage into account when evaluating the costs and benefits of a higher minimum wage.



Tuesday, 28 May 2019

Autonomous cars, pedestrians, and the game of chicken

I was interested to read this article in The Conversation last month by Jason Thompson (University of Melbourne) and Gemma Read (University of the Sunshine Coast), about the interactions of humans and autonomous cars. In particular, they use some very simple game theory to represent the interaction between cars (autonomous or otherwise) and humans:
A simple example of how this might happen comes from game theory. Take two scenarios at an intersection where pedestrians and vehicles negotiate priority to cross first. Each receives known “pay-offs” for behaviour in the context of the other’s action. The higher the comparative pay-off for either party, the more likely the action.
In the left-hand scenario below, the Nash equilibrium (the optimum combined action of both parties) exists in the lower left quadrant where the pedestrian has a small incentive to “stay” to avoid being injured by the manually driven car, and the driver has a strong incentive to “go”.
However, in the scenario on the right, the autonomous vehicle has a desire to act flawlessly and pose no threat to the pedestrian at all. While this might be great for safety, the pedestrian can now adopt a strategy of “go” at all times, forcing the AV to stay put.
Here's their associated diagram. The red explosions show the Nash equilibrium in each case:

However, I think they have their analysis wrong. I don't think they've really considered the game theory implications here fully. In the game on the left, the car has a dominant strategy to "Go" - going provides a payoff that is always higher than staying. The car driver should never choose to stay, even if the pedestrian chooses to "Go" as well. But, if both pedestrian and driver choose to "Go", then the car runs over the pedestrian. It's hard to see how the car driver is better off going if the pedestrian goes (unless the disutility of washing the blood off their car really is less than the utility gained from getting through the intersection faster).

Similarly, in the game on the right, the pedestrian has a dominant strategy to "Go". But again, would they really choose to "Go" if the car is going too? Maybe in Thompson and Read's world, pedestrians aren't seriously injured or killed when they run into cars, but in the real world it seems like a pedestrian would be pretty stupid to go if a car is going.

A more realistic representation of the game is in the payoff table below. If both the car and the pedestrian go, then both lose. The loss to the car driver is pretty high (damage to their car; maybe jail time for running down a pedestrian), but not as high as the loss to the pedestrian (being injured or killed by a car is pretty serious). If car or pedestrian goes, and the other doesn't, then whichever one goes gets a larger positive payoff. If both car and pedestrian stay, then a standoff ensues, and both get a payoff of zero.


Where is the Nash equilibrium in this game? We can use the 'best response method' to find the equilibrium. To do this, we track: for each player, for each strategy, what is the best response of the other player. Where both players are selecting a best response, they are doing the best they can, given the choice of the other player (this is the textbook definition of Nash equilibrium). In this game, the best responses are:
  1. If the car chooses to go, the pedestrian's best response is to stay (since 1 is a better payoff than -5) [we track the best responses with ticks, and not-best-responses with crosses; Note: I'm also tracking which payoffs I am comparing with numbers corresponding to the numbers in this list];
  2. If the car chooses to stay, the pedestrian's best response is to go (since 3 is a better payoff than 0);
  3. If the pedestrian chooses to go, the car's best response is to stay (since 1 is a better payoff than -3); and
  4. If the pedestrian chooses to stay, the car's best response is to go (since 3 is a better payoff than 0).
There are two Nash equilibriums in this game: (1) where the car goes and the pedestrian stays; and (2) where the pedestrian goes and the car stays. Notice that both players in this game would be better off if they are the one to go, and worse off if they are the one to stay. But, if both choose to go, then they end up in the worst possible outcome! This is an example of the chicken game (which I have previously blogged about here).
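If you want to check those best responses yourself, here's a small sketch that searches the payoff table for Nash equilibriums. The payoffs are the same ones used in the numbered list above:

```python
# Best-response check for the car vs pedestrian 'chicken' game.
# Payoffs are (car, pedestrian), taken from the numbered list above.
payoffs = {
    ("Go",   "Go"):   (-3, -5),
    ("Go",   "Stay"): ( 3,  1),
    ("Stay", "Go"):   ( 1,  3),
    ("Stay", "Stay"): ( 0,  0),
}
strategies = ("Go", "Stay")

def is_nash(car, ped):
    car_payoff, ped_payoff = payoffs[(car, ped)]
    # The car is best-responding if no other car strategy does better, given the pedestrian's choice
    car_best = all(payoffs[(c, ped)][0] <= car_payoff for c in strategies)
    # The pedestrian is best-responding if no other pedestrian strategy does better, given the car's choice
    ped_best = all(payoffs[(car, p)][1] <= ped_payoff for p in strategies)
    return car_best and ped_best

for car in strategies:
    for ped in strategies:
        if is_nash(car, ped):
            print(f"Nash equilibrium: car {car}, pedestrian {ped}")
# Prints the two equilibriums: (car Go, pedestrian Stay) and (car Stay, pedestrian Go)
```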

A game with two Nash equilibriums doesn't really give us much guidance as to what the ultimate outcome will be. Both players want to go, but both want to avoid the situation where they are both trying to go at the same time. So, how do we solve this problem?

Currently, we solve the problem using a mixture of road rules and social norms. At pedestrian crossings, cars always stay and pedestrians can go. At traffic lights, cars and pedestrians stay or go depending on whether their light is red or green. At other places, pedestrians stay and cars go - that's why we learn as children to look both ways before crossing the road (a social norm).

However, autonomous cars create an interesting situation, as Thompson and Read highlight in their article. If autonomous cars are programmed to always give way to pedestrians (or, equivalently, to always avoid a collision with a pedestrian where possible), then that means the car will always stay. If the pedestrian knew (for sure) that the car was going to stay, they would choose to go. Every time. Suddenly, we end up in the top right box of the payoff table.

Why is this a problem? As Thompson and Read note:
Now imagine crossing a road or highway in a city saturated by autonomous cars where the threat of being run over disappears. You (or any other mildly intelligent animal) might quickly learn that oncoming traffic poses no threat at all. Replicated thousands of times across a dense inner city, this could produce gridlock among safety-conscious autonomous vehicles, but virtual freedom of movement for humans – maybe even heralding a return to pedestrian rights of yesteryear.
Thompson and Read make the (obviously tongue-in-cheek) suggestion that a solution is for autonomous cars to occasionally, purposefully, run into pedestrians. That seems unlikely (though potentially effective).

Ultimately, this game of chicken is one that pedestrians are likely to win. So, while many are claiming that autonomous vehicles are the solution to traffic problems, they could actually end up making them worse.

Monday, 27 May 2019

Making yourself scarce is making yourself appear more valuable

One of the fundamental concepts in economics is scarcity. It is scarcity (of time, money, or other resources) that forces people (or firms, or government) to make choices. Scarcity is also one of the key factors that underlies the value of goods and services. Holding all else constant, goods that are scarcer will command a higher price. And businesses use this to good effect (the best example at that link: Starbucks' unicorn frappuccino, which sold out in a few days).

Interestingly, Cindy Lamothe wrote in the New York Times earlier this month about scarcity value, but in relation to your worth as an employee (or potential employee):
And while conventional wisdom tells us we should eagerly embrace every opportunity that comes our way, playing a little hard to get has its advantages.
Study after study has shown that opportunities are seen to be more valuable as they become less available, meaning that people want more of what they can’t have, according to Robert Cialdini, a leading expert on influence and the author of “Pre-Suasion: A Revolutionary Way to Influence and Persuade.”
“What the scarcity principle says is that people are more attracted to those options or opportunities that are rare, unique or dwindling in availability,” Dr. Cialdini said. The reason behind this idea has to do with the psychology of “reactance”: Essentially, when we think something is limited to us, we tend to want it more.
Lamothe's advice is to make yourself less available (i.e. more scarce). You can do this by: being less eager (keeping your enthusiasm in check, or playing a little hard-to-get); not jumping the gun (taking your time to investigate an opportunity, and fully considering it); knowing your market value (so that you know how much your skills are worth, and you won't sell yourself short); and adopting an abundance mindset (reappraising negative self-talk, and recognising that there are unlimited possibilities, which helps give you more self-confidence). She concludes that:
Ultimately, appearing less available isn’t about limiting our enthusiasm or being unnecessarily hard on ourselves. It’s about trusting in our own self-worth so we can be proactive, experts say. This means mindfully aligning our excitement into strategy.
So, make yourself less available (and more scarce). It'll increase your perceived value. Or at least as an interim measure, maybe learn to say 'no' once in a while (some advice that I should probably follow!).

Sunday, 26 May 2019

Roman roads and infrastructure's impact on development

How persistent is economic development? When you look back over history, on average the places that are most developed today seem to be the places that were most developed a century ago. Or two centuries ago. Or a millennium ago. It's an interesting question. Another interesting question is, does infrastructure contribute to development? These two questions are, of course, closely related. Infrastructure is (usually) built to last a long time, and new infrastructure (such as roads, railways, etc.) is often built on top of, or next to, existing infrastructure.

This 2018 CEPR discussion paper by Carl-Johan Dalgaard (University of Copenhagen), Nicolai Kaarsen (Danish Economic Council), Ola Olsson (University of Gothenburg), and Pablo Selaya (University of Copenhagen), provides us with some guidance towards the answer to both questions (and the authors wrote an interesting article at Vox on their paper as well). The paper looks at the persistence of Roman roads, and whether Roman roads in 117 CE are related to modern-day road networks, as well as modern day development (as measured by night lights intensity, and population density, with both data sources from 2010). To give you some idea of the results, here's Figure 2 from the paper, which shows the Roman roads (red lines) and modern night lights for the area surrounding the Roman town of Lutetia (modern-day Paris):


Notice how the night lights cluster around the Roman roads? That demonstrates the persistence of development in the areas immediately surrounding the location of the ancient roading network. Dalgaard et al. analyse data across the whole Roman Empire (as it was in 117 CE). At that time, there were an estimated 80,000 km of paved roads. Controlling for geographic features, proximity to water bodies, distance to Rome and to the borders of the Empire, and the location of historical mines (as well as country and language fixed effects), they find that:
...historical road networks and geography account for about one fifth of the variation in contemporaneous road network across grid cells.
A one percentage point increase in Roman road density is associated with a statistically significant 0.15 percent increase in modern road density. In addition:
...Roman road density appears strongly associated with economic activity, both in the past and in the present... In our full specification we find that economic activity during antiquity rises by about 0.6 percent for every percentage point increase in road density; in the modern day context we find elasticities in the range 0.5 - 1 depending on the indicator.
Those effects are large. However, you would be right to be cautious at this point, because these are correlations, rather than causation. But Dalgaard et al. are not done. They have an interesting natural experiment that provides some indication that the results may be causal. I hadn't realised this, but the parts of the Roman Empire in the Middle East and North Africa abandoned wheeled transport (in favour of caravans of camels) in the first millennium CE. That meant that the Roman roads were not maintained in the MENA region, unlike in Europe. So, if Roman roads cause development, and the observed relationship between modern development and Roman roads isn't instead driven by something else like the roads being placed where development would have happened anyway, then the roads should have an impact in Europe, but not in the MENA region. And indeed, that's what they find:
We find that in the MENA region, Roman roads lose predictive power vis-a-vis modern day roads. Moreover, Roman road density does not predict current day economic activity within the MENA region. In contrast, in the European region, where the roads were maintained, our baseline results carry over.
What can we take away from this? Infrastructure is persistent - it lasts a long time. And, it can have positive impacts on development.

[HT: Marginal Revolution, last year]

Saturday, 25 May 2019

Sydney needs to understand that demand curves for water slope downwards

One of the most robust findings in economics is that demand curves slope downwards. If something becomes more expensive, on average people will buy less of it (quantity demanded decreases when price increases). If something becomes less expensive, people will buy more of it (quantity demanded increases when price decreases). So, this article from The Conversation earlier in the week should come as no surprise:
Melbourne’s average residential water consumption is 161 litres per person per day. In Sydney, for 2018, it was 210 litres per person. That is nearly 50 litres more per person – a difference of about 30%!
So why do Sydney residents use so much more water than people in Melbourne?...
The lower water use per resident of Melbourne is a major element in the city’s lower water thirst. If you live in Sydney and use the average amount of water a day (210 litres per person per day) that will cost you just 48 cents per day. The price is A$2.28 per thousand litres.
Water is far more expensive in Melbourne, which has variable pricing for residential water. The more water you use, the higher the progressive cost per litre.
Each of Melbourne’s three water retailers charge more money for low and high water usage. For example, Yarra Valley Water charges A$2.64 (per 1,000 litres) for water use less than 440 litres a day. For more than 881 litres a day it charges A$4.62, which is 75% higher than the lowest water use charge. For intermediate amounts the charge is A$3.11. This sends an important price signal to residents – it pays to conserve water. In comparison, Sydney charges a flat rate for each litre of water, with no penalty for higher water users.
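To see how much the pricing structure matters for the bill, here's a rough comparison using the prices quoted above. I've assumed that Melbourne's block rates apply marginally (each rate applies to the litres falling within its block), which is how increasing block tariffs usually work:

```python
# Rough daily water bill comparison, using the prices quoted above.
# Assumption: Melbourne's block rates apply marginally (each rate to the
# litres within its block), which is how increasing block tariffs usually work.

def sydney_bill(litres_per_day):
    return litres_per_day / 1000 * 2.28  # flat A$2.28 per 1,000 litres

def melbourne_bill(litres_per_day):
    blocks = [(440, 2.64), (881, 3.11), (float("inf"), 4.62)]  # Yarra Valley Water rates
    bill, lower = 0.0, 0
    for upper, rate in blocks:
        used = max(0, min(litres_per_day, upper) - lower)
        bill += used / 1000 * rate
        lower = upper
    return bill

for litres in (210, 440, 1000):
    print(litres, round(sydney_bill(litres), 2), round(melbourne_bill(litres), 2))
# At 210 L/day: Sydney ~A$0.48 vs Melbourne ~A$0.55
# At 1,000 L/day the gap widens, and the marginal litre costs about twice as much in Melbourne
```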
If you want people to do less of something, including consuming less water, one robust way to ensure that outcome is to make it more expensive. People will demand less water, if water costs them more. Both Sydney and Melbourne are worried about running out of water (and for good reason: they don't want to go through what Cape Town did in 2017-18). Increasing the price of water seems like such a simple solution. Maybe it's too simple? After all, according to the article, Sydney decreased the price of water in 2016. It's no wonder they are staring at a future water crisis.

Friday, 24 May 2019

Why banks probably shouldn't be afraid of fintech firms, but depositors should

Jeremy Kahn and Charlie Devereaux wrote an interesting article in Bloomberg Business last week, on how banks are fighting back against fintech firms:
Scrappy online financial startups have spent the past few years building buzz, backing and the beginnings of a customer base.
For a while, the world’s banking giants largely ignored them. Now they’re starting to feel the heat—and fighting back with the most formidable weapon in their arsenals: cash.
Spain’s Banco Santander SA announced a few weeks ago that it will funnel 20 billion euros ($22 billion) into digital transformation and information technology in the next four years.
On an annual basis, that works out to one-and-a-half times all of the venture capital Europe’s fintech startups received in 2018—a disparity highlighting that, despite all their rhetoric about burying existing banks, fintech firms and neo-banks are still monetary pipsqueaks facing an uphill battle against entrenched competition.
There's been a lot of rhetoric over the last several years about disruption in the banking sector, and how small fintech firms will eventually crush the banks (for example, see here or here). However, how at risk are the conventional banks really? I'd argue that they're probably not at risk (at least, not yet), and not just because they have financial backing that the fintech firms can only dream about.

This is a story about trust. To see why, let's rewind to a time when banks were still very new. Depositors (or savers) couldn't necessarily be sure that their money was safe in a bank. They might worry that their banker was a crook, and honest bankers faced a challenge in convincing depositors that they weren't. Essentially, there was a problem of adverse selection in the banking market.

Adverse selection may arise when there is information asymmetry - that is, when there is some private information about characteristics or attributes that are relevant to an agreement, and that information is known to one party (the informed party) to an agreement but not to others (the uninformed parties). The informed party then uses their access to that information to their own advantage (and to the disadvantage of the uninformed party).

In the case of banking, the 'agreement' is between the depositor (who has money) and the bank (who offers to store the money for the depositor). In the early days of banking, the problem was that information about whether the banker was honest, and wouldn't run off with the depositors' money (leaving them broke and angry), was private information. Each banker knew whether they were honest, but the depositors didn't know who the honest bankers were. You might think I am joking, but in the early days of banking, crooked bankers were a very real problem.

So, in the early days of banking, the real problem was a lack of trust. Depositors couldn't trust that any old banker would keep their money safe. It wasn't easy to tell the honest bankers and the crooks apart. So, how could an honest banker convince depositors that they were an honest banker?

Honest bankers engaged in signalling. Signalling is when the informed party (in this case, the banker) tries to credibly reveal the private information (that they are honest) to the uninformed party (the depositor). There are two important conditions for a signal to be effective: (1) it needs to be costly; and (2) it needs to be costly in a way that makes it unattractive for those with the low-quality attributes (the crooks) to attempt - for example, because the signal is more costly for the crooks than for the honest bankers. These conditions are important, because if they are not fulfilled, then those with low-quality attributes could signal themselves as having high-quality attributes - the crooks could easily pretend to be honest bankers.

In the early days of banking, a banker signalled that they were honest by engaging in a costly building exercise. Have you ever wondered why old banks are often huge stone buildings with big classical columns and suchlike? A big stone building is more difficult for bank robbers to break into, sure. But it is also very costly to build. And, if you intend to build a bank building and keep it for a long time (which is what an honest banker would do), the cost is much easier to bear than if you build it and then leave it behind when you move on to the next town full of suckers (which is what a crook would do). Depositors could trust the bankers who had big expensive buildings, because a big expensive building was something only an honest banker would have.
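Here's a toy numerical check of that logic. The payoffs and costs are entirely made up; the point is just the condition that makes the signal work - building has to pay off for an honest banker who stays, and not for a crook who leaves:

```python
# Toy check of the separating condition for the 'big stone building' signal.
# All numbers are made up for illustration.
building_cost = 100

honest_years_in_town = 20  # an honest banker keeps the building for decades
crook_years_in_town = 1    # a crook skips town after fleecing depositors
profit_per_year_with_deposits = 10  # extra profit from depositors who trust you

def net_gain_from_signalling(years):
    return years * profit_per_year_with_deposits - building_cost

print(net_gain_from_signalling(honest_years_in_town))  #  100 -> worth building
print(net_gain_from_signalling(crook_years_in_town))   #  -90 -> not worth building

# Because the building only pays off over a long horizon, only the honest banker
# builds it - so the signal credibly separates honest bankers from crooks.
```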

Anyway, back to fintech firms and modern banks. Fintech firms are new. They haven't had time to develop trust with depositors, or a reputation for being safe, to the extent that conventional banks have. Depositors can't easily tell which fintech firms are honest, and which are crooks.

How can fintech firms signal to depositors (or savers) that they are honest? Fintech firms don't have big stone buildings. And basically, anything that an honest fintech firm does to try and differentiate itself from a crook can be easily copied by the crooks. Maybe it's not the banks who should be worried about the fintech firms - it's the depositors who should be worried!

However, maybe there is one way for a fintech firm to signal they are honest, and it is fairly ironic. Being owned by a bank is costly for a fintech firm, as Kahn and Devereaux note:
And while most fintechs are turning losses, they have one big thing going for them: they don’t have outmoded technology weighing them down. Many major banks would need to spend billions of dollars just to bring their IT systems into the 21st century. Even Santander’s Parthenon back-end software platform is increasingly antiquated even though it is newer than what a lot of the other European banks use.
“While they can copy our features, they cannot copy our cost base,” Starling Bank said in a statement. “They have to contend with legacy technology, not to mention the massive costs of maintaining a branch network and the slowness to action that is inevitable with large bureaucracies.”
Only an honest fintech firm would be willing to face the costs of having a bank on board. And, banks would only want to associate with honest fintech firms (or at least, we can hope that's the case!). A crooked fintech firm isn't going to want to face the costs of associating with a bank. So, maybe being owned by a bank is an effective signal to depositors that they can trust a fintech firm? This key point is missing from the conclusion to Kahn and Devereaux's article:
For now, though, [conventional banks'] giant budgets will loom large over the fintech industry—especially if digital banks fail to win over deposits in the next few years.
Fintechs will need to build “a truly different customer journey” to capture significant market share, said James Lloyd, the Asia Pacific financial technology lead for consulting firm Ernst & Young LLP. “I don’t think it will be sufficient to just have another bank product in a digital format, offering a slightly better customer experience.”
Part of the customer experience is finding some way to signal to customers that they can trust you. In an era where Bernie Madoff and the Global Financial Crisis are still casting a long shadow, trust in finance firms is more important than ever.

[HT: New Zealand Herald]

Saturday, 18 May 2019

Jeffrey Clemens on the disemployment effects of the minimum wage

The debate among economists over whether minimum wages reduce employment has been ongoing ever since David Card and the late Alan Krueger published this paper in 1994, if not longer. My reading of the evidence, and especially the recent evidence out of Seattle and Denmark (see my posts here and here and here for more), is that higher minimum wages do reduce employment.

Jeffrey Clemens has a new article on the Cato Institute website that I think does a great job of summarising the literature (including those articles I blogged about), and is well worth reading. Here is part of the introduction:
This policy analysis discusses four ways in which the case for large minimum wage increases is either mistaken or overstated.
First, the new conventional wisdom misreads the totality of recent evidence for the negative effects of minimum wages. Several strands of research arrive regularly at the conclusion that high minimum wages reduce opportunities for disadvantaged individuals.
Second, the theoretical basis for minimum wage advocates’ claims is far more limited than they seem to realize. Advocates offer rationales for why current wage rates might be suppressed relative to their competitive market values. These arguments are reasonable to a point, but they are a weak basis for making claims about the effects of large minimum wage increases.
Third, economists’ empirical methods have blind spots. Notably, firms’ responses to minimum wage changes can occur in nuanced ways. I discuss why economists’ methods will predictably fail to capture firms’ responses in their totality.
Finally, the details of employees’ schedules, perks, fringe benefits, and the organization of the workplace are central to firms’ management of both their costs and productivity. Yet data on many aspects of workers’ relationships with their employers are incomplete, if not entirely lacking. Consequently, empirical evidence will tend to understate the minimum wage’s negative effects and overstate its benefits.
Do read the whole article, if you are interested in what the (especially latest) research on the minimum wage has to say.

[HT: Marginal Revolution]

Wednesday, 15 May 2019

Handgun purchase delays and suicide

Some statistics really make you sit up and take notice. For instance, in the U.S., there are about 60 firearms-related suicides every day (you can find the data at the CDC website here). That's nearly twice the number of people who die by firearms-related homicide, and more than half of all suicides. Clearly, policy should be trying to address this issue.

Waikato's Economics Discussion Group recently discussed this article by Griffin Edwards (University of Alabama at Birmingham), Erik Nesson (Ball State University), Joshua Robinson (University of Alabama at Birmingham) and Fredrick Vars (University of Alabama), published in the Economic Journal (ungated version here). In the paper, Edwards et al. looked at the effect of mandatory handgun purchase delays on firearms-related suicides and homicides in the U.S.

With a handgun purchase delay, you can't simply rock up to a gun store and walk away with a pistol - you have a stand-down period, after which you can return to pick up your weapon. The theoretical argument here is that this should reduce firearms-related suicides, because it provides for a cooling-off period, during which the potential victim has an opportunity to change their mind (about inflicting harm on themselves), or others may have an opportunity to intervene. You wouldn't expect such a cooling-off effect in the case of firearms-related homicides.

Using state-level data on whether handgun purchase delay policies are in place, Edwards et al. find that:
...any mandatory purchase delay reduces firearm-related suicides by between 2% and 5%, and we find no statistically significant substitution towards non-firearm suicides. Additionally, mandatory purchase delays are not statistically significantly related to homicides.
More or less, that is what you would expect from theory. Of course, this policy has no effect on people who already own a gun. If you're thinking about this sort of policy to reduce suicide, you would go into it knowing that it would only be effective for preventing suicides where the potential victim has to first purchase a gun. And, it won't stop them from trying some alternative means (although Edwards et al. do show a negative, but not statistically significant, effect on all suicides). The paper made me wonder about whether this sort of policy would be effective in a country like New Zealand. Of interest on that point, Edwards et al. also find that the:
...effect seems to be largest in states with relatively few firearms and that the effect dissipates as firearm prevalence increases.
As I noted in a post in March, gun ownership in New Zealand (of all types, not just handguns) is much lower than in the U.S. So, maybe that gives some cause for optimism for a policy like this in New Zealand? However, I'd argue that we already have a mechanism in place that delays firearms purchases. People wanting to buy a gun for the first time need a firearms licence, and the licence granting process has an in-built delay while reference checks are made, etc. So, there is already a cooling-off period for anyone who might be thinking of self-harm but first needs to buy a gun.

Not all gun-restricting policies are necessary.



Monday, 13 May 2019

When you ban plastic bags from supermarkets...

... people will take the supermarket baskets instead. As the New Zealand Herald reported a couple of weeks ago:
Since the move away from having plastic bags in their stores, some Countdown supermarkets have had the odd shopping basket go walkabout.
For the most part, Countdown says Kiwis remember to bring their own shopping bags and use them to carry their groceries.
However, one customer found out the hard way yesterday afternoon that some people clearly haven't got used to bringing their own bags.
When the customer walked into Countdown Lynfield in Auckland he could not find any of their distinctive green shopping baskets.
A staff member told him 175 baskets had been stolen.
Why steal a supermarket basket? Basket thieves could be rational, and weigh up the costs and benefits of their actions. The options for a would-be basket thief are to take the basket, or to buy a reusable bag, or to do neither and find some other way of transporting their groceries home. Assume the benefits of all three methods of transporting groceries are the same (after all, the outcome is that the groceries end up at home, either way). The difference is in the costs, and the rational basket thief will choose whichever option has the lowest cost.

Reusable bags have a monetary cost (at Countdown they start from $1 each). Not using a bag or a basket comes with a cost in the form of the awkwardness of transporting the groceries (balancing multiple items in your arms), or having them rolling around loose in the back of the car, etc. Taking a basket comes with a moral cost (the thief feels bad about taking the basket) and a social cost (other people will think badly of the basket thief if they find out).

How this calculus plays out will be different for different individuals. Those who feel moral and social costs greatly will buy the bags (or do without a bag or basket), while those who care less about moral concerns and how others think about them will take a basket.
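As a quick sketch of that calculus (with entirely made-up costs), the choice is simply whichever option is cheapest once the moral and social costs are counted:

```python
# Made-up costs, to illustrate the would-be basket thief's choice.
# The benefits of all three options are assumed equal, so the decision
# comes down to whichever option is cheapest.
def cheapest_option(moral_cost, social_cost, bag_price=1.0, awkwardness_cost=2.0):
    costs = {
        "take a basket": moral_cost + social_cost,
        "buy a reusable bag": bag_price,
        "carry the groceries loose": awkwardness_cost,
    }
    return min(costs, key=costs.get)

print(cheapest_option(moral_cost=3.0, social_cost=2.0))  # buy a reusable bag
print(cheapest_option(moral_cost=0.2, social_cost=0.3))  # take a basket
# Raising the cost of theft (beeping doors, penalties, shaming) pushes more
# people into the first case.
```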

How should the supermarkets respond? Increase the costs to would-be basket thieves. Make the security doors beep loudly when a basket is taken out, drawing attention to them. Have penalties for stolen baskets. Shame thieves by posting CCTV photos of them and their ill-gotten baskets. All of these options increase the costs of stealing a basket. They won't deter the most brazen basket thieves, but basket theft will fall. When the costs of doing something increase, we tend to do less of it (including stealing supermarket baskets).

Friday, 10 May 2019

Evidence that the supply of methamphetamine is increasing

Consider the market for methamphetamine. If you are targeting police resources at the suppliers of methamphetamine, you would expect to see an increase in the street price of methamphetamine. This is because the costs of supplying have increased (once you factor in the higher costs associated with the greater risk of being caught, higher penalties, or more effort spent by sellers to try and avoid detection by the police). This is illustrated in the diagram below. The market is initially in equilibrium with the price P0, and Q0 methamphetamine is traded (and consumed). The supply decreases from S0 to S1, and so the equilibrium price increases from P0 to P1, and the quantity of methamphetamine traded (and consumed) falls from Q0 to Q1.

An increase in policing causes a decrease in the supply of methamphetamine, and an increase in the street price. So, if we observed an increase in the street price of methamphetamine, could we safely conclude that enforcement efforts are successful, as was claimed in 2011? No, because a decrease in supply is not the only possible cause for an increase in price. Consider the market diagram below. The market is initially in equilibrium with the price P0, and Q0 methamphetamine is traded (and consumed). The demand increases from D0 to D2, and so the equilibrium price increases from P0 to P2, and the quantity of methamphetamine traded (and consumed) increases from Q0 to Q2.

So, you can see that we would observe an increase in the street price if supply decreases, or if demand increases (or indeed if both of those things happened at the same time). However, in only one of those situations does the consumption of methamphetamine decrease, and that is what you probably wanted to know. Unfortunately, back in 2011 the data on consumption wasn't so good. As the article linked above notes, the number of border seizures increased. However, that doesn't by itself suggest that quantity consumed has decreased, because perhaps there was more getting through without being detected as well.
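A small sketch of the two stories, using made-up linear demand and supply curves, shows why the price alone can't distinguish them - you need to know what happened to quantity as well:

```python
# Made-up linear curves, just to illustrate why price alone can't tell the
# two stories apart: the quantity moves in opposite directions.
# Demand:  Qd = a - b*P      Supply:  Qs = c + d*P

def equilibrium(a, b, c, d):
    price = (a - c) / (b + d)
    quantity = a - b * price
    return round(price, 2), round(quantity, 2)

print("Initial:          ", equilibrium(a=100, b=1, c=20, d=1))  # P0, Q0
print("Supply decreases: ", equilibrium(a=100, b=1, c=0,  d=1))  # price up, quantity DOWN
print("Demand increases: ", equilibrium(a=140, b=1, c=20, d=1))  # price up, quantity UP
```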

Fortunately, now we can start to get at an answer to what is going on in the market for methamphetamine. As the New Zealand Herald reported a couple of weeks ago, there are new data available:
New Zealanders spend nearly $1.4 million cash on methamphetamine every single day, according to police analysis of three months of drug testing of wastewater.
Described by scientists as "one large urine test", the wastewater testing started with three sites in 2016 - Whangarei, Auckland's North Shore and Christchurch - but was rolled out nationwide last November.
The ESR testing at 38 sites now captures 80 per cent of the population and officials hope it will paint a clearer picture of New Zealand's drug habits.
An average of 16kg of methamphetamine has been consumed each week in November, December and January according to the preliminary results released today.
Yes, you read that right. Toilet water is being tested for drugs, and has to be more accurate than survey-based data (since people may not answer truthfully). What does this new data say about consumption changes over time? The article notes that:
Wastewater testing shows methamphetamine consumption has increased since 2016, said Detective Sergeant Daniel Lyons from the National Drug Intelligence Bureau, a joint team with Customs and the Ministry of Health.
So, the quantity of methamphetamine consumed (and traded) has increased over time. As Eric Crampton notes, the price of methamphetamine has decreased slightly since 2008 (and is lower than the price quoted in the Voxy article as well). If we extrapolate and say that the increase in quantity dates back to 2010, then an increase in quantity and a decrease in price is consistent with an increase in supply, not a decrease in supply (or at least, an increase in supply that is larger than any change in demand). Essentially, this is the opposite of the first diagram from earlier in this post.

Is that realistic? In a different post, Eric Crampton notes:
...imagine that the police just kinda gave up on meth. They stopped reporting on progress on meth back in 2015, when it was looking pretty obvious that the drugs had won the drug war. If they gave up, then it would be cheaper to cook meth from pseudoephedrine now than it was in 2008, so that product could be delivered at a lower price point. Alternatively, if there have been tech developments in small-batch cooking that have radically lowered the cost of production in that sector since 2008, then 2008 prices may not be the best guide.
Both of those situations (less policing, and lower costs of production) are consistent with an increase in supply.

Wednesday, 8 May 2019

Why study economics? Uber edition...

I've written a large (and growing) number of posts about opportunities for economics graduates in tech companies (see the list at the end of this post). But what do those graduates do for the tech firms? This PBS NewsHour video explains what economists do at Uber:


Like that video, most of the discussion you see online is about jobs for economics PhD graduates. But in my experience there's plenty of opportunity for students with an economics undergraduate major. Also, there's plenty of value for students who are not doing an economics major (or minor) to pick up some useful skills by taking one or more economics papers. Employers value highly the types of skills that economics teaches, including the ability to ask critical questions, to work with data, and to understand human behaviour.

[HT: Marginal Revolution]


Sunday, 5 May 2019

Book review: Dollars and Sex

I just finished reading Marina Adshade's 2013 book, Dollars and Sex. The subtitle is "How Economics Influences Sex and Love", and Adshade essentially summarises a large number of research papers that use economic theory to investigate topics related to sex and love. The genesis of the book, apparently, was Adshade's teaching of a course on "the economics of sex and love". Probably we need more university courses like that!

The book is essentially an interesting collection of stories and research summaries (including, for instance, research papers that I have previously blogged about here and here and here). In reading the book, you'll learn why college students have less sex (on average) than non-college students of the same age (which seems hard to believe if you're a university student, but is supported by evidence!). You'll also find out why Bill Gates doesn't have a harem, despite having enough wealth and income to support many wives. And about the economics of infidelity (which references research by Bruce Elmslie, who was a visitor at Waikato some years back), where Adshade writes:
Infidelity is an economic story, but not for the reason you might have expected - that wealthy men are the most likely to be unfaithful to their wives - but because the decision to have, or not have, extramarital sex is the solution to a cost-benefit problem. The costs in this story are a function of several economic factors, including lost income in the case of divorce, while the benefits are, for the most part, biological.
I found the book to be a good read, but there were some surprising omissions and missed opportunities. For instance, in the section on marital infidelity, there was no mention of game theory. Even though Adshade talks about bargaining power within the marriage in several places (it is a recurrent theme in the book), it seems to me that an appropriate framework would include some consideration of the strategic interactions between partners (i.e. game theory).

Also, in the section on online dating, there is no mention of adverse selection. Perhaps Adshade is taking a similar view to Paul Oyer in his book Everything I Ever Needed to Know about Economics I Learned from Online Dating (which I reviewed here), but it is difficult to tell. In my (brief) experience with online dating, adverse selection was a serious problem (and if you want to know more about adverse selection in online dating, read this post from 2015).

Adshade obviously takes a broad view of what economics can help us to understand (as do I!). However, at one point she notes that:
...economic inquiry has its limits, and explaining religious doctrine is not a bad place to draw the line.
The counterargument to that is, of course, that explaining religious doctrine using economics is exactly one of the things that Peter Leeson does (see my review of his book, WTF?! An Economic Tour of the Weird). Despite those issues, this is still a good book, and an interesting read for those looking for slightly 'off-beat' applications of economics (of whom, I am one). On that basis, it is a recommended read.

Thursday, 2 May 2019

Waikato is #1 in PBRF for economics

The government's latest Performance Based Research Fund results are out (you can find them here). This is the research assessment exercise that all universities go through every six years or so, which gives a ranking, by discipline, in terms of research performance. Every researcher receives a ranking (A, B, C(NE), C, or R), where an A is a world-class researcher, and an R is research inactive (NE stands for New and Emerging - basically, researchers who are newly-minted PhDs).

Here's a summary of the results for the universities (proportionally, and excluding R grades, which are not reported):


The darker blue parts of the bars represent higher PBRF grades. As you can see, in terms of the proportion of A-ranked researchers, Waikato is top (19.2% of researchers at Waikato are ranked A), and Otago is second (18.2%). In the proportion of A-ranked and B-ranked researchers, Waikato is also top (83.4% of researchers at Waikato are ranked A or B), and daylight is second (or Otago is second, if you prefer, at 71.9%).

In raw numbers, Waikato was second only to Otago in terms of the absolute number of A-ranked researchers (2.5 full-time equivalent A-ranked researchers at Waikato, vs. 3 at Otago). In terms of the number of A-ranked and B-ranked researchers combined, Waikato was third (11.02 FTE, behind Auckland with 13, and Otago with 12). But you have to remember that Waikato has a much smaller number of economists than either Auckland or Otago.

The take-away message is simple: Right now, you have a much higher chance of regularly interacting with top economics researchers by studying at Waikato than at any other university in New Zealand.

[Update]: Eric Crampton at Offsetting Behaviour has more on this topic.