Wednesday, 30 November 2022

Book review: Rockonomics

When Alan Krueger passed away in 2019, the economics profession lost one of its top labour economists. Krueger was known for his famous research with David Card on the minimum wage, as well as work on the economics of education and on inequality. However, he was also an active researcher on the economics of popular music (see this earlier post), and much of his work is collected in the book Rockonomics (published in 2019). Krueger explains his interest in this topic on page 2 of the book, where he writes that:

...music is all about telling stories.

Economics is also about telling stories...

I like the representation of economics as telling stories, and that is how I explain the art of answering exam questions to students - it is like telling a story. Krueger's book tells an interesting story as well, of the economics of the music industry. It is full of insightful explanations, such as this point on why there has been increasing collaboration in music over time:

"Despacito," the most streamed song in 2017, is a good example: it is by Luis Fonsi and Daddy Yankee and features Justin Bieber. If you listen carefully to songs that feature other singers, you will notice that the star normally appears early in the song, within the first thirty seconds. This is logical because streaming services only pay royalties for music that is streamed for at least thirty seconds. In other words, economic incentives of streaming are directly affecting the way songs are written, composed, and performed.

Or this on price discrimination in ticket prices:

Charging a higher price for a seat that is closer to the stage, just like an airline charging a higher price for a first-class seat, is a natural way to price-discriminate and extract the greatest revenue from concert attendees. When the prices vary across good and bad seats, fans self-sort into price tiers (or, equivalently, sections of the venue) based on their willingness to pay.

Krueger and I will have to agree to disagree on whether higher prices for first-class airline tickets are price discrimination (because first-class seats cost more to provide, the difference in price is not purely explained by differences in consumers' willingness to pay). Krueger also explains some things about the music industry that have come in for criticism, such as:

The infamous Ticketmaster service fee, which seems out of proportion to the service actually provided, is a way to channel revenue back to venues or promoters, and indirectly to artists, for tickets that are underpriced... Part of Ticketmaster's business model is to act as a heat shield to protect artists from the reputational fallout from charging a higher price.

Clearly, there is more to the economics of the music industry than meets the eye. The effect of music streaming comes in for particular attention. Krueger clearly explains the economics here (see the quote above as an example), and debunks three common misconceptions about the economics of streaming, showing that: (1) streaming is not a zero-sum game, where one additional stream for Artist A reduces income for Artist B; (2) the amount an artist earns per stream is not a meaningful measure of the contribution of the streaming service to artists' incomes, because it is based on a share of the streaming service's total revenue; and (3) there is no simple way of converting the number of streams into 'album equivalents' to measure the overall popularity of artists.

Finally, it is not often that a book will convince me to change my behaviour. However, Krueger convincingly points out that artists earn more from streaming services when consumers are paid subscribers rather than using the ad-supported version. That convinced me to switch my Spotify premium subscription (which I allowed to lapse earlier this year) back on.

Overall, this is a really interesting book, both for those with an interest in the music industry, and for those with an interest in economics. For readers in the intersection between those two groups, this book is a must-read.

Friday, 25 November 2022

The economists' fantasy league

RePEc (Research Papers in Economics) has a new feature: The IDEAS Fantasy League:

The IDEAS fantasy league allows you to pretend you are at the helm of an economics department. Your goal is to improve its ranking relative to other departments in the league. You can do this by trading economists and by choosing which ones to activate in your roster.

Yes, it is fantasy football, but played with economists. I gave up on fantasy football some years ago, after I was spending way too much time on my roster. But I couldn't resist setting up a team of economists. I even drafted quite well, getting Guido Imbens (who ranks #23 in the world on RePEc's 10-year ranking, which is what is used for determining the fantasy league standings) and Justin Wolfers (he's only ranked in the top 8% of economists, but as one of my favourite economists, I still think it is cool that he is in my roster).

On the transfer market, I notice that someone has already put up James Heckman for auction. Could be a steal for someone willing to bid enough to get him (Heckman is ranked #55 overall).

Sadly, the league is only open to economists who have a RePEc profile. Some of the rules are kind of amusing, like not being able to short an economist, and not being able to have yourself on your roster. However, I do wonder whether there are incentives to start citing the economists on your roster, or co-authoring with them, in order to boost their ranking.

It will be interesting to see how it plays out. I'll update you on my roster's progress.

[Update: Apparently, this is not a new feature - it's been around since about 2013, and I've only just become aware of it!]

[HT: Marginal Revolution]

Wednesday, 23 November 2022

How students respond to academic probation

For a wide variety of reasons, some first-year university students fail to perform to the expected standard. Many universities deal with these students by placing them on some form of academic probation - for example, the student on probation might be required to pass a certain proportion of their papers successfully, or they will be denied re-entry into their programme of study. The purpose of academic probation is to provide an incentive for students to increase their study effort, or to deal with whatever other issues are getting in the way of their academic success.

How do students respond to being placed on academic probation? That is the research question addressed in this 2018 article by Marcus Casey, Jeffrey Cline, Ben Ost, and Javaeria Qureshi (all University of Illinois at Chicago), published in the journal Economic Inquiry (ungated earlier version here). In regard to the US, they note that:

...most universities maintain standards that are sufficiently high that nearly 25% of U.S. undergraduates will be placed on academic probation at some point during their tenure...

That strikes me as fairly high, relative to my experience here in New Zealand. Perhaps we are a little more permissive with re-entry than the US? Moving on, Casey et al. use data from a large (about 17,000 students) urban public university, with a sample comprising:

...nine cohorts of freshmen undergraduates who entered the university during the fall semester between 2004 and 2013...

In total, Casey et al. observe outcomes for over 29,000 students. Students at the university are placed on academic probation at the end of any term where their GPA falls below 2.0 (on the typical US grade scale). To be removed from probation, a student needs to raise their cumulative GPA above 2.0.

Casey et al. use a regression discontinuity research design, where they compare the outcomes of students just below the cutoff for academic probation, with students just above the cutoff. To avoid issues related to students with a GPA of exactly 2.0 (those students turn out to be meaningfully different from students on either side of the boundary), they exclude students at exactly 2.0 (in what is termed a 'donut' strategy). The outcome variables they investigate include four-year and six-year graduation rates, as well as a range of student course-taking behaviours in future terms.
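To make the research design concrete, here is a minimal sketch of a 'donut' regression discontinuity in Python, using simulated data. The GPA distribution, bandwidth, and effect size are all made up for illustration, and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated student records: first-term GPA and a later outcome
# (say, the probability-like score of graduating within six years)
gpa = np.round(rng.uniform(1.0, 3.0, 5000), 2)
on_probation = gpa < 2.0  # probation if first-term GPA falls below 2.0
# True effect of probation on the outcome is set to -0.05 in this simulation
outcome = 0.3 + 0.2 * gpa - 0.05 * on_probation + rng.normal(0, 0.1, 5000)

def donut_rd(gpa, outcome, cutoff=2.0, bandwidth=0.5):
    """Donut RD: drop observations exactly at the cutoff, then fit
    separate local linear regressions on each side of the cutoff and
    compare the fitted values at the boundary."""
    keep = (np.abs(gpa - cutoff) <= bandwidth) & (gpa != cutoff)  # the 'donut'
    below = keep & (gpa < cutoff)   # just below the cutoff: on probation
    above = keep & (gpa > cutoff)   # just above the cutoff: not on probation
    fit_below = np.polynomial.polynomial.polyfit(gpa[below], outcome[below], 1)
    fit_above = np.polynomial.polynomial.polyfit(gpa[above], outcome[above], 1)
    predict = lambda coefs: coefs[0] + coefs[1] * cutoff
    # Treatment effect = jump in the fitted outcome at the cutoff
    return predict(fit_below) - predict(fit_above)

effect = donut_rd(gpa, outcome)
print(f"Estimated effect of probation on the outcome: {effect:.3f}")
```

The 'donut' is the exclusion of students exactly at the 2.0 cutoff; the estimated effect is the jump in the fitted outcome at the boundary, which should recover something close to the simulated -0.05.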

Before we get to their results, it is worth thinking for a moment about what a student might do when placed on academic probation. If the student is keen on remaining a student, they will want to get off probation, raising their cumulative GPA back above 2.0. In theory, they could achieve this by working harder, spending less time partying or working for income. Alternatively, they could reduce their course workload, giving them more time to devote to their remaining courses. Or, they could choose their courses strategically, ensuring that they are taking more 'easy' courses that offer higher grades, or dropping out of courses where it becomes clear that they might 'achieve' a failing grade. It is these sorts of strategic course-taking behaviours that Casey et al. are looking for in their data. And that is what they find, with some added wrinkles:

Our main finding is that probation causes students to engage in a variety of strategic behaviors that help to increase GPA without increased effort. This strategic behavior, however, does not appear for all groups. In particular, underrepresented minorities - black and Hispanic students - show relatively little evidence of strategic behavior whereas probation causes non-minorities to attempt fewer credits, fewer higher level courses, and substantially increases the probability of withdrawing from a course... Course withdrawal is a particularly important dimension of strategic behavior because it allows students to avoid very low grades that would drag down their GPA substantially.

The heterogeneity here is important. Underrepresented minorities were less likely to undertake the strategic behaviours that would result in their coming out of academic probation than other students. It would have been interesting to see if the results were similar when comparing students who are first-in-family at university with other students. Casey et al. suggest that underrepresented minorities may be "less aware of institutional policies and supports". Students who have parents (or siblings) who have previously been to university have greater 'social capital', since their parents (or siblings) can advise them on how to negotiate their way through university successfully. A similar lack of social capital exists for underrepresented minorities (almost by definition, given that they are underrepresented). On the surface, the difference in strategic behaviours between underrepresented minorities and other students in response to academic probation might suggest that academic probation contributes to inequality in graduation outcomes between underrepresented minorities and others. However, that assumes that academic probation actually makes a difference to students' graduation rates, and yet Casey et al. find that:

Probation has little impact on 4- and 6-year graduation rates for any group, suggesting that students who drop out or are expelled as a result of academic probation may have eventually dropped out anyway.

Yikes. Is academic probation really just an exercise in institutional virtue signalling, with no real impact on student graduation outcomes at all? Fortunately, this is just one result from a single (albeit large and public) university in the US. However, it does suggest that we need to look closer at what sorts of activities are associated with academic probation. In the university that Casey et al. looked at:

Students placed on academic probation are also required to have an additional meeting with the academic advising office in the following term.

That seems like a minimalist approach to probation, and unlikely to really change any of the underlying drivers that resulted in the student being put on academic probation in the first place. Other universities take a much more holistic approach to ensuring student success. It would be interesting to repeat this study in a university with a more student-centred approach to academic probation. At such an institution, students would probably still respond to the academic probation incentive in strategic ways. However, if a more student-centred academic probation does a better job of addressing the underlying drivers of academic under-performance, students might also persist beyond their academic probation period, and the graduation outcomes may well be different in those cases.

Tuesday, 22 November 2022

Taxes are fungible, so you can't really direct where your individual taxes are spent

Last month, there was an article in The Conversation by Jean-Paul Gagnon (University of Canberra) and co-authors, proposing a way of 'democratising taxation'. I meant to comment on it at the time, but was busy with teaching and assessment, and have only come back to it now. Anyway, here's what Gagnon et al. proposed:

Most of us accept tax, if grudgingly. But many aren’t happy with how it is spent.

Enter TaxTrack – our hypothetical proposal for democratising taxation, details of which are to be published in the Australasian Parliamentary Review.

Our idea is that Australians who want a greater say in where their taxes go could be given a TaxTrack number, which would trace those dollars and direct them only to places they wanted them to go.

If they wanted, they could view the invoices their contributions had helped pay, and they could specify which invoices their contributions should not pay – perhaps by prohibiting the spending of their money on things such as military ammunition, or specifying that a certain proportion was directed to healthcare.

Governments would have to work with those instructions, cutting spending in areas that lacked support and boosting it in areas for which there was overwhelming support.

On the surface, that proposal sounds interesting, perhaps attractive. However, there is a fundamental problem with the proposal, which means that it simply won't achieve what Gagnon et al. propose. That problem is that tax payments are fungible. That means that a dollar of tax paid by any taxpayer is exactly the same as a dollar of tax paid by any other taxpayer. Even if the government earmarked some individual taxpayers' tax payment for a particular purpose (or purposes), the government could simply reallocate other taxpayers' tax payments to the remaining purposes, leaving the overall effect unchanged.

A numerical example may suffice. Say that there are four taxpayers: (1) Taxpayer A pays $10,000 in tax; (2) Taxpayer B pays $20,000 in tax; (3) Taxpayer C pays $30,000 in tax; and (4) Taxpayer D pays $40,000 in tax. The total tax received by the government is $100,000, which goes into a big pool that the government uses to finance its spending. And say that there are two ways for the government to spend those tax receipts: (1) Social services (like health, education, and welfare), which receives a budget allocation of $50,000; and (2) Administration costs, which receives a budget allocation of $50,000. Total government spending is $100,000, and the budget is balanced (tax receipts equal government spending). Since all the spending is paid out of the pool, it doesn't matter which taxpayer contributed to exactly which spending.

Now, say that Taxpayer B decides that they don't want any of their tax payment to go towards administration costs. They only want to fund social services with their $20,000 tax payment. This is what Gagnon et al. are proposing. What does the government do? They make sure that Taxpayer B's $20,000 goes towards social services. They then take the remaining $80,000 of tax receipts, and allocate $30,000 to social services, and $50,000 to administration costs. The overall effect is no change in spending allocation, because total spending on each category of spending is the same as before. The only difference is that Taxpayer B can think to themselves, 'at least my tax payment isn't funding those worthless administration costs'. There is no other effect at all.
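The reallocation logic can be sketched in a few lines of Python. This is a toy illustration of the fungibility point, using the numbers from the example:

```python
# Taxpayers and the (balanced) budget from the example
taxes = {"A": 10_000, "B": 20_000, "C": 30_000, "D": 40_000}
budget = {"social_services": 50_000, "administration": 50_000}

def allocate(taxes, budget, earmarks):
    """Honour each earmark ('this taxpayer's dollars go only to area X'),
    then top up every budget line from the unrestricted pool."""
    spending = {area: 0 for area in budget}
    pool = sum(taxes.values())
    for taxpayer, area in earmarks.items():
        spending[area] += taxes[taxpayer]  # earmarked dollars go where directed
        pool -= taxes[taxpayer]
    for area, target in budget.items():
        top_up = target - spending[area]   # fill the gap from the pool
        spending[area] += top_up
        pool -= top_up                     # pool ends at zero if budget balances
    return spending

# Taxpayer B earmarks their $20,000 for social services only
result = allocate(taxes, budget, {"B": "social_services"})
print(result)
```

Spending on each area comes out unchanged, regardless of which taxpayer's dollars are earmarked, so long as the unrestricted pool is large enough to top up every budget line.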

However, TaxTrack could have an impact on government spending, if a large proportion of taxpayers sign up for it. Continuing with the example above, if both Taxpayer B and Taxpayer D decide that they only want their tax payments to go to social services, then the government would have to direct $60,000 towards social services. What does the government do in that case? Do they cut back on administration costs, spending only $40,000 in that area? Or do they continue to spend the budgeted $50,000 on administration costs, and run a deficit? TaxTrack doesn't provide an answer to that question.

And it should be a real concern. If every taxpayer wants their tax to be spent on sexy causes like climate change mitigation, or saving endangered frogs, or pre-school education, or large subsidies for tourism operators, how will the government fund courts, or police, or parliamentary services, or all of the other unsexy but necessary things that governments have to spend taxes on? They either have to increase the deficit, or cut that spending. Neither is likely to be an optimal solution.

Fortunately, I think TaxTrack would have the opposite problem. Instead of forcing the government into suboptimal allocations of spending, I think that not enough people would sign up for it for TaxTrack to make any difference at all to what the government does. I mean, how many people even engage in effective budgeting for their own household, let alone would be willing to allocate their tax dollars to various specific funding streams?

If taxpayers want to hold the government to account for its spending plans, the way to do that is through the political process. Adding another bureaucratic tool that would serve no purpose other than making a small proportion of taxpayers feel better about where their tax dollars were being spent is not going to achieve that. Arguably, it might even make taxpayers less likely to want to hold the government to account. Currently, if a taxpayer doesn't like the allocation of government spending, they can complain, or try to vote out the incumbent politicians at the next election. But, if the taxpayer feels like their tax dollars are being used in the way they intended, they are less likely to apply the same level of accountability to the government's overall spending allocation.

TaxTrack should be a non-starter for Australia. Thankfully, it hasn't been proposed for use in New Zealand.

Monday, 21 November 2022

An important note of caution on meta-analysis

I've written a number of posts that mention meta-analysis (most recently here), where the coefficient estimates from many studies are combined quantitatively to arrive at an overall measure of the relationship between variables. Meta-analysis has the advantage that, while any one study can give an incorrect impression of the relationship simply by chance, that problem is much less likely when you look at a whole lot of studies. At least, that problem is much less likely when there is no publication bias (which occurs when studies that show statistically significant effects, often in an expected direction, are much more likely to be published than studies that show statistically insignificant effects). Fortunately, there are methods for identifying and correcting for publication bias in meta-analyses.
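As a concrete illustration of one such method, here is a minimal sketch of an Egger-style funnel asymmetry test in Python. The effect sizes and standard errors are entirely hypothetical:

```python
import numpy as np

# Hypothetical study estimates and their standard errors
effects = np.array([0.42, 0.35, 0.51, 0.28, 0.45, 0.60, 0.33, 0.48])
ses = np.array([0.10, 0.08, 0.15, 0.05, 0.12, 0.20, 0.07, 0.14])

# Egger-style test: regress each study's standardised effect (effect / SE)
# on its precision (1 / SE). Without publication bias, the intercept should
# be close to zero; an intercept far from zero suggests funnel asymmetry,
# a common symptom of publication bias.
z = effects / ses
precision = 1.0 / ses
intercept, slope = np.polynomial.polynomial.polyfit(precision, z, 1)
print(round(intercept, 2), round(slope, 2))
```

The slope approximates the bias-adjusted effect size, while the intercept captures the tendency of small (imprecise) studies to report disproportionately large standardised effects.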

However, a recent post on the DataColada blog by Uri Simonsohn, Leif Nelson and Joe Simmons (the first of a series of posts), raises what appear to be two fundamental issues with meta-analysis:

Meta-analysis has many problems... But in this series, we will focus our attention on only two of the many problems: (1) lack of quality control, and (2) the averaging of incommensurable results.

In relation to the first problem:

Some studies in the scientific literature have clean designs and provide valid tests of the meta-analytic hypothesis. Many studies, however, do not, as they suffer from confounds, demand effects, invalid statistical analyses, reporting errors, data fraud, etc. (see, e.g., many papers that you have reviewed). In addition, some studies provide valid tests of a specific hypothesis, but not a valid test of the hypothesis being investigated in the meta-analysis...

When we average valid with invalid studies, the resulting average is invalid.

And in relation to the second problem:

Averaging results from very similar studies – e.g., studies with identical operationalizations of the independent and dependent variables – may yield a meaningful (and more precise) estimate of the effect size of interest. But in some literatures the studies are quite different, with different manipulations, populations, dependent variables and even research questions. What is the meaning of an average effect in such cases? What is being estimated? 

Both of these problems had me quite concerned, not least because I currently have a PhD student working on a meta-analysis of the factors associated with demand for healthcare in developing countries. However, I've been reflecting on this over the last couple of weeks, and I'm feeling a bit more relaxed now.

That's because the first problem is relatively easily addressed. The initial step in a meta-analysis is to identify all of the studies that could be included in the meta-analysis. That might include some invalid studies. However, if as a second step we subject all the identified studies to a quality check, and exclude studies that do not meet a minimum quality standard, I think we probably eliminate most of the problems. Of course, some studies with reporting errors, or outright fraudulent results, might sneak through, but poorly designed studies, which fail on basic statistical or sampling criteria, will be excluded. That is the approach that my PhD student has adopted.

The second problem may not be as bad as Simonsohn et al. suggest. Their example relates to experimental research, where the experimental treatment varies across studies, such that averaging the effect of the treatment makes little sense. However, not all meta-analyses are focused on experimental treatments. Some are combining the results of many observational or quasi-experimental studies, where the variable of interest is much more similar. For instance, looking at the effect of income on health services demand, we need to worry about how income (and health services demand) are measured. However, if we use standardised effects in the meta-analysis (so that all estimates are measuring the effect of a one-standard-deviation change in income on health services demand, measured in standard deviations), then I think we deal with most problems here as well. Again, that is the approach that my PhD student has adopted.
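The conversion to standardised effects can be sketched simply. The numbers below are hypothetical, and the simple sample-size weighting is just for illustration (cruder than the inverse-variance weighting a real meta-analysis would use):

```python
def standardised_effect(b, sd_x, sd_y):
    """Convert a raw regression coefficient into a standardised effect:
    the effect (in standard deviations of the outcome) of a
    one-standard-deviation change in the predictor."""
    return b * sd_x / sd_y

# Hypothetical estimates from three studies that measure income and
# health services demand on quite different scales
studies = [
    {"b": 0.80, "sd_x": 1.2,  "sd_y": 4.0, "n": 500},
    {"b": 0.05, "sd_x": 30.0, "sd_y": 6.0, "n": 800},
    {"b": 1.50, "sd_x": 0.9,  "sd_y": 5.0, "n": 300},
]

effects = [standardised_effect(s["b"], s["sd_x"], s["sd_y"]) for s in studies]
weights = [s["n"] for s in studies]  # simple sample-size weights
pooled = sum(e * w for e, w in zip(effects, weights)) / sum(weights)
print([round(e, 3) for e in effects], round(pooled, 3))
```

Despite the very different raw coefficients, the three standardised effects land on a common scale and can sensibly be averaged.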

None of this is to say that all (or most, or even perhaps many) meta-analyses are bulletproof. It's just that the critiques of Simonsohn et al. may be overplayed. However, it is important to keep these issues in mind. I recommend reading the other posts in the DataColada series on this topic, of which there is just one so far, on meta-analyses of 'nudging', with a promise of more to come.

In the meantime, I think my PhD student can rest a little easier, secure in the knowledge that, so far, their work addresses these critiques. But with more posts to come, I reserve the right to change my mind. Simonsohn et al. are usually quite persuasive, so I'm braced for a stronger case against meta-analysis from them in the rest of the series. On the other hand, I'm hoping it doesn't come to that!

[HT: David McKenzie at Development Impact, among others]

Sunday, 20 November 2022

A 30-year waitlist for Kobe beef croquettes, and counting

The next time you find yourself on the waitlist for some product or service, spare a thought for the Japanese consumers waiting 30 years for Kobe beef croquettes. As reported on CNN last week:

If you order a box of frozen Kobe beef croquettes from Asahiya, a family-run butcher shop in Takasago City in western Japan's Hyogo Prefecture, it'll take another 30 years before you receive your order.

That isn't a typo. Thirty. Years.

Founded in 1926, Asahiya sold meat products from Hyogo prefecture -- Kobe beef included -- for decades before adding beef croquettes to the shelf in the years following WWII.

But it wasn't until the early 2000s that these deep-fried potato and beef dumplings became an internet sensation, resulting in the ridiculously long wait buyers now face.

When there is a shortage of some good or service, we usually expect the price to go up (for example, see here). Not only is that not happening in the case of Asahiya beef croquettes, the low price that leads to the shortage is a purposeful business strategy:

"We sold Extreme Croquettes at the price of JPY270 ($1.8) per piece... The beef in them alone costs about JPY400 ($2.7) per piece," says Nitta.

"We made affordable and tasty croquettes that demonstrate the concept of our shop as a strategy to have customers enjoy the croquettes and then hope that they would buy our Kobe beef after the first try."

To limit the financial loss in the beginning, Asahiya only produced 200 croquettes in their own kitchen next to their shop each week.

So, not only are the croquettes being sold at a price that generates a shortage of them, they are being sold at below the cost of production. There are a number of reasons why firms may sell some of their products at below cost, but generally it is because they are a loss leader - the firm sells that product at a loss, and uses it to generate additional customers who then buy other products, which are more profitable. It appears that is the strategy that Asahiya has adopted:

"We hear that we should hire more people and make croquettes more quickly, but I think there is no shop owner who hires employees and produce more to make more deficit... I feel sorry for having them wait. I do want to make croquettes quickly and send them as soon as possible, but if I do, the shop will go bankrupt."

Fortunately, [the business owner, Shigeru] Nitta says that about half of the people who try the croquettes end up ordering their Kobe beef, so it's a sound marketing strategy.

Is it a sound marketing strategy though? You can generate a lot of buzz about your products without creating such a shortage that your customers are waiting 30 years for their purchase. I mean, even now:

Customers receiving croquettes these days placed their orders about 10 years ago.

Surely, they could reduce the size of the waiting list for croquettes from 30 years back to 10 years, or back to one year, and rapidly increase profits? On the other hand, perhaps that would entail a loss of quality, as the article notes:

The cheap price tag of the Extreme Croquettes flies in the face of the quality of the ingredients. They're made fresh daily with no preservatives. Ingredients include three-year-old female A5-ranked Kobe beef and potatoes sourced from a local ranch.

I guess there is a limit to how much you can ramp up production when you use very specific ingredients. But surely, there is scope for the ranch to increase beef and potato production? While loss leading can be a very profitable strategy, there is just so much scope for increasing profits in this case that I'm not convinced that this is a profit-maximising business strategy at all. It even makes me wonder, how many other profit-maximisation failures are contributing to the decades of underperformance of the Japanese economy?
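For what it's worth, here is a back-of-the-envelope sketch of the loss-leader arithmetic, using the figures from the article. The Kobe beef margin, and the assumption that each weekly batch of croquettes reaches a fresh set of customers, are purely hypothetical:

```python
# Figures from the CNN article
price = 270        # JPY per croquette
beef_cost = 400    # JPY of Kobe beef in each croquette (other costs ignored)
weekly_output = 200

# Direct weekly loss on the croquettes themselves
weekly_loss = (beef_cost - price) * weekly_output

# Hypothetical loss-leader payoff: suppose the 200 weekly croquettes
# reach 200 customers, half of whom later order Kobe beef (the article's
# conversion figure), at an assumed margin per follow-on beef order
conversion_rate = 0.5
beef_margin = 2_000  # JPY per follow-on order, purely an assumption
weekly_payoff = weekly_output * conversion_rate * beef_margin

print(weekly_loss, weekly_payoff)
```

Under these (made-up) margin assumptions, the follow-on Kobe beef sales would comfortably outweigh the direct loss on the croquettes, which is the essence of the loss-leader strategy.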

Friday, 18 November 2022

Audiobooks and supplier lock-in

In my ECONS101 class, we cover customer lock-in - where firms lock their customers into buying from them, because the costs of switching to an alternative supplier are high. We don't usually discuss the opposite effect - supplier lock-in. So, I was interested to read this recent article in The Conversation by Rebecca Giblin (University of Melbourne) and Cory Doctorow (Open University):

Amazon openly admits to doing everything it can to lock in its customers. That’s why Audible encourages book returns: its generous offer only applies to ongoing subscribers. Audible wants the money from monthly subscribers and wants the fact that they are subscribed to prevent them from shopping elsewhere...

Another way Audible locks customers in is by ensuring the books it sells are protected by digital rights management (DRM) which means they are encrypted, and can only be read by software with the decryption key...

Once customers are locked in, suppliers (authors and publishers) are locked in too. It’s incredibly difficult to reach audiobook buyers unless you’re on Audible. When the suppliers are locked in, they can be shaken down for an ever-greater share of what the buyers hand over.

Notice that there is lock-in on both sides of this market. That can be a characteristic of platform markets, of which Audible provides a key example. Amazon (through Audible) provides a platform where creators and readers connect. Creators want their books to be read (and, importantly, purchased), and they know that readers will look on Audible for audiobooks. Readers want to read audiobooks, and they know that creators put their audiobooks on Audible. Everyone wins. Creators don't want to go elsewhere, because they know that the readers are looking on Audible. Readers don't want to go elsewhere, because they know that the audiobooks are on Audible. Amazon's power as a middleman, due to controlling the Audible platform, gives them an immense amount of market power. 

Giblin and Doctorow refer to Amazon as a 'chokepoint':

The problem isn’t with middlemen as such: book shops, record labels, book and music publishers, agents and myriad others provide valuable services that help keep creative wheels turning.

The problem arises when these middlemen grow powerful enough to bend markets into hourglass shapes, with audiences at one end, masses of creators at the other, and themselves operating as a chokepoint in the middle.

Since everyone has to go through them, they’re able to control the terms on which creative goods and services are exchanged - and extract more than their fair share of value.

A platform provider (Amazon) could conceivably charge both sides of the market for access to the platform. In this case, Amazon charges readers for every audiobook they buy (and don't return - for details on this, read Giblin and Doctorow's article). They don't directly charge creators for making their audiobooks available, but nevertheless Amazon can exploit its market power to reduce the share of the profits that goes to creators. And it appears that that is exactly what they have been doing (and in particularly shady ways, such as encouraging readers to return audiobooks for a refund after they have been read).

Giblin and Doctorow have a new book out, Chokepoint Capitalism, which is the point of their article. I'm looking forward to reading it, as they highlight in the article that:

The whole second half is devoted to detailed proposals for widening these chokepoints out – such as transparency rights, among others...

And we need reforms to contract law to level the playing field in negotiations, interoperability rights to prevent lock-in to platforms, copyrights being better secured to creators rather than publishers, and minimum wages for creative work.

It will be interesting to see how they justify those policy proposals, and how workable they may be. You can expect a book review some time in the future (but given the backlog of my reading, it may be a while!). 

Thursday, 17 November 2022

Cultural distance matters more for the spread of democracy than geographical distance

Over the last year, I've been working towards some exciting new projects using measures of cultural distance. So, I was interested to read this new article by Thanos Kyritsis (University of Auckland), Luke Matthews (RAND Corporation), David Welch and Quentin Atkinson (both University of Auckland), published in the journal Evolutionary Human Sciences (open access, with a non-technical summary available on The Conversation). In the article, Kyritsis et al. look at how differences in democracy between countries are related to the cultural and geographical distance between them, and whether changes in democracy over time in one country are related to the level of democracy in other countries that are culturally close and geographically close to the first country.

Kyritsis et al. develop measures of cultural distance based on linguistic distance and religious distance, and use three different measures of democracy: Polity 5 (covering 1800-2018), Vanhanen’s Index of Democracy (covering 1810-2012), and the Freedom in the World index (covering 1972-2020). Overall, their dataset covers 220 years, with 221 modern and historical nations, and 41,638 observations in total. They also distinguish between different waves of democracy:

...an initial ‘slow’ wave beginning in the US and culminating in the emergence of several European democracies at the end of the First World War (1828–1926); a second wave linked to the process of decolonisation following the end of the Second World War (1945–1962); and a third wave comprising a succession of transitions in Western Europe, Latin America, the Pacific, Eastern Europe after the fall of communism, and sub-Saharan Africa (1974 to present)...

In the first part of their analysis, Kyritsis et al. regress the difference in democracy between two countries on the distances between them (linguistic, religious, and geographical). They run this analysis cross-sectionally for each year in their dataset, and find that there were:

...independent effects of all three predictors on each democracy indicator over the time period covered by our data... While the effects of linguistic and religious ancestry were slightly attenuated in this combined model, when present, they remained generally better predictors of all three democracy indicators than geography. Over the last 50 years, linguistic and religious ancestry accounts for up to 12.3% and 17.4% of variance in pairwise differences in democracy, respectively, compared with 1.2% for geographical proximity...

Looking at how the cross-sectional relationships have changed over time, they find that:

...across all three democracy indicators, linguistic ancestry is an increasingly important predictor beginning mid-way through the second wave (circa 1955) and plateauing (or, in the case of Polity 5, declining somewhat) in the third wave from about 1990 to the present. Likewise, religious ancestry becomes an increasingly important predictor of similarity in all three democracy measures from approximately the beginning of the third wave, circa 1975, plateauing and then declining somewhat from circa 2000 to the present... 

The relationships between geographical distance and differences in democracy were not as large as for the two measures of cultural distance, and the relationship was inconsistent over time. Overall, this suggests a stronger effect of cultural distance on democracy than the effect of geographical distance.

Turning to their second analysis, of changes in democracy over time, Kyritsis et al. find that:

The democratic status of nations’ linguistic relatives is the only effect to show a consistently positive trend across all three democratic outcome measures for the duration of the time series. The language ancestry effect is strongest, and statistically significant for most of the third wave of democratisation across all outcome measures. Unsurprisingly, since democratic status tends to persist, most of the variation in nations’ democracy indicators at T2 is explained by their democracy at T1, but linguistic ancestry accounts for a non-trivial component of the remaining variation, explaining up to 17.4% of variation across outcome measures in the third wave. The democratic status of nations’ religious relatives shows no consistent effect until the third wave, when we see a sustained positive trend across all outcome measures, consistent with our cross-sectional analyses, with religious ancestry explaining up to 11.1% of the variation in democratic outcomes during this period. Also in accordance with our cross-sectional analyses, the effects of a nation’s geographical neighbours on its democratic outcomes tend to be positive, although these geographical effects show more variation through time and across outcome measures.

Essentially, the longitudinal analysis results confirm what they found in the cross-sectional analysis over time. Democracy appears to diffuse more readily between cultural 'neighbours' than between geographical neighbours. My future research will build on similar themes, where cultural distance appears to matter more than geographical distance. I look forward to sharing some of that in a future post.

[HT: The Conversation]

Wednesday, 16 November 2022

The consequences of the dungeon master shortage

As I noted in this post earlier this week, when the price of a good or service is below the equilibrium price, then the quantity of the good or service demanded will exceed the quantity of the good or service supplied, and there will be a shortage. But what if the usual price of the good or service is zero? In that case, there is often a shortage. An example I use in my classes is the 'market' for kidneys for human transplant. I put 'market' in inverted commas because in most countries there is no market. You can't buy or sell kidneys for transplant. This is effectively the same as mandating a zero price for kidneys. The result is a predictable shortage of kidneys for transplant, which kills over 40,000 people per year in the US alone. Fortunately, not all goods or services with a zero price have such fatal consequences.

As I noted in the earlier post, when there is a shortage, we should expect the price of the good or service to increase. That is unless, like the 'market' for donor kidneys, the price is mandated to be zero. For some markets that start with a zero price though, a shortage can be the impetus to introduce a new (non-zero) market price. As an example, take the market for dungeon masters, as reported in this article published on Hell Gate last week:

Playing the role of Dungeon Master can be a rewarding job but it is sometimes thankless, and always taxing. D&D can be overwhelming to any new player; this is especially true for a DM, who needs to know all the rules, adjudicate them, create or manage the story, plan logistics for their group, and cater the experience to what each player wants. The amount of effort involved makes it inaccessible for new players and difficult for experienced ones to sustain long-term.

All of which has conspired to make it harder to find people to actually run the spiking number of campaigns. "I think a lot of DMs just want to sit back and let other people run a game," one Dungeon Master on hiatus from running campaigns told me...

The shortage has made it difficult for many players to find games, especially ones that are high quality and in-person. On websites like Lex and Reddit, posts of players in the city looking for DMs outnumber the opposite significantly, with the latter consistently getting more traction. For Hex&Co.'s program alone, there are nine hundred on the email recruiting list to join one of their organized campaigns...

One solution that has emerged to this problem are players paying for a professional DM. In New York City, some of these DMs are functionally gig workers, contracting with a service like Hex&Co.'s where players pay the store $90 per month for four sessions, and the proceeds are split between the store and DM...

Some have managed to make a full career out of organizing bespoke games for a significantly higher fee. Charging upwards of $100 per hour, they'll create campaigns for a group of players tailored to their interests, experience levels and playing styles, providing a suite of game terrains and miniatures they'll tote to players' homes.

The new market 'price' for a dungeon master (DM) will likely raise the number of amateur DMs taking on paid gigs, reducing the shortage. However, are there likely to be some unintended consequences of this?

Uri Gneezy and Aldo Rustichini ran a famous experiment in Israeli day-care centres (described in Gneezy's book co-authored with John List, The Why Axis, which I reviewed here) where day-care centres began fining parents who showed up late to pick up their children. In theory, the higher price of a late pickup (because of the fine) should have induced fewer late pick-ups. However, this new system replaced the existing norm of picking up children on time, and actually resulted in more late pick-ups.

In the context of DMs, the previous norm was that DMs were unpaid, creating and running campaigns or gaming sessions for the love of the game. Some DMs would spend countless unpaid hours developing their game world, dungeons, main antagonists, and so on. What happens when the norm of those unpaid hours (and the labour of love they were associated with) becomes unpaid development time for paid gaming sessions? Perhaps would-be paid DMs put more time into development, leading to higher quality gaming sessions (the last paragraph quoted above suggests that). On the other hand, perhaps DMs would reduce their efforts, if they perceive the 'new' market price as unworthy of significant preparation time.

I guess we will have to see how this all plays out.

[HT: Marginal Revolution]

Tuesday, 15 November 2022

Fuel price controls vs. climate change

Sometimes, government policy just makes little sense. And sometimes, the economic model that you have in your head doesn't help. Take the example of fuel price controls, which Timothy Welch (University of Auckland) wrote about in this article in The Conversation last week:

The government announcement that the Commerce Commission will soon have the power to regulate wholesale petrol and diesel prices might be good news for cash-strapped motorists, but it’s arguably a retrograde step in the fight against climate change.

While there is some scepticism about whether the commission will ever act to enforce fuel price caps, any move to make carbon-emitting vehicles more affordable must come at the expense of efforts to encourage people out of cars and into more sustainable modes of transport...

Aside from being counter to other plans to mitigate climate change, there is plenty of evidence that price caps can often cause outcomes opposite to those intended. Sometimes, leaving it to the market can be the better option. 

Let's look at this. If the government puts a price control on a perfectly competitive market, we can illustrate its effect with the supply and demand model, as shown in the diagram below. The equilibrium price of petrol is equal to P0, and Q0 petrol is traded. The government thinks that price is too high, so (through the Commerce Commission) they implement a price ceiling (a legal maximum price) of PMAX, which is below P0. The consequence is that the quantity of petrol demanded increases to QD, but the quantity of petrol supplied decreases to QS. There is a shortage of petrol, and only QS petrol is traded.
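The arithmetic of that price ceiling can be sketched with a toy linear supply and demand model. The curves and numbers below are entirely made up for illustration; they are not estimates for the petrol market:

```python
# Illustrative linear demand and supply (hypothetical numbers, not real petrol data)
def demand(p):   # quantity demanded at price p
    return 100 - 2 * p

def supply(p):   # quantity supplied at price p
    return 10 + 4 * p

# Equilibrium: demand(p) == supply(p)  ->  100 - 2p = 10 + 4p  ->  p = 15
p0 = 15
q0 = demand(p0)                        # 70 units traded at equilibrium

# Binding price ceiling below the equilibrium price
p_max = 10
qd, qs = demand(p_max), supply(p_max)  # 80 demanded, but only 50 supplied
shortage = qd - qs                     # 30 units of unmet demand; only qs trades
print(q0, qd, qs, shortage)            # -> 70 80 50 30
```

Only the short side of the market (here, supply) determines how much is traded, which is why the quantity falls from Q0 to QS under the ceiling.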

So, with the price control, less petrol is traded than without the price control. That seems like a win-win for consumers and the climate, and would suggest that we should not be concerned. However, there are two problems here. First, many consumers would be missing out on petrol (there is a shortage at the price ceiling of PMAX). So, it doesn't make all consumers better off. However, the second problem is more fundamental. The market for petrol is not perfectly competitive. While I have argued before (for example, here) that the supply and demand model is usually robust to situations where the market is not perfectly competitive, government intervention in the market is an exception. The firms in the market for petrol have some market power, because they differentiate themselves (on the basis of branding, and the location of their outlets).

A more correct model is shown in the diagram below. The firm with market power operates at the profit maximising quantity, which is the quantity where marginal revenue is exactly equal to marginal cost. That is the quantity Q0, and in order to sell Q0, the firm charges a price of P0 (because with a price of P0, consumers will demand exactly Q0 units of petrol, which is the quantity that maximises profits). When the government implements its price control at PMAX in this market, the price falls, and the quantity of petrol traded increases to Q1. Unlike in the perfectly competitive market, a firm with market power is willing to satisfy the additional consumer demand at the lower price by selling more. So, if the firms in the market for petrol have some market power (which we know they do - it is an oligopoly), a binding price control would induce consumers to buy more, with greater impacts on the climate.
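To make the contrast with the competitive case concrete, here is a minimal sketch using an assumed linear inverse demand curve and a constant marginal cost (all numbers are hypothetical):

```python
# Hypothetical linear inverse demand P = a - b*Q, with constant marginal cost c
a, b, c = 100.0, 1.0, 20.0

# Unregulated firm with market power: set MR = MC, where MR = a - 2*b*Q
q_monopoly = (a - c) / (2 * b)    # 40.0
p_monopoly = a - b * q_monopoly   # 60.0

# A binding price ceiling set between marginal cost and the unregulated price
p_max = 50.0
q_capped = (a - p_max) / b        # 50.0 - the firm sells MORE at the capped price
print(q_monopoly, p_monopoly, q_capped)   # -> 40.0 60.0 50.0
```

Because the capped price still exceeds marginal cost, the firm earns a margin on every extra unit, so it willingly serves the larger quantity demanded at PMAX rather than rationing sales.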

However, that isn't the end of the story. The government isn't proposing a price control on retail petrol, but instead on the wholesale price. The analysis above doesn't quite capture that. So, instead of a price control on a firm with market power, we should be showing what happens when a firm with market power has lower costs (because a lower wholesale price of petrol would lower the retail petrol outlet's costs). This is shown in the diagram below. Without the price control, the firm's costs are shown by the line MC0=AC0. The firm profit maximises with a price of P0, and sells Q0 petrol. After the price control is introduced, the firm's costs decrease to the line MC1=AC1. The new profit-maximising quantity is Q1, and the new profit-maximising price is P1. The firm with market power passes on some of the cost savings to consumers in the form of a lower price of petrol (which is what the government intends), and the consumers respond by buying more.
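The wholesale-price version of the story can be sketched with the same assumed linear model (numbers again hypothetical). A handy feature of linear demand with constant marginal cost is that the firm passes on exactly half of any cost change:

```python
# Same stylised monopoly: a lower wholesale price cuts the retailer's marginal cost
a, b = 100.0, 1.0   # inverse demand P = a - b*Q

def monopoly_outcome(mc):
    q = (a - mc) / (2 * b)    # profit-maximising quantity (where MR = MC)
    return q, a - b * q       # quantity and price

q0, p0 = monopoly_outcome(20.0)   # before: Q0 = 40, P0 = 60
q1, p1 = monopoly_outcome(10.0)   # after the cost fall: Q1 = 45, P1 = 55

pass_through = (p0 - p1) / (20.0 - 10.0)   # 0.5 with linear demand
print(q0, p0, q1, p1, pass_through)        # -> 40.0 60.0 45.0 55.0 0.5
```

So some of the wholesale saving reaches consumers as a lower retail price, and the quantity of petrol bought rises from Q0 to Q1, which is exactly the climate concern.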

For a government that has stated that climate change is this generation's nuclear free moment, this seems like a very odd policy choice. However, understanding why relies on having the right economic model in mind. And that would be very important if Welch got his way and we had:

...some robust debate about whether the new Commerce Commission powers are necessary. That will involve asking whether making fossil fuels more affordable runs counter to our climate change goals, and whether we are trading planetary health for short-term economic relief.

Monday, 14 November 2022

Book review: The Premonition

I've avoided reading any books about the pandemic up until now. Truth be told, like most people I did far too much doomscrolling during the pandemic lockdowns, and I wasn't in the mood to re-traumatise myself by reading an account of what we all went through. However, I have seen several people recommend Michael Lewis' The Premonition, so I worked up the courage to crack it open last week. And I'm glad I did.

Lewis' narrative style is engaging, and he makes the characters come alive. Lewis highlights the stories of a number of under-recognised people in the US efforts against the coronavirus pandemic. However, the underlying story is the general incompetence of 'the government', rather than necessarily the people involved in the pandemic response. For instance, at the start of Chapter Four, Lewis writes:

One day some historian will look back and say how remarkable it was that these strange folk who called themselves "Americans" ever governed themselves at all, given how they went about it. Inside the United States government were all these little boxes. The boxes had been created to address specific problems as they arose. "How to ensure our food is safe to eat," for instance, or "how to avoid a run on the banks," or "how to prevent another terrorist attack." Each box was given to people with knowledge and talent and expertise useful to its assigned problem, and, over time, those people created a culture around the problem, distinct from the cultures in the other little boxes. Each box became its own small, frozen world, with little ability to adapt and little interest in whatever might be going on inside the other boxes... One box might contain the solution to a problem in another box, or the person who might find that solution, and that second box would never know about it.

The book is centred on the American response, with precious little reference to other countries. For instance, the American response was certainly a contrast to New Zealand's 'go hard and go early' approach. What this book demonstrates is just how dangerous the game of wait-and-see is, when a crisis is unfolding. As Lewis writes:

By the time people realized that their house was on fire, they needed more than a fire extinguisher.

Despite my initial caution, I really enjoyed this book. It has convinced me that I should really read more of Michael Lewis' books. In this one, the central characters are not those that you may expect before reading. Reading this book is a bit like watching a train crash unfold in slow motion. Fortunately (or unfortunately), we know what the ending is. Lewis does a great job of keeping it interesting along the way. Highly recommended!

Sunday, 13 November 2022

When there is a shortage, you pay more one way or another

When there is a shortage, the quantity of a good or service demanded is greater than the quantity of that good or service supplied. There isn't enough of the good or service to satisfy all consumers at the market price. For example, see this post from September last year about the market for shearing services. As I wrote in that post, when there is a shortage the market price tends to move upwards. However, sometimes the sellers are reluctant to push up prices. Perhaps sellers want to ensure that their services remain affordable for everyone (who can access them, given the shortage).

However, one way or another, the buyers are going to end up paying more. As an example, take this New Zealand Herald story from earlier this week:

Whakarewarewa Village resident Kathy Warbrick would have moved heaven and earth for her dog, Tutu.

So when she arrived home from Auckland one night in August to find the 14-year-old fox terrier missing, she grabbed a torch and went searching in the pouring rain.

"I found him huddled on the neighbour's property. He wasn't responding to me."

Warbrick brought him inside and rang her vet, only to be told it was no longer able to provide after-hours services due to staffing shortages...

Warbrick said she was concerned that other pet owners in her position would have found the cost of a drive to Tauranga on top of vet costs too prohibitive and animals would suffer as a result.

Clearly, there is a shortage of after-hours veterinarian services available. Often, shortages are managed by waiting lists. If a pet owner has to wait a long time to access services, then that waiting time has a cost (for example, the pet may be suffering while waiting for treatment). The price of services may not be going up, but the cost of accessing services is. If a pet owner has to drive an hour, from Rotorua to Tauranga, to access pet services, the 'full cost' of after-hours veterinarian services (made up of the price paid for the services, plus the additional cost of accessing the services) has increased.
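As a back-of-the-envelope illustration (all figures below are assumed for the example, not taken from the article), the 'full cost' is the posted price plus the cost of access:

```python
# Hypothetical numbers: 'full cost' = price paid + cost of accessing the service
price_local = 300.0           # assumed after-hours vet fee if available locally
price_tauranga = 300.0        # same posted fee at the out-of-town clinic
travel_cost = 2 * 65 * 0.83   # ~130 km round trip at an assumed $0.83/km

full_cost_before = price_local                  # 300.00 - local service available
full_cost_after = price_tauranga + travel_cost  # ~407.90 - shortage forces the drive
print(full_cost_before, round(full_cost_after, 2))
```

The posted price never moved, but the consumer still pays over a third more once access costs are counted (and that is before valuing the pet owner's travel time, which would push the full cost higher still).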

So, while a shortage may not always cause the price of goods or services to increase, the 'full cost' of accessing those goods or services will increase for the consumer. Either way, it ends up costing consumers more.

Thursday, 10 November 2022

It's not just Gib delivery workers benefiting from greater relative bargaining power, but it may not last

As I noted back in September, a low unemployment rate benefits workers (in that case it was Gib delivery workers). Not just because they are more likely to be employed, but also because it raises their relative bargaining power in negotiations with employers, increasing the likelihood of higher wages and better working conditions. It's not just Gib delivery workers though, as the New Zealand Herald reported last week:

After an unsettling two and a half years, people’s working habits are changing fast. Experts are calling it an “employees’ market”, with job seekers not afraid to lay out their expectations from employers.

Seek NZ country manager Rob Clark said the script had been flipped on its head.

“It’s really competitive out there. Companies and organisations are having to think quite differently about how to attract talent.

“Pre-pandemic it was probably a case of ‘it’s a privilege for you as a job seeker to come and work for me as an organisation’, and that’s now flipped on its head. Organisations are really having to work a lot harder to attract that talent because it’s just more competitive.”

Clark said it comes down to simple supply and demand.

“The employment landscape is still very much a candidate-short one, and by that we mean the number of jobs has increased significantly and at a much faster rate than we’ve seen the number of candidates available.

“The outcome of that is we’re seeing fewer applications per job. It’s a market where there’s a very high demand for candidates and there’s just a relatively short supply of them compared to what we’ve been used to.”

It doesn't really come down to supply and demand. It's better explained by a search model of the labour market. As I explained in my post in September:

 In a search model of the labour market, each match between a worker and an employer creates a surplus, which is then shared between the worker and the employer. The share of the surplus (and hence, the wage for the job) will depend on the relative bargaining power of the worker and the employer. If the worker has relatively more bargaining power, then they will receive a higher share of the surplus, in the form of a higher wage...

What has changed is the unemployment rate, which is now low. Low unemployment increases the relative bargaining power of workers, because if a worker leaves their job (or refuses an employment offer), the employer then has to start the process of searching for a new worker all over again. The employer would face the search costs of the time, money, and effort spent searching for a worker and evaluating potential matches.
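A stylised sketch of that surplus-sharing idea, with assumed figures and a bargaining-power parameter beta (this illustrates the general logic of the search model, not a calibrated version of it):

```python
# Stylised surplus sharing from a search model (hypothetical numbers):
# the wage gives the worker their outside option plus a share beta of the match surplus
def bargained_wage(surplus, outside_option, beta):
    return outside_option + beta * surplus

surplus, outside = 40_000.0, 50_000.0   # assumed annual match surplus and outside option

# Tight labour market: workers hold more bargaining power (high beta)
w_tight = bargained_wage(surplus, outside, beta=0.6)   # 74000.0

# Rising unemployment shifts bargaining power back to employers (low beta)
w_slack = bargained_wage(surplus, outside, beta=0.3)   # 62000.0
print(w_tight, w_slack)   # -> 74000.0 62000.0
```

The same mechanism covers non-wage conditions too: flexibility, remote work, and the like are just other ways for the worker to take their share of the surplus.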

Workers can use their relatively high bargaining power in a number of ways. They can bargain for higher wages, or better working conditions. The Herald article talks about workers demanding greater flexibility, a continuation of the conditions that many (but not all) of us experienced through the Covid lockdowns.

However, workers had better bank those higher wages and better working conditions fast. The Reserve Bank is raising interest rates, and as I explained in The Conversation earlier this week, that will lead to higher unemployment. And as the unemployment rate increases, workers' relative bargaining power falls, and employers' relative bargaining power rises. Once that happens, it will be interesting to see how many employers are willing to entertain their workers' demands for greater flexibility.


Wednesday, 9 November 2022

Twitter's blue tick is losing its value as a signal

One of Elon Musk's first actions as the new owner of Twitter was to announce a change to Twitter's 'verified status'. Previously restricted to verified real people (and usually those with some celebrity status), the blue tick would now be available to any user for just US$8 per month (or the equivalent in other countries). That change comes with immediate problems, as outlined in this article in The Conversation by Timothy Graham (Queensland University of Technology):

...Musk’s US$8 blue tick proposal is not only misguided but, ironically, likely to produce even more inauthenticity and harm on the platform.

A fatal flaw stems from the fact that “payment verification” is not, in fact, verification...

Although Twitter’s verification system is by no means perfect and is far from transparent, it did at least aspire to the kinds of verification practices journalists and researchers use to distinguish fact from fiction, and authenticity from fraud. It takes time and effort. You can’t just buy it.

Despite its flaws, the verification process largely succeeded in rooting out a sizable chunk of illegitimate activity on the platform, and highlighted notable accounts in the public interest. In contrast, Musk’s payment verification only verifies that a person has US$8.

Payment verification can’t guarantee the system won’t be exploited for social harm. For example, we already saw that conspiracy theory influencers such as “QAnon John” are at risk of becoming legitimised through the purchase of a blue tick.

Allow me to put an economics lens on the problems here. It relates to asymmetric information, adverse selection, and signalling.

First, there is asymmetric information on Twitter. Each Twitter user knows whether they themselves are an authentic user rather than a bot, a troll, or a scammer, but they don't know which other users are. Whether a Twitter user is a bot, troll, or scammer is private information (known only to that user, and not to others - that is why the information is asymmetric). This creates a problem of adverse selection. Knowing that there are lots of bots, trolls, and scammers, a Twitter user can't be sure whether any other account is authentic. To avoid being trolled or scammed, their best (risk-averse) option is to assume that every other account is a bot, a troll, or a scammer. This is what we refer to as a pooling equilibrium (because all other users are pooled together in the Twitter user's mind, as if they are all the same, and low quality). Since Twitter users don't want to engage with bots, trolls, or scammers, and they are assuming that every other account is one, there is little point being on Twitter. Authentic users start to drop off the platform, until eventually the only 'users' left are bots, trolls, and scammers. That is the adverse selection problem - each Twitter user wants to engage with other authentic users, but all they find are bots, trolls, and scammers.

Of course, Twitter hasn't collapsed as a platform, so it must have found a way to deal with this adverse selection problem. One way is through the blue tick (verified user) status, granted only to authentic users. The blue tick is a signal to other users that the user with the tick is authentic. In order for a signal to be effective though, it needs to meet two conditions. First, a signal must be costly. The blue tick was previously difficult to obtain, as users had to go through an authentication process (including verifying their identity). So, while there was no monetary cost, there was a cost in terms of time and effort. Second, a signal must be costly in such a way that those with low-quality attributes would not attempt it. Since the authentication process required identity verification, this was a process that bots, trolls, and scammers would be unlikely to attempt. So, Twitter's blue tick seems to meet the conditions of being an effective signal that users are authentic (despite some counter-examples). So, Twitter users could be fairly sure that they were interacting with authentic users, if those users had the blue tick. This is a separating equilibrium (because Twitter users are able to separate the authentic accounts that they want to interact with, from the bots, trolls, and scammers, that they don't want to interact with).

That is all about to change. As Graham's article in The Conversation noted, under the new regime all that it will take for a user to obtain Twitter's blue tick is the payment of US$8 per month. While that meets the first condition of an effective signal (costly), it fails on the second condition, because almost any bot, troll, or scammer with US$8 per month would be willing to pay for the tick. The blue tick will cease to be a signal of an authentic account.
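The two signalling conditions can be sketched as a simple payoff comparison (all payoffs below are assumed purely for illustration): a signal separates authentic users from fakes only when sending it is worthwhile for the former but not the latter:

```python
# Toy payoff comparison for the two signalling conditions (all numbers assumed):
# a type sends the signal only when the benefit of the blue tick exceeds their cost
def sends_signal(benefit, cost):
    return benefit > cost

# Old regime: identity verification is a modest hassle for real users,
# but prohibitively costly for bots, trolls, and scammers to fake
assert sends_signal(benefit=100, cost=20)        # authentic user verifies
assert not sends_signal(benefit=100, cost=500)   # fake account doesn't -> separating

# New regime: US$8/month is trivially affordable for BOTH types
assert sends_signal(benefit=100, cost=8)         # authentic user pays...
assert sends_signal(benefit=120, cost=8)         # ...and so does the scammer -> pooling
```

The first condition (costliness) holds in both regimes; it is the second condition (differential cost across types) that US$8 fails, which is why the tick stops separating authentic accounts from the rest.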

Is that the end of Twitter though? Signalling is only one way to overcome the adverse selection problem. The alternative is screening - where the Twitter user themselves tries to reveal whether another account is authentic or not. That requires a bit of detective work on the part of each Twitter user, and is going to be far from perfect. Perhaps each Twitter user is best only interacting with people that they know personally, or people they have heard of and can be fairly sure are not fake accounts. Avoiding interacting with new accounts, that have few followers, and tweet mostly junk, has always been a good strategy, but will become even more important once the blue tick loses its value as a signal.

Twitter probably won't die as a result of the changes to the blue tick. But it's certainly not going to be as user friendly as before.

Tuesday, 8 November 2022

Rotorua emergency housing, advertising and incentives

What happens when the government creates a system that generously rewards accommodation providers for providing emergency accommodation? You get this, as reported by the New Zealand Herald yesterday:

Some motel owners providing emergency housing in Rotorua have directly targeted potential out-of-town clients through social media.

A document shared by RotoruaNZ with Rotorua Lakes Council - aimed at informing “messaging” to the Government in March this year - shows examples of emergency housing motel advertisements on Facebook directly targeted at people in Tauranga and Whakatāne.

The advertising in Tauranga and Whakatāne was live at the time the document was produced.

Titles for some ads included “emergency Winz motel”, and “motel room for emergency accommodation” and listed their price as free.

Firms respond to incentives. For an accommodation provider, most of the operating costs are fixed, aside from cleaning. So, maximising profits is broadly consistent with maximising occupancy. That is true regardless of whether the accommodation provider is providing short-term tourist accommodation or long-term emergency housing. So, if a provider has converted their motel to emergency housing, they will want to ensure that they have as many emergency housing tenants as possible. After all, they've probably rendered their motel less appealing to the short-term tourist market, so their best option is to maximise revenue from the emergency housing market. If there aren't enough emergency housing tenants in the local market, then they will try to get them in from elsewhere. And to do that, they need those potential tenants to know that the provider has housing available. At that point, advertising to those potential tenants is a no-brainer.

It's probably not a good thing for the emergency housing tenants though. They're already in a precarious situation, but moving to a new city where they may lack social connections and networks will make a dire situation even worse. So, it is reasonable for government to be thinking about how to reduce this problem. But, not this way:

On Wednesday last week, [Bay of Plenty regional commissioner Mike] Bryant told Local Democracy Reporting that as recently as November 2 MSD had contacted a Rotorua motel about a Facebook post advertising emergency housing to out-of-towners and “asked that they remind their staff not to do it”.

“When we know a Rotorua motel is advertising emergency housing in out-of-town social media groups, we reach out and ask them to stop.”

That may be the weakest response ever: "Please sir, stop advertising to get more tenants for your emergency housing motel". If the government wants a provider to stop advertising for emergency housing tenants, then the government should cancel the provider's emergency housing funding if they don't. Problem solved. Firms respond to incentives. If the incentives create negative consequences the government doesn't like, they need to change the incentives.

Sunday, 6 November 2022

Rent control and vacant properties in India

Across the street from my home is a vacant house. It's been vacant since at least mid-2019. In the middle of a housing crisis, the house remains vacant. Various people in the neighbourhood have wondered why the owner doesn't rent the property out. It made one of our neighbours incredibly angry. They wanted to buy a house (in 2019), but they couldn't find one that was affordable. And yet, the house next to their rented home was vacant.

Why is the house vacant? Why won't the owner just rent it out? If you look into it, you realise that there are a lot of impediments to becoming a landlord. On 1 July 2019 (around about the time that the house was vacated by its owner), the government introduced new 'healthy homes' standards that all rental properties would eventually need to meet. The house would need to be insulated, and meet heating and ventilation standards, along with some other conditions. If that would require expensive upgrading of the house (and that seems entirely plausible), then the landlord might have decided it would not be worth the hassle, and has since kept the property vacant. [*]

The healthy homes standards are not the worst policy the government could have enacted that would have led to vacant houses. Thankfully they have never followed through on early indications that they were considering rent controls. It is well known (to economists, at least) that rent controls lead to a worsening of the quality of rental housing (to the extent that rent controlled housing is literally killing people in Mumbai). But rent controls also increase the number of vacant houses.

A good examination of why vacancy rates are higher when rent controls are in place was provided by this recent article, by Sahil Gandhi (University of Manchester), Richard Green (University of Southern California), and Shaonlee Patranabis (London School of Economics), published in the Journal of Urban Economics (open access). Gandhi et al. hypothesise that rent controls and a lack of state capacity for legal enforcement of contracts both reduce the security of property rights, and that this leads landlords to leave their properties vacant:

Two phenomena could create uncertainty in this allocation of rights of ownership between the landlord and the tenant. First, rent control, whose aim is to protect tenants from rent increases and evictions, alters the allocation of ownership in favor of the tenant. Second, if courts take long to resolve disputes, the ownership of the property could de-facto belong to the tenant for this duration and thus increase the risks for the landlord... The presence of either of these two conditions reduces ex-ante incentives for the landlord to engage in a rental contract. High vacancy rates are a natural consequence of reducing the benefits and raising the costs to a landlord of renting.

The problem of vacancies is particularly acute in India, where:

...the vacant stock of 11.1 million units could house almost 50 million people or around 13% of the urban Indian population.

Gandhi et al. use district-level data from the 2001 and 2011 Indian Censuses, essentially comparing the proportion of vacant properties between districts with and without rent controls. They also look at the relationship between vacant properties and state capacity for contract enforcement, measured as the number of judges per 1000 people. They have panel data for 456 districts across 24 states (for rent control) and cross-sectional data for 580 districts across 29 states (for state capacity). In their analyses, they find that:

...a pro-landlord policy move that relaxes rent revisions could potentially reduce housing vacancy by 2.8 to 3.1 percentage points and lead to a net welfare gain...

...a one to two standard deviation increase in judges per 1000 persons (urban) could reduce vacancy by 0.43 to 0.86 percentage points...

In other words, both rent controls and a lack of state capacity for contract enforcement lead landlords to leave properties vacant rather than renting them out. Gandhi et al. conclude that:

...rent control reform and judicial capacity are two areas in need of urgent attention from policymakers. The Model Tenancy Act, approved in June 2021 by the Government of India, aims to address both issues. It allows for setting rents at market rates and requires separate fast track courts to resolve disputes between tenants and landlords. If states adopt this Act then our findings suggest that vacant housing will decline.

Note that introducing rent control, and making it more difficult for landlords to evict bad tenants, would tend to shift things in the opposite direction. Both are policies that the current New Zealand government has actively considered. The consequences are clear.

[HT: Eric Crampton at Offsetting Behaviour]

*****

[*] In the last two years, things have gotten even worse for the house. A pipe burst in 2020 and flooded underneath the house. The owner didn't do anything. A large silk tree in the front yard rotted, then finally collapsed. Still no sign of the owner. The house is virtually abandoned at this point. I suspect it is not only un-rentable (given the healthy homes standards), but is probably unsaleable as well.

Read more: