Sunday 31 January 2021

Fresh water is still not a public good

Earlier in the month, this New Zealand Herald article caught my attention:

Will 2021 be the year the world really values water?

If Wall Street sets the tone, it will be.

For almost 230 years, agricultural commodities have been bought and sold in New York's finance district.

And now the Nasdaq stock exchange, which celebrates 50 years of activity next month, has put a price on our most vital substance.

Water contracts for five water districts in drought-prone California are being bought and sold.

The new water futures contract allows buyers and sellers to barter a fixed price for the delivery of a fixed quantity of water at a future date.

In December, for the first time, water futures for drought-hit California districts are also being traded on the floor of the world's second-biggest market.

The concept, which has been mooted for decades, finally came about in December.

That all seems pretty sensible so far. A futures contract allows the water user to 'lock in' a future price for water, reducing the risk that they will be caught out by unanticipated future price rises. However, then we get to this bit:

The move was quickly criticised by public health specialists.

Pedro Arrojo-Agudo, the United Nations' special rapporteur on the human rights to safe drinking water and sanitation, was direct in his opposition.

"You can't put a value on water as you do with other traded commodities.

"Water belongs to everyone and is a public good. It is closely tied to all of our lives and livelihoods, and is an essential component to public health.

"Water is already under extreme threat from a growing population, increasing demands and grave pollution from agriculture and mining industry in the context of worsening impact of climate change."

There are several problems with this line of reasoning. First, water is not a public good. As I noted in this 2017 post on the same topic, by definition a public good is a good that is non-rival (where one person using the good doesn't reduce the amount of the good that is available for everyone else) and non-excludable (where the good is available to everyone if it is available to anyone). Unless you live in Ankh-Morpork [*], the first condition clearly doesn't hold - one person using fresh water leaves less available for everyone else. Fresh water is rival, not non-rival. A good that is rival and non-excludable is a common resource. However, that isn't the case for California water, which is allocated through water rights. You must have water rights to draw water, making water excludable. A good that is rival and excludable is a private good. Fresh water in California is, by definition, a private good.
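To make that taxonomy concrete, here is a minimal sketch (in Python; the examples in the comments are mine, not from the article) of how rivalry and excludability map onto the four types of goods:

```python
def classify_good(rival: bool, excludable: bool) -> str:
    """Classify a good using the standard 2x2 rivalry/excludability taxonomy."""
    if rival and excludable:
        return "private good"     # e.g. California fresh water, allocated via water rights
    if rival and not excludable:
        return "common resource"  # e.g. fresh water under open access
    if not rival and excludable:
        return "club good"        # e.g. a subscription streaming service
    return "public good"          # e.g. national defence

print(classify_good(rival=True, excludable=True))   # private good
print(classify_good(rival=True, excludable=False))  # common resource
```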

Second, although the availability of fresh water may be under threat from increasing demand, putting a price on water is a solution to that problem, not something that exacerbates it. With a price on water, the price will ration how much water people use. In times of drought, the price of water should rise (unless the price is controlled by the government, as it is in Auckland), and people will use less water. Goods that are scarcer have higher prices, so as water becomes scarcer, its price will rise. This is a means of better managing scarce fresh water supplies, not some nefarious plot to take water out of the hands of the people.

Third, water already has a price in California. To draw water, you need water rights, and those water rights cost money. The only thing changing is that a futures contract has been introduced, so that water users can better manage the future uncertainty of water prices. If a drought is expected in the future, then the price of the futures contract will rise. Water users will have an incentive to act now to ensure that their future water use will be lower. That probably makes water use more efficient.
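To see how a futures contract manages that uncertainty, consider a stylised hedge (all of the numbers below are invented for illustration, and the real contract is more complicated than this simple sketch):

```python
# A water user locks in a price today for water they expect to need next summer.
contract_price = 500.0   # dollars per acre-foot, agreed in the futures contract
quantity = 10            # acre-feet the user expects to need

# If a drought pushes the spot price up, the gain on the futures position
# offsets the higher spot price, so the user's net cost is the locked-in price.
spot_price_in_drought = 700.0
cost_without_hedge = spot_price_in_drought * quantity  # 7000
net_cost_with_hedge = contract_price * quantity        # 5000

print(cost_without_hedge - net_cost_with_hedge)  # 2000.0: the price risk the hedge removed
```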

You don't have to be a market fundamentalist to realise that prices can actually help. In the case of fresh water, the alternative is a free-for-all, where the water supplies will almost certainly be depleted faster.

*****

[*] Terry Pratchett noted that the water in the Discworld's largest city must be very pure, because of the number of kidneys it had already passed through.

Read more:

    Monday 25 January 2021

    Climate change and population control

    I had my first ever article published in The Conversation this morning, on population control and climate change. I encourage you to read it.

    The process of writing and getting the article published has been an interesting experience. It started as a request from the University's communications team to write something 'for the University website'. Once I had written a draft though, it became clear that I was being subjected to what I am sure was a pretty unsophisticated bait-and-switch, when the comms people recommended I pitch the article to The Conversation. The article appears to be doing what they intended though, and my email has been pinging constantly all day as the article attracts comments (129 comments so far, and counting - for context, that is more comments than all other articles by University of Waikato staff have attracted in the last month).

    Anyway, there isn't much space to write in depth about the issues in the 800 words that The Conversation allows, and the original 1200 words I drafted got pared back substantially. The main issue I wanted to highlight is that, when some people argue that we should curb population growth to reduce carbon emissions (or climate damage more generally), there are a number of key issues that need to be considered. First, there is an ethical or moral issue, as Ross Douthat discussed in this article back in November. As I noted in my article:

    If our concern about climate change arises because we want to ensure a liveable future world for our grandchildren, is it ethical to ensure that pathway is achieved by preventing some grandchildren from ever seeing that world because they are never born?

    Second, it's not clear that government-enforced population control is even necessary, because:

    All high-income countries currently already have below-replacement fertility, with fewer children being born than are necessary to maintain a constant population.

    What does that mean? I took as my starting point the Kaya identity, which decomposes carbon emissions into exactly four components:

    1. Population;
    2. GDP (or production) per capita;
    3. Energy use per unit of GDP (or production); and
    4. Carbon emissions per unit of energy use.

    To reduce emissions, we would need to reduce one or more of those components. Taking into account that the first two components are growing, if we can't easily reduce population and we are unwilling to reduce economic growth, then we need to reduce the energy intensity of the economy or the carbon intensity of energy production. And if we can't do either of those things (and, to be honest, we haven't done a good enough job so far), then we would have to undertake some challenging conversations about the first two.
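    Written out, the Kaya identity is just an accounting decomposition - the terms on the right cancel, leaving total emissions on the left:

```latex
\text{CO}_2 = \underbrace{P}_{\text{population}} \times \underbrace{\frac{GDP}{P}}_{\text{GDP per capita}} \times \underbrace{\frac{E}{GDP}}_{\text{energy intensity}} \times \underbrace{\frac{\text{CO}_2}{E}}_{\text{carbon intensity}}
```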

    Anyway, against my better judgement, I have been following the discussion in the comments, and a couple of very valid points have been raised. So, I thought I would make some comments about them here.

    First, the moral or ethical issue that I raised has been challenged. Some people have missed the point that it is a philosophical question and so, while the answer might seem obvious on the surface, the reasoning is less so. However, aside from the philosophical question, the moral issue I raised does take a human-centred view of the world, whereas if you take a planet-centred view there is no such issue. That is, would we save the planet for our grandchildren at all, or would we save it for the planet itself? That is an excellent point.

    Second, I raised China as being the only country to have undertaken a successful policy of population control. Some commenters raised family planning programmes, but I would argue that family planning doesn't have at its heart a goal of population control - it is about empowering parents (particularly women) in their choices about family size. It's still a good point though, and there is a huge unmet need for family planning (particularly in developing countries and in underserved communities in developed countries) that, if resources were applied, could reduce future population. Increasing female education also has the effect of lowering fertility rates but again, it's not a policy a government implements explicitly for its population control effects.

    Sunday 24 January 2021

    The minimum wage and teenage birth rates

    The disemployment effects of minimum wages continue to be debated. If a minimum wage does reduce employment though, we would expect it to have the greatest effect on young people and workers with low human capital. So, when a minimum wage is increased, we might expect some teenagers to be disemployed, and those that are not will receive a higher income.

    In a new article published in the journal Economics Letters (sorry, I don't see an ungated version), Otto Lenhart (University of Strathclyde) looks at the effects of the minimum wage on teenage birth rates. As Lenhart notes, there are a number of reasons to believe that the minimum wage might have an effect on teenage pregnancy, but not all of the potential effects work in the same direction. Lower time spent working frees up time for 'leisure activities', but if it also lowers income it reduces the resources available for raising children. On the other hand, higher income for those that remain employed provides better access to health resources, including health messages and contraception. It also raises the opportunity cost of time that could be employed in raising children, or in 'leisure activities'. So, theory doesn't provide a definitive answer as to whether a higher minimum wage would decrease, or increase, teenage pregnancy.

    Lenhart uses U.S. state-level data on teenage (15-19 years) birth rates and minimum wages over the period from 1995 to 2017. The minimum wage is measured as the ratio to the state-level median (hourly) wage. Using a difference-in-differences approach, he finds that:

    ...increases in minimum wages are associated with reductions in teen birth rates. Using one-year lagged minimum wages, I find that a $1 increase in the effective minimum wage reduces state teen birth rates by 3.43 percent (p<0.05). While slightly smaller in magnitude, the negative and statistically significant effect remains when including state-specific time trends.

    Interestingly, the effect is similar but smaller for women aged 20-24 years, and smaller again for each five-year age group above that (becoming statistically insignificant for women aged 40-44). Also interesting is that the effect is only statistically significant for states that have an earned income tax credit (EITC).

    Lenhart doesn't really explore this latter finding too much, but I think he should have. It's not clear to me that it helps us understand the mechanisms through which this change occurs. EITCs increase employment and income, while minimum wages increase income (for some), but reduce employment (at least, that's still my overall conclusion on the evidence in the long-running minimum wage debate). If a higher minimum wage leads to lower teenage pregnancy, but only in states with an EITC, then it is tempting to say that this is all a result of higher income. However, in the U.S., I believe that the EITC is only paid to working parents. The higher income from the EITC therefore only arises once a person has a child, so if teenagers are having fewer children, then they won't benefit from the EITC. A higher minimum wage reducing teenage birth rates only in states that have an EITC doesn't make a whole lot of sense to me. Unless I'm really not understanding this.
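    Setting that puzzle aside, the basic estimation strategy is worth making concrete. Here is a minimal sketch of a state-by-year two-way fixed effects regression in the spirit of the paper (the data file and variable names are hypothetical, and this is not Lenhart's exact specification):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-by-year panel: teen birth rates and the (one-year lagged)
# effective minimum wage, measured relative to the state median wage.
df = pd.read_csv("state_year_panel.csv")  # columns: state, year, teen_birth_rate, min_wage_lag

# State and year fixed effects absorb time-invariant state differences and
# common national shocks; the coefficient on min_wage_lag is the
# difference-in-differences-style estimate of the minimum wage effect.
model = smf.ols("teen_birth_rate ~ min_wage_lag + C(state) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]}
)
print(model.summary())
```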

    Clearly, there is more work to do on this topic. It isn't as clear as the effect of minimum wages and EITCs on criminal recidivism.

    Saturday 23 January 2021

    Is Aaron A. Aardvark an exporter?

    Following on from my post earlier this week about solving alphabetic discrimination in economics, it appears that economics isn't the only place where such discrimination exists. A recent article by Hua Cheng (Peking University), Cui Hu (Central University of Finance and Economics, China), and Ben Li (Boston College), published in the Journal of International Economics (no ungated version, but there are some presentation slides available here), looks at the case of exporters in China. This is a really nice paper because of the way they tried to identify the alphabetic effects, by exploiting the fact that Chinese companies have two names and hence two different alphabetic rankings - one in English and one in Chinese. As Cheng et al. explain:

    The first character has a character rank determined by the Chinese lexicography (number of strokes), and a romanization rank determined by the English lexicography (modern English alphabet). By interacting (i) the character rank with every foreign market's language proximity to Chinese, and (ii) the romanization rank with every foreign market's language proximity to English, we formulate a dual difference-in-differences strategy to estimate how a lexicographically earlier (according to either rank) firm name generates advantages in foreign markets that speak more proximate (to either language) languages.

    Cheng et al. hypothesise that firms with names earlier in the Roman alphabet will have an advantage in exporting to countries with greater language proximity to English, and firms with names earlier in the Chinese lexicography will have an advantage in exporting to countries with greater language proximity to Chinese. Using Chinese customs data from all export transactions over the period from 2004 to 2013, they find that:

    ...firms with lexicographically earlier character names export more to destination countries that have greater language proximities to Chinese, while firms with lexicographically earlier romanized names export more to destination countries that have greater language proximities to English. Such lexicographic biases are especially strong among exporters with low visibility in their businesses. Quantitatively, when an exporter moves down the lexicographic ranks from the 25th percentile to the 75th percentile, and meanwhile the foreign market's language proximity increases from the 25th percentile to the 75th percentile, its export volume decreases by one to 3%.

    The effects are not huge, but they are statistically significant - there is 'alphabetic' discrimination for exporting firms. Why would that be the case? As the above quote notes, the effects are especially strong for firms with low visibility in their industry. Cheng et al. note that:

    Lexicographic ranks supposedly matter more for obscure firms. The most established firms in a market are unlikely to be harmed by having an initial Y or Z in their names. A visual “first come, first served” seems to be a natural routine for importers who search obscure firms.

    Do importers really search for exporting firms from an alphabetical list? While Cheng et al. do find some evidence that supports this, it would be interesting to see more direct evidence on the mechanism. In the meantime though, it appears that not only is Aaron A. Aardvark becoming a well-respected economist, but Aardvark Enterprises is becoming a top exporter (to English-speaking countries, at least).

    Thursday 21 January 2021

    Revisiting the Aaron A. Aardvark problem

    Back in 2017, I posted about alphabetic discrimination in economics. Essentially, because the default in economics is for authors to be listed alphabetically, and there is some advantage to being the first-named author on a co-authored paper, authors with surnames earlier in the alphabet have an advantage. If your name were Aaron A. Aardvark, you would be significantly advantaged over Zachary Z. Zebra.

    I recently read a 2018 paper by Debraj Ray (New York University) ® Arthur Robson (Simon Fraser University) that offers an alternative, which they term 'certified random order'. As they explain:

    Here is a simple variant of the randomization scheme which will set it apart from private randomization. Suppose that any randomized name order is presented with the symbol ® between the names: e.g., Ray ® Robson (2018) is the appropriate reference for this paper. Suppose, moreover, that such a symbol is certified by the American Economic Association, for example, simply acknowledging that this alternative is available.

    The paper itself is quite mathematical and not for the faint of heart. Ray ® Robson show that:

    ...® can be introduced not as a requirement but as a nudge, because our results predict that it will invade alphabetical order in a decentralized way. It may provide a gain in efficiency. But, more important, it is fairer. Random order distributes the gain from first authorship evenly over the alphabet. Moreover, it allows “outlier contributions” to be recognized in both directions: that is, given the convention that puts “Austen ® Byron” (or “Byron ® Austen”) on center-stage, both “Austen and Byron” and “Byron and Austen” would acquire entirely symmetric meanings.

    The latter bit of that quote is important, and solves a problem I had not considered before. Under the current convention, if a paper is Austen and Byron, that conveys no information about which author did most of the work (if one of them did), but a deviation from the convention (i.e. Byron and Austen) does. In the case of Byron and Austen, it tells you that Byron did most of the work. There is no way for Austen to receive that credit if they did most of the work, since Austen and Byron would be seen as simply following convention. The certified random order approach (denoted by ®) solves that problem.

    You would think that authors with surnames early in the alphabet would be against anything that disrupts their dominance. But the 'Byron and Austen' problem may be enough to encourage Aaron A. Aardvark to accept certified randomness.

    Despite those demonstrated advantages, it doesn't seem like many other authors have taken up the certified random order approach in the two-plus years since the article was published. In fact, I still see some authors noting the random order of their names with a footnote on the first page of the article. It can take some time for new conventions to take hold - I wonder if this one will ever do so.


    Tuesday 19 January 2021

    There is no magic money tree

    In The Conversation earlier this week, Jonathan Barrett (Victoria University) argued that:

    ...both reputable media and politicians of every stripe invariably use the phrase “taxpayer money” to describe government funds, despite the phrase having no constitutional or legal basis.

    On a constitutional or legal basis, that is no doubt true. As Barrett notes, once a taxpayer has paid tax to the government, they have no residual claim on the funds. The taxpayer can't direct how their taxes are spent by the government, except through the pretty coarse means of voting on who is to be in government. In his article, Barrett essentially argues that there is a certain amount of harm in referring to government spending as 'taxpayer money', because it creates the illusion that the taxpayer has a residual claim to it. Essentially it panders to the unhinged 'tax is theft' crowd.

    However, the alternative may be just as bad. If the government is just spending 'government money', that gives the impression that the spending comes from a magical money tree that can be harvested any time the public wants money to be spent on whatever is the policy du jour. In every country except Cloud Cuckoo Land (where modern monetary theory works), every dollar that the government spends must come from some taxpayer. Either the government spends money it receives from current taxpayers, or, if the government is spending more than it receives in taxation, then it is future taxpayers that are going to have to repay the deficit. When people ignore that taxpayers are the ultimate source of government funds (and sometimes even when they don't), then we get crazy and unaffordable policy suggestions. 

    Referring to governments as spending 'taxpayer money' doesn't have to lead to an argument for small government (although some people will want to go there). Governments of all sizes have to be funded by taxpayers. A sensible government is one that doesn't radically overspend its ability to tax the public (interestingly, the current pandemic is certainly testing how far governments can go in terms of spending more than they receive in taxes).

    Barrett has overstated the problem here. Referring to governments as spending 'taxpayer money' is a useful reminder that there is no magic money tree.

    Monday 18 January 2021

    How will the coronavirus pandemic affect trust in science, and in scientists?

    In a new and very interesting article published in the Journal of Public Economics (ungated earlier version here), Barry Eichengreen (University of California, Berkeley), Cevat Giray Aksoy (European Bank for Reconstruction and Development), and Orkun Saka (University of Sussex) looked at the relationship between exposure to past epidemics and trust in science and in scientists. They use data from the 2018 Wellcome Global Monitor, which included several questions about how much people trust government and corporate scientists, and how much they trust science in general. They also measure each survey participant's exposure to past epidemics while they were aged 18-25 (what is termed the 'impressionable years'), using the EM-DAT disaster database (the article's appendix has an exhaustive table of all of the epidemics they included), with exposure measured as the number of people affected by the epidemic in each year as a share of the country's population.

    They find that:

    ...such exposure is negatively associated with trust in scientists and, specifically, with views of their integrity and trustworthiness. Specifically, an individual with the highest exposure to an epidemic (relative to zero exposure) is 11 percentage points less likely to have trust in the scientist (the respective average of this variable in our sample is 76%).

    Interestingly, they also find that:

    The effect we find is not a general decline in trust in science, but only in scientists.

    It also isn't a decline in trust of elites, because there is no similar negative relationship with trust in doctors and nurses, or traditional healers. So, it really is just a loss of trust in scientists. Eichengreen et al. posit that this arises because:

    Members of the public who are not familiar with the scientific process may interpret the conflicting views of scientists and criticism of some studies by the authors of others as signs of bias or dishonesty. This paper cannot analyze the first argument, due to lack of data on scientific communication during past epidemics. But we provide suggestive evidence for the second, showing that individuals with little scientific training drive the negative relationship between past epidemic exposure and trust in scientists.

    Do the results matter though? It turns out that they might be quite consequential:

    ...we show that past epidemic exposure negatively shapes respondents’ long-term attitudes towards vaccination and reduces the likelihood that their children are vaccinated against childhood diseases.

    The danger here is that we are in a world-changing pandemic right now. The generation currently in their 'impressionable years' is not only being exposed to a greater risk of harm, but is also far more exposed to the scientific debate than ever before, due to pervasive social media. If the results of the Eichengreen et al. paper can be extrapolated, then this generation is going to have an incredibly negative view of scientists (but not of science more generally). The way that scientific debate has played out over Twitter has not done scientists any favours. It is difficult to see how this can be mitigated. At the very least, as Eichengreen et al. conclude:

    Addressing concerns about corporate agendas, personal bias and disagreement in scientific communication is even more important in this light. Our results suggest that it is especially important to tailor any such response to the concerns expressed by members of the generation (‘‘Generation Z”) currently in their impressionable years.

    Saturday 16 January 2021

    Free riding in the RePEc rankings

    The RePEc (Research Papers in Economics) rankings are probably the most widely respected ranking of academic economists worldwide. You can see the New Zealand ranking (based on the last ten years) here. The University of Waikato is well represented - as I am writing, we have three of the top ten, and six of the top twenty, economists in the country. You have to scroll a little way down to find me (at #38).

    The ranking is based purely on the quantity and quality of research publications, but only considers publications in economics journals (which in part explains why I'm not ranked higher - so many of my publications are in public health and interdisciplinary journals). Anyway, I'm sure there are ways to game the rankings, if one digs into the algorithm. And given that most economists watch these rankings, there are no doubt benefits to gaming the system if you know how to do so.

    According to this 2018 article published in the journal Applied Economics Letters (ungated earlier version here), by Justus Meyer and Klaus Wohlrabe (both University of Munich), it is possible to free ride to some extent in the RePEc rankings. As they explain:

    The ongoing procedure of measuring an author’s overall ranking is to adjust backwards for the publishing journal’s quality. RePEc currently adjusts by weighing an author’s publication by the publishing journal’s current impact factor (IF). Impact factors, however, change over time... The reweighting of old articles by current impact factors raises the chance of involuntary free-riding. Authors benefit through previous publications in journals that climb the quality ladder over time.

    Does the re-weighting of journal quality make much difference? It turns out that yes, it does. Meyer and Wohlrabe re-rank all 45,000 authors in RePEc, re-weighting the quality of their journal publications by the impact factor during the year in which the article was published. They find that:

    ...the average change for the top 100 is −12 places and for the top 1000 is −40 place. Among the top 1000 authors ordered by the initial RePEc ranking, the biggest gain is an improvement of 510 places while another author drops by 1600 places.

    Drilling down to the top 20 economists (by the ranking at the time of this research), the big gainers are Robert Barro (up 9 places to 11th) and Timothy Besley (up 8 places to 9th), while the big losers are John Siegfried (down 23 places to 34th) and John List (down 17 places to 26th).
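    The mechanics of the re-ranking are simple enough to sketch (a stylised version with hypothetical data; RePEc's actual rankings aggregate many more criteria than a single impact-factor-weighted score):

```python
import pandas as pd

# Hypothetical publication-level data: one row per (author, article), with the
# journal's current impact factor and its impact factor in the publication year.
pubs = pd.read_csv("publications.csv")  # columns: author, if_current, if_at_publication

# RePEc-style score: weight every article by the journal's *current* impact factor.
score_current = pubs.groupby("author")["if_current"].sum()

# Meyer and Wohlrabe's alternative: weight by the impact factor in the year the
# article was published, removing the 'free riding' on journals that later
# climbed the quality ladder.
score_at_pub = pubs.groupby("author")["if_at_publication"].sum()

ranks = pd.DataFrame({
    "rank_current": score_current.rank(ascending=False),
    "rank_at_publication": score_at_pub.rank(ascending=False),
})
ranks["rank_change"] = ranks["rank_at_publication"] - ranks["rank_current"]
print(ranks.sort_values("rank_change", ascending=False).head())  # biggest 'free riders'
```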

    Have Meyer and Wohlrabe identified a real problem though? It seems unlikely. Notice that this 'free riding' that they document involves economists anticipating which top journals will rise in impact factor in the future, and publishing in those journals. It seems unlikely that anyone has the perfect foresight necessary to free ride purposefully. So, I wouldn't read too much into the changes as a result of their re-weighting. It's probably not accurate to label future Nobel Prize winner John List a free rider just yet.

    Thursday 14 January 2021

    The health impacts of criminalising prostitution

    I've previously written about the positive impacts of decriminalising prostitution (see here and here). However, most studies on this have been conducted in developed countries. In a new article published in the Quarterly Journal of Economics (possibly ungated earlier version here), Lisa Cameron (University of Melbourne, and no relation to me), Jennifer Seager (George Washington University), and Manisha Shah (UCLA) look at a peculiar case in Indonesia. As they explain:

    The study area encompasses the districts of Malang, Pasuruan, and Batu in East Java, Indonesia... As is common throughout Indonesia, sex work in East Java occurs in both formal worksites (i.e., brothels) and informal worksites (i.e., the street)...

    On July 11, 2014, the Malang district government announced that on November 28, 2014, it would close all formal sex worksites within the district as a “birthday present” to Malang... 

    The announcement of the worksite closures was unanticipated. To the best of our knowledge, when we conducted baseline surveys in February–March 2014, there was no expectation of the closures. In fact, we had considered conducting the research (which was originally planned to be a randomized controlled trial offering micro-savings products to sex workers) in Surabaya but had been advised by the community-based organization we were working with, whose main mission is to work with sex workers in the Malang area, that worksite closures were possible in Surabaya. We specifically selected Malang as our study site because worksite closures were not anticipated.

    I guess this was a case of the researchers making the best of a bad situation. Cameron et al. started out intending to research one thing, but ended up researching something completely different (incidentally, there's been a lot of that over the last year, due to coronavirus lockdowns or just the pandemic generally).

    Anyway, Cameron et al. had collected some baseline data before the criminalisation, and collected data after the criminalisation, allowing them to apply a difference-in-differences analysis. Essentially, this involves comparing the difference between Malang and the other two districts before criminalisation, with the difference between Malang and the other two districts after criminalisation. They find that:

    ...criminalizing sex work increases STI rates among sex workers (measured using biological test results) by 27.3 percentage points, or 58%, from baseline. Using data from both clients and sex workers, we show that the main mechanism driving the increase in STI [sexually transmitted infection] rates is a decrease in access to condoms, an increase in condom prices, and an increase in noncondom sex. Sex workers are more than 50 percentage points less likely to be able to produce a condom when asked by survey enumerators at endline, and clients report a 61 percentage point increase in noncondom sex.

    None of that is good, and it extends to those that left sex work as well, and their children:

    Using data obtained from tracking women who left sex work postcriminalization, we show that those who leave sex work because of criminalization have lower earnings than those who leave by choice. In addition, children of women from criminalized worksites are adversely affected—they have less money for school and are more likely to work to supplement household income.

    The criminalisation also impacts the general population:

    ...there is a statistically significant... increase in female reports of experiencing STI symptoms in the past three months. This is consistent with a scenario in which increased STI rates among sex workers at the criminalized worksites translate into higher STI rates among clients, who then pass these STIs on to their sexual partners.

    The sex market was smaller as a result of criminalisation, which was the intention of the policy. However, the unintended consequences are severe. As Cameron et al. conclude:

    ...from a health perspective, criminalization of sex work is likely to be counterproductive.

    Indeed.

    [HT: Marginal Revolution, for the working paper version last year]


    Wednesday 13 January 2021

    Book review: The Instant Economist

    What are the key economic concepts that a manager needs to understand? That is the question that is answered in The Instant Economist, a 1985 book by John Charles Pool and Ross La Roe. This is a pop-economics book from a time when pop-economics didn't really exist as a concept. Nevertheless, it has aged really well, and is not to be confused with the 2012 book of the same title by Timothy Taylor (which I reviewed here a few years ago).

    Pool and La Roe cover concepts from macroeconomics, microeconomics, and international economics, pulling out the key things that a manager needs to understand in each area, and explaining them in a quite straightforward way. The narrative style is the story of an MBA graduate whose father has sent him to talk to an economics professor because, while he understands all of the maths, he doesn't understand any of the economic intuition from his business degree. Regular readers of this blog will probably realise that is a setup I have a lot of sympathy for. Clearly though, the book was written for a different time - the professor's lunch consists only of two martinis at the staff club, just before he heads in to teach a graduate managerial economics class!

    I found the microeconomics section to be particularly useful and still current. The macroeconomics is a bit dated, and even the simple models of macroeconomics have moved on somewhat from how Pool and La Roe present them. The 'bathtub model' of macroeconomics was an interesting device, but probably wouldn't stand up to much scrutiny now. The international economics section is more-or-less limited to exchange rates, international investment flows, and the gains from trade.

    This was a really good book, and I enjoyed it a lot. It also gave me some ideas on framing the importance of elasticities in particular for management students. Recommended!

    Tuesday 12 January 2021

    No, economic growth won't save us from the increasing fiscal costs of population ageing

    Last week I wrote a post about ageing and creating tax incentives for older people to work longer. The impetus for the tax incentives is the projected increase in the older population, and reductions in support ratios (the number of working people for each older person). However, the issue may not be as bad as assumed by most people (including me).

    This new article by Ian McDonald (University of Melbourne), published in the journal Australian Economic Review (sorry, I don't see an ungated version online), tells a different story. McDonald first outlines the problem:

    The likely prospect of an ageing population, that is, an increase in the share of old people in the population, will put upward pressure on the level of government expenditure in the future. High government expenditures per old person multiplied by the increase in the proportion of old people in the population will drive an increase in government expenditure. This is a major fiscal challenge which we are starting to experience.

    He then goes on to summarise projections of government spending, based on assumptions about population growth, and an assumption of unchanged government policy (which is a standard assumption - we can't easily forecast what future government policy may be). He finds that various projections (by McDonald himself, and others):

    ...suggest that an increase in government spending due to the ageing population somewhere in the range of 4.9–7.8 percentage points of GDP over the 40‐year period seems to be a reasonable projection assuming unchanged government policy.

    So, essentially the government would need to increase spending by 4.9-7.8 percentage points of GDP. McDonald seems to suggest this is not a big deal but, given that government spending in Australia is around 40 percent of GDP, that would entail a 12-20 percent increase in government spending. That means taxes would need to be 12-20 percent higher than they are currently, or government services (or service quality) would need to be cut to compensate for the extra spending.
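    A quick back-of-envelope check of those percentages, using only the figures above:

```python
spending_share_of_gdp = 40.0            # approx. Australian government spending, % of GDP
increase_low, increase_high = 4.9, 7.8  # projected increase, percentage points of GDP

print(round(increase_low / spending_share_of_gdp * 100))   # ~12 percent increase in spending
print(round(increase_high / spending_share_of_gdp * 100))  # ~20 percent increase in spending
```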

    McDonald's argument that the costs are not prohibitive rests on this:

    The prospect of an increasing proportion of old people raises the spectre that our continuing support will be impossible. However, this fear ignores the fact that because of the continuing growth of labour productivity, we who will finance this support will be better off than we are today and will indeed be well able to support older people.

    Specifically, he finds that the increase in government spending required for the ageing population is dwarfed by the increase in GDP itself. I don't find this argument entirely persuasive, because if taxpayers were happy to give up some proportion of their income growth in higher taxes, the government could do that right now. In fact, governments tend to be moving in the opposite direction, decreasing taxes even though income per capita is increasing. That suggests that there is already an unwillingness, either by taxpayers or by government, to increase taxes to offset increased costs due to ageing.

    I don't think you can just wave your hands, cite 'economic growth', and 'poof!' - all problems relating to the increasing costs of an ageing population disappear. And that appears to be what McDonald is doing. Which is disappointing - I usually like the 'For the Student' section of the Australian Economic Review, but this is one article that falls short of the mark.

    Monday 11 January 2021

    Stock market falls and fatal road accidents

    It is pretty well established that distracted driving is dangerous - according to the NHTSA, it claimed nearly 3000 lives in the US in 2018. There are many things that can distract a driver - mobile phones, eating or drinking, or talking to passengers. But how about emotions? The Reduce the Risk website notes that "83% of drivers think about something other than their driving when behind the wheel", and that could be a distraction.

    So, I was interested to see this recent paper by Corrado Giulietti (University of Southampton), Mirco Tonin (Free University of Bozen-Bolzano), and Michael Vlassopoulos (University of Southampton), published in the Journal of Health Economics (ungated earlier version here). In the paper, Giulietti et al. look at the relationship between stock market returns and fatal car accidents in the U.S. Specifically, they use data on daily stock returns (S&P500 Index, and some other measures) and daily numbers of fatal vehicle accidents from the Fatality Analysis Reporting System (see here) over the period from 1990 to 2015. They find that:

    ...a one standard deviation reduction in daily stock market returns increases the number of fatal accidents by 0.6% (that is, by 0.23 accidents over an average of 37.4 daily accidents occurring after the stock market opens).

    The relationship is statistically significant, but notice that the size of the effect is pretty small. The standard deviation of daily stock market returns is 1.04 percent, with an average of 0.04 percent. So, if the stock market falls by about one percent on a particular day (roughly a one standard deviation fall relative to the average), these results suggest that there would be 0.23 more fatal road accidents that day in the U.S.
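    A quick back-of-envelope check of that magnitude, using only the figures reported above:

```python
avg_daily_accidents = 37.4  # average fatal accidents per day after the market opens
effect_per_sd = 0.006       # 0.6% increase per one-standard-deviation fall in returns

extra_accidents = avg_daily_accidents * effect_per_sd
print(round(extra_accidents, 2))  # ~0.22, in line with the 0.23 extra accidents reported
```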

    Of course, that attributes a causal interpretation to these results, which are essentially correlations. Giulietti et al. do undertake some interesting robustness checks and falsification tests on their results, as they explain:

    In the first set of tests, we exploit the timing of accidents. If the relationship that we find is due to uncontrolled-for events affecting both stock market valuation and driving behavior, we would also expect the relationship to be present for accidents happening before the opening of the stock market. However, we find no relationship in this part of the day, thus providing support for a causal interpretation of the link between stock market returns and accidents. With a similar logic, we show that there is no relationship between car accidents and lead stock market returns...

     In the second set of falsification tests, we pursue multiple approaches to compare the effect of the stock market on groups of drivers with different likelihoods of owning stocks... One approach to isolate drivers who are unlikely to hold stocks is to zoom in on accidents involving only people aged 25 or under. For this group, we do not find a statistically significant relationship between accidents and stock market performance, while we see the effect on accidents involving at least one driver older than 25... In another approach, we exploit differences in the geographical distribution of income, with the idea that people with a higher income are more likely to invest in the stock market. We consider average income in both the county of the accident and the drivers’ zip code. In both cases, we find no relationship between stock market and accidents for the lower tercile of income, while there is a strong significant relationship for the upper tercile.

    Those results provide some confidence that the findings aren't spurious, but they still fall short of demonstrating definitive causality (and fall short of convincing Andrew Gelman as well). Underlying moods affect the stock market ("animal spirits", as John Maynard Keynes termed them), as well as affecting driving. If moods differ between higher-income and lower-income people, and higher-income people's moods affect stock prices more than lower-income people's moods do, then the falsification tests based on income differences are not valid.

    One aspect that probably should be concerning is that the results appear to hold for stock market falls, but not for rises. Are people likely to be more distracted on bad stock market days than on good stock market days? I think the mechanism needs some further explanation in this respect, and until we have that, the jury is still out on the causal relationship between stock market returns and fatal accidents.

    [HT: Marginal Revolution, last year]

    Sunday 10 January 2021

    Book review: The Tyranny of Metrics

    I'm pretty sure that you will have heard the saying "Not everything that can be counted counts, and not everything that counts can be counted", or some variation of it. It is often attributed to Einstein, but actually comes from a book by William Bruce Cameron (no relation to me). Now imagine an entire book devoted to that topic. The book you are imagining is The Tyranny of Metrics, by historian Jerry Muller. Aside from the Cameron quote above, the premise can also be summarised as:

    There are things that can be measured. There are things that are worth measuring. But what can be measured is not always what is worth measuring; what gets measured may have no relationship to what we really want to know. The costs of measuring may be greater than the benefits. The things that get measured may draw effort away from the things we really care about. And measurement may provide us with distorted knowledge - knowledge that seems solid but is actually deceptive.

    The book is well written and easy to read. Muller first lays out his critique of measurement and 'metric fixation' (as he terms it), then moves on to providing case studies demonstrating the evils of a fixation on metrics in many fields: colleges and universities, schools, medicine, policing, the military, business and finance, and philanthropy and foreign aid. The case studies are mostly good and illustrate the overall point well. For instance, take this bit on the unintended consequences of metric fixation in higher education:

    A mushroom-like growth of administrative staff has occurred in other countries that have adopted similar systems of performance measurement, such as Australia. In most such systems, metrics has [sic] diverted time and resources away from doing and toward documenting, and from those who teach and research to those who gather and disseminate the data for the Research Assessment Exercise and its counterparts.

    Anyone in a western university can relate to that, and that section of the book could be read alongside the late David Graeber's excellent book, Bullshit Jobs (which I reviewed here). However, not all of the case studies offered the same clarity of illustration of unintended consequences. In particular, I felt like the military and philanthropy sections were a little strained.

    After the case studies, Muller moves on to a more general digression, arguing that transparency is not always the best approach. I thought that section diverged a bit too much from the message of the book and wasn't really necessary. The conclusion brought things together nicely though:

    As we've seen time and again, measurement is not an alternative to judgment: measurement demands judgment: judgment about whether to measure, what to measure, how to evaluate the significance of what's been measured, whether rewards and penalties will be attached to the results, and to whom to make the measurements available.

    As you can see, the book is not simply a polemic against measurement and metrics in all their forms. Muller is arguing for a more sensible approach to measurement and the use of metrics, one that recognises their limitations and the potential pitfalls that their use entails. Anyone involved in business or policy formulation must recognise that the use of particular metrics will create incentives. And we should always keep Goodhart's Law in mind: "When a measure becomes a target, it ceases to be a good measure".

    Notwithstanding the few gripes I have, I really enjoyed this book, and recommend it to anyone who is thinking about implementing metrics, or anyone who is looking to craft an argument against their implementation.

    Tuesday 5 January 2021

    Is there a signalling explanation for the high cost of article re-formatting?

    My post yesterday highlighted the high cost of re-formatting papers for submission to academic journals, with a total cost estimated at US$1.1 billion per year. I was reflecting on this today, and perhaps there is a reason for this high cost, aside from each journal's publishers wanting to maintain a particular style that distinguishes the journal from other journals.

    Perhaps the time cost of formatting (and re-formatting) is a way of dealing with asymmetric information, specifically adverse selection. At the time of submission (i.e. before peer review, and before the editor has even read the abstract), the quality of a paper submitted to a journal is known to the authors (presumably), but not to the publisher. The quality is therefore private information. Since the publisher doesn't know whether any particular article submission is high quality or not, their best option (aside from editorial and peer review, which I will come to in a moment) is to assume that every submission is low quality. This is a pooling equilibrium - all article submissions are pooled together as if they are the same quality. The publisher may as well pick randomly from this pool of submissions of unknown quality, leading to a journal that gains a reputation for low quality articles (this is basically the business model of some publishers that offer 'pay-to-publish'). Authors with high quality articles would avoid those journals, lowering the quality of the article submissions further. Eventually, only the lowest quality articles get submitted, and published. This is a problem of adverse selection, because authors want their high quality articles to be published, but in the end, only low quality articles get published.

    The way to solve an adverse selection problem is to reveal the private information - in this case, to identify which article submissions are high quality, and which are low quality. That would lead to a separating equilibrium, where low quality articles are rejected and high quality articles are accepted and published. The publisher can reveal this information through editorial and peer review. The editor reads the abstract (and perhaps the paper), and decides whether it is worthwhile sending for review, and if not the submission is desk-rejected. If the paper is sent out for peer review, its quality is judged by the peer reviewers. These processes are a form of screening - where the uninformed party (the publisher) tries to reveal the private information (about the quality of the article submission).

    However, screening is not the only way to deal with an adverse selection problem. And the problem with screening through editorial and peer review is that it takes up a lot of time. The editor has to spend time reading and making decisions, and the peer reviewers have to spend time reading and writing reports. The alternative to screening is signalling - where the informed party (the authors) reveal the private information themselves.

    Now, of course, if you simply ask the authors whether their paper is high quality or not, every author (even those with low quality articles) would respond that their paper is high quality. In order for a signal to credibly reveal the private information and be effective, it needs to meet two important conditions: (1) it needs to be costly; and (2) it needs to be costly in a way that makes it unattractive for those with the low quality attributes to attempt. One way that signals could meet the second condition is if they are more costly to the authors of low quality articles.

    Because journals have idiosyncratic formatting requirements, submitting to a journal is clearly costly, so the first condition is met. What about the second condition? I can see two ways that we could argue that costly re-formatting is more costly for authors of low-quality articles than for authors of high-quality articles.

    First, authors of low-quality articles will realise that their submission has a lower chance of acceptance than a high-quality article does. That means that they can anticipate having to go through the re-formatting and submission process more than once, leading to a higher expected cost. To avoid this higher cost, they may avoid submitting to high quality journals (thereby revealing that their paper is low quality). Authors of high-quality articles know their submission has a higher chance of being accepted, so they face a lower expected cost of re-formatting, and will be more likely to submit to a high-quality journal.
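    The expected-cost logic in that first argument can be made concrete with a small sketch (the numbers are assumed purely for illustration):

```python
REFORMAT_HOURS_PER_SUBMISSION = 18  # assumed time cost of formatting one submission

def expected_formatting_hours(acceptance_probability: float) -> float:
    # With acceptance probability p at each journal, the expected number of
    # submission rounds before acceptance is 1/p, so the expected formatting
    # cost scales inversely with the paper's chances of acceptance.
    return REFORMAT_HOURS_PER_SUBMISSION / acceptance_probability

print(expected_formatting_hours(0.30))  # high-quality paper: 60 hours expected
print(expected_formatting_hours(0.05))  # low-quality paper: 360 hours expected
```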

    Second, authors of high-quality articles are more likely to be high-quality academics, who are well supported by their institutions, have research grants, and may have research assistants who can handle the re-formatting at relatively low (salary, or monetary) cost. Authors of low-quality articles are less likely to have this support, and have to handle the re-formatting themselves, at (presumably) higher salary (or monetary) cost.

    So, perhaps the high cost of re-formatting journal articles for submission to journals is a signalling mechanism that acts as a way of sifting out the low-quality journal submissions before they start tying up editor (and peer reviewer) time? Would moving to a system where the authors can submit in any sensible format for the initial submission actually be an improvement, or would it tie up more resources in unnecessary editorial and peer review? It would be interesting to see some analysis of the experience of journals that have adopted a more open formatting approach for first submission, in terms of the quality of submissions (and the quality of published articles).

    Monday 4 January 2021

    The high cost of re-formatting papers for submission to journals

    One of the worst tasks in research is formatting papers ready for submission to an academic journal. Every journal has its own idiosyncratic requirements, in terms of formatting, referencing, word limits, abstract length, keywords, and so on. None of the time spent on formatting is productive time, and most of it is time wasted, given that more than half of the time your article submission is going to be rejected (and will need to be formatted again for submission to the next journal).

    How much time is wasted on the task of re-formatting? This 2019 paper by Yan Jiang (Stanford University) and co-authors collected data from authors of articles in:

    twelve journals from the InCites Journal Citation Reports (JCR) database in each of eight broad scientific (biology, biochemistry and molecular biology, microbiology, immunology, and cell biology) and clinical fields (cardiology, gastroenterology, oncology).

    In total, they had 206 responses (out of the 288 authors they approached). They found that:

    When asked how much time was needed for reformatting to all journals to which the paper was resubmitted to, the majority of authors (77/118, 65%...) reported that they spent 1–3 days or more (one day of effort was defined to the respondent as meaning eight hours). This did not include time spent on improving the scientific content or waiting for reviewer comments. Time spent on reformatting alone delayed resubmissions by over two weeks in most instances (60/118, 51%...).

    I'd suggest that 1-3 days on re-formatting is probably an overestimate, based on my experience. One day perhaps, but only if the journal you are submitting to is incredibly idiosyncratic in terms of referencing (e.g. there are some journals that require the first names of authors in the reference list, and that take forever to compile). Anyway, based on the survey responses, Jiang et al. estimate the total cost of time spent on re-formatting:

    Based on our data of 57.3% of articles needing resubmission, the time spent on reformatting... and prior data of 2.3 million annual scientific articles published... we estimate that first or corresponding authors spend about 23.8 million hours reformatting worldwide every year. Using the average first year postdoctoral researcher salary of $48,432... we roughly estimate costs of reformatting to be around $550 million dollars yearly worldwide for the first or corresponding author. When taking into account the time spent by the entire research team... the costs are estimated to be $1.1 billion dollars.
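    A rough reproduction of that arithmetic, using the figures in the quote (the hours-per-resubmitted-article figure is my assumption, based on the reported 1-3 days of effort):

```python
annual_articles = 2_300_000   # scientific articles published per year
share_resubmitted = 0.573     # share of articles needing resubmission
hours_per_article = 18        # assumed: a bit over two days at 8 hours per day
postdoc_salary = 48_432       # average first-year postdoc salary, USD per year
hours_per_year = 2_080        # 52 weeks x 40 hours

total_hours = annual_articles * share_resubmitted * hours_per_article
cost_first_author = total_hours * postdoc_salary / hours_per_year
print(f"{total_hours / 1e6:.1f} million hours, ~${cost_first_author / 1e6:.0f} million")
# ~23.7 million hours and ~$550 million for the first or corresponding author alone;
# doubling for the whole research team gives something like the $1.1 billion figure.
```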

    Yikes! The formatting requirements of journals cost US$1.1 billion per year. We need more journals to adopt a process where the authors can submit in any sensible format for the initial submission. Jiang et al. note how rare this is in their sample:

    At the time of our review, only 4/96 (4%) of journals offered fully format-free initial submission.

    Finally, I found this bit from the start of the introduction kind of quaint:

    The process of publishing peer-reviewed research can be slow and onerous... It is not uncommon for manuscript reviews to take three months and the overall time from submission to publication to take between seven to nine months...

    I think there would be plenty of economists who would dream of a process that takes seven to nine months from submission to publication, rather than the periods of up to several years common at some top journals.

    [HT: Marginal Revolution, back in 2019]

    Sunday 3 January 2021

    Tax incentives can encourage older people to delay retirement and work longer

    Developed countries are facing a problem. Increasing life expectancy, coupled with low fertility, is leading to a rapidly ageing population. Countries that have publicly funded old age pensions are likely going to face challenges to their continuing affordability, because there will be fewer working age taxpayers for each pension recipient (what economists refer to as a lower 'support ratio'). The options available to policy makers include increasing the age of eligibility for pensions (as several countries have done in recent years), decreasing the real value of pensions (such as by not adjusting them for inflation), or shifting from universal pensions to means-tested pensions (where older people with high income or wealth would not be eligible to receive the pension).

    All of these changes are politically tricky to implement because, as the population ages, older people (and those soon to become eligible for the pension) become an even larger share of the voting population. Also, reducing the real value of pensions (or delaying eligibility for them) may lead to increases in poverty among older people. Another alternative, which may reduce these poverty concerns, is to encourage older people to delay retirement, working until they are older and, depending on the pension rules, potentially delaying their receipt of pension benefits (even when the age of eligibility has not changed). One way to encourage people to work more is to allow them to keep more of their labour earnings, such as by lowering the tax rate on labour income.

    A reasonable question, then, is how much difference can a tax change make to the labour market behaviour of older people? This 2017 article by Lisa Laun (Institute for Evaluation of Labour Market and Education Policy, Sweden), published in the Journal of Public Economics (open access) provides some indication. Laun uses linked Swedish data from the "Income and Tax Register (IoT), the Longitudinal Database on Education, Income and Employment (LOUISE) and the Employment Register", which allows her to track nearly 190,000 people who turned 65 years old within three months either side of the year end, between 2001 and 2010. Importantly, there were two changes in the tax regime that occurred at the start of 2007, as Laun explains:

    The first labor tax credit studied in this paper is an earned income tax credit that reduced the personal income tax on labor income only. It was introduced on 1 January 2007 for workers of all ages, with the purpose of increasing the returns from working relative to collecting public transfers. Motivated by the particular importance of encouraging older workers to remain in the labor force, the tax credit is substantially larger for workers aged 65 or above at the beginning of the tax year...

    The second labor tax credit studied in this paper is a payroll tax credit for workers aged 65 or above at the beginning of the tax year. Like the earned income tax credit, it was introduced on 1 January 2007... The payroll tax rate for workers above age 65... was reduced from 26.37% in 2006 to 10.21% in 2007. Since then, it only includes pension contributions. The payroll tax credit thus reduced the payroll tax rate for older workers by 16.16 percentage points.

    Laun evaluates the effect of the combination of these two tax rate changes on the labour market participation of older people. Specifically, she looks at the impact on the 'extensive margin' - whether older people work or not (as opposed to the 'intensive margin' - how many hours they work, if they are working). She essentially compares workers who are similar in age, but on either side of the January date on which their tax rate changes. She finds that there is:

    ...a participation elasticity with respect to the net-of-participation-tax rate of about 0.22 for individuals who were working four years earlier.

    In other words, a one percentage point decrease in the tax rate increases labour force participation by 0.22 percentage points. Given that the employment rate just before age 65 appears to be about 63 percent, and the tax rates dropped by around 20 percentage points, the effect of the Swedish tax change amounts to about 4.4 percentage points of additional labour force participation, or an increase of about 7 percent. The results are robust to various other specifications, and are similar to results from other countries in other contexts not related to retirement. Laun also shows that the retirement hazard (essentially similar to the probability of retirement) decreases by a statistically significant amount as a result of the tax change.
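    Spelling out that back-of-envelope calculation (this follows my rough reading of the 0.22 elasticity above, not Laun's own calculation):

```python
participation_response = 0.22  # pp of extra participation per pp cut in the tax rate (rough reading)
tax_cut_pp = 20                # approximate combined drop in tax rates, percentage points
baseline_participation = 63    # percent employed just before age 65

extra_participation_pp = participation_response * tax_cut_pp
print(extra_participation_pp)                                        # ~4.4 percentage points
print(round(extra_participation_pp / baseline_participation * 100))  # ~7 percent increase
```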

    However, pension receipt does not change - people are just as likely to receive the pension after the tax change as before. Interestingly, in Sweden pension receipt is voluntary (but universal and not tied to whether or not an older person is working or to their earnings, similar to the case in New Zealand), and delaying the pension allows a higher amount to be claimed later (a feature of pension systems that many countries have, but New Zealand does not). So, if working longer led to a delay in eligibility for pensions, you can bet that the effect of the tax change would be much smaller (and potentially zero).

    The take-away from this paper is that incentives do matter. It is possible to incentivise older people to work longer, even when they remain eligible for the old age pension. However, this sort of change isn't going to make pensions any more affordable unless the value of pensions in real terms is reduced as well. If people are working more, then the pension could potentially be less generous without substantially increasing poverty among older people. However, that doesn't make any changes in this space any easier to introduce politically.