Friday 30 November 2018

Beer prices and STIs

Risky sexual behaviour is, by definition, sexual behaviour that increases a person's risk of contracting a sexually transmitted infection (STI). Risky sex is more likely to happen when the participants are affected by alcohol. So, if there is less alcohol consumption, it seems reasonable to assume that there will be less risky sex. And if alcohol is more expensive, people will drink less (economists refer to that as the Law of Demand). Putting those four sentences together, we have a causal chain that suggests that when alcohol prices are higher, the incidence of sexually transmitted infections should be lower. But how much lower? And, could we argue that alcohol taxes are a good intervention to reduce STI incidence?

A 2008 article by Anindya Sen (University of Waterloo, in Canada) and May Luong (Statistics Canada), published in the journal Contemporary Economic Policy (ungated here) provides some useful evidence. Sen and Luong used provincial data from Canada for the period from 1981 to 1999, and looked at the relationship between beer prices and gonorrhea and chlamydia rates. Interestingly, over that time beer prices had increased by 10%, while gonorrhea incidence had decreased by 93% and chlamydia incidence had decreased by 28%.  They find that:
...higher beer prices are significantly correlated with lower gonorrhea and chlamydia rates and beer price elasticities within a range of -0.7 to -0.9.
In other words, a one percent increase in beer prices is associated with a 0.7 to 0.9 percent decrease in gonorrhea and chlamydia rates. So, if the excise tax on beer increased, then the incidence rate of STIs would decrease. However, it is worth noting that the effect of a tax change will be much smaller than the elasticities above imply. According to Beer Canada, about half of the cost of a beer is excise tax (although that calculation is disputed, I'll use it because it is simple). So, assuming the tax increase is passed through fully to the retail price, a 1% increase in the beer tax would increase the price of beer by only 0.5%, halving the effect on STIs to a decrease of 0.35 to 0.45 percent. Still substantial.
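To make the arithmetic concrete, here is a minimal sketch of that back-of-the-envelope calculation (the 50 percent tax share and full pass-through of the tax to the retail price are the assumptions noted above; the elasticities are Sen and Luong's):

```python
# Back-of-the-envelope effect of a beer tax increase on STI incidence.
# Assumptions: excise tax is 50% of the beer price, the tax increase is
# fully passed through to the price, and the price elasticities of STI
# rates are those reported by Sen and Luong (-0.7 to -0.9).

tax_share = 0.5        # share of the beer price that is excise tax (assumed)
tax_increase = 0.01    # a 1% increase in the excise tax
elasticities = [-0.7, -0.9]

price_increase = tax_share * tax_increase   # a 1% tax rise -> a 0.5% price rise

for elasticity in elasticities:
    sti_change = elasticity * price_increase
    print(f"Elasticity {elasticity}: STI rates change by {sti_change:.2%}")

# Prints changes of roughly -0.35% and -0.45%, half the size of the raw elasticities.
```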

Of course, that assumes that Sen and Luong's results are causal, which they aren't (although they do include some analysis based on an instrumental variables approach, which supports their results and has an interpretation that is closer to causality). However, in weighing up the optimal tax on alcohol, the impact on STI incidence is a valid consideration.

Thursday 29 November 2018

The economic impact of the 2010 World Cup in South Africa

The empirical lack of economic impact of mega sports events is reasonably well established. Andrew Zimbalist has a whole book on the topic, titled Circus Maximus: The Economic Gamble behind Hosting the Olympics and the World Cup (which I reviewed here; see also this 2016 post). So, I was interested to read a new study on the 2010 FIFA World Cup in South Africa that purported to find significant impacts.

This new article, by Gregor Pfeifer, Fabian Wahl, and Martyna Marczak (all University of Hohenheim, in Germany) was published in the Journal of Regional Science (ungated earlier version here). World-Cup-related infrastructure spending in South Africa between 2004 (when their hosting rights were announced) and 2010 was:
...estimated to have totaled about $14 billion (roughly 3.7% of South Africa’s GDP in 2010) out of which $11.4 billion have been spent on transportation...
Unsurprisingly, the spending was concentrated in particular cities, which were to host the football matches. To measure economic impact, Pfeifer et al. use night lights as a proxy. They explain that:
...[d]ata on night lights are collected by satellites and are available for the whole globe at a high level of geographical precision. The economic literature using high‐precision satellite data, also on other outcomes than night lights, is growing... The usefulness of high‐precision night light data as an economic proxy is of particular relevance in the case of developing countries, where administrative data on GDP or other economic indicators are often of bad quality, not given for a longer time span, and/or not provided at a desired subnational level.
They find that:
Based on the average World Cup venue on municipality level, we find a significant and positive short‐run impact between 2004 and 2009, that is equivalent to a 1.3 percentage points decrease in the unemployment rate or an increase of around $335 GDP per capita. Taking the costs of the investments into account, we derive a net benefit of $217 GDP per capita. Starting in 2010, the average effect becomes insignificant...
That is pretty well demonstrated in the following figure. Notice that the bold line (the treated municipalities) sits above the dashed line (the synthetic control, see below) only from 2004 up to 2010, when the two lines come back together.


They also find that:
...the average picture obscures heterogeneity related to the sources of economic activity and the locations within the treated municipalities. More specifically, we demonstrate that around and after 2010, there has been a positive, longer‐run economic effect stemming from new and upgraded transport infrastructure. These positive gains are particularly evident for smaller towns, which can be explained with a regional catch‐up towards bigger cities... Contrarily, the effect of stadiums is generally less significant and no longer‐lasting economic benefits are attributed to the construction or upgrade of the football arenas. Those are merely evident throughout the pre‐2010 period. Taken together, our findings underline the importance of investments in transport infrastructure, particularly in rural areas, for longer‐run economic prosperity and regional catch‐up processes.
In other words, the core expenditure on the tournament itself, such as stadiums, had no economic impact after construction ended (which is consistent with the broader literature), while the expenditure on transport infrastructure did. South Africa would have gotten the same effect by simply building the transport infrastructure without the stadiums.

There were a couple of elements of the study that troubled me. The first relates to the synthetic control method they used. You want to compare the 'treated' municipalities (i.e. those where new transport infrastructure or stadiums were built) with 'control' municipalities (where no infrastructure was built, but which are otherwise identical to the treatment municipalities). The problem is that finding control municipalities that are identical to the treatment municipalities is all but impossible. So, instead you construct a 'synthetic control' as a weighted average of several other municipalities, with the weights chosen so that the synthetic control looks very similar to the treated municipality. This is an approach that is increasingly being used in economics.
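For readers who haven't seen the method before, here is a minimal sketch of how synthetic control weights are typically chosen, using made-up night-lights data (a generic illustration only, not Pfeifer et al.'s actual estimation). The weights are non-negative, sum to one, and are chosen so that the weighted average of the control municipalities tracks the treated municipality as closely as possible over the pre-treatment period:

```python
import numpy as np
from scipy.optimize import minimize

# Minimal synthetic control sketch (illustrative only, made-up data).
# Rows are pre-treatment years; columns are candidate control municipalities.
rng = np.random.default_rng(0)
pre_years = 10
controls = rng.normal(size=(pre_years, 5))      # night lights for 5 control municipalities
true_weights = np.array([0.6, 0.3, 0.1, 0, 0])
treated = controls @ true_weights + rng.normal(0, 0.01, pre_years)

def pre_treatment_fit(w):
    """Sum of squared gaps between the treated unit and the weighted controls."""
    return np.sum((treated - controls @ w) ** 2)

n = controls.shape[1]
result = minimize(
    pre_treatment_fit,
    x0=np.full(n, 1 / n),                                       # start from equal weights
    bounds=[(0, 1)] * n,                                        # weights are non-negative
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1},   # and they sum to one
)
print(np.round(result.x, 3))   # estimated weight on each control municipality
```

In Pfeifer et al.'s case, it is this weight-choosing step that ends up placing most of the weight on a single municipality, as discussed below.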

However, in this case basically all of the large cities in South Africa were treated in some way. So, the synthetic control is made up of much smaller municipalities. In fact, the synthetic control is 80.8% weighted to uMhlathuze municipality (which is essentially the town of Richards Bay, northeast of Durban). So, effectively they were comparing the change in night lights in areas with infrastructure development with the change in night lights for Richards Bay (and the surrounding municipality).

Second, they drill down to look at the impacts of individual projects, and find that some of the projects have significant positive effects that last beyond 2010 (unlike the overall analysis, which finds nothing after 2010). Given the overall null effect after 2010, that suggests that there must be some other projects that had negative economic impacts after 2010. Those negative projects are never identified.

The economic non-impact of mega sports events is not under threat from this study. The best you could say is that hosting the FIFA World Cup induced South Africa to invest in transport infrastructure that it might not otherwise have built. Of course, we will never know.

Wednesday 28 November 2018

How many zombies are there in New Zealand?

Let's say there is some rare group of people and that you want to know how many people there are in the group. Say, people who own fifteen or more cats, or avid fans of curling. Conducting a population survey isn't going to help much, because if you survey 10,000 people and three belong to the group that doesn't tell you very much. Now, let's say that not only is the group rare, but people don't want to admit (even in a survey) that they belong to the group. Say, people who enjoyed the movie Green Lantern, or secret agents, or aliens, or vampires, or zombies. How do you get a measure of the size of those populations?

One way you might be able to estimate the size of such a group is with an indirect method. If you survey a random sample of people, and you know how many people they know (that is, how many people are in their social network), you could simply ask each person in your survey how many Green Lantern lovers, or how many zombies, they know. You could then extrapolate from that how many there are in the population as a whole, if you make some assumptions about the overlaps between the networks of the people you surveyed.

It's not a totally crazy idea, but it is thoroughly lampooned by Andrew Gelman (Columbia University) in this article published on arXiv:
Zombies are believed to have very low rates of telephone usage and in any case may be reluctant to identify themselves as such to a researcher. Face-to-face surveying involves too much risk to the interviewers, and internet surveys, although they originally were believed to have much promise, have recently had to be abandoned in this area because of the potential for zombie infection via computer virus...
Zheng, Salganik, and Gelman (2006) discuss how to learn about groups that are not directly sampled in a survey. The basic idea is to ask respondents questions such as, "How many people do you know named Stephen/Margaret/etc." to learn the sizes of their social networks, questions such as "How many lawyers/teachers/police officers/etc. do you know," to learn about the properties of these networks, and questions such as "How many prisoners do you know" to learn about groups that are hard to reach in a sample survey. Zheng et al. report that, on average, each respondent knows 750 people; thus, a survey of 1500 Americans can give us indirect information on about a million people.
If you're interested, the Zheng et al. paper is open access and available here. So, how many zombies are there in New Zealand? To find out, someone first needs to do a random survey asking people how many zombies they know.
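To give a rough idea of how the extrapolation would work, here is a minimal sketch of the basic network scale-up estimator, using made-up survey responses (the network sizes are loosely based on the average of 750 contacts reported by Zheng et al., and the population figure is roughly New Zealand's):

```python
# Basic network scale-up estimator (illustrative sketch with made-up data).
# Each respondent reports how many people they know in total, and how many
# zombies they know; the ratio is scaled up to the whole population.

population = 4_900_000                        # approximate population of New Zealand
network_sizes = [750, 600, 900, 750, 500]     # people known by each respondent (hypothetical)
zombies_known = [0, 1, 0, 0, 1]               # zombies known by each respondent (hypothetical)

zombie_estimate = population * sum(zombies_known) / sum(network_sizes)
print(round(zombie_estimate))   # about 2,800 zombies, given these made-up numbers
```

The same logic explains the Zheng et al. arithmetic above: 1,500 respondents who each know around 750 people provide indirect information on roughly a million people.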

Sunday 25 November 2018

The law and economics (and game theory) of Survivor

As I mentioned in a post last year, I really love the reality show Survivor. One day I might even collate some of the cool economics-related examples from the show - comparative advantage, asymmetric information, risk and uncertainty, public goods, common resources, and lots and lots of game theory (coalitions, prisoners' dilemmas, repeated games, coordination games, and so on).

I recently ran across this 2000 working paper by Kimberley Mason and Maxwell Stearns (both George Mason University) on the law and economics of Survivor. It would probably be more correct if the title said it was about the game theory of Survivor, because that is really what it is. It was written soon after the conclusion of the first season of Survivor (the show is currently up to its 37th season - David vs. Goliath). The paper is interesting in that it traces out all the strategic decision-making in the first season, and relates it to game theory and rational decision-making. Mason and Stearns also highlight the masterstroke of the eventual winner, Richard Hatch, in bowing out of the last immunity challenge:
The strenuous nature of the competition helped Richard to justify a decision that was ultimately a well disguised defection from his suballiance with Rudy. Recall that Richard withdrew from the competition, claiming that he knew he would not win. If one construes the Richard/Rudy suballiance as a commitment to do whatever they can to ensure that they emerge as the finalists... then by withdrawing, Richard defected. To see why consider how the game was necessarily played as a result of Richard’s decision. Had Rudy won the competition, he would have voted to keep Richard on as a finalist, consistent with his commitment to the suballiance. Because Kelly preferred Rudy to Richard (as shown in her first vote in cycle 13), this would have risked a 4 to 3 vote for Rudy by the jury. (This assumes that the remaining six jurors vote as they did.). But if Kelly won the game, then she would choose between Rudy and Richard. She knew that either of them would vote for the other as a juror. The only question from her perspective was who was more popular with the remaining jurors. As Richard likely knew, Rudy was more popular, meaning that if Kelly won, Richard would still be selected as a finalist. In contrast, if Richard stayed in the immunity contest and won, he faced another Catch-22. If he voted to keep Rudy, then Kelly would vote for Rudy as a juror, and as a result, Richard would lose (again assuming the other jurors voted as they did). And if he voted for Kelly, then he would violate the express terms of the suballiance with Rudy, and risk Rudy’s retribution. If Rudy also defected, then Kelly would win. The only way that Richard could reduce the likelihood of this result was to withdraw from the game. While he would remain a finalist regardless of whether Rudy or Kelly won, he hoped that Kelly would win because she would eliminate his toughest final competitor.
Kelly won the challenge, and Richard duly won Survivor by a vote of 4-3. Mason and Stearns conclude:
At the beginning of this essay, we posited that Survivor was played in a manner that was consistent with the predictions of rational choice theory. We certainly do not suggest that every player played in a manner that optimized his or her prospects for winning. Indeed, that is largely the point. At each step in the game, those who best positioned themselves to win were the ones who played in a rational and strategic manner.
Interestingly, the paper also contains a discussion of the optimal size of an alliance, based on theory from Gordon Tullock and Nobel Prize winner James Buchanan, which should be familiar to my ECONS102 students:
Professors Buchanan and Tullock present an optimal size legislature as a function of two costs, agency costs, which are negatively correlated with the number of representatives, and decision costs, which are positively correlated with the number of representatives... The optimum point, according to Buchanan and Tullock, is that which minimizes the sum of agency and decision costs...
These two conflicting costs, which are both a function of coalition size, pit the benefits of safety in numbers against the risks of disclosure to non-alliance members... As the size of the coalition increases, the members are increasingly protected against the risk that a member will defect in favor of an alternative coalition. Conversely, as coalition size increases, the members face an increased risk of disclosure, which could lead to a coalition breakdown.
The optimal size of an alliance is one that correctly balances the benefits of being large enough to be safe from the non-allied players voting you off (the marginal benefit of adding one more person to the alliance decreases as the alliance gets larger), against the costs of the alliance being revealed to all players (the marginal cost of adding one more person to the alliance increases as the alliance gets larger). The cost of having a large alliance also relates to the chance of defection - the chance that one or more members of the alliance switch sides and blindside someone. It is easier to maintain trust and cohesion in a smaller alliance.
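As a stylised illustration of that trade-off (with entirely made-up cost functions, not anything from Buchanan and Tullock or from the Mason and Stearns paper), the optimal alliance size is the one that minimises the sum of a cost that falls as the alliance grows and a cost that rises as it grows:

```python
# Stylised Buchanan-Tullock-style trade-off for alliance size (made-up cost functions).
# 'Safety' costs fall as the alliance grows; 'disclosure/defection' costs rise.

def total_cost(size, players=16):
    safety_cost = (players - size) ** 2 / players   # risk of being voted off by outsiders
    disclosure_cost = size ** 2 / 4                 # risk of the alliance being revealed
    return safety_cost + disclosure_cost

sizes = range(1, 17)
optimal_size = min(sizes, key=total_cost)
print(optimal_size, round(total_cost(optimal_size), 2))   # the cost-minimising alliance size
```

With these particular made-up costs the optimum is a small alliance; the point is simply that the optimum balances the two marginal effects described above.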

Survivor is a great example of economics in action. If you aren't already a fan, you should start watching it!

Saturday 24 November 2018

The debate over a well-cited article on online piracy

Recorded music on CDs and recorded music as digital files are substitute goods. So, when online music piracy was at its height in the 2000s, it is natural to expect that there would be some negative impact on recorded music sales. For many years, I discussed this with my ECON110 (now ECONS102) class. However, in the background, one of the most famous research articles on the topic actually found that there was essentially no statistically significant effect of online piracy on music sales.

That 2007 article was written by Felix Oberholzer-Gee (Harvard) and Koleman Strumpf (Kansas University), and published in the Journal of Political Economy (one of the Top Five journals I blogged about last week; ungated earlier version here). Oberholzer-Gee and Strumpf used 17 weeks of data from two file-sharing servers, matched to U.S. album sales. The key issue with any analysis like this is:
...the popularity of an album is likely to drive both file sharing and sales, implying that the parameter of interest γ will be estimated with a positive bias. The album fixed effects v_i control for some aspects of popularity, but only imperfectly so because the popularity of many releases in our sample changes quite dramatically during the study period.
The standard approach for economists in this situation is to use instrumental variables (which I have discussed here). Essentially, this involves finding some variable that is expected to be related to U.S. file sharing, but shouldn’t plausibly have a direct effect on album sales in the U.S. Oberholzer-Gee and Strumpf use school holidays in Germany. Their argument is that:
German users provide about one out of every six U.S. downloads, making Germany the most important foreign supplier of songs... German school vacations produce an increase in the supply of files and make it easier for U.S. users to download music.
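To make the mechanics of the approach concrete, here is a minimal two-stage least squares sketch with simulated data (the variable names and numbers are hypothetical, and this is not Oberholzer-Gee and Strumpf's actual specification). The instrument is used to isolate the variation in downloads that is unrelated to unobserved album popularity:

```python
import numpy as np

# Minimal two-stage least squares sketch with simulated data.
rng = np.random.default_rng(1)
n = 5000

german_holidays = rng.binomial(1, 0.3, n)      # instrument: German school holidays
popularity = rng.normal(size=n)                # unobserved album popularity
downloads = german_holidays + popularity + rng.normal(size=n)
sales = -0.3 * downloads + 2.0 * popularity + rng.normal(size=n)

def ols(y, X):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

constant = np.ones(n)

# First stage: predict downloads using the instrument.
first_stage_X = np.column_stack([constant, german_holidays])
fitted_downloads = first_stage_X @ ols(downloads, first_stage_X)

# Second stage: regress sales on the fitted (instrumented) downloads.
iv_estimate = ols(sales, np.column_stack([constant, fitted_downloads]))[1]
naive_estimate = ols(sales, np.column_stack([constant, downloads]))[1]

print(f"2SLS estimate: {iv_estimate:.2f} (should be close to the true effect of -0.3)")
print(f"Naive OLS estimate: {naive_estimate:.2f} (biased upwards because popularity drives both)")
```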
They then find that:
...file sharing has had only a limited effect on record sales. After we instrument for downloads, the estimated effect of file sharing on sales is not statistically distinguishable from zero. The economic effect of the point estimates is also small.... we can reject the hypothesis that file sharing cost the industry more than 24.1 million albums annually (3 percent of sales and less than one-third of the observed decline in 2002).
Surprisingly, this 2007 article has been a recent target for criticism (although, to be fair, it was also a target for criticism at the time it was published). Stan Liebowitz (University of Texas at Dallas) wrote a strongly worded critique, which was published in the open access Econ Journal Watch in September 2016. Liebowitz criticises the 2007 paper for a number of things, not least of which is the choice of instrument. It is worth quoting from Liebowitz's introduction at length:
First, I demonstrate that the OS measurement of piracy—derived from their never-released dataset—appears to be of dubious quality since the aggregated weekly numbers vary by implausibly large amounts not found in other measures of piracy and are inconsistent with consumer behavior in related markets. Second, the average value of NGSV (German K–12 students on vacation) reported by OS is shown to be mismeasured by a factor of four, making its use in the later econometrics highly suspicious. Relatedly, the coefficient on NGSV in their first-stage regression is shown to be too large to possibly be correct: Its size implies that American piracy is effectively dominated by German school holidays, which is a rather farfetched proposition. Then, I demonstrate that the aggregate relationship between German school holidays and American downloading (as measured by OS) has the opposite sign of the one hypothesized by OS and supposedly supported by their implausibly large first-stage regression results.
After pointing out these questionable results, I examine OS’s chosen method. A detailed factual analysis of the impact of German school holidays on German files available to Americans leads to the conclusion that the extra files available to Americans from German school holidays made up less than two-tenths of one percent of all files available to Americans. This result means that it is essentially impossible for the impact of German school holidays to rise above the background noise in any regression analysis of American piracy.
I leave it to you to read the full critique, if you are interested. Oberholzer-Gee and Strumpf were invited to reply in Econ Journal Watch. Instead, they published a response in the journal Information Economics and Policy (sorry, I don't see an ungated version online) the following year. Unfortunately, that response is a great example of how not to respond to a critique of your research. They essentially ignored the key elements of Liebowitz's critique, and he responded in Econ Journal Watch again in the May 2017 issue:
Comparing their IEP article to my original EJW article reveals that their IEP article often did not respond to my actual criticisms but instead responded, in a cursorily plausible manner, to straw men of their own creation. Further, they made numerous factual assertions that are clearly refuted by the data, when tested.
In the latest critique, Liebowitz notes an additional possible error in Oberholzer-Gee and Strumpf's data. It seems to me that the data error is unlikely (it is more likely that the figure that represents the data is wrong), but since they haven't made their data available to anyone, it is impossible to know either way.

Overall, this debate is a lesson in two things. First, it demonstrates how not to respond to reasonable criticism - that is, by avoiding the real questions and answering some straw man arguments instead. Related to that is the importance of making your data available. Restricting access to the data (except in cases where the data are protected by confidentiality requirements) makes it seem as if you have something to hide! In this case, the raw data might have been confidential, but the weekly data used in the analysis are derivative and may not be. Second, as Liebowitz notes in his first critique, most journal editors are simply not interested in publishing comments on articles published in their journal, where the comments might draw attention to flaws in the original articles. I've struck that myself with Applied Economics, and ended up writing a shortened version of a comment on this blog instead (see here). It isn't always the case though - I had a comment published in Education Sciences a couple of months ago. The obstructiveness of authors and journal editors to debate on published articles is a serious flaw in the current peer-reviewed research system.

In the case of Oberholzer-Gee and Strumpf's online piracy article, I think it needs to be seriously down-weighted, at least until they are willing to allow their data and results to be carefully scrutinised.

Thursday 22 November 2018

Drinking increases the chances of coming to an agreement?

Experimental economics allows researchers to abstract from the real world, presenting research participants with choices where the impact of individual features of the choice can be investigated. Sometimes the context in which choices are made can also be manipulated, to test its effect on behaviour.

In a 2016 article published in the Journal of Economic Behavior & Organization (I don't see an ungated version, but it looks like it might be open access anyway), Pak Hung Au (Nanyang Technological University) and Jipeng Zhang (Southwestern University of Finance and Economics) look at the effect of drinking alcohol on bargaining.

They essentially ran three experiments, where participants drank either: (1) a can of 8.8% alcohol-by-volume beer; (2) a can of non-alcoholic beer; or (3) a can of non-alcoholic beer, where they were explicitly told that it was non-alcoholic. They then got participants to play a bargaining game, where each participant was given a random endowment of between SGD1 and SGD10, then paired up with another participant and:
...chose whether to participate in a joint project or not. If either one subject in the matched pair declined to participate in the joint project, they kept their respective endowments. If both of them agreed to start the joint project, the total output of the project was equally divided between them. The project’s total output was determined by the following formula: 1.2 × sum of the paired subjects’ endowment.
So, if Participant X had an endowment of 4 and Participant Y had an endowment of 8, and they both chose to participate in the joint project, they would each receive $7.20 (12 * 1.2 / 2). It should be easy to see, then, that participants with a low endowment are more likely to be better off participating than participants with a high endowment (who will be made worse off, like Participant Y in the example above, unless the person they are paired with also has a high endowment). And because high-endowment players are better off staying out, a player should expect that anyone who agrees to participate has a low endowment, which pushes the cutoff down further: Au and Zhang show that participating in the joint project should only be chosen by those with an endowment of 3 or less.
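To see where that cutoff of 3 comes from, here is a minimal sketch of the iterated reasoning (my own illustration, assuming whole-dollar endowments drawn uniformly from $1 to $10, and that a player participates whenever participating pays at least as much as keeping their endowment):

```python
# Iterated reasoning for the participation cutoff (my own sketch, not the
# authors' derivation). Endowments are whole dollars, uniform on $1 to $10.
# If both players participate, each receives 0.6 * (sum of the two endowments);
# otherwise each keeps their own endowment.

endowments = range(1, 11)

def best_cutoff(partner_cutoff):
    """Highest endowment worth contributing, given that the partner only
    participates when their endowment is at or below partner_cutoff."""
    expected_partner = (1 + partner_cutoff) / 2
    return max(e for e in endowments if 0.6 * (e + expected_partner) >= e)

cutoff = 10                           # start by assuming everyone participates
while best_cutoff(cutoff) != cutoff:  # iterate until the cutoff is self-consistent
    cutoff = best_cutoff(cutoff)

print(cutoff)   # converges to 3: only endowments of $3 or less should join
```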

Using data from 114 NTU students, they then find that:
...[w]hen the endowment is between 1 and 4 dollars, almost all subjects participate in the alcohol treatment and a large proportion (around 90–95%) of the subjects join the project in the two nonalcohol treatments. When the endowment is above 8 dollars, the participation ratio drops to around 0.2...
...subjects in the alcohol treatment are more likely to join the project at almost all endowment levels, compared with their counterparts in the two nonalcohol treatments.
Importantly, they show that drinking alcohol has no effect on altruism or risk aversion, ruling out those as explanations for the effect, leading them to conclude that:
...in settings in which skepticism can lead to a breakdown in negotiation, alcohol consumption can make people drop their guard for each others’ actions, thus facilitating reaching an agreement.
Drinking makes people more likely to come to an agreement. I wonder how you could reconcile those results with the well-known effects of alcohol on violent behaviour? Perhaps alcohol makes people more likely to agree to engage in violent confrontation?

Wednesday 21 November 2018

Tauranga's beggar ban, and the economics of panhandling

Tauranga City Council has just passed a ban on begging. The New Zealand Herald reports:
The council voted 6-5 to ban begging and rough sleeping within five metres of the public entrances to retail or hospitality premises in the Tauranga City, Greerton and Mount Maunganui CBDs.
The bans will become law on April 1, 2019, as part of the council's revised Street Use and Public Places Bylaw...
[Councillor Terry] Molloy said the measures of success would be a marked reduction in beggars and rough sleepers in the targeted areas, the community feeling a higher level of comfort with their security in those areas, happier retailers and no proof the problem had moved elsewhere.
Becker's rational model of crime applies here. Panhandlers (or beggars) weigh up the costs and benefits of panhandling when they decide when (and where) to panhandle. If you increase the costs of panhandling (for example, by penalising or punishing panhandlers), you can expect there to be less of it (at least, in the areas where the penalties are being enforced). How much less? I'll get to that in a moment.
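As a stylised sketch of that Becker-style cost-benefit calculation (with made-up numbers, purely for illustration), a panhandler solicits at a particular spot only if the expected earnings exceed the expected costs, which now include the chance of being penalised under the bylaw:

```python
# Stylised Becker-style participation decision (illustrative numbers only).

def panhandles(expected_earnings, effort_cost, enforcement_prob, penalty):
    """Panhandle at this spot only if expected benefits exceed expected costs."""
    expected_cost = effort_cost + enforcement_prob * penalty
    return expected_earnings > expected_cost

# Before the ban: no enforcement risk at this spot.
print(panhandles(expected_earnings=40, effort_cost=20, enforcement_prob=0.0, penalty=100))  # True

# After the ban: a 30% chance of a $100 penalty tips the decision against panhandling here.
print(panhandles(expected_earnings=40, effort_cost=20, enforcement_prob=0.3, penalty=100))  # False
```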

You may doubt that panhandlers are rational decision-makers. However, a new paper by Peter Leeson and August Hardy (both George Mason University) shows that panhandlers do act in rational ways. Using data from 258 panhandlers, observed on 242 trips to Washington D.C. Metrorail stations, Leeson and Hardy find that:
Panhandlers solicit more actively when they have more human capital, when passersby are more responsive to solicitation, and when passersby are more numerous. Panhandlers solicit less actively when they encounter more panhandling competition. Female panhandlers also solicit less actively...
Panhandlers are attracted to Metro stations where passersby are more responsive to solicitation and to stations where passersby are more numerous. Panhandlers are also attracted to Metro stations that are near a homeless shuttle-stop and are more numerous at stations that are near a homeless service.
In other words, if the benefits of panhandling increase (because passersby are more responsive, or more numerous), there will be more panhandling. If the costs of panhandling are higher (because the Metro station is further to travel to), there will be less panhandling. This is exactly what you would expect from rational panhandlers. As Leeson and Hardy note:
...panhandlers behave as homo economicus would behave if homo economicus were a street person who solicited donations from passersby in public spaces.
Interestingly, Leeson and Hardy also explain in economic terms how panhandling works:
...panhandler solicitation is generally regarded as a nuisance; it threatens to create “psychological discomfort…in pedestrians,” such as guilt, awkwardness, shame, even fear (Ellickson 1996: 1181; Skogan 1990; Burns 1992). Third, pedestrians are willing to pay a modest price to avoid this discomfort...
By threatening more discomfort, more active panhandler solicitation extracts larger payments through two channels. First, it makes passersby who would have felt the threat of some discomfort and thus paid the panhandler something even if he had solicited less actively feel a still greater threat and thus pay him more. Second, it makes passersby who would not have felt the threat of any discomfort and thus not paid the panhandler anything if he had solicited less actively feel a threat and thus pay him a positive amount.
An interesting question arises though. Panhandlers have imperfect information about how willing passersby are to pay to avoid this discomfort. They find this out through soliciting. So, in areas where passersby pay more (or more often), that might encourage panhandlers to be more active. I'm sure I'm not the only one who has been told not to pay panhandlers, because you just encourage more of the activity. It seems obvious. But is it true?

A new article by Gwendolyn Dordick (City College of New York) and co-authors, published in the Journal of Urban Economics (ungated earlier version here), suggests probably not. They collected data from 154 walks through downtown Manhattan (centred around Broadway) in 2014 and 2015, where they observed the location and numbers of panhandlers. Importantly, their data collection occurred before and after several significant changes downtown, including the opening of One World Trade Center and its observation deck, and the increase in foot traffic associated with the September 11 Museum. This allowed them to evaluate how an increase in potential earnings (through more passersby, especially tourists) affects panhandling. They find that:
...the increase in panhandling was small and possibly zero (although our confidence intervals are not narrow enough to rule out relatively large changes). Panhandling moved from around Broadway toward areas where tourists were more common... We tentatively conclude that the supply elasticity of panhandling is low...
The moral hazard involved in giving to panhandlers seems to be small. More generally, the incidence of policy changes in places like Downtown Manhattan is likely to be pretty simple: a fairly constant group of panhandlers gains or loses; there is no “reserve army of panhandlers” to eliminate any rise in returns by flooding in, and no shadowy “panhandling boss” behind the scenes to soak up any gains by asking more money for right to panhandle in various locations (since control over even the best location is not worth much because substitute locations are often vacant). Giving to panhandlers does not to any great extent encourage panhandling or a fortiori homelessness.
In other words, when the expected earnings from panhandling increase, there isn't a sudden influx of new panhandlers, and existing panhandlers don't spend more time soliciting. Interestingly, they also find that:
Because the number of people who want to panhandle, even at the best times and places, is small, space is effectively free. Supply at zero price exceeds demand. Because space is free, so is courtesy, and so is abiding by norms.
There was little fighting among the panhandlers, because space was abundant. The main constraint on panhandling in Manhattan appeared to be that people didn't want to panhandle, not a lack of space to do so. This may also be because the panhandlers are 'target earners' - they only panhandle for long enough to earn what they want for that day, so if the earnings are good they don't have to panhandle for very long (although Dordick et al.'s results cast some doubt on whether panhandlers actually are target earners).

What can Tauranga learn from the results of these two papers? Greater penalties on panhandlers will reduce panhandling, but may also simply move it elsewhere. In those other locations, panhandlers will have to work even harder (perhaps being even more aggressive), and for longer, to achieve their target earnings. And because the best spaces for panhandling have been taken away from them, there may be even more conflict over the remaining spaces. I guess we'll see in due course.

[HT for the Leeson and Hardy paper: Marginal Revolution]

Tuesday 20 November 2018

The value (or lack thereof) of low-ranked journals in economics

Last week I posted about the emphasis in economics on publishing in the 'Top Five' journals. Not everyone can publish there, obviously, due to space constraints. And even top economists can't publish every paper in the Top Five. So, what is the relative value of lower-ranked journals?

In a 2017 IZA Discussion Paper, Nattavudh Powdthavee (Warwick Business School), Yohanes Riyanto (Nanyang Technological University), and Jack Knetsch (Simon Fraser University) take that question a step further. They ask whether publishing in lower ranked journals might have negative value. That is, publishing in low-ranked journals might lower other economists' perceptions of the quality of an academic CV.

This idea harks back to a 1998 paper by Christopher Hsee (ungated version here), which showed that (quote is from Powdthavee et al.):
...people shown a set of dinnerware having 24 pieces in good condition, were willing to pay significantly more for these than another group of people were willing to pay for a set that contained 28 pieces in good condition but with another 11 that were broken.
Using survey data from 378 economists from economics departments in the top ten percent worldwide, Powdthavee et al. find something similar:
...it appears likely that the inclusion of lower ranked journals on an individual’s publication list will have a negative impact on the assessment of such lists by other economists. We found statistically significant differences between the higher average 1 to 10 rating that respondents gave to both lists having only eight higher ranked journals, and the lower average rating that other subsamples gave to lists containing all of the same eight higher ranked journals plus six more lower ranked ones.
Specifically, adding lower-ranked journals to a list that included two Top Five journals lowered the average rating from 8.1 out of 10 to 7.6 out of 10. Similarly, adding lower-ranked journals to a list that didn't include any Top Five journals lowered the average rating from 7.0 to 6.3 out of 10. However, it's not all bad news. When CVs including and excluding the lower-ranked journals were rated jointly (that is, when the survey respondents saw both CVs and could compare them), the negative effect disappeared. However, Powdthavee et al. note that:
There are, of course, occasions in which is it the results of joint valuations that will matter to final outcomes. Perhaps most easily imagined are comparisons between candidates for a position or honour – Candidate X vs. Candidate Y. But most others, such as those involving promotion, tenure, and selection of consultants and other experts, seem to be ones more likely to turn on results of single valuations. Further, even in cases of Candidates X and Y competition over a position, it is largely the results of single valuations that determine whether a person becomes a Candidate X or a Candidate Y.
It makes me wonder though, how publications outside of economics journals affect the perception of a CV. As someone with an appreciable number of publications outside of economics, of course I have a vested interest in knowing the answer to that question.

Monday 19 November 2018

The gender gap and question time in academic seminars

The sources of the gender gap in STEM (and economics) are probably many and varied. However, one potential source that has been highlighted is the lack of role models (see my post on the topic here). If female senior students don't have female academic role models, then they may be less likely to pursue an academic career, and if female junior faculty don't have female senior faculty role models, they may be less likely to succeed.

A recent paper by Alecia Carter (University of Cambridge), Alyssa Croft (University of Arizona), Dieter Lukas (University of Cambridge) and Gillian Sandstrom (University of Essex) looks at one example of role modelling - the asking of questions in academic seminars (with a nice non-technical summary on the LSE Impact Blog). Specifically, they collected survey data from 509 academics (mostly in biology or psychology), as well as observational data from "247 seminars, from 42 departments of 35 institutions in 10 countries".

They found that:
...a given question after a departmental seminar was more than 2.5 times more likely to be asked by a male than a female audience member, significantly misrepresenting the gender-ratio of the audience which was, on average, equal.
So, even after controlling for the gender make-up of the audience, and the hosting institution, and the gender of the speaker, men were more than twice as likely to ask questions as women. They conclude that:
[i]n the case of academic seminars, then, the fact that our data show women asking disproportionately fewer questions than men necessarily means that junior scholars are encountering fewer visible female role models in the field.
An interesting aspect of the paper is that they tried an intervention to increase the number of questions from female audience members. They first noted that there were more questions (and more questions from women) when the time allowed for questions was longer. They then manipulated some seminar lengths to allow for more time for questions. However, the intervention had no effect on the number of questions asked by female audience members.

Having read the paper though, there was an alternative intervention they could have tried. The number of questions from female audience members also appears to be higher when a woman asks the first question (even if you exclude the first question from the analysis). So, a simpler intervention would have been to ask the seminar moderator or chair to ensure that a female audience member asks the first question (if possible).

I've just gotten back from a conference in the U.S. I can't say I noticed a gender disparity in question asking there, but then again I wasn't looking. I'll definitely be paying more attention during departmental seminars and conference sessions in future.

[HT: eSocSci]

Saturday 17 November 2018

Book review: On the Third Hand

In the 19th Century, Thomas Carlyle dubbed economics "the dismal science". Unfortunately, the reputation of economics for being humourless has improved little since then. But that reputation is undeserved. Caroline Postelle Clotfelder's 1996 book, On the Third Hand: Humor in the Dismal Science, an Anthology, collects the best examples of economics humour from the time of Adam Smith onwards (and up to the mid-1990s, obviously).

Most of the book is made up of excerpts, and a lot of it is satire. Some of it has not aged well, and without a better understanding of the context of the time, I'm afraid the humour was lost on me. Other parts remain hilarious, such as Alan Blinder on the economics of brushing teeth (which can also be found here) and Arthur Levine on the benefits of government-sponsored sex. George Stigler's arguments against textbook publishers constantly bringing out new editions, and the associated costs to society, remain relevant today.

Some particularly famous pieces make it into the book. One such is the "Petition of the manufacturers of candles, wax lights, lamps, candlesticks, street lamps, snuffers, extinguishers, and of the producers of oil, tallow, rosin, alcohol, and, generally, of everything connected with lighting" (by Frédéric Bastiat, available online here), where the petitioners argue that they should be protected from "the intolerable competition" from the sun. Another, though some argument could be made as to whether it counts as economics, is Jonathan Swift's "A Modest Proposal" (available online here).

A particular highlight for me was Joan Beck's suggestions on how to reduce healthcare costs, which included do-it-yourself options for patients, end-of-year closeout sales on discontinued therapies, frequent-flyer-like programs where patients accumulate points, seasonal specials, his-and-her operations, family rates, and so on.

The anthology is a great collection of the humour and wit of economists past. Before life gets too dismal, it might be worth tracking down a copy of the book.

Friday 16 November 2018

Intergenerational transmission of gender bias and the gender gap in STEM

More male students than female students choose to study STEM subjects (and economics) at university. In a recent post, I discussed two papers that could be interpreted as suggesting comparative advantage as an explanation. That is:
...even though female students may be better than male students in STEM subjects at school, we may see fewer of them studying those subjects at university (let alone taking them as a career), because female students are also better in non-STEM subjects at school, and they are better by more in non-STEM than in STEM, compared with male students. Economists refer to this as the students following their comparative advantage. Female students have a comparative advantage in non-STEM, and male students have a comparative advantage in STEM subjects.
However, there are other competing explanations. Students' choices are affected by their parents and by their peers. So, if their parents hold biased views about whether studying a STEM subject is suitable for female students (or if their peers do), then female students may be less likely to study STEM subjects. A recent working paper by Alex Eble (Columbia University) and Feng Hu (University of Science and Technology Beijing) looks specifically at the intergenerational transmission of bias (from parents to children).

Eble and Hu use data from 8,912 Chinese students in 215 seventh and ninth grade classrooms, to which students were routinely assigned at random. That means that each student's peer group (within their classroom) was randomly assigned. The key variables that Eble and Hu are interested in are the bias in the views of each student's own parents, and similar bias in the views of each student's peers' parents. They then look at how those two biases are related to students' own bias, aspirations, and academic performance. They find that:
...a one standard deviation increase in peer parents’ bias causes a 4.2 percentage point (8%) increase in the likelihood that a child will hold the bias... This transmission of bias appears to occur for both girls and boys in roughly the same manner... we calculate that going from roughly 25 percent of peers’ parents being biased to 75 percent of peers’ parents being biased generates an 18.9 percentage point (34%) change in the likelihood that a child will also hold that bias...
Children whose parents hold anti-girl bias are 29 percentage points (52%) more likely to also hold that bias, and again the transmission appears to hold equally for boys and for girls...
Ok, so there is intergenerational transmission of bias. How does that affect the gender gap? Eble and Hu find that:
...an increase in peer parent bias increases the gender gap... in girls’ perceived difficulty of math relative to boys’ by two percentage points, or 28 percent of the 7.2 percentage point gap between boys and girls in this variable. This pattern also holds for own parents’ bias, and the estimates are again more stark: our estimated coefficient of own parent’s bias on the gender gap is a 15.0 percentage point increase in the likelihood that a girl perceives math to be difficult, relative to the likelihood for boys. For boys, own parent’s bias is associated with a 6.1 percentage point decrease in the likelihood the child will perceive math to be difficult; the “total effect” for girls is an 8.9 percentage point increase in this likelihood...
As with perceived difficulty, our estimates of the effect of peer parents’ bias on boys’ and girls’ test scores, respectively, differ in sign. Boys appear to gain slightly (a statistically insignificant 0.05 SD increase) from a one standard deviation increase in the proportion of peers whose parents believe that boys are superior to girls in learning math. For girls, on the other hand, we estimate that a one SD increase in peer parent bias increases the gender gap - that is, reduces girls’ performance relative to boys’ - by a statistically significant 0.063 SD...
Here again the correlation between own parent’s bias and performance is much larger in magnitude - the scores of boys whose parents believe that boys are better than girls at learning math are 0.16 SD higher than for boys whose parents do not believe this, and for girls, having a parent who holds this bias pushes the child’s test score down, relative to boys’ scores, by 0.28 SD...
In other words, they find evidence that the gender gap in perceived difficulty of maths and in maths test scores can be partially explained by own parents' bias and peer parents' bias. Unsurprisingly, the effect of own parents' bias is the larger of the two. Interestingly, they also find that girl peers' parents have a bigger effect on girls than boy peers' parents do. However, in amongst the results they also find a potential partial solution, at least in terms of peer parents' bias:
These negative effects, however, decrease with the number of friends the child has in her class; a child with five close friends in her class appears to be entirely immune to the negative effects of peer parent bias that we have shown throughout this study...
So, it's not all bad news. If you want to give your daughter the best chance of success in STEM, start by minimising the bias in your own views. And if you're worried about the biased views of other parents, then to the extent that you can, make sure your daughter has many close friends in her class.

[HT: Development Impact]

Wednesday 14 November 2018

The emphasis on publishing in the top five economics journals

James Heckman and Sidharth Moktan (both University of Chicago) have a new NBER Working Paper (ungated version here) on publication in the "Top Five" economics journals. The top five (T5) journals are generally considered to be the American Economic Review, Econometrica, the Journal of Political Economy, the Quarterly Journal of Economics, and the Review of Economic Studies (although some would quibble over the latter, preferring the Review of Economics and Statistics). Why worry about this topic? Heckman and Moktan explain:
Publication in the T5 journals has become a professional standard. Its pursuit shapes research agendas. For many young economists, if a paper on any topic cannot be published in a T5 outlet, the topic is not worth pursuing. Papers published in non-T5 journals are commonly assumed to have descended into their "mediocre" resting places through a process of trial and failure at the T5s and are discounted accordingly... Pursuit of the T5 has become a way of life for experienced economists as well. Falling out of the T5 is a sign of professional decline. Decisions about promotions, recognitions, and even salaries are tied to publication counts in the T5.
There are many points of interest in the working paper, where Heckman and Moktan show that:
...publishing three T5 articles is associated with a 370% increase in the rate of receiving tenure, compared to candidates with similar levels of publications who do not place any in the T5. Candidates with one or two T5 articles are estimated to experience increases in the rate of receiving tenure of 90% and 260% respectively, compared to those with the same number of non-T5 publications.
Their results are based on data on the job and publication histories of tenure-track faculty hired by the top 35 U.S. economics departments between 1996 and 2010. That makes the results less relevant to non-U.S. PhD students going into the job market (although publishing in T5 journals will still be a strong signal of quality for those students).

Of interest to many are the results by gender, which make for disturbing reading. In particular, Figure 9 shows the probability of moving to tenure for different numbers of T5 publications, by gender:


Based on that figure, it is no surprise to find that:
...male faculty reap greater rewards for T5 publication - the same quantity of T5 publications is associated with greater reductions in time-to-tenure for male faculty compared to their female counterparts. Gender differences in T5 rewards are not attributable to gender differences in the quality of T5 articles.
The last point is particularly troublesome. Female faculty producing the same quality of research are clearly not being treated the same (not a unique finding - see some of my earlier posts on this point here and here).

The figure above clearly shows that, regardless of gender, T5 publications are valuable. Heckman and Moktan stop short of calculating how valuable they are though. A 2014 article published in the journal Economic Inquiry (ungated earlier version here), by Arthur Attema, Werner Brouwer, and Job van Exel (all Erasmus University Rotterdam), provides a partial answer to this. Based on survey data from 85 authors of economics journal articles, they used a contingent valuation survey to answer the question of whether economists would be willing to 'give their right arm' for a publication in the American Economic Review. They found that:
...sacrificing half a right thumb appears to be a better approximation of the strength of preference for a publication in the AER than sacrificing a right arm.
Given the lower payoff that women receive from a T5 publication, presumably they would be willing to give up much less than half a thumb.

[HT: Marginal Revolution]

Sunday 11 November 2018

Inequality, policy and some cause for optimism

The causes underlying inequality are extraordinarily complex. If the causes were simple to tease apart, the policy prescription to address inequality would also be simple to identify. In my ECONS102 class, we devote a substantial amount of time just to listing a number of the most high-profile causes of inequality. Teasing out which causes contribute the most to current inequality seems like an impossible task.

However, it may be possible to look at differences in inequality over time, or between countries, and identify some of the contributions to those differences. For instance, in New Zealand there was a big jump in inequality in the late 1980s and early 1990s, as shown here (which is Figure D.14 from this report, and shows the changes in two measures of inequality for New Zealand from 1981 to 2017):


Since then, there really hasn't been much change, despite any media rhetoric to the contrary (a point made many times by Eric Crampton - see here and here and here for example - or see this post of mine from last year). In a recent article in Scientific American, Nobel Prize winner Joseph Stiglitz looks at the case of the U.S. His article is worth a complete read, but I'll pull out the most relevant bits:
Since the mid-1970s the rules of the economic game have been rewritten, both globally and nationally, in ways that advantage the rich and disadvantage the rest. And they have been rewritten further in this perverse direction in the U.S. than in other developed countries—even though the rules in the U.S. were already less favorable to workers. From this perspective, increasing inequality is a matter of choice: a consequence of our policies, laws and regulations.
In the U.S., the market power of large corporations, which was greater than in most other advanced countries to begin with, has increased even more than elsewhere. On the other hand, the market power of workers, which started out less than in most other advanced countries, has fallen further than elsewhere. This is not only because of the shift to a service-sector economy—it is because of the rigged rules of the game, rules set in a political system that is itself rigged through gerrymandering, voter suppression and the influence of money. A vicious spiral has formed: economic inequality translates into political inequality, which leads to rules that favor the wealthy, which in turn reinforces economic inequality...
Political scientists have documented the ways in which money influences politics in certain political systems, converting higher economic inequality into greater political inequality. Political inequality, in its turn, gives rise to more economic inequality as the rich use their political power to shape the rules of the game in ways that favor them—for instance, by softening antitrust laws and weakening unions...
Rigged rules also explain why the impact of globalization may have been worse in the U.S. A concerted attack on unions has almost halved the fraction of unionized workers in the nation, to about 11 percent. (In Scandinavia, it is roughly 70 percent.) Weaker unions provide workers less protection against the efforts of firms to drive down wages or worsen working conditions...
Many other changes to our norms, laws, rules and regulations have contributed to inequality. Weak corporate governance laws have allowed chief executives in the U.S. to compensate themselves 361 times more than the average worker, far more than in other developed countries. Financial liberalization—the stripping away of regulations designed to prevent the financial sector from imposing harms, such as the 2008 economic crisis, on the rest of society—has enabled the finance industry to grow in size and profitability and has increased its opportunities to exploit everyone else...
Other means of so-called rent extraction—the withdrawal of income from the national pie that is incommensurate with societal contribution—abound. For example, a legal provision enacted in 2003 prohibited the government from negotiating drug prices for Medicare—a gift of some $50 billion a year or more to the pharmaceutical industry. Special favors, such as extractive industries' obtaining public resources such as oil at below fair-market value or banks' getting funds from the Federal Reserve at near-zero interest rates (which they relend at high interest rates), also amount to rent extraction...
Notice the similarity between some of the changes Stiglitz describes and the market-based reforms that happened in New Zealand in the late 1980s and early 1990s. Many economists argue that there is a trade-off between market efficiency (the size of the economic pie) and equity (how evenly the pie is distributed). The reforms were designed to increase the size of the pie, so that all groups could have more, even if the shares were less evenly distributed. Stiglitz takes on those arguments as well:
Some economists have argued that we can lessen inequality only by giving up on growth and efficiency. But recent research, such as work done by Jonathan Ostry and others at the International Monetary Fund, suggests that economies with greater equality perform better, with higher growth, better average standards of living and greater stability. Inequality in the extremes observed in the U.S. and in the manner generated there actually damages the economy. The exploitation of market power and the variety of other distortions I have described, for instance, makes markets less efficient, leading to underproduction of valuable goods such as basic research and overproduction of others, such as exploitative financial products.
Stiglitz's policy prescription has many facets. Of most relevance to New Zealand are reform of political financing to reduce the influence of lobby groups, progressive taxation, increased access to high-quality education, modern competition laws, labour laws that better protect workers (see here for a related post of mine), reforms to corporate governance laws, and affordable housing. Much of that sounds like the policy goals of the current New Zealand government. So, if Stiglitz is right, then there may be cause for optimism for those who would prefer that inequality were lower in New Zealand.

Wednesday 7 November 2018

Legalised prostitution and crime

Earlier in the year, I wrote a post about red light districts and house prices. Based on data from Amsterdam, Erasmo Giambona and Rafael Ribas found in a working paper that:
...homes next to prostitution windows are sold at a discount as high as 24%, compared to similar properties outside the RLD...
And they found that half or more of the price discount related to crime. The argument is that red light districts attract crime, and that crime reduces local property values. An interesting related question is, if crime is displaced and concentrated in the red light district, what happens to crime overall in the city?

In a 2017 article published in the American Economic Journal: Economic Policy, Paul Bisschop (SEO Economisch Onderzoek), Stephen Kastoryano (University of Mannheim), and Bas van der Klaauw (VU University Amsterdam) look at what happened to city-level crime in the 25 largest cities in the Netherlands when tippelzones were introduced. To be clear:
[a] tippelzone is a designated legal street prostitution zone where soliciting and purchasing sex is tolerated between strict opening and closing hours at night.
Using data from 1994 to 2011, and accounting for the fact that nine Dutch cities opened tippelzones between 1983 and 2004 (and three closed their tippelzones in that time), they find that:
...opening a tippelzone reduces sexual abuse and rape. These results are mainly driven by a 30–40 percent reduction in the first two years after opening the tippelzone... For tippelzones with a licensing system, we additionally find long-term decreases in sexual assaults and a 25 percent decrease in drug-related crime, which persists in the medium to long run.
This accords with a theory that prostitution and rape (or sexual abuse) are substitutes for some men. This theory is not new - it actually dates to Thomas Aquinas (according to this paper - sorry, I don't see an ungated version anywhere).

If the effect (a 30-40 percent reduction in rape and sexual abuse) seems large, consider the results of a second article, by Scott Cunningham (Baylor University) and Manisha Shah (UCLA), published in the journal Review of Economic Studies earlier this year (ungated version here). Cunningham and Shah look at the surprising case of Rhode Island, where indoor prostitution (e.g. massage parlours) was accidentally legalised in 1983 (as part of a reform of prostitution laws), but this fact wasn't picked up until a judge's ruling in 2003. After that, it took six years for the Rhode Island legislature to re-criminalise indoor prostitution. In the meantime, indoor prostitution was legal in the state.

Using data from 1999 to 2009, they showed that, unsurprisingly:
[m]assage provision by RI sex workers increases by over 200% after decriminalization. Transaction prices decrease 33% between 2004 and 2009, which is what economic theory would predict given the increase in supply. Both results are statistically significant at conventional levels.
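That price result is just what you would expect from an outward shift in supply along a downward-sloping demand curve. Here is a minimal sketch with hypothetical linear demand and supply curves (purely illustrative, and not calibrated to Cunningham and Shah's data):

# Minimal supply-and-demand sketch (hypothetical numbers, not from the paper).
# Demand: P = 100 - 2Q. Supply before decriminalisation: P = 20 + 2Q.
# Decriminalisation increases supply, shifting the curve out to P = 5 + 2Q.

def equilibrium(a, b, c, d):
    """Solve a - b*Q = c + d*Q for linear demand (P = a - b*Q) and supply (P = c + d*Q)."""
    q = (a - c) / (b + d)
    p = a - b * q
    return p, q

p0, q0 = equilibrium(a=100, b=2, c=20, d=2)  # before the supply increase
p1, q1 = equilibrium(a=100, b=2, c=5, d=2)   # after the supply increase
print(f"price: {p0:.1f} -> {p1:.1f}, quantity: {q0:.1f} -> {q1:.1f}")
# price falls (60.0 -> 52.5) while quantity rises (20.0 -> 23.8)

The direction of the change is the point here; the quoted 33% price decline depends on the actual elasticities of supply and demand, which the sketch makes no attempt to match.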
The more important results though relate to crime and public health effects. For crime, they found that:
[d]ecriminalization reduces rape offences 31–34% from 2004–09. From 1999–2003 reported rape offences in the U.S. are 34 per 100,000 and 40 per 100,000 in RI. From 2004–09, rape rates decrease to 27.7 per 100,000 in RI while the U.S. remains the same at 34.1 per 100,000.
Notice the similarity in the size of effects with the Bisschop et al. study from the Netherlands. Legalisation of prostitution reduces rape by 30-40 percent in both studies. Cunningham and Shah have some good explanations for possible mechanisms explaining the results. In addition to the substitution theory noted above, they posit that:

...decriminalization of indoor prostitution could allow police resources to be reallocated away from indoor arrests towards other crimes. The freeing up of police personnel and equipment to other areas could ultimately cause other crime rates like rape to decrease...
In terms of public health, they find that:
...decriminalization decreases gonorrhoea incidence 47% from 2004–09. From 1999–2003 gonorrhoea incidence in the U.S. was 113.4 per 100,000 females compared to 81.4 per 100,000 females in Rhode Island. From 2004–09, the rate in the U.S. stays similar at 108.4 per 100,000 females but Rhode Island declines to 43.1 per 100,000 females.
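Both headline percentages (the 31-34 percent fall in rape offences above, and the 47 percent fall in gonorrhoea incidence here) can be roughly recovered from the quoted rates, since the corresponding U.S. rates are essentially flat over the same period. A quick back-of-the-envelope check (this is not the authors' estimation strategy, just arithmetic on the quoted numbers):

# Back-of-the-envelope check using only the rates quoted above.
ri_rape_before, ri_rape_after = 40.0, 27.7   # RI rape offences per 100,000
ri_gon_before, ri_gon_after = 81.4, 43.1     # RI gonorrhoea per 100,000 females

rape_drop = (ri_rape_before - ri_rape_after) / ri_rape_before
gon_drop = (ri_gon_before - ri_gon_after) / ri_gon_before
print(f"rape offences: {rape_drop:.0%} decrease")  # ~31%, the bottom of the 31-34% range
print(f"gonorrhoea:    {gon_drop:.0%} decrease")   # ~47%, matching the quoted figure

The published estimates come from a more careful comparison against what would have happened without decriminalisation, but because the U.S. rates barely move, the raw before-and-after change in Rhode Island lands in the same ballpark.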
The potential mechanism is interesting here too. Legalisation of indoor prostitution induces women to enter the industry, and the types of women entering the industry are lower risk (have lower gonorrhoea incidence). Also, indoor prostitution generally involves less risky sex acts (more manual stimulation, less anal sex).

Cunningham and Shah's study also covers the period after indoor prostitution was re-criminalised in Rhode Island. You would expect to see effects that are the opposite of those observed when it was legalised. Unfortunately, that isn't the case, and the authors argue that:
...this is likely due to anticipatory effects and the short time period of data. Re-criminalization was anticipated, unlike the initial judicial decision that caused decriminalization; the push to re-criminalize started as early as 2006. Some claim that massage parlour owners and workers started leaving even before re-criminalization occurred, as they knew it was inevitable.
They also only had two years of data after the change, which would make quantitatively identifying any effects difficult.

Overall, the results of these two papers suggest that legalising prostitution, even if only in a particular part of a city, may be a cost-effective way to reduce overall crime and improve public health. However, if it is to be legalised in only part of a city, the distributional effects also need to be considered, since crime would then be concentrated in a particular part of the city, which, as we know from my earlier post, has a cost in terms of lower house values.

[HT: Marginal Revolution, in October last year]

Monday 5 November 2018

J.S. Bach vs. the cost disease

William Baumol argued that the cost of many services, including health care and education, was destined to increase over time. When Baumol first described this 'cost disease' (which economists now refer to as 'Baumol's cost disease'), the example he used was a string quartet. A string quartet playing a piece of music in 2018 requires the same number of players (four) playing for the same amount of time as a string quartet playing the same piece of music would have in 1918, or in 1718. In other words, the string quartet is no more productive now than a string quartet three centuries ago. Baumol argued that the only way to increase the productivity of the string quartet (their output per unit of time) would be for the quartet to play the music faster, and that would reduce the quality of their musical output.

So, because the productivity of the string quartet would not increase over time, but their wages would (in line with the wages of manufacturing workers, who are becoming more productive over time), the cost per minute (or per hour) of the string quartet must increase over time. Baumol then extended this explanation to health care (doctors can't easily be made more productive, i.e. see more patients in the same amount of time, without reducing the quality of care) and education (teachers can't easily be made more productive, i.e. teach more students in the same amount of time, without reducing the quality of education). The 'cost disease' would lead to an increase in the cost of health care and education (and many other services) over time.
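To see the arithmetic, consider a stylised example (with made-up numbers): a one-hour piece, four players, and wages that grow by 2 percent a year in line with economy-wide productivity. Because the quartet's own productivity is flat, the cost per performance rises one-for-one with wages:

# Stylised cost disease arithmetic (hypothetical wage and growth rate).
players, hours, wage = 4, 1.0, 50.0   # four players, a one-hour piece, $50 per player-hour
wage_growth = 0.02                    # assumed annual wage growth, matching economy-wide productivity

for year in (0, 10, 20, 30):
    cost = players * hours * wage * (1 + wage_growth) ** year
    print(f"year {year:2d}: cost per performance = ${cost:.2f}")
# year  0: $200.00
# year 10: $243.80
# year 20: $297.19
# year 30: $362.27

Meanwhile, goods produced by workers whose productivity is rising at 2 percent a year get no more expensive per unit, so the relative price of the quartet's services keeps climbing.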

However, maybe the central tenet of Baumol's cost disease - the inability to increase productivity without reducing the quality of service - is now under fire, and in the particular case (classical music) that Baumol first used to illustrate it. Rolling Stone reports:
Pop and rap aren’t the only two genres speeding up in tempo in the breakneck music-streaming era: The quickening of pace seems to be affecting even the oldest forms of the art. Per research this weekend from two record labels, classical music performances of J.S. Bach have also gotten faster, speeding up as much as 30 percent in the last half century.
Universal-owned Deutsche Grammophon and Decca conducted a study into multiple recordings of Bach’s famed Double Violin Concerto in celebration of the release of Bach 333, a box set marking the 333rd anniversary of the German composer’s birth. The labels found that modern recordings of the work have shaved off one-third of the length of recordings from 50 years ago, quickening by about a minute per decade.
In health care and education, technology has a potential role to play in reducing the impact of the cost disease (as I've blogged about before). However, in this case, the increased tempo increases the productivity of the orchestra, without reducing the quality of their output (from the audience's perspective), and may even increase the quality of output (if the audience prefers the faster tempo).
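To put a rough number on that productivity gain, based on the 30 percent reduction in running time reported above (back-of-the-envelope arithmetic only):

# If running time falls by 30%, the same players can fit more performances into
# the same amount of playing time.
speedup = 0.30                           # reported reduction in running time
productivity_gain = 1 / (1 - speedup) - 1
print(f"performances per hour rise by about {productivity_gain:.0%}")  # about 43%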

This development probably isn't fatal for the idea of the cost disease. Obviously, there are limits to the productivity gains from playing J.S. Bach pieces at a quicker tempo. After all, who would want to hear the Brandenburg Concertos (running times 10 to 22 minutes) played in three minutes?

[HT: Marginal Revolution]

Thursday 1 November 2018

Does the media help the general public understand inflation?

The media has an important role in helping us to recognise and understand what is going on in the world around us. One of the main goals of my ECONS102 paper is to have students recognise how economic concepts and theories apply to things that are discussed in the media. So, when I read the abstract to this new article, by David-Jan Jansen (De Nederlandsche Bank) and Matthias Neuenkirch (University of Trier, Germany), published in the Oxford Bulletin of Economics and Statistics (I don't see an ungated version, but it might be open access in any case), I took note.

Jansen and Neuenkirch use data from around 1800 people in the Netherlands, over the period 2014 to 2017, and investigate whether their engagement with print media affected the accuracy of their perceptions of current and future inflation. They find:
...no real support for the idea that more-often informed members of the general public do better in understanding inflation. In fact, more frequent readership of some types of newspapers is associated with slightly less accurate inflation perceptions... Also, a set of cross-sectional regressions using data collected in 2017 finds no evidence that non-print media outlets, such as television or Internet, help in understanding inflation. Overall, our paper casts further doubt on the idea that media usage contributes much to knowledge on economic developments...
In other words, the authors argue that the media plays no informative role in helping the general public to understand inflation. However, I disagree for two reasons.

First, the paper is clear that their measure of media use is simply the proportion of newspapers (by type: 'quality' or 'popular') that each person in the sample reported that they used 'frequently' or 'very frequently'. This is hardly a measure of media engagement. It conflates the diversity of news sources a person engages with and the intensity of that engagement. For instance, there is only one news source that I would report engaging with 'frequently', so I would show up as a low media user in their sample (in fact, given that the measure covers only print media and I don't read print newspapers, I would actually show up as a non-user).

It isn't clear to me what a relationship between incorrect inflation perceptions and the proportion of print newspapers that people read frequently even tells us. It certainly doesn't tell us anything about whether the media helps the general public to understand inflation.

The second issue I have with the paper is their measurement of the error in inflation perceptions (and in future inflation expectations). They look at the absolute value of the difference between actual and perceived inflation (or expected future inflation). Actual inflation was taken from the Dutch statistics agency, and perceived inflation was what the respondents thought inflation was in that year (and in the following year, for inflation expectations). The problem here is the use of the absolute value of the error. Suppose that paying attention to media sources shifts people's inflation perceptions upwards. People who already overestimate inflation would then overestimate by more, while people who underestimate inflation would underestimate by less. Measured by the absolute error, the first group looks less accurate and the second group looks more accurate, even though the media has had the same effect on both. In the paper, it appears that most people overestimate inflation, but the conflation of these two groups still potentially creates a serious problem for the analysis.
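A toy example makes the problem concrete (hypothetical numbers): suppose actual inflation is 2 percent, and reading the news shifts everyone's perceived inflation up by one percentage point.

# Toy example: a systematic upward shift in perceptions can leave the average
# absolute error unchanged, because the two groups' changes offset each other.
actual = 2.0
perceptions_no_media = [3.0, 1.0]     # one overestimator, one underestimator
perceptions_with_media = [4.0, 2.0]   # the same two people, shifted up by 1 point

def mean_abs_error(perceptions, actual):
    return sum(abs(p - actual) for p in perceptions) / len(perceptions)

print(mean_abs_error(perceptions_no_media, actual))    # 1.0
print(mean_abs_error(perceptions_with_media, actual))  # 1.0 - no apparent media effect

In this toy case the media's effect washes out of the absolute errors entirely; more generally, the measured relationship depends on the mix of over- and under-estimators in the sample rather than on whether the media is actually informative.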

This is a paper that looks at an interesting research question, but does so in such a way that it doesn't actually answer the research question. It would be reasonably straightforward to replicate this analysis in a more sensible way though, if the authors were willing to share their data (the Dutch DHS household data is freely available online, but the authors supplemented that with their own data collection).

And if the answer is that the print media doesn't help the public to understand economic developments, we could conclude that the public should spend more time reading economics blogs instead.