Friday, 15 December 2017

Uber drivers taking advantage of their riders (again)

Earlier this week, I wrote a review of Brad Stone's book, The Upstarts, about Airbnb and Uber. I've blogged about Uber several times before, including this post about Uber drivers gaming the system by logging off in order to induce surge pricing. It turns out that is not the only way that Uber drivers can game the system, as Quartz reported last month:
Some Uber drivers in Lagos have been using a fake GPS itinerary app to illicitly bump up fares for local riders.
Initially created for developers to “test geofencing-based apps,” Lockito, an Android app that lets your phone follow a fake GPS itinerary, is being used by Uber drivers in Lagos to inflate the cost of their trips.
The drivers claim that they use the Lockito app in order to make up for Uber slashing fares earlier in the year:
Williams*, an Uber driver who asked his real name not to be used, says he heard about Lockito a while ago but initially had no interest in using it. “Uber was sweet, until they slashed the price,” he says. “They did not bring back their price up, so the work started getting tough and tougher.”
“When the thing was just getting tougher, I had no choice but to go on Lockito.”...
The funny thing is that Uber is clearly aware of Lockito, but allows drivers to continue using it:
Perhaps most surprisingly, drivers accuse Uber of not only knowing about app, but purposely not doing anything about it because they still want to maximize their profits.
“If you’re using Lockito [with] Uber [it] will tell you “fake location detected”…they will tell you [the driver],” says Williams. “Sometimes when I run it [Lockito], Uber will tell me, “your map of your location…is fake,” you’ll now click OK…and still yet, I take my money…”
I guess that way, Uber can claim that their fares are low and that it is the actions of the drivers, not Uber, that result in high fares for passengers. Even if Uber raised their fares, it seems unlikely that drivers would stop using Lockito now. They've discovered a way to raise their incomes at essentially no cost to themselves, in a similar way to the drivers in London and New York who were gaming the surge pricing algorithm.

As we note in the very first topic of ECON110, no individual or government will ever be as smart as all the people out there scheming to take advantage of an incentive plan [*]. This is just another example.


[*] I've borrowed this point from the Steven Levitt and Stephen Dubner book, Think Like a Freak, which I reviewed here.

Wednesday, 13 December 2017

Running the gravity model in reverse to find lost ancient cities

The gravity model of trade or migration (which I have written about before here) must be one of the consistently best-performing empirical regression models (in terms of in-sample prediction, at least). The model is really simple too. In its simplest form, a gravity model suggests that the migration (or trade) flow between two regions is positively related to the 'economic mass' (proxied by population in the case of migration, or by GDP in the case of trade) of the origin and the economic mass of the destination, and negatively related to the distance between the two places. So really, you don't need a whole lot of data in order to run a gravity model (though you do need data on trade or migration flows).
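To illustrate how little machinery is involved, here is a toy sketch of estimating a gravity model by log-log OLS. Everything in it (populations, distances, the simulated flows) is made up, and it is not any particular study's specification, but the regression structure is the standard one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Made-up 'economic mass' and distance data (illustrative only)
pop_o = rng.uniform(1e4, 1e7, n)   # origin populations
pop_d = rng.uniform(1e4, 1e7, n)   # destination populations
dist = rng.uniform(10, 2000, n)    # distances between the pairs

# Simulate flows from a gravity relation with unit elasticities, plus noise
flow = 0.01 * pop_o * pop_d / dist * rng.lognormal(0, 0.1, n)

# Log-log OLS: ln(flow) = b0 + b1*ln(pop_o) + b2*ln(pop_d) + b3*ln(dist)
X = np.column_stack([np.ones(n), np.log(pop_o), np.log(pop_d), np.log(dist)])
beta, *_ = np.linalg.lstsq(X, np.log(flow), rcond=None)

print(beta[1:])  # elasticities: close to 1, 1, and -1 respectively
```

On real data you would typically add origin and destination fixed effects, but the log-log structure (and the interpretation of the coefficients as elasticities) is the same.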

The standard gravity model is based on known data such as the distances between countries (or regions, or cities). But what if you didn't know where the cities were (as might be the case for lost ancient cities), but you did know the size of the trade flows? Could you use the gravity model to triangulate the likely location of those lost cities, by estimating the distance from their trade partners? It turns out that yes, you can.

In what might be the most ingenious use of the gravity model I've ever seen, a new NBER Working Paper (ungated version here) by Gojko Barjamovic (Harvard), Thomas Chaney (Sciences Po), Kerem A. Coşar (University of Virginia), and Ali Hortaçsu (University of Chicago) does almost exactly that. The authors use a dataset of over 12,000 Assyrian tablets from 1930-1775 BCE, 2,806 of which contain mentions of multiple cities in Anatolia (modern-day Turkey). Of those tablets, 198 contain merchants' itineraries (227 itineraries in total) relating to travel between 26 cities (15 of which are known, and 11 of which are 'lost'). The authors explain the difference between known and lost cities:
‘Known’ cities are either cities for which a place name has been unambiguously associated with an archaeological site, or cities for which a strong consensus among historians exists, such that different historians agree on a likely set of locations that are very close to one another. ‘Lost’ cities on the other hand are identified in the corpus of texts, but their location remains uncertain, with no definitive answer from archaeological evidence. From the analysis of textual evidence and the topography of the region, historians have developed competing hypotheses for the potential location of some of those.
So, the authors use the data from the itineraries to construct a dataset of trade between known cities, and between known and lost cities. Using that dataset they then estimate a gravity model of trade, which provides an estimate of the distance elasticity of trade of about 3.8. That means that each 1 percent increase in distance between two cities reduced trade by about 3.8 percent. This is much higher than in modern models of trade, where the elasticity is usually about one, but given that ancient trade was mostly by road (or by coastal shipping) and the roads were not high quality, that doesn't seem too unusual.

Next comes the really cool bit. They then use the distance elasticity measure to 'back out' estimates of the location of the lost cities. Their method even gives confidence bounds around the estimated point location of each lost city. They conclude that:
...[f]or a majority of the lost cities, our quantitative estimates come remarkably close to the qualitative conjectures produced by historians, corroborating both such historical models and our purely quantitative method. Moreover, in some cases where historians disagree on the likely location of a lost city, our quantitative method supports the conjecture of some historians and rejects that of others.
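The 'backing out' step can be illustrated with a toy version of the idea (the coordinates, trade flows, and brute-force grid search below are entirely invented, and are not the paper's actual estimation method): given trade between a lost city and several known cities, and a distance elasticity, search for the location that best fits the gravity relation:

```python
import numpy as np

# Invented coordinates (km) for five known cities, and an elasticity like the paper's
known = np.array([[0, 0], [300, 50], [120, 260], [400, 300], [50, 180]], dtype=float)
sigma = 3.8

# Pretend the lost city sits at (200, 150), and generate its trade with each known city
true_loc = np.array([200.0, 150.0])
trade = 1e6 * np.linalg.norm(known - true_loc, axis=1) ** (-sigma)

# Grid-search for the location best fitting log(trade) = c - sigma*log(distance)
best, best_err = None, np.inf
for x in np.arange(0.0, 500.0, 5.0):
    for y in np.arange(0.0, 500.0, 5.0):
        d = np.linalg.norm(known - np.array([x, y]), axis=1)
        if (d < 1).any():  # skip candidate points sitting on top of a known city
            continue
        resid = np.log(trade) + sigma * np.log(d)
        resid -= resid.mean()  # profile out the unknown constant c
        err = (resid ** 2).sum()
        if err < best_err:
            best, best_err = (x, y), err

print(best)  # recovers a location at or near (200, 150)
```

With noisy real-world flows the fit would not be exact, which is where the paper's confidence bounds around each estimated location come from.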
Eyeballing the results from the maps, though, the estimated locations of the lost cities don't appear (to me) to be particularly close to the historians' qualitative estimates. In spite of that, this is a very cool paper that uses the gravity model in a very novel way. Hopefully we will see more of this in the future.

[HT: Marginal Revolution]

Tuesday, 12 December 2017

How not to measure sexual risk aversion

Risk aversion seems like such a simple concept - it is how much people want to avoid risk. Conventionally, economists measure the degree of risk aversion of a person by how much of an expected payoff they are willing to give up for a payoff that is more certain (or entirely certain). If you're willing to give up a lot, you are very risk averse, and if you are not willing to give up a lot, you are not very risk averse. But notice that the measure of risk aversion is all about behaviour, either as a stated preference (what you say you would do when faced with a choice between a more certain outcome, and a less certain outcome that has a higher payoff on average) or a revealed preference (what you actually do when faced with that same choice).

So, I was interested to read this recent paper by Stephen Whyte, Esther Lau, Lisa Nissen, and Benno Torgler (all from Queensland University of Technology), published in the journal Applied Economics Letters (sorry, I don't see an ungated version). In the paper, the authors claim to be comparing "risk attitudes towards unplanned pregnancy and sexually transmitted diseases (STDs)" between health students and other students. It is an interesting research question, since you might expect health students to be better informed about the actual risks of sexual behaviour.

However, when you look at the measure they used for risk attitudes, it becomes immediately clear that there is a problem:
To assess participant perceptions of the safety of different forms of contraception and sexual contact in relation to unplanned pregnancy and STDs, they were asked to rate, on a seven-point scale from 0% safe to 100% safe, the level of safety of each of six options. The six responses were then summed and divided by the number of responses to create a measure of average individual attitudes towards the specific risk.
The six options for risk of unplanned pregnancy were condoms; contraceptive pill; sex during menstruation; intrauterine devices; withdrawal method; and contraceptive implant; and the six options for risk of STDs were oral sex; physical contact; kissing; digital penetration; anal penetration; and vaginal penetration. At least, I think that's the case, as it was a little unclear from the paper.

However, their measure is clearly not a measure of risk attitudes (or risk aversion) at all. It is a measure of 'perceptions of safety'. Notice that the measure doesn't ask about students' sexual behaviour at all, and doesn't ask about a trade-off decision. So, it won't tell you much at all about risk aversion. In order to turn it into a measure of (sexual) risk aversion, you would at the very least need to ask the students to choose between two (or more) of the options, with different levels of risk and different levels of either 'beneficial payoff' or (more likely) cost.

Perceptions of safety of the different options is one component of the decision of which option to engage in (or to engage in none of them), but alone it does not tell you about risk aversion. A student might report that they believe the options convey a low degree of safety, but that doesn't mean that the student is risk averse. It just means that they believe that the options presented to them are high risk (low safety). Similarly, a student who reports that the options convey a high degree of safety is not necessarily less risk averse than a student who reports that the options convey a low degree of safety.

How would we expect health students to be different from other students? You might expect health students to be better informed about the actual safety associated with the different options (at least, you'd hope that they would learn this in their health studies!). In other words, you might expect other (non-health) students to over- or under-estimate the degree of safety of the different options to a greater extent than health students. Let's say that non-health students are more likely to over-estimate safety. They are more likely to take risks with their sexual health and in terms of unplanned pregnancy than are health students, because the health students are better informed about the real levels of safety of each option. This would manifest in higher measures of 'perception of safety' among non-health students than among health students. And these authors would interpret this as greater risk aversion among health students, when in fact it is entirely driven by the non-health students being misinformed relative to the health students.

Notice also that the measure of 'perceptions of safety' increases if students believe that oral sex is safer (in terms of avoiding risk of STDs), or if kissing is safer, or if vaginal sex is safer, with no consideration of the actual level of risk associated with each option. It would have been better to evaluate some of the options separately, rather than all together, since evaluating them all together really turns their measure into a general measure of 'perceptions of safety of sexual activity'.

That latter problem aside, the results of the paper are still interesting, provided you interpret them (correctly) in terms of 'perceptions of safety of sexual activity'. Students who reported as virgins had lower perceptions of safety (which might explain in part why they are still virgins). Older students had lower perceptions of safety (I guess you learn from your mistakes, or your friends' mistakes?). Male students had higher perceptions of safety in terms of STDs, but not in terms of unplanned pregnancy (this one was a bit of a surprise, as I would have expected the opposite). Non-religious students (which the authors label as atheists) had lower perceptions of safety in terms of STDs, but higher perceptions of safety in terms of unplanned pregnancy (I guess the religious students are more worried about pregnancy, which can't easily be hidden from their peers and family, than they are about STDs, which can?).

Anyway, even though the results are interesting, it doesn't change the fact that this is not the way to measure risk aversion.

Monday, 11 December 2017

Book Review: The Upstarts

I've been following the development of Uber for a long time, including writing a few posts (see here, and here, and here, and here, and here, and here, and here). I haven't followed the development of Airbnb nearly as closely (and only mentioned them once on the blog, and then only in passing), but I have used the service. So, I was interested to read The Upstarts, by Brad Stone. The subtitle is "How Uber, Airbnb, and the killer companies of the new Silicon Valley are changing the world".

In the book, Stone does an excellent job of chronicling the history of Uber, Airbnb, and to a lesser extent Lyft and the other minor players in those industries. The book covers their origin stories (both real and imagined), their growth and fights with both their competitors and regulators (especially in the U.S. and Europe), their missteps (like 'ransackgate', and Uber's failure in China), and the story so far up to about the end of 2016 (which is to be expected, from a book published in 2017). Stone seems to mostly get the inside story from the key players involved, but overall the book is more of a highlights (and/or lowlights, depending on your perspective) package than a deep dive into each company and their history. Stone himself is pretty upfront about this in the introduction:
It is not a comprehensive account of either company, since their extraordinary stories are still unfolding. It is instead a book about a pivotal moment in the century-long emergence of a technological society. It's about a crucial era during which old regimes fell, new leaders emerged, new social contracts were forged between strangers, the topography of cities changed, and the upstarts roamed the earth.
I guess we'll have to wait and see if the old regimes actually fall, or whether they just stumble a little bit and continue to be propped up by regulators. This is exemplified in the final section, where Stone asks Travis Kalanick (the CEO of Uber):
When would Uber get to profitability?
Kalanick's response was pretty evasive (teaser - you'll have to read the book to find out!).

The book has a lot to offer the student of economics, with interesting discussions about Uber's introduction of surge pricing (which I hadn't realised was introduced relatively late, in 2012), Uber's pricing experiments in Boston, the price elasticity of demand for Uber (which I've written about before here), and the impact of Uber on the value of a taxi medallion in New York (which fell by more than half between 2013 and 2016, from a starting value of US$1.2 million). I realise that all of those examples relate to Uber; that is either a fair reflection of the book, or of my prejudices. The chapters alternate between the stories of Uber and Airbnb, but I found the Uber story much more engaging. The Airbnb story is interesting, but it doesn't have quite the same drama or the same depth of content.

The Boston pricing experiments in particular are interesting, since they directly led to the rollout of surge pricing everywhere:
...[Uber's general manager in Boston, Michael] Pao started running experiments. For a week he held fares steady for passengers but increased payments to drivers at night. In response, more drivers held their noses and stuck around for closing hours. It turned out drivers were in fact highly elastic and motivated by fare increases. Pao then spent a second week confirming the thesis by breaking Boston's Uber drivers into two groups. Some saw surging rates at night while others did not. Again, drivers with higher fares during peak times stayed in the system longer and completed more trips.
Pao now had something that the previous unfocused tests of surge pricing hadn't yielded - conclusive math... Kalanick was convinced, and surge pricing became orthodoxy inside Uber...
Overall, if you're looking for an easy read and to understand how Uber and Airbnb became the monsters they are today, this book is a good place to start. I enjoyed it, and I'm sure you would too.

Saturday, 9 December 2017

STEM-readiness and the gender gap in STEM at university

Consider the proportion of university enrolments that are in STEM (Science, Technology, Engineering, and Mathematics). In contrast with enrolment in the social sciences (for example), STEM subjects at university have more demanding pre-requisite learning from high school, since they require minimum levels of prior mathematics and (often) basic sciences. So, we can think about the proportion of university enrolments in STEM as represented by the following equation:
STEM/ENR = (STEM/READY) × (READY/ENR)
That is, the proportion of all enrolled university students (ENR) that are STEM students is equal to  the proportion of students who meet the pre-requisites (READY) who enrol in STEM, multiplied by the proportion of all enrolled students who meet the pre-requisites. This is an over-simplification, but it helps us to understand the difference in the gender enrolment rate in STEM, where typically a higher proportion of male university enrolments are in STEM. This gender difference might arise because more male students who meet the pre-requisites enrol in STEM (compared with female students who meet the pre-requisites), or because more male students who enrol in university meet the pre-requisites (compared with female students who enrol). Knowing the source of this gender gap has important policy implications if you want a similar proportion of enrolments in STEM for each gender, since it tells you whether you need to increase female participation in the high school pre-requisite courses (the second term in the equation), or if you need to encourage female enrolment in STEM for those that meet the pre-requisites (the first term in the equation).

A recent NBER working paper (alternative ungated version here) by David Card (University of California, Berkeley) and Abigail Payne (University of Melbourne) performs a similar disaggregation to the one above, using data on 413,656 university entrants in Ontario over the period 2004 to 2012. First, there is a clear gender gap in the data in terms of STEM enrolment:
Overall, 30.3% of females (a total of 72,033 women) and 42.5% of males (a total of 74,763 men) enrolled in a STEM program. Note that the gap in the proportion of students within each gender group who register in STEM is large (12 percentage points) despite the fact that nearly half (49%) of STEM registrants are female. This reflects the much larger fraction of females than males who enter university in the province.
They find that the difference in the second term in the equation above dominates. Their preferred decomposition results show a:
...13.2 percentage point gender gap in the fraction of newly entering university students who enroll in a STEM program. Overall, 2.1 percentage points are attributable to a lower rate of entering a STEM major by STEM ready females than males...; 1.7 percentage points are attributable to the slightly lower fraction of females than males who are STEM ready at the end of high school and the slightly lower fraction of STEM ready females who enter university...; and 9.4 percentage points are attributable to the higher fraction of non‐STEM ready females who finish high school with enough qualifying classes to be university ready.
That last bit is not surprising in one sense: female university students are less likely to be STEM-ready (that is, to have the pre-requisites for enrolling in STEM). However, it is surprising in the sense that the difference is driven by the much greater number of female students who enrol without being STEM-ready, compared with male students. So, the gender gap could be reduced by encouraging more female students to take the pre-requisite courses in high school, or by encouraging more male students who are not STEM-ready into university courses in non-STEM disciplines.
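The quoted decomposition is easy to verify, and it makes clear just how dominant that third component is:

```python
# The three components of the 13.2-point gap, from the quoted decomposition
stem_ready_choice = 2.1   # STEM-ready women entering a STEM major at a lower rate
stem_readiness    = 1.7   # slightly fewer women STEM-ready and entering university
non_stem_ready    = 9.4   # more non-STEM-ready women entering university

total = stem_ready_choice + stem_readiness + non_stem_ready
print(round(total, 1))                    # 13.2 percentage points
print(round(non_stem_ready / total, 2))   # 0.71: over 70% of the gap
```

So more than 70 percent of the gap comes from the composition of who enters university, rather than from the choices of students who already have the pre-requisites.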

The one potential negative about the research is that the STEM subjects include:
engineering, physical sciences, natural sciences, math, computer science, nursing, environmental science, architecture, and agriculture.
They do make a good case for why nursing is included, but I also wonder about agriculture, and whether excluding those two subjects would make much of a difference to the results.

However, overall the results are interesting and help us to better understand the gender gap. I would be really interested to see if something similar holds for New Zealand.

[HT: Alex Tabarrok at Marginal Revolution, back in September]

Friday, 8 December 2017

The overstated business case for having more female managers

Westpac released a report this week on gender representation in business leadership positions. It's an important topic, and rightly received a lot of media attention (see here and here, for example). However, the media focused attention on the one part of the report that is not worth the (virtual) paper it was written on:
New Zealand's economy has a nearly $900 million annual economic hole because of low numbers of women in management roles, new research suggests.
Let me break that $900 million down, and if at the end of this post you still believe it, I know some Nigerian princes with millions of dollars that they can't get out of their country that you can help.

First, the $900 million is actually $881 million, and it comes from two sources. From the report, (emphasis is theirs):
First, having more women in leadership positions can change perceptions about female competency and skills, and this effect can increase female labour force participation. We estimate that increased female participation at manager level and above would be worth an additional $196 million (or 0.07% of GDP).
Second, women in leadership roles tend to be more supportive of flexible working policies, which in turn also increases labour force participation.
We estimate that if all New Zealand businesses were to achieve gender parity in leadership, it is likely to lead to an increase in the number of businesses offering flexible work policies. The associated benefit resulting from more businesses offering flexible working policies is an additional $685 million (or 0.26% of GDP).
Let's look at where they get each of those two numbers from, starting with the $196 million. They first estimate a model that relates female labour force participation to the proportion of employees that are female managers. They use OECD data for four years only (2011-2014). The results of that analysis suggest that:
...a 1% increase in the share of female employees who are managers is associated with a 0.09% uplift in the female labour force participation rate.
They then extrapolate that number (based on Australian data showing that 13.3% of male employees are managers, but only 8.9% of female employees are) and claim that the female labour force participation rate would increase by 0.39% if gender parity in management were attained. The problems with this analysis are many. First, the model only shows a correlation, not cause-and-effect. Nothing in their analysis demonstrates that changing the number of female managers would cause any change in the female labour force participation rate. Second, it's based on only four years of data, during which most countries' percentages of female managers and female labour force participation rates will not have changed by much. So the extrapolation will likely be well outside the observed data. Third, the data on the proportion of managers is for Australia, not New Zealand. While our two countries are similar, they are not the same, as the report itself notes:
Relative to Australia, New Zealand performs equal to or better in nearly every respect, including pay gaps.
So it seems unlikely that that first number can be believed.
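For what it's worth, the extrapolation arithmetic appears to run as follows (this is my reconstruction from the figures quoted above, not the report's own calculation):

```python
# My reconstruction of the report's extrapolation (figures from the quoted passages)
male_mgr_share = 13.3    # % of male employees who are managers (Australia)
female_mgr_share = 8.9   # % of female employees who are managers (Australia)
uplift_per_point = 0.09  # claimed % rise in female LFP per 1% rise in female managers

gap_to_parity = male_mgr_share - female_mgr_share   # 4.4 points
lfp_uplift = gap_to_parity * uplift_per_point
print(round(lfp_uplift, 2))  # 0.4, matching the report's claimed 0.39%
```

Notice that treating a percentage-point gap as a series of "1% increases" is itself a stretch, on top of the correlation and extrapolation problems already discussed.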

The second number ($685 million) is even less plausible. They first run a model that relates the number of flexible working policies to the proportion of female managers. They find that the:
...marginal effect of increasing the availability of flexible working policies by 13% if the average share of female management increases from the status quo calculated in Australia (22.4%) to parity (50%).
They then use that result in an additional calculation that is summarised in the following table:

The first problem here is the model, which again shows correlation, not cause-and-effect. This model is even more problematic than the previous one though, because the causation clearly runs in both directions - having more flexible working policies will attract more female managers, and more female managers will be more likely to push for flexible working policies. That means that you can have very little confidence in the coefficients from the model, because they are biased. Second, again they are using Australian data in part of their calculation.

Third, look at the top row of the table: "Percentage of people not in the labour force citing flexible working policies as a very important incentive to join the labour force". They then assume that 13% of the 23% of people would join the labour force if flexible working policies were increased by 13%. There is no consideration that these people were only asked about incentives, not about actually joining the labour force. On top of that, if you roll out flexible working policies in 13% more businesses, that doesn't mean the availability of jobs with flexible working policies increases by 13% (unless you also assume that the firms now offering flexible working policies have the same average size as all firms collectively). Maybe it will be larger firms that do this, or smaller firms? This hasn't been considered.

Finally, both numbers ($196 million and $685 million) assume that the increase in labour force equates to an increase in employment. That is by no means a given. If more people enter the labour force, but there are no additional jobs for them, that increase in the labour force becomes additional unemployment, with no increase in GDP at all. Alternatively, the increase in labour supply might reduce average wages, either directly or through an increase in part-time (compared with full-time) work. So, even though GDP might increase, other workers in the economy are made worse off.

There is another bit of analysis in the report that associates each one-percentage-point increase in female managers with an increase in return on assets of 0.07%, and then argues that raising female management to parity would increase return on assets by 1.5%. However, if you believe that, why would you stop at parity? If you went all the way to 100% female management, you could increase return on assets by nearly 5%!
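That reductio is just arithmetic on the report's own numbers (the implied starting share of female managers is my inference from the two figures they give):

```python
# The reductio, using the report's own numbers
roa_per_point = 0.07   # claimed ROA gain per one-point rise in female managers
gain_to_parity = 1.5   # claimed ROA gain from reaching 50% female management

implied_current_share = 50 - gain_to_parity / roa_per_point
gain_to_all_female = (100 - implied_current_share) * roa_per_point

print(round(implied_current_share, 1))  # ~28.6% female managers implied today
print(round(gain_to_all_female, 1))     # ~5.0: the 'nearly 5%' in the text
```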

Overall, the Westpac New Zealand Diversity Dividend Report does make some good points. However, the economic impact and business case are extraordinarily oversold as they are clearly flawed, and they do no credit to the overall argument that a more equal representation of women in management would be a desirable result.

Thursday, 7 December 2017

Female representation in economics

The RePEc (Research Papers in Economics) blog reported last month on female representation:
Thanks to the ranking of female authors in RePEc, we have long known the share of women in the RePEc sample of more than 50K authors: 19%. We now know also the shares of women economists by country, US state, field of study and PhD cohort...
European countries are doing better than the world average, especially Latin and Eastern European countries, while Anglo-Saxons are the most masculine (is it that relatively higher salaries for the profession in Anglo-Saxon universities attract the most competitive men?). Latin America is generally below average (except for Colombia and Argentina) while Asia has very low shaes [sic] of female economists, with less than 6% in Japan, China and India, and 9% in Pakistan (you can sort by column in the link).
Where does New Zealand rank? Just ahead of the United States and Australia, in 31st (out of 61 countries in the ranking) with female participation of 16.2% (compared with U.S. 16.1%; Australia 15.9%, U.K. 18.2%). However, things are not nearly so good at the top. The top 25% ranking for New Zealand economists can be found here. Of the 66 economists in that list, only five (that's 7.6% for those of you keeping score) are women (#26 Suzi Kerr; #44 Trinh Le; #58 Anna Strutt; #59 Rhema Vaithianathan; and #61 Hatice Ozer-Balli).

One other thing that's interesting about the RePEc blog post, in light of my post about the Wu and Card research earlier this week, is the difference in economists on Twitter:
While women represent 19% of the RePEc authors, they are only 14% in the Twitter subsample. Looking at the Top 25% of this list of RePEc/Twitter economists by number of followers (3rd row), the proportion of women falls to less than 13%. In fact, the total audience of these women among the top 25% is a little over 3%.
That's not at all surprising, if the online world is as hostile for female economists as the Wu and Card research seems to suggest.

Wednesday, 6 December 2017

No, bitcoin is not bigger than New Zealand

One of my pet peeves is people (especially the media) who directly compare stocks and flows. The latest example is from this Bloomberg article about bitcoin (reproduced in the New Zealand Herald yesterday, but wrongly attributed to the Washington Post):
Bitcoin's extraordinary price surge means its market capitalisation now exceeds the annual output of whole economies, and the estimated worth of some of the world's top billionaires...
Here are five things that have been eclipsed by bitcoin in terms of market capitalisation:
• New Zealand's GDP: The nation's farm-and-tourism-led economy is valued at US$185 billion (NZ$269b), according to World Bank data as of July, putting it some US$5 billion below bitcoin. The cryptocurrency's market cap is also bigger than the likes of Qatar, Kuwait and Hungary.
Comparing the market capitalisation of bitcoin (a measure of the entire stock of bitcoin) with the GDP of a country (a measure of one year's worth of output) is pointless. It doesn't really tell you anything.

If all investors are rational [*], then the market capitalisation (of a company, or of bitcoin) is equal to the discounted cash flow (of the company, or of bitcoin) for all time. How that compares with one year's economic output isn't a meaningful comparison. You would need to compare the market capitalisation of bitcoin with the discounted value of all future years of New Zealand's GDP.
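As a rough illustration of the point (the discount rate is an assumption, and I ignore GDP growth entirely):

```python
# Stock vs flow: compare bitcoin's market cap with the present value of
# New Zealand's GDP stream, treated as a simple perpetuity
nz_gdp = 185e9        # US$ per year, from the article
btc_cap = 190e9       # US$, roughly the article's figure (GDP plus US$5 billion)
discount_rate = 0.05  # an assumption; any sensible rate gives the same conclusion

pv_nz_output = nz_gdp / discount_rate   # present value of all future output
print(round(pv_nz_output / btc_cap, 1)) # ~19: properly valued, NZ dwarfs bitcoin
```

On a like-for-like (stock vs stock) comparison, New Zealand's economy is around twenty times 'bigger' than bitcoin, not smaller.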

So, saying that the discounted cashflow of bitcoin for all time is greater than one year's economic output of New Zealand doesn't tell you much of anything. It definitely doesn't tell you that "Bitcoin is now bigger than... New Zealand".


[*] Of course, it isn't at all clear that investors in bitcoin are rational. From what I can see, all of the 'bitcoin bulls' have significant investments in bitcoin, so it is hard to conclude anything other than there is a large amount of nest feathering going on. When you see a case where the market value of something is entirely based on the expectation that other investors will be willing to pay more for it in the future (because those investors, in turn, believe that other investors will be willing to pay yet more for it further in the future), then it's pretty likely that you're observing a bubble. And in that case, the people who yell "bitcoin is not a bubble" the loudest are those most likely to lose their shirts.

Monday, 4 December 2017

The toxic environment for women in economics

Back in August, Justin Wolfers made the case in the New York Times:
A pathbreaking new study of online conversations among economists describes and quantifies a workplace culture that appears to amount to outright hostility toward women in parts of the economics profession.
Alice H. Wu, who will start her doctoral studies at Harvard next year, completed the research in an award-winning senior thesis at the University of California, Berkeley. Her paper has been making the rounds among leading economists this summer, and prompting urgent conversations...
Ms. Wu mined more than a million posts from an anonymous online message board frequented by many economists. The site, commonly known as econjobrumors.com (its full name is Economics Job Market Rumors), began as a place for economists to exchange gossip about who is hiring and being hired in the profession. Over time, it evolved into a virtual water cooler frequented by economics faculty members, graduate students and others...
Ms. Wu set up her computer to identify whether the subject of each post is a man or a woman. The simplest version involves looking for references to “she,” “her,” “herself” or “he,” “him,” “his” or “himself.”
She then adapted machine-learning techniques to ferret out the terms most uniquely associated with posts about men and about women.
The 30 words most uniquely associated with discussions of women make for uncomfortable reading.
In order, that list is: hotter, lesbian, bb (internet speak for “baby”), sexism, tits, anal, marrying, feminazi, slut, hot, vagina, boobs, pregnant, pregnancy, cute, marry, levy, gorgeous, horny, crush, beautiful, secretary, dump, shopping, date, nonprofit, intentions, sexy, dated and prostitute.
The parallel list of words associated with discussions about men reveals no similarly singular or hostile theme.
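The classification step Wolfers describes can be sketched roughly like this (a simplified illustration, not Wu's actual code; the word lists and the majority-count rule are my assumptions):

```python
FEMALE = {"she", "her", "hers", "herself"}
MALE = {"he", "him", "his", "himself"}

def classify_post(text):
    """Label a post 'female', 'male', or None depending on which set of
    gendered pronouns appears more often (a simplified version of the
    first step of the classification described in the paper)."""
    words = [w.strip(".,!?\"'") for w in text.lower().split()]
    f = sum(w in FEMALE for w in words)
    m = sum(w in MALE for w in words)
    if f > m:
        return "female"
    if m > f:
        return "male"
    return None  # ambiguous, or no gendered pronouns at all

print(classify_post("I saw her talk at the meetings, she was great"))
```

Wu then used machine-learning techniques on the posts classified this way to find the terms most predictive of each label.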
I finally read the paper (ungated) this week (by Alice Wu and David Card), and it is every bit as disturbing as advertised. For instance, here's Table 1, which shows the ten words most associated with 'female' posts (those most likely to be about a woman because they contain a preponderance of terms like "her" and "she"), and 'male' posts:

That's only the beginning though. As Wolfers notes, Wu and Card then go on to show that there are differences in the way that females and males are discussed on the forum. For example:
...on average there are 4.07 academic or job related words in each post associated with male, but 1.76 less (a significant 43.2% decrease) when it is asscoiated (sic) with female. In terms of probability, 70.6% of the "male" posts include at least one academic/work term, while 57.4% of "female" posts do.
...a "female" post on average include 1.341 terms related to personal info or physical attributes, almost three times of what occurs in an average "male" post.
In other words, posts related to females are much more likely to focus on physical appearance or personal characteristics, while posts related to males are much more likely to maintain an academic focus. When they look at threads rather than posts, they find similar results, and also that a thread becomes more personal following a 'female' post.

Finally, Wu and Card go on to show that top female economists receive more attention on the forum than male economists. However, based on their other results it almost goes without saying that this attention is not as focused on their academic output.

Of course, it is difficult to argue that Econjobrumors is representative of the profession as a whole, or even of young economists. I lasted all of a day or two on the site when I was a PhD student before I saw it as generally a waste of time. Hopefully, young female economists are giving it a wide berth too, because it doesn't paint the best picture of the economics profession. However, Wolfers' article does end on a positive note about Wu:
She is also tenacious, and when I asked Ms. Wu whether the sexism she documented had led her to reconsider pursuing a career in economics, she said that it had not. “You see those bad things happen and you want to prove yourself,” she said.
Indeed, she told me that her research suggests “that more women should be in this field changing the landscape.”
I agree.

[HT]: Marginal Revolution, back in August.

Sunday, 3 December 2017

Lobbyists, rent seeking and deadweight losses

The rise of lobbying in New Zealand has been in the news recently, as Bryce Edwards explained in his regular Political Roundup column in the New Zealand Herald a couple of weeks ago:
Political lobbying is a growth industry in New Zealand. And lobbyists are going to be particularly busy over the next year.
Edwards charts the rise of the 'hyper-partisan' lobbying firm Hawker Britton and its right-wing counterpart Barton Deakin. It's an interesting read, along with the many links to other articles embedded within it.

Of course, lobbyists are ultimately being employed by firms that are seeking favourable policy settings. Perhaps they are looking for lighter-handed regulation for themselves, or more regulation of their competitors. Economists refer to this sort of activity as rent-seeking, and in ECON100 and ECON110 I discuss it as one of the key reasons that we might consider monopolies (or firms with market power more generally) to be unfavourable for society. Those firms make large profits, and therefore have a large incentive to use some of those profits to protect their market position through lobbying. If government is seeking to regulate their industry or to open it to more competition (or the firms are worried that the government might contemplate doing so), then those firms will employ lobbyists to dissuade governments from those policies that won't favour the firm.

When I was an undergraduate student, I struggled to see how rent seeking was negative for society. Obviously, it seems ethically problematic. But if you take a general equilibrium framework, then if the firm spends some of its profits on lobbyists, that simply becomes income for the lobbyists, and total welfare remains effectively the same (or maybe it even increases due to the producer surplus in the labour market for lobbyists).

However, that position forgets that the market operates across multiple periods. The firm with market power is generating a deadweight loss (for an explanation of why, see the first part of this earlier post). That deadweight loss arises because the firm with market power is able to price above marginal cost. If the government was to open the market to more competition or to regulate prices, then that would force the price down and increase total welfare in the market. Therefore, if the actions of the lobbyists prevent the regulation or the competition, then those actions have a cost to society that can be measured by the future deadweight losses that continue to accrue. So, lobbying does potentially have real negative consequences for society, and so as a society we should care about the actions of lobbyists and their interactions with our politicians.
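The intuition can be put in rough numbers. With linear demand P = a - bQ and constant marginal cost c, the annual deadweight loss from monopoly pricing is (a - c)²/(8b), and the cost to society of successful lobbying is the present value of the stream of deadweight losses it perpetuates. A minimal sketch, with entirely illustrative demand and cost parameters:

```python
def monopoly_dwl(a, b, c):
    """Annual deadweight loss with linear demand P = a - b*Q and constant
    marginal cost c: the triangle between the monopoly quantity (where
    MR = MC) and the competitive quantity (where P = MC)."""
    q_m = (a - c) / (2 * b)              # monopoly quantity
    q_c = (a - c) / b                    # competitive quantity
    p_m = a - b * q_m                    # monopoly price
    return 0.5 * (p_m - c) * (q_c - q_m)  # equals (a - c)**2 / (8 * b)

def pv_perpetuity(annual, discount):
    """Present value of a perpetual annual flow."""
    return annual / discount

# Illustrative market: P = 100 - 0.5*Q, marginal cost 20
dwl = monopoly_dwl(a=100, b=0.5, c=20)
print(f"Annual deadweight loss: ${dwl:,.0f}")
print(f"PV of DWL at 5%:        ${pv_perpetuity(dwl, 0.05):,.0f}")
```

The point is that the social cost of lobbying is not the lobbyists' fees (which, as noted above, are just a transfer), but the discounted stream of deadweight losses for every year the market power persists.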

Wednesday, 29 November 2017

Why Tuvaluans aren't migrating due to climate change, yet

I've been doing a lot of reading on climate change and migration lately (mostly related to responding to reviewers' comments on the paper I discussed here). The latest issue of the journal CESifo Economic Studies has a bunch of papers on the topic. The first paper in the issue is a review of the literature by Michael Berlemann and Max Friedrich Steinhardt (both of Helmut Schmidt University in Germany). I've read a number of these reviews, but theirs is the clearest and least technical review I can recall reading, so if you want an excellent overview of the available data, methods, and results from the literature (on climate and migration, as well as natural disasters and migration), then it seems that would be a good place to start.

However, in this post I want to focus on a different paper from that same journal issue (ungated earlier version here), by Ilan Noy (Victoria University of Wellington). In this paper, Noy looks at out-migration (or rather, the lack of out-migration) from Tuvalu. Noy explains that Tuvalu is of interest, because it: in many respects can serve as the canary in the mine for climate change research. If migration driven by climate change was indeed happening today, it should be found in Tuvalu, and if this migration is not happening yet, observing Tuvalu may provide us explanations for its absence.
Noy notes that there is a distinct lack of out-migration from Tuvalu even though, as a low-lying atoll country, the population is extremely vulnerable to the effects of climate change:
To summarize, all the available evidence suggests that disaster risk in Tuvalu is likely to increase significantly in coming decades. It will increase as: (i) the hazard intensity (mostly cyclones and droughts) increase; (ii) as more people will be exposed because of population growth, urbanization and movement to the capital Funafuti, and sea-level rise; and (iii) as households will be increasingly more vulnerable, given their increasing reliance on manmade infrastructure and imported goods.
Tuvalu does have a large diaspora (relative to the size of the domestic population), but given the risks you would be right to expect that many more Tuvaluans would be getting out of Funafuti than is actually observed. Given that, something must be hampering that out-migration. Noy suggests that: one of the reasons for this lack of migration is the desire by Tuvaluans to Voice. ‘Voicing’, a concept borrowed from Hirschman’s (1970) Exit, Voice, and Loyalty, is the advocacy of expressing one’s wish for change, and that Voice often is a deliberate choice that in some circumstances may be preferred to Exit (migration).
Voicing in this context can be taken as protesting against climate change and its impacts and implications, and indeed we have seen a lot of evidence of this voicing protest (e.g. see here and here), not just from Tuvalu but from Maldives (e.g. see here and here) and the Marshall Islands (e.g. see here and here).

Why spend time and effort 'Voicing' instead of Exiting (migrating)? The third component of Hirschman's theory, Loyalty, is to blame:
The last component of this is Loyalty—the attachment of people to their communities and homelands. With strong Loyalty, the cost of exiting is substantially higher, exit is therefore less likely, and consequently a strategy of Voicing is more likely to be pursued. Loyalty makes Exit less likely, and therefore gives more scope and incentive for Voice. In part, it is this Loyalty that may explain why the Tuvaluan islanders have chosen to Voice, but it is probably only an imperfect explanation, given the high degrees of previous temporary migrations away from the islands.
It's an interesting theory, and seems to explain why we don't see large out-migrations from the Pacific Islands due to climate change. Yet. It also suggests that our models of international migration, which rely on past data, will be incomplete and inaccurate because they miss the point that migration will only occur when Voice is no longer a viable option. Indeed, Noy notes that when it becomes clear that Voicing is not working, Exiting is the last resort and at that point there is likely to be a sudden and large out-migration. When that occurs is anyone's guess.

[Update]: The New Zealand government is considering introducing a special visa category for climate change refugees (which is Green Party policy). Although, at only 100 visas per year, I expect it wouldn't make a huge difference.

Tuesday, 28 November 2017

Tim Hazeldine on economic impact studies

Last week I wrote a post about economic impact studies, specifically the America's Cup economic impact study. In today's New Zealand Herald, Tim Hazeldine (University of Auckland) echoed some of the same points:
...the Ministry of Business, Innovation and Employment has commissioned a report from the same consulting company responsible for the Auckland Airport impact numbers cited above.
Using the same "multiplier analysis" that the NZIER and other independent economic consultancies have by now renounced, the report produced quite spectacular numbers for the impact of the Cup: up to $1billion injected into the NZ economy, thousands of jobs created, a return of more than seven dollars on every dollar invested in new wharf facilities.
Take the last number first. Reality check: a return on investment of over $7 per dollar invested is loosely equivalent to the "double your money in three years" promise on my hypothetical hoarding on the airport road. Believe that?
As for the value-added injection and the jobs created: to a first-order approximation, the net number of new jobs created in Auckland, with its already stretched construction and tourism industries, will be about zero. The workers needed will be bid away from other jobs, or imported as new immigrants. As a result, there will be no significant real output increases, the extra spending will be soaked up in higher prices.
Higher prices are harmful for domestic New Zealand customers and travellers but beneficial to the bottom lines of New Zealand and foreign owned businesses. It's a trade-off. My expectation is that, overall, there will be net economic benefits from holding the Cup in Auckland but that they will be quite small — below the costs to which national and local government are being asked to contribute.
In the article, Hazeldine is being a little unfair to the report. While it is true that it reports economic impact analysis based on a multiplier analysis (even though, curiously, the report notes that "we do not use multipliers that are derived from IO tables" - I guess the multipliers were derived from somewhere else?), the report also includes a cost-benefit analysis. Perhaps Hazeldine was so quick to pull the trigger on his article that he didn't read all the way through to Section 5 of the report.

As I noted last week, the cost-benefit analysis is the best of the three alternatives, and at least according to the authors, follows Treasury's Social Cost Benefit Manual (which Hazeldine advocates for, and which you can read for yourself here). The cost-benefit analysis shows benefits greater than costs under the base assumptions. However, it seems to me that, like virtually every event ever, costs will be greater than expected. In that case, costs are likely to be greater than benefits. Which isn't a good reason not to support the America's Cup, but is a good reason not to do so on the basis of economic impact.


Monday, 27 November 2017

CEO pay is not all about CEOs' performance, or company performance

While I was away overseas, CEO pay was back in the news, mostly courtesy of the continuing fallout from Theo Spierings' $8.32 million salary-plus-benefits package announced back in September. In the New Zealand Herald, Helen Roberts (University of Otago) argues for greater transparency in CEO pay:
We are continually told seven figure sums are needed to retain top executives, without any substance or proof that it needs to be that high.
The reality is that it is the independent third-party remuneration advisers who set the expectations. Compensation consultants use median pay levels from the previous year to determine the median pay level for the current round of contracts; as pay levels increase the median pay level also goes up, driving all CEO pay levels up in that industry.
So the decisions are effectively being made based on the recommendations of only a few.
This becomes a never-ending cycle of artificially inflated salary packages, irrespective of company performance or any parity with pay for salaried workers - companies are effectively being held to ransom.
She then goes on to talk about how loosely CEO pay is related to actual company performance (read the whole article, it's interesting). However, there is a key point about CEO pay that is missing from Roberts's discussion, and also from arguments in favour of high CEO pay, such as this earlier article by Jim Rose, who focuses more on superstar effects and essentially argues that if CEOs weren't earning their large salaries, they wouldn't keep their jobs.

That missing point is that the market for executives is a tournament (which I have written about earlier, also in the context of CEO pay). In tournaments the winner is not only paid for their own performance, but paid a high bonus as an incentive for those lower down (e.g. the next tier of executives, in the case of CEO pay) to work harder.

Tournament effects were first described by Sherwin Rosen and Ed Lazear in the early 1980s. In labour markets where there are significant tournament effects at play, workers are paid a 'prize' for their relative performance - maybe a raise or a promotion. The tournament 'winner' only needs to be a little bit better than the second best worker in order to 'win' the tournament, and claim the prize.

However, if winning the tournament is mostly about luck rather than good performance, then the prize needs to be very large in order to incentivise the workers to work hard to 'win' (otherwise, if the prize is small, why work hard if winning comes mostly down to luck?). The large role of luck in performance could be argued to be true of top executives (the tier below CEOs), where their performance can only be measured by metrics that they probably have only small positive influence over (and are more driven by economy-wide factors, especially in the case of large companies). [*] So, because companies want to incentivise their (non-CEO) top executives to work hard, ensuring that the CEO pay is a large step up is one way to do so. [**]
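The role of luck can be illustrated with a small simulation (all numbers are hypothetical, purely to show the mechanism): two otherwise identical executives compete, performance is effort plus random noise, and the noisier performance is, the less an extra unit of effort raises the probability of winning - so the larger the prize must be to elicit that effort.

```python
import random

def win_prob_gain(extra_effort, noise_sd, trials=100_000, seed=42):
    """Estimate how much an extra unit of effort raises the chance of
    out-performing an otherwise identical rival, when observed
    performance is effort plus normally distributed luck."""
    rng = random.Random(seed)
    wins_base = wins_extra = 0
    for _ in range(trials):
        rival = rng.gauss(0, noise_sd)
        own_luck = rng.gauss(0, noise_sd)
        wins_base += (own_luck > rival)
        wins_extra += (own_luck + extra_effort > rival)
    return (wins_extra - wins_base) / trials

for sd in (1, 5, 20):
    gain = win_prob_gain(extra_effort=1, noise_sd=sd)
    print(f"noise sd {sd:>2}: extra effort raises P(win) by {gain:.3f}")
```

If the pay gap between the CEO and the next tier of executives is the 'prize', then the noisier the performance measure, the bigger that gap has to be to generate the same incentive to work hard.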

So, the focus on the lack of clear relationship between CEO pay and company performance, and calls for increasing transparency of CEO pay setting, are at least a little misplaced. Unless we first disentangle the incentive effects that are directed at other top executives.


[*] I say positive influence here, because I'm sure that a really bad executive can have considerable negative influence on a company's performance, but it isn't at all clear to me that for a broad range of competent executives, there is much to choose between them.

[**] I do wonder how vulnerable this theory is to the extent of internal vs. external appointments as CEO, since it seems to rely on internal appointments being the norm. On the other hand, the threat of external appointments could increase the incentive effects for internal top executives, since they would have to compete on performance with potential hires from outside the company.


Thursday, 23 November 2017

The value of exams as a signal

Exams have been in the news this week for all the wrong reasons. However, last week Michael Lee (University of Auckland) wrote an article in the New Zealand Herald on the real value of exams as a teaching tool rather than just an assessment:
We use exams as an encouragement tool to compel greater engagement with the material. That is actually where the real value of an exam is. When students feel the stakes are high and are unsure of what exactly will be asked, they are incentivised to take a look at everything seriously.
That's why teachers should never tell students exactly what will be examined, because 99 per cent of students will then focus only on that material, which defeats the true purpose of the exam.
In exams and in the real world, the first step to topic mastery is a general overview of key concepts and facts with as much detail as one can remember. Clearly, exams reward students that can do these things in a relevant way to answer a specific question.
A more advanced stage of mastery is the ability to creatively apply, integrate, and challenge the knowledge you have been taught. But it is difficult to get to that level if you haven't got enough base material to work with.
I have a slightly different take on the value of tests and exams. I agree with Lee that they are useful as learning exercises, especially if organised well. I disagree that we shouldn't tell students what will be examined (although I will admit that when asked what will be examined my usual answer is "everything we have covered", which is true!). However, I see tests and exams as having another important function for students, as an important signal that students can give to future teachers and employers. This relates to solving the employers' adverse selection problem that I have written about before:
Students are engaging in a sophisticated array of signals, on multiple levels. It's not possible to avoid signalling in this case, since trying not to provide a signal is itself a signal. The problem that this signalling is trying to avoid stems from private information about the quality of the student - students know whether they are high quality (intelligent, hard working, etc.), but employers don't. Employers want to hire high-quality applicants, but they can't easily tell them apart from the low-quality applicants. This presents a problem for the high-quality applicants too, since they want to distinguish themselves from the low-quality applicants, to ensure that they get the job. In theory, this could lead the market to fail, but in reality the market has developed ways for this private information to be revealed.
One way this problem has been overcome is through job applicants credibly revealing their quality to prospective employers - that is, by job applicants providing a signal of their quality. In order for a signal to be effective, it must be costly (otherwise everyone, even those who are lower quality applicants, would provide the signal), and it must be costly in a way that makes it unattractive for the lower quality applicants to do so (such as being more costly for them to engage in).
Qualifications are an effective signal (they are costly, and they are more costly for lower quality students, who face having to expend more time and effort to complete the qualification). Exams are also an effective signal for exactly the same reason (though not at the same level as the whole qualification). Because exam performance is an effective signal, high quality students can use their performance in exams to separate themselves from lower quality students, because it is very difficult for lower quality students to pass themselves off as higher quality students in the exam format. The quality of the signal is much lower for other types of assessment such as take-home tests, assignments, or group projects, where lower quality students can easily get additional help (often from the high quality students!) to boost their grades.

To me, that is one of the key reasons why we shouldn't eliminate high-stakes tests and exams from student assessment. Take-home or open-book tests, online tests, group projects, and the myriad of other assessment types that are used all have their place, and can all be valuable as learning exercises if used well. But they'll never be able to provide the same quality of signal of student quality as a test or exam.
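The separating condition underlying this argument can be written down directly. A signal works when it is worth acquiring for high-quality workers but not for low-quality ones; a minimal sketch, with illustrative effort costs and wage premium that are my assumptions:

```python
def separating_signal(wage_premium, cost_high, cost_low):
    """Spence-style condition for an effective signal: the premium paid
    to those with the signal exceeds the cost for high-quality workers,
    but not the (higher) cost for low-quality workers, so only the
    high-quality workers choose to signal."""
    high_signals = wage_premium > cost_high
    low_signals = wage_premium > cost_low
    return high_signals and not low_signals

# Illustrative: passing a tough exam 'costs' a high-quality student 30
# (in effort), a low-quality student 80; employers pay a premium of 50.
print(separating_signal(wage_premium=50, cost_high=30, cost_low=80))
```

If the premium were large enough that both types found it worthwhile (or an assessment format made the cost similar for both types, as with group projects), everyone would signal and the signal would carry no information.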


[HT: David, one of my ECON100 tutors]

Wednesday, 22 November 2017

Beware economists bearing impact studies of the America's Cup

Auckland Council decides tomorrow on its preferred option for the location of the America's Cup bases. And right on cue, a new report has been released outlining the economic case for hosting the Cup. The New Zealand Herald reported today:
It will inject up to $1 billion into the economy; thousands of jobs will be created; it will revitalise Auckland's dilapidated downtown wharves and bring fleets of superyachts in need of multimillion-dollar repairs.
Or so has been widely reported in the frantic last few weeks of negotiations and protestations about where and how to host the 2021 America's Cup in Auckland, but some have questioned these expectations.
The Ministry of Business, Innovation and Employment (MBIE) yesterday released a glowing report about the economic benefits of hosting the regatta, concluding that every $1 invested would come back more than seven-fold by 2055.
Between $600 million and $1 billion would be injected into New Zealand's economy between 2018 and 2021 - far outweighing the $200-odd-million it will cost to host the event, according to the report.
I've read the report, which can be found on the MBIE website here. Most studies of the economic impact of sports are subject to a number of severe limitations, which were usefully summarised by Andrew Zimbalist in his excellent book, Circus Maximus: The Economic Gamble behind Hosting the Olympics and the World Cup (which I reviewed here). One of the biggest problems is the lack of an appropriate counterfactual. These studies usually assume that visitors will come for the event (this assumption is of course reasonable). However, working out the economic impact of the event is not as simple as comparing the economy with the 'usual' number of visitors with the economy with the 'usual' visitors plus the additional visitors for the event ('event visitors'). This is because some of the event visitors would have come to Auckland anyway, even if the event hadn't happened. Other event visitors simply change the timing of their visit to coincide with the event, when they would have come earlier or later anyway. Including those two groups of visitors in the analysis would tend to overstate the economic impact of the event. On top of that, there will be some other visitors, who would have come to Auckland at the time of the event but, perhaps because they can't find a hotel because of the event, or because they don't want to deal with the crowds, they decide not to come at all. This latter effect is termed 'crowding out', and also leads the simple analysis to overstate the economic impact of the event. [*]
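A back-of-the-envelope version of that counterfactual adjustment looks like this (every number and share below is hypothetical, purely to illustrate how much the naive calculation can overstate the impact):

```python
def net_new_spending(event_visitors, spend_per_visitor,
                     casual_share, time_switcher_share, crowded_out):
    """Adjust naive event spending for visitors who would have come
    anyway ('casuals'), visitors who merely retimed a planned trip
    ('time switchers'), and regular visitors displaced by the event
    ('crowding out'). Returns (naive, adjusted) spending."""
    naive = event_visitors * spend_per_visitor
    genuinely_new = event_visitors * (1 - casual_share - time_switcher_share)
    net = (genuinely_new - crowded_out) * spend_per_visitor
    return naive, net

naive, net = net_new_spending(event_visitors=100_000, spend_per_visitor=3_000,
                              casual_share=0.2, time_switcher_share=0.15,
                              crowded_out=10_000)
print(f"Naive impact: ${naive / 1e6:.0f}m, adjusted: ${net / 1e6:.0f}m")
```

Even with these modest hypothetical shares, the adjusted figure is barely half the naive one, which is why economic impact numbers that skip the counterfactual should be treated with scepticism.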

The economic impact report for the America's Cup, authored by Greg Akehurst and Lawrence McIlraith of Market Economics, adopts multiple approaches to the assessment and is to be commended for doing so. The report uses three different approaches: (1) the 'standard' economic impact assessment approach based on an input-output (IO) model; (2) a supplementary long-term assessment of impacts on the super-yacht sector, based on a computable general equilibrium (CGE) model; and (3) a cost-benefit analysis based on a comparison of the costs of infrastructure and the direct spending impacts. Each approach also looks at low, medium, and high scenarios.

The 'standard' approach shows an economic impact of $555 million (low scenario) to $977 million (high scenario). However, as we know that is subject to substantial problems and should be treated with a great deal of scepticism.

The CGE modelling approach shows a final realised impact of $123 million per year. However, one of the real problems with this analysis is that it assumes that hosting the America's Cup will have an enduring effect on the super-yacht industry (and in fact a growing impact over time in the high scenario). That assumption might be attractive, but it is unlikely to hold true. The super-yacht industry did get a boost in New Zealand when we last hosted the America's Cup, but that boost was not enduring and by 2014 super-yacht firms were shutting down (see here and here for example). If future America's Cups are not hosted in New Zealand, then at least some of the super-yacht industry is likely to move along with the Cup. So, I would take the CGE analysis with an enormous grain of salt (and it provides the greatest headline number, of $7.50 of economic activity for every $1 of investment, which the media is interpreting as a ratio of benefits to costs, in spite of the authors explicitly noting that it is not).

The cost-benefit analysis is the best of the three alternatives. Here's the key table from the report, showing the cost-benefit ratios (CBRs) under the different scenarios, as well as some sensitivity analysis based on higher costs, or lower benefits:

Notice that the unadjusted CBRs are all over one (the benefits are greater than the costs). The medium scenario has a lower CBR than the low scenario because it assumes a higher cost (a larger event). Notice also that the cost only needs to increase by 20% (from $200 million to $240 million) in order to eliminate the net benefits of the event (the CBR falls to 1.0). If you think that any project the government is involved in is able to be completed without a serious budget blowout, then you haven't been paying attention over the past forever. A 20% cost overrun would be at the low end.
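The sensitivity to the cost assumption is easy to check (using the $200 million cost from the report; the $240 million of benefits is an assumption on my part, implied by the quoted break-even point):

```python
def cbr(benefits, costs):
    """Cost-benefit ratio: the project breaks even at CBR = 1."""
    return benefits / costs

benefits = 240  # $ million, implied by the report's break-even point
for overrun in (0.0, 0.1, 0.2, 0.3):
    costs = 200 * (1 + overrun)  # $ million, with a cost overrun
    print(f"{overrun:>4.0%} cost overrun: CBR = {cbr(benefits, costs):.2f}")
```

At a 20% overrun the ratio hits 1.0 exactly, and anything beyond that puts the event underwater, which is why the thin margin matters so much.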

On the benefits side, it probably pays not to dig too deep into the spending assumptions as they appear to be on fairly shaky ground. For example, they make an adjustment for the proportion of international tourists who note that the America's Cup was one reason (that is, not the main reason) for coming to Auckland, but they don't make the same adjustment for domestic visitors. Domestic visitor spending is only a small proportion of the total, so this probably doesn't affect things too badly. However, the number of syndicates has the biggest effect on the estimates and the medium scenario assumes ten challenger syndicates, but the 2017 America's Cup only had seven syndicates (the low scenario assumes six, the high scenario assumes twelve). If you adjust the number of syndicates down in the medium scenario, then those CBRs are going to look a lot worse.

So, my takeaway from the report is that the benefits might outweigh the costs of hosting the America's Cup, but that relies on the costs being kept under control and the number of challenger syndicates increasing by nearly half over the previous Cup. As with most of these events, you could argue that the spending is good value for a big party, but arguing that it has an overall economic gain for the country (or for Auckland) is probably too strong a claim.


[*] Of course, some of these 'crowded out' visitors might adjust the timing of their visit, coming earlier or later than during the event.

Tuesday, 21 November 2017

Raising the minimum wage to $20

The new New Zealand government has proposed raising the minimum wage from $15.75 to $16.50 next April, and eventually to $20 by 2021. Eric Crampton at Offsetting Behaviour covered the main points on this last month:
This isn't an end of the world bad idea, but it isn't a good idea.
The government has been targeting a minimum wage of about 66.7% of the median wage. That's already very high by international standards. If we assume median hourly wage growth continues at 3.4%, then the median wage in 2021 would be $27.43. A $20 minimum wage in 2021 would be 72.9% of the median wage...
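Crampton's arithmetic can be reproduced directly (the $24.00 starting median hourly wage is my assumption, backed out from the quoted figures):

```python
median_2017 = 24.00   # assumed 2017 median hourly wage ($), implied by the quote
growth = 0.034        # assumed annual median wage growth
minimum_2021 = 20.00  # proposed minimum wage ($)

# Four years of compounding growth from 2017 to 2021
median_2021 = median_2017 * (1 + growth) ** 4
ratio = minimum_2021 / median_2021
print(f"Projected 2021 median wage: ${median_2021:.2f}")
print(f"Minimum-to-median ratio:    {ratio:.1%}")
```

That reproduces the $27.43 projected median and the 72.9% ratio in the quote.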
That would put New Zealand way out in front in the OECD in terms of the ratio of minimum wage to median wage. Crampton argues that Working for Families is a better option as it is better targeted at those in need (to which I would add that there are a whole lot of middle class teen hospitality workers who will benefit from the higher minimum wage, but I don't think that's who the government really wants to benefit), and it is better supported. On the latter point, Crampton explains:
The burden of minimum wage increases is shared among disemployed workers, purchasers of the goods and services produced by minimum wage workers, and owners of firms employing minimum wage workers. The burden of WFF falls heavily on households in the 8th, 9th and 10th deciles. Both versions will have negative effects on the overall economy, but spreading it through the tax system at least tries to minimise the overall deadweight costs of raising that next dollar of wage subsidy.
A higher minimum wage isn't going to result in Armageddon (but equally, in contrast to Branko Marcetic's take, it won't be all unicorns and rainbows either). I'll be interested to see how it plays out. However, I will reiterate that the latest international research (including research on youth minimum wages in Denmark, and the recent increase in the minimum wage in Seattle) suggests that minimum wages do lead to decreases in employment (see here and here). That contrasts with a lot of earlier work that called into question the simple labour market supply-and-demand model.

At least though, there is some policy consistency. If you believe that higher minimum wages are a good thing (because presumably you believe that any resulting decrease in employment will be small), you should also be in favour of reducing immigration to boost the wages of unskilled or semi-skilled workers (see my post on that point here). And on that score, the new government is making the right noises (albeit with inconsistency between the Labour and New Zealand First party positions).


Sunday, 19 November 2017

Regal Cinemas to introduce dynamic pricing for movies

I've written a couple of times before about pricing at movie theatres (see here and here). On the surface, movie theatre pricing seems to defy basic economic theory. The price of movies doesn't differ between high-demand movies (where we would expect higher prices) and low-demand movies (where we would expect lower prices).

Although, as I wrote in both of those earlier posts, there probably is some element of dynamic (or variable) pricing; it just isn't obvious. It is hidden by the choices the movie theatre makes, about which movies are 'no complimentaries' (not eligible for free or discounted tickets) and the size and configuration of the theatre in which each movie plays (where some theatres have a greater proportion of premium seating). These choices allow movie theatres to ensure that the average movie ticket price differs between high-demand and low-demand movies.

Now, Regal Cinemas in the U.S. is about to make the variable pricing more obvious, as Bloomberg reported last month:
Regal Entertainment Group is testing demand-based pricing for films, potentially leading to higher prices for top hits and low prices for flops, a big change for an industry that typically uses a one-size-fits-all approach.
Working with app maker Atom Tickets LLC, which has lobbied theaters to try dynamic pricing, Regal plans to test the concept in early 2018 and see if it boosts revenue and fills more seats at non-peak times...
Industry executives are debating whether dynamic pricing will increase attendance. Some object to a system that would involve charging higher prices for hit movies and lower prices for unpopular movies.
In that last paragraph, it might seem obvious that dynamic pricing will increase attendance at the low-demand movies. However, it isn't at all clear whether that will be offset by lower attendance at high-demand movies (because these consumers could go to a cheaper low-demand movie instead, and some of them may choose to do so). Total attendance is made up of the higher attendance at low-demand movies that will now be cheaper, and possibly somewhat lower attendance at high-demand movies that will now be relatively more expensive (even if their price remains the same as before). Industry executives shouldn't be focusing on the effect on attendance, though. Instead, they should be looking at total revenue. The effect on total revenue could be negative, since more tickets at a lower price might not offset fewer tickets at the original price (although that seems unlikely, because high-demand movies tend to have fairly inelastic demand, having few substitutes). That potential for lower revenue would be grounds for objection from the executives, and I guess we will see in due course whether Regal Cinemas benefits from this change - the movie consumer almost certainly will.
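To see why the executives should be watching revenue rather than attendance, here is a stylised sketch using a constant-elasticity demand curve. All prices, quantities, and elasticities are invented for illustration; none of them are Regal's actual numbers:

```python
# Illustrative only: invented numbers showing the revenue logic of
# dynamic pricing when hits face inelastic demand and flops elastic demand.
def quantity(q0, p0, p_new, elasticity):
    """Constant-elasticity demand: q = q0 * (p_new / p0) ** elasticity."""
    return q0 * (p_new / p0) ** elasticity

base_price = 15.00
hit_q0, flop_q0 = 1000, 200          # assumed baseline weekly admissions
hit_elast, flop_elast = -0.5, -1.5   # hits inelastic (few substitutes), flops elastic

base_revenue = base_price * (hit_q0 + flop_q0)

# Dynamic pricing: raise the hit price, cut the flop price
hit_q = quantity(hit_q0, base_price, 18.00, hit_elast)
flop_q = quantity(flop_q0, base_price, 10.00, flop_elast)
new_revenue = 18.00 * hit_q + 10.00 * flop_q

print(f"Uniform pricing revenue: ${base_revenue:,.0f}")
print(f"Dynamic pricing revenue: ${new_revenue:,.0f}")
print(f"Total attendance change: {hit_q + flop_q - (hit_q0 + flop_q0):+.0f}")
```

With the hit's demand inelastic, the higher hit price loses a few admissions but raises hit revenue; with the flop's demand elastic, the price cut raises both admissions and revenue. Different assumed elasticities could just as easily make total revenue fall, which is exactly the executives' worry.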

[Update]: David Sims in the Atlantic points out the negatives of Regal's dynamic pricing approach.


Friday, 17 November 2017

The economic non-impact of malaria on African development

When I was completing my PhD, there were a number of studies based on macroeconomic models that showed significant negative impacts of HIV/AIDS on economic growth, and yet econometric studies of the effect of observed HIV prevalence on GDP showed virtually no effect. Some people put the difference down to surplus labour (since AIDS deaths are concentrated among prime-age adults, who make up the majority of the labour force, if there is surplus labour then losing adults from that age group would have little effect on GDP), but even macroeconomic models with surplus labour tended to show some modest negative impact. So much for macroeconomic models (as we later learned during the Global Financial Crisis)?

So, I was interested to read this forthcoming paper in The Economic Journal (ungated earlier version here) by Emilio Depetris-Chauvin (Pontificia Universidad Católica de Chile) and David Weil (Brown University). In the paper, the authors do a number of really interesting things to evaluate the historical and recent economic (non-)impact of malaria. First, they construct an ingenious and deceptively simple measure of malaria prevalence, based on the prevalence of the gene that causes sickle cell disease. The sickle cell gene protects against malaria deaths in childhood for those who have one copy, but is fatal for those who have two copies. So the overall prevalence of the sickle cell gene can be used to infer the historical burden of malaria in the population. The authors estimate that the burden was high:
In areas of high malaria transmission, 20% of the population carry the sickle cell trait. Our estimate is that this implies that historically between 10% and 11% of children died from malaria or sickle cell disease before reaching adulthood. Such a death rate is roughly twice the current burden of malaria in such regions. Comparing the most affected to least affected areas, malaria may have been responsible for a ten percentage point difference in the probability of surviving to adulthood. In areas of high malaria transmission, our estimate is that life expectancy at birth was reduced by approximately five years. In terms of its burden relative to other causes of mortality, malaria appears to have been perhaps about as important historically as it is today.
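The logic of backing out historical malaria mortality from the carrier share can be sketched as a balancing-selection calculation. This is my own back-of-the-envelope version, not the authors' actual model: it assumes Hardy-Weinberg proportions, that carriers are fully protected from malaria, and that sickle cell homozygosity is always fatal:

```python
import math

# Back-of-the-envelope balancing-selection sketch (not the authors' model).
carrier_share = 0.20  # share of population with one copy (from the quote)

# Solve 2q(1 - q) = 0.20 for the sickle cell allele frequency q
q = (1 - math.sqrt(1 - 2 * carrier_share)) / 2
p = 1 - q

# At a balanced polymorphism with lethal homozygotes (selection s_SS = 1),
# the equilibrium allele frequency is q = s / (s + 1), where s is the
# childhood malaria death rate among non-carriers; invert for s.
s = q / (1 - q)

malaria_deaths = p ** 2 * s   # non-carriers who die of malaria
sickle_deaths = q ** 2        # homozygotes who die of sickle cell disease
total = malaria_deaths + sickle_deaths

print(f"Allele frequency q = {q:.3f}")
print(f"Childhood deaths from malaria: {malaria_deaths:.1%}")
print(f"From malaria or sickle cell:  {total:.1%}")
```

This toy version gives 10.0% of children dying from malaria and about 11.3% from malaria or sickle cell disease combined, in the same ballpark as the authors' estimate of between 10% and 11%.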
They then use their measure of malaria burden to evaluate the impact of malaria on African development historically. Strikingly, their measure is positively associated with the log of population density (as a measure of development) at the ethnic group level (for 398 ethnic groups across Africa), even after controlling for geography, access to waterways, climate, cultural clustering, suitability for agriculture, and suitability for tsetse flies. Other measures of development, such as having a large (more than 20,000 population) town in the ethnic group's homeland, the complexity of the ethnic group's settlement pattern, and the centralisation of power, also have a positive relationship or no relationship with malaria burden. Even after adopting an instrumental variables approach (with malaria suitability as the instrument), they still find no statistically significant negative effect of malaria burden on African development (see here for more on instrumental variables models); the point estimates are sometimes negative, but never significant.

Why is there no discernible negative economic impact of malaria on African development, given the high malaria burden and the resulting high mortality? One section of the working paper version that hasn't made it into the final paper is quite interesting. [*] In that section, the authors note that:
The reason that our estimate of the effect of malaria is so small is two-fold. First, malaria deaths are concentrated at young ages, and second, consumption of young children is low relative to consumption of adults. Putting these together, most deaths from malaria do not, in this model, represent a significant loss of resources to society. In our calculation, deaths beyond age five account for only 1/3 of the reduction in life expectancy due to malaria, but for 2/3 of the economic cost of the disease.
So, because malaria mainly kills young children, society wastes relatively few resources investing in children who die from it (and can instead expend those resources on surviving children), and the economic cost of malaria is relatively slight. I don't know why that part of the analysis didn't make it into the final version of the paper, but I think it is one of the more important insights from this work, as it usefully explains why we might not find any economic impact of malaria in Africa.


[*] Actually, there are a lot of substantial differences between the NBER Working Paper version of the paper and the final accepted publication, which might explain the four-year delay between the two versions. I'm glad I read the working paper version first, since otherwise I would have missed some of the additional detail.

Wednesday, 15 November 2017

What's in a (porn star) name, for identifying survey respondents?

In social science research, we usually want to maintain the confidentiality and anonymity of the respondents to our surveys and interviews. However, there are times when we will want to follow up with respondents at some later date, and if the first round of surveys was anonymous it is impossible to match up the first round respondents' responses with the later responses. So, I was interested to read this short 2011 article (open access) by Megan Lim, Anna Bowring, Judy Gold, and Margaret Hellard (all from the Burnet Institute in Melbourne), published in the journal Sexually Transmitted Diseases.

In the article, the authors discuss asking each survey respondent what their "porn star" name is. They explain:
We trialed the uniqueness and reliability of a novel identifying characteristic: first pet's name and first street - colloquially known as a "porn star name".
The authors provide a table of examples of porn star names, of which 'Honey Scotsburn' and 'Precious Duckholes' were two (I'm not making this up - check the paper). They then go on to test whether they could match respondents from a baseline and follow-up survey based on the porn star name. Porn star names were unique to 99% of their 1281 respondents to the baseline survey, and adding month/year of birth was enough to provide 100% uniqueness. When re-contacted later, they were able to match 76% of respondents between the two surveys using only the porn star name, and using month/year of birth they could further match 96% of those who provided a partially-consistent porn star name.
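A minimal sketch of how such an identifier could be used to link survey waves is below. The two example names are from the paper's table, but the birth dates, respondent IDs, and the third follow-up entry are all invented for illustration:

```python
# Minimal sketch of matching respondents across survey waves on a
# pseudonymous key built from the "porn star name" plus birth month/year.
def make_key(pet_name, street, birth_month_year):
    """Normalise the identifying fields into a comparable key."""
    return (pet_name.strip().lower(), street.strip().lower(), birth_month_year)

# Baseline wave: key -> internal respondent ID (all data invented)
baseline = {
    make_key("Honey", "Scotsburn", "03/1990"): "respondent_001",
    make_key("Precious", "Duckholes", "11/1988"): "respondent_002",
}

# Follow-up responses, with inconsistent capitalisation and whitespace
followup = [
    ("honey", " Scotsburn ", "03/1990"),
    ("Rex", "Main Street", "07/1992"),   # new respondent, no baseline match
]

for pet, street, dob in followup:
    match = baseline.get(make_key(pet, street, dob))
    print(f"{pet} {street.strip()}: {('matched ' + match) if match else 'no match'}")
```

The normalisation step matters: without it, trivial differences in capitalisation between waves would break exact-key matching, which is presumably part of why the authors could match only 76% of respondents on the name alone.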

This seems a neat way of matching respondents between waves of a survey while maintaining plausible anonymity for those respondents. However, the authors note that "calling the identifier a PSN... might also have made the question seem trivial to some participants and resulted in false responses". Of course, this could all be a joke - Lim and Hellard were also co-authors of research on the survival of teaspoons. Even if this paper was meant to be taken seriously (and it could be, since it addresses a real issue), it seems the research community isn't interested - the paper has only been cited twice since it was published in 2011.

Sunday, 12 November 2017

School uniform monopolies

I recall many years ago having an argument with a school administrator about uniform requirements (if I recall correctly, this was about school shorts that were the correct colour, but were not allowed because they didn't have the school logo embossed on them). My side of the argument was that the school was using its market power over uniforms to create a monopoly (there was only one uniform provider who sold the school shorts with that particular logo) and unfairly price gouge parents. So, I was interested to read this story in the New Zealand Herald last week:
The new Education Minister has planned action to stamp out "covert" fundraising by schools such as marking up uniforms to make a profit.
Chris Hipkins told the Herald the new Government's overall objective was to make sure a state school education in New Zealand was free...
"At the moment, particularly around things like the big mark-ups on uniforms, schools are finding ways of getting around the rules that they shouldn't be asking parents to pay. We are going to be taking a much firmer line on that..."
A Weekend Herald price comparison carried out earlier this year found parents with a boy and girl at secondary school could pay $700 for just the uniform basics.
The Commerce Commission has received complaints about the costs of uniforms and stationery and issued procurement guidelines, recommending schools make the supplier-selection process transparent and tell parents why deals were entered into. It is illegal to enter an agreement that substantially lessens competition in a market.
With school uniforms, there are few substitutes. If your child is going to School A, you need the appropriate uniform for School A. This gives the school considerable market power (the ability for the seller to set a price above the marginal cost of the uniform). Since most schools are not uniform producers or sellers themselves, they instead transfer that market power to a uniform provider. Usually this takes the form of an exclusive deal with the uniform provider, where that provider is the only one that can sell the school's uniforms, and in exchange the school receives some share of the profits. This creates a monopoly seller of the uniforms, and the monopoly maximises its uniform profits by raising the price. The result is that parents must pay higher prices for uniforms, which must be purchased from the exclusive uniform provider.
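The effect of handing that market power to a single provider can be illustrated with the textbook linear-demand monopoly model. All numbers here are invented for illustration:

```python
# Textbook monopoly pricing sketch with invented numbers: linear demand
# P = a - b*Q and constant marginal cost c. A competitive market prices
# at marginal cost; the exclusive monopoly provider sets MR = MC.
a, b = 100.0, 0.5   # demand intercept and slope (assumed)
c = 20.0            # marginal cost of a uniform item (assumed)

competitive_price = c                 # P = MC under competition
monopoly_q = (a - c) / (2 * b)        # from MR = a - 2bQ = c
monopoly_price = a - b * monopoly_q   # read price off the demand curve
profit = (monopoly_price - c) * monopoly_q

print(f"Competitive price: ${competitive_price:.2f}")
print(f"Monopoly price:    ${monopoly_price:.2f}")
print(f"Monopoly profit:   ${profit:.2f}")
```

In this example the exclusive deal triples the price parents pay (from $20 to $60), and the resulting $3,200 of profit is what the school and the uniform provider then bargain over, which is the subject of the next paragraphs.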

One might argue (as the Herald article does) that this is a covert way of increasing school fundraising, in the absence of the ability for schools to do so through higher school fees. A rational school would want to maximise this source of revenue, and they can do that by ensuring that there are few substitutes for the uniform (because, when a firm has market power, the mark-up over marginal cost can be greater if there are fewer substitutes for what they are selling). When I was at school, any shorts of the correct colour were acceptable for my school uniform. However, one way that rational schools can ensure that there are few substitutes for uniform items is to require each item to have the school logo printed or embossed on it. So now, every child must wear not just the correct colour item, but the correct colour item endorsed by the school (and sold by the exclusive monopoly uniform provider).

However, you might not be concerned about high uniform costs if you believe that the additional money you pay is going to the school. But that is probably not the case, because schools cannot necessarily capture all of the excess profits that they create through this market power. If there are many potential uniform providers, then the school can probably extract the entire profit from the market power, since it could play uniform providers off against each other until it gets the best offer (equal to the entire profit from selling uniforms). But if there are few potential providers, this is not the case, and the successful bidder will capture at least some of the profits. And that is what my argument with the school administrator was about, all those years ago. I had no problem with giving the school extra money, but objected to enriching the exclusive monopoly uniform provider.

An idealistic solution to this problem would be to 'adequately' fund schools, so that they don't feel the need to create market power in the uniform market in the first place. However, that would ignore the fact that any school would be better off with a little bit more funding, and so a rational school would always engage in this practice regardless of the level of government funding they receive. The only way to prevent this practice then is to regulate against it. Labour has pledged to draw up 'guidelines' for schools. If they are enforceable, then that might be the best we can hope for, unless school uniforms were abolished entirely.

Thursday, 9 November 2017

Female student performance in high-stakes biology exams

A news article reported on a new study a couple of weeks ago:
A new study of students in introductory biology courses finds that women overall performed worse than men on high-stakes exams but better on other types of assessments, such as lab work and written assignments. The study also shows that the anxiety of taking an exam has a more significant impact on women's grades than it does for men.
"It was striking," said Shima Salehi, a doctoral student at Stanford Graduate School of Education and one of the study's two lead authors. "We found that these types of exams disadvantage women because of the stronger effect that test anxiety has on women's performance."
The original study is available here (open access), published in the online journal PLoS ONE. The authors were Cissy Ballen (University of Minnesota), Shima Salehi (Stanford), and Sehoya Cotner (University of Minnesota). The results are based partly on data from 1205 first-year biology students over ten sections, with the results on test anxiety (and 'interest in course content') based on survey data from 372 students over three sections. In the paper, the key research questions were:
1) What is the extent of the gender gap in incoming academic preparation among students? 2) What is the extent of the gender gap in exam grades and non-exam grades? 3) Do women and men report different levels of test anxiety and interest in science? 4) Do these two affective factors influence performance outcomes in undergraduate biology courses?
The authors found that there was a significant gender gap in incoming academic preparation, with ACT (American College Test) scores on average about 0.28 standard deviations lower for female students than for male students. There was also a difference in exam grades between female and male students, of 0.15 standard deviations. However, to me the key result is:
When we included incoming ACT score in the model as a fixed effect, the gender gap in exam performance disappeared...
In other words, the performance gap in exams between female and male students was almost entirely explained by differences in incoming academic preparation (as measured by the ACT score). There was no need for the authors to dig into test anxiety or interest in course content, especially given that the results they present from their mediation analysis don't actually show anything, because the combined paths are not statistically significant. Female students did worse because they were less well-prepared students, not because of some gender bias or because of test anxiety.
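That 'gap disappears once you control for preparation' pattern is easy to reproduce in a simulation. All numbers below are invented: exam scores are built to depend only on an ACT-like preparation measure, so the raw gender gap (about -0.17 by construction) should vanish once the covariate enters the regression:

```python
import numpy as np

# Simulated illustration (invented numbers): the entire exam-score gap
# runs through incoming preparation, so controlling for it should make
# the gender coefficient shrink to roughly zero.
rng = np.random.default_rng(42)
n = 5000
female = rng.integers(0, 2, n).astype(float)
act = rng.normal(0, 1, n) - 0.28 * female    # 0.28 SD preparation gap
exam = 0.6 * act + rng.normal(0, 0.8, n)     # exams depend only on act

def ols(y, cols):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

gap_raw = ols(exam, [female])[1]              # exam ~ female
gap_controlled = ols(exam, [female, act])[1]  # exam ~ female + act

print(f"Raw exam gap:        {gap_raw:+.3f}")
print(f"Controlling for ACT: {gap_controlled:+.3f}")
```

The raw regression shows a clear gap; adding the preparation covariate drives the gender coefficient to roughly zero, which is the same pattern the paper reports when ACT scores are added as a fixed effect.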

Or maybe not. I noticed that at one point in the paper, the authors note that the exam grades were "multiple-choice exam grades", which implies (to me) that the exams were wholly multiple choice. And we know based on past research that female students have a disadvantage in multiple choice questions. In the article, one of the authors is quoted as saying:
We want to figure out what kind of instructional methods will ensure that everyone can navigate successfully through these courses and have a wider range of career options.
Worry less about the instructional methods. Ditch the multiple choice questions in your exams, or replace them with a mixture of multiple choice and constructed response. Your female students will appreciate it.