Friday 31 March 2017

Why study economics? Understand the world edition...

John Hewson (former Liberal opposition leader, now Professor at Australian National University) wrote yesterday in the Sydney Morning Herald:
A well-rounded training in economics should be a necessary, but not sufficient, requirement for good judgment along with understanding human and institutional behaviour and an appreciation of the social, political and environmental constraints against which such judgments are made.
Nevertheless, most public policy debates these days see the various vested interests claiming models that, surprisingly, prove their case. You get the distinct impression that these models were generally generated by starting with the desired conclusion, and then working backwards to see what had to be assumed to get that result. Disraeli's three kinds of lies – lies, damned lies, and statistics...
There are calls for education to focus on STEM subjects – science, technology, engineering and maths. I would suggest the urgent need to add economics to the list.
Would that turn STEM into STEEM? Seriously though, there is much to agree with in what Hewson wrote in that article, and I encourage you to read it. Using economics to develop a deeper understanding of the world we live in would help people to make better business and policy decisions. This is one of the points we try to make in first-year economics classes at Waikato, from the very first week. Even if you're not studying an economics major, and the extent of your formal economics training will be limited to a single paper like ECON100 or ECON110, a basic understanding of fundamental economic concepts like the cost-benefit principle, demand-and-supply, and elasticities can go a long way towards making better decisions.

[HT: "Writer's Den" on the Waikato EDG Facebook group]

Wednesday 29 March 2017

The deadweight loss of a pillow tax

For those of you who have missed the news over the last couple of weeks, Auckland mayor Phil Goff has proposed a targeted rate for accommodation providers in Auckland, to be used to fund Auckland Tourism, Events and Economic Development (ATEED) in its role of promoting the city to tourists. At first glance, this might seem like a useful user-pays charge. After all, the accommodation industry receives benefits from the increased tourism, so why shouldn't they pay the costs rather than the ratepayers at large?

The New Zealand Herald has several stories on this topic, including this one from Tuesday:
As part of its annual budget, Auckland Council wants to shift the funding of Ateed from ratepayers to 330 accommodation providers, ranging from backpackers to camping grounds to big hotels.
Auckland mayor Phil Goff says the accommodation sector has profited from the boom in tourism and increased room rates so it was fair it should pay for Ateed rather than ratepayers.
Money saved could be redeployed to fund infrastructure such as roads, which also benefited the tourism sector.
But the accommodation sector says it has been unfairly singled out and should not be the only sector of businesses to pay for the cost of Ateed.
In its submission it says that it gets 9 per cent of visitor spend in Auckland but is being asked to fund 100 per cent of council efforts through Ateed to increase this spend.
On average rates will increase 150 per cent for the affected accommodation providers and in some cases by more than 300 per cent.
The accommodation sector is right in its criticism of this targeted rate. There are two lines of criticism though, and the sector has hit on only the first: fairness. If a commercial business benefits from the activities of ATEED, it should be paying towards those costs. To be equitable, all businesses would pay in proportion to the benefits they receive, but that is clearly infeasible. Paying in proportion to current property value might be a second-best option. Clearly, some sectors (e.g. finance, law firms) would pay vastly more than necessary due to high property values and low benefits received, while others would pay less relative to benefits received. I would argue that including residential ratepayers in this system would be inequitable (but others might argue the opposite, since the activities of ATEED create jobs that benefit ratepayers).

The second line of criticism, which is missed by the accommodation sector, is efficiency. If the government has a targeted amount of funding it wants to raise (e.g. $27.8 million to fund ATEED), then a small tax (like rates) spread across a large number of taxpayers generates a much smaller deadweight loss than a larger tax on a smaller number of taxpayers.

To see why, consider the market diagrams below. There are two sectors (A, on the left; and B, on the right). Without the tax, both markets operate at equilibrium (prices P0 and Pa, quantities Q0 and Qa), and total welfare (a measure of benefits to buyers and sellers in these markets, combined) is the area AED in Sector A, and the area FKJ in Sector B. If the government taxes the firms in both sectors a similar amount, we represent this by a new curve (S+tax) [*]. The per-unit amount of the tax is equal to the vertical distance between the S and S+tax curves (the distance BC in Sector A, or GH in sector B). The quantities traded fall to Q1 and Qb. The prices paid by customers in the two sectors increase to P1 and Pb, while the effective prices for the sellers (the price after paying tax to the government) fall to P2 and Pc. Total welfare falls to ABCD in Sector A (with a deadweight loss, or lost total welfare, equal to the area BEC) and FGHJ in Sector B (with a deadweight loss equal to the area GKH).


Now consider what happens if the government wants to raise the same tax revenue, but by taxing only one sector instead of both. This is shown in the diagrams below. Notice that there is no tax in Sector A, and total welfare is maximised at AED. However, in order to raise double the tax revenue from Sector B, the government must more than double the tax rate, to the distance LM. [**] The quantity traded in Sector B falls to Qc (instead of Qb). The total welfare in Sector B falls to FLMJ, and the deadweight loss increases to LKM.


Now compare the size of the deadweight losses in the first pair of diagrams (BEC+GKH) to the deadweight loss in the second pair of diagrams (LKM). It should be clear that the size of the deadweight loss in Sector B is more than four times bigger when it is the only sector that is taxed, than when both sectors are taxed. The combined deadweight loss (when you factor in that there would no longer be any deadweight loss in Sector A in the second pair of diagrams) is more than doubled. Total welfare is therefore much lower if the tax is targeted on only one sector.
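To put rough numbers on the intuition, here is a minimal sketch in Python, using made-up linear demand and supply curves for two identical sectors (nothing here is calibrated to Auckland). It compares a small tax levied on both sectors with a single-sector tax that raises the same revenue.

```python
# Illustrative only: two identical sectors with linear demand P = a - b*Q
# and supply P = c + d*Q. A per-unit tax t reduces quantity by t/(b+d),
# creating a deadweight loss of t^2 / (2*(b+d)).
from math import sqrt

a, b, c, d = 100.0, 1.0, 20.0, 1.0   # made-up demand/supply parameters
k = b + d
q0 = (a - c) / k                     # pre-tax equilibrium quantity (each sector)

def revenue(t):
    return t * (q0 - t / k)          # tax revenue from one sector

def dwl(t):
    return t ** 2 / (2 * k)          # deadweight loss in one sector

# Option 1: a small tax on both sectors
t_small = 4.0
rev_both = 2 * revenue(t_small)
dwl_both = 2 * dwl(t_small)

# Option 2: a single-sector tax raising the same total revenue.
# Solve t*(q0 - t/k) = rev_both, i.e. t^2 - k*q0*t + k*rev_both = 0.
t_single = (k * q0 - sqrt((k * q0) ** 2 - 4 * k * rev_both)) / 2

print(f"Broad tax:  rate {t_small:.2f}, revenue {rev_both:.1f}, total DWL {dwl_both:.2f}")
print(f"Narrow tax: rate {t_single:.2f}, revenue {revenue(t_single):.1f}, total DWL {dwl(t_single):.2f}")
# The narrow tax must be more than double the rate, and its deadweight loss
# is more than four times the per-sector loss under the broad tax.
```

Because the deadweight loss rises with the square of the per-unit tax, concentrating the same revenue on a narrower base always costs more in lost welfare than spreading it thinly.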

So, there are both equity and efficiency arguments against the proposed targeted rate for accommodation providers in Auckland. It probably needs a careful re-think.

*****

[*] Strictly speaking, the targeted rate is different to the specific excise tax shown in these diagrams. The difference is that the S+tax curve should bend in towards the supply curve (without ever quite touching it), to represent that the average cost of the targeted rate per accommodation night would decrease as the number of accommodation nights provided by the industry increases. However, I have kept the diagrams simple, as this makes no qualitative difference to the discussion.

[**] The government needs to more than double the tax rate, because as the tax increases the quantity sold in the market decreases, so simply doubling the tax would not raise enough tax revenue.

Monday 27 March 2017

Evaluating the fiscal cost of tax cuts

As reported in the New Zealand Herald, the Taxpayers' Union today released a new report titled "5 Options for Tax Relief in 2017". I haven't had a chance to read the report in great detail, but here's what the Herald had to say:
The average New Zealand worker is paying $483 a year more in tax because income brackets have not been adjusted with inflation, a new report by a right-wing lobby group claims...
In an effort to put pressure on the National-led Government ahead of May's Budget, the report assumes $3 billion is available for tax relief in 2017/18.
The report outlines five different options for how those tax cuts could be divvied out.
They are:
• A tax-free threshold of up to $13,000. The report states this would save taxpayers $1295 a year, for those earning more than $13,000.
• Cutting the marginal tax rate for earnings between $48,001 and $70,000 to 17.5 per cent, and increasing the 10.5 per cent threshold from $12,000 to $24,000.
The report says this would particularly benefit middle income earners, giving the example of a $4000 a year saving for a dual-income household with a combined income of $100,000.
• Cutting tax rates for high income earners by eliminating the top tax bracket and reducing the rate above $48,001 to 26 per cent, and slashing company and trust tax rates to 26 per cent.
Those measures would save a person earning a $120,000 salary more than $4300 a year. A low income earner would get no benefit, while the average earner would save just $360 a year.
• Increasing the income thresholds of each tax bracket without adjusting any of the tax rates.
• Reducing the company tax rate from 28 per cent to 13 per cent, at a cost of $2.88 billion.
The report stated that an "average worker" on $57,000 is paying $483 a year more in tax than they would had income tax thresholds been adjusted for inflation since 2010.
Back in February in another article about tax cuts, Brian Fallow pointed us to this handy tool on the Treasury website: The Personal Income Tax Revenue Estimate Tool. It's an Excel-based tool (it would be even cooler if it were fully online) that allows us to evaluate the impact of changes in the tax brackets or marginal income tax rates (the proportion of the next dollar that would be paid in income tax) on total tax revenue (from personal income tax).

I had a bit of a play with the tool and the assumptions above, and here's what it came up with:

  • The tax-free threshold of up to $13,000 would reduce personal income tax revenue by $3.35 billion
  • Cutting the marginal tax rate for earnings between $48,001 and $70,000 to 17.5 per cent, and increasing the 10.5 per cent threshold from $12,000 to $24,000 would reduce personal income tax revenue by $3.46 billion
  • Eliminating the top tax bracket and reducing the rate above $48,001 to 26 per cent would reduce personal income tax revenue by $2.49 billion (and reducing company and trust tax rates to 26 percent would reduce taxes paid by those entities as well) 
  • Increasing the income thresholds of each tax bracket without adjusting any of the tax rates (as per their report) would reduce personal income tax revenue by $3.46 billion
  • Reducing the company tax rate can't be evaluated using the PITRE tool.
I leave it up to you to decide whether these different tax changes, each costing around $3-3.5 billion according to the PITRE tool, are affordable and worthwhile. The estimates should be taken with some caution, however (and the PITRE tool even warns against making large changes to the tax rates and bracket thresholds). Whenever marginal tax rates change, there is a raft of effective marginal tax rate changes that affect incentives to work (or not work), which cannot easily be evaluated with a simple tool such as this. However, I do encourage you to download and play around with the tool, especially if you want to see how marginal and average tax rates work, and the effects of changing them (slightly!).
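For anyone who wants to see the mechanics behind a tool like PITRE, here is a minimal sketch of how income tax is calculated from a bracket schedule, and how marginal and average rates differ. The rates and thresholds below are the 2017 New Zealand personal scale as I understand it, and the list of incomes and the alternative schedule (raising the 10.5 per cent threshold to $24,000, one piece of the report's second option) are purely illustrative.

```python
# A minimal sketch of bracket-based income tax. Rates and thresholds are the
# 2017 NZ personal scale as I understand it, treat them as illustrative.
BRACKETS = [            # (upper limit of bracket, marginal rate)
    (14_000, 0.105),
    (48_000, 0.175),
    (70_000, 0.30),
    (float("inf"), 0.33),
]

def income_tax(income, brackets=BRACKETS):
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

income = 57_000                          # the report's 'average worker'
tax = income_tax(income)
marginal = BRACKETS[2][1]                # $57,000 sits in the 30% bracket
average = tax / income
print(f"Tax: ${tax:,.0f}  marginal rate: {marginal:.1%}  average rate: {average:.1%}")

# Cost of a change: compare schedules across a (made-up) handful of taxpayers.
incomes = [20_000, 35_000, 57_000, 80_000, 120_000]
new_brackets = [(24_000, 0.105), (48_000, 0.175), (70_000, 0.30), (float("inf"), 0.33)]
lost_revenue = sum(income_tax(y) - income_tax(y, new_brackets) for y in incomes)
print(f"Revenue lost across these taxpayers: ${lost_revenue:,.0f}")
```

The PITRE tool is doing essentially this calculation, but summed across the whole distribution of taxable incomes rather than a handful of hypothetical taxpayers.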


Wednesday 22 March 2017

Personality traits and field of study choice in university

I found this new paper by Martin Humburg (Maastricht University), published in the journal Education Economics, very interesting (sorry, I don't see an ungated version anywhere online). In the paper, Humburg used data on over 14,000 Dutch students, and looked at whether cognitive skills (maths ability, verbal ability, information processing ability) and the Big Five personality traits (measured at age 14) affected whether students later went to university, and (for those who did go on to university) their choice of field of study. Measuring these traits at age 14 is important, because it avoids any issues of reverse causality (field of study affecting personality traits). The fields of study are fairly coarse (limited to six categories).

In terms of the first part of the paper, Humburg found that:
all three measures of cognitive skills as well as extraversion, conscientiousness, and openness to experience increase the probability of going to university... Cognitive skills seem to be the primary driver of educational attainment. For example, a one standard deviation increase in math ability is associated with an increase in the probability of entering a university of 7.7 percentage points. These effects are very large, given that only around 15% of individuals in our sample go to university. Of the personality traits that influence individuals’ probability of going to university, conscientiousness has the largest effect. A one standard deviation increase in conscientiousness is associated with an increase in probability of entering university of 1.9 percentage points. While much smaller than the impact of cognitive skills, this effect is substantial and amounts to a relative increase of the probability of entering university of 12%.
So, university students (in the Netherlands) are more conscientious and open to experience than non-university-students, and have greater verbal, maths, and information processing abilities. No surprises there. I note that the extraversion result is only marginally significant (and negative - university students are less extraverted than non-university-students).

In terms of the second part of the paper, here is the key table that summarises the results:


Extraverted students are more likely to choose to study law, or business and economics, and avoid science, technology, engineering and mathematics (STEM). Agreeable students are more likely to study social sciences (excluding business and economics). Conscientious students are more likely to study medical studies (probably due to the entry requirements that keep out low-performers) and less likely to study social sciences (that probably confirms some people's priors). Emotionally stable (least neurotic) students are more likely to study STEM and less likely to study the humanities (more on that in a moment). Finally, students who are most open to new experiences are more likely to study law and less likely to study social sciences (which the author finds surprising - so do I!). The results for gender are unsurprising too, with women more likely to study social sciences, and least likely to study STEM or business and economics.

Importantly, the effects of the personality traits on field of study choice are similar in size to the effects of the cognitive skills (whereas university vs. not-university was mostly related to cognitive skills). The results are fairly robust to the inclusion of additional control variables (parental education, income, father's occupation, migrant status). You might worry about selection bias (since only the field of study choices of students who actually went to university are observed), but I doubt that is a big factor.

On that emotional stability and humanities result, Humburg concludes:
Another explanation can be derived from Tokar, Fischer, and Subich’s (1998) finding that emotionally instable (sic) individuals exhibit higher career indecision. It may therefore be the case that less emotionally stable individuals are more likely to choose the Humanities as they have a weaker link to particular occupations than STEM and Law programmes, which enables these young people to postpone their final career decision.
I'd be interested to know whether our conjoint degree students (who often seem to be studying joint degrees because they couldn't settle on one or the other) have similar personality traits. It would also be good to know, for those students who are non-conforming (in terms of their personality traits compared with others in their field of study), how well they performed academically (and in terms of getting a job, etc.). That additional work would certainly provide some valuable data for careers advisors, data which is sorely lacking.

Tuesday 21 March 2017

The Maxim Institute on dealing with population decline

Last week the Maxim Institute released a new report on regional development in New Zealand. Radio New Zealand reported on it here, but note that they say the report:
predicts populations in many regions will drop or stagnate within three decades.
Actually, that's based on work that Natalie Jackson and I have done, which is available in this working paper (forthcoming in the Journal of Population Ageing, and with an update based on stochastic projections methodology due in the journal Policy Quarterly later this year - I'll talk about that in a later post).

Anyway, the Maxim report (written by Julian Wood) doesn't contribute anything new research-wise, but does do a good job of collating important research on regional development with a particular focus on New Zealand. There are some parts of the report that should be required reading for local council planners, particularly those in rural and peripheral areas where populations are declining. As one example:
When looking at the age composition of population growth this broad-based regional decline is accelerated by the fact that “only 16 TAs will not see all their growth to 2043 at the 65+ years [age group].” In short, in 10 national election cycles (thirty years), the majority of local governments will not only be experiencing population stagnation, but the vast majority will be experiencing far older populations with far fewer people in their prime working age (aged 15-64). This reality means that the vast majority of rural New Zealand shouldn’t be planning for, or counting on population growth as a driver of economic growth.
Rather, as a rural community’s population ages and or declines it will likely come under increasing economic, financial, and social pressure. Fewer people of working age can mean less employment income in a community and less consumer spending and hence less business income. Local government income can also decline as there are fewer people and businesses paying rates.
And this quote from the report pretty much sums it up:
There is a need for a “growth everywhere” reality check.
The reality that population growth is not a given for most of the country has not dawned on many councils (based on many discussions I have had over the years). Many councils still refer to population projections as 'growth projections', which makes no sense whatsoever if your population has been declining for two decades or more!

"But won't Big Project XXXX lead to expanded population growth?" I've heard that one before, and in fact in one of the early population projection projects I was involved in we quantified and accounted for 'big projects', but it turned out later that if we took the projected populations excluding the big project effects we weren't too far off. 'Big projects' are simply business-as-usual for a growing city or district like Hamilton City or Waikato District, but for declining peripheral areas the money would probably be better spent elsewhere. The Maxim report notes:
The overall picture remains, however, that building physical infrastructure alone will be insufficient to economically “restart” a rural economy in long-term population stagnation and decline.
The Maxim report recommends three 're-thinks', which are definitely worth considering:
  • Rethink #1: All regional development goals must be explicitly and clearly stated to enable clarity, transparency, scrutiny and co-ordination. As part of this “regional wellbeing indicators” should be explicitly developed and included in these regional development goals.
  • Rethink #2: Regional development goals need to be ranked and prioritised with tensions, trade-offs, or the subservient relationships between the goals explicitly outlined and prioritised so as to enable evaluation.
  • Rethink #3: New Zealand needs to rethink its sole focus on economic growth, shifting to a framework that also empowers communities to meet both the economic and social needs of their populations in the midst of “no growth or even decline.”
The first two should be obvious for any decision-making, not just in terms of regional development. The third needs policy makers to face up to the reality that population growth is not the destiny of every part of the country. Which is a point I have made before.

[HT: Natalie Jackson]

Sunday 19 March 2017

Newsflash! NZ health insurers want us to buy more health insurance

Roger Styles (chief executive of the Health Funds Association of New Zealand, the industry body for health insurers) wrote an opinion piece in the New Zealand Herald last week:
Thanks to an ageing population, healthcare inflation, and the rise of new and costly treatments, New Zealand's health spending has one of the fastest rates of increase in the OECD.
The Treasury has repeatedly advised this unsustainable growth presents a bigger fiscal problem for the Government than the soaring cost of NZ Super...
Currently about 20 per cent of healthcare in this country is privately funded, amounting to about $4b a year. Just over a quarter of this is funded through health insurance, which is held by about 1.36 million New Zealanders.
Health insurance could be playing a bigger role in meeting future healthcare costs and thereby relieving the pressure on government budgets and the public health system.
Private health insurance is ideally placed to be able to routinely fund high-cost treatments, which user charges cannot. We just need to address some of the disincentives that stand in the way of more people taking out cover...
Insurance works by aggregating premiums across a large number of people in order to fund the healthcare costs which might otherwise be unaffordable or cause financial hardship.
The fact that New Zealand can achieve $1.13b annually of healthcare funding through 28.5 per cent of the population having health insurance indicates that there is significant scope to increase the contribution to future healthcare costs by lifting coverage rates...
The Government needs to face up to the unsustainability of future health spending and develop a collaborative strategy to reduce dependency on public financing and move closer to the OECD average for public/private health spending shares. It won't be able to raise the age of eligibility for surgery, and it will have to act before 2040.
It may be true that health care costs will rise in the future, due in part to population ageing but equally or more due to Baumol's cost disease (see here for more on this). However, that in itself doesn't mean that we need more health insurance. Anyone who believes that health insurance is a solution ought to be looking carefully at the continuing mess that is the cost of healthcare in the U.S.

The main problem with insurance is adverse selection - insurers want low-risk people to buy insurance, but the incentives to buy insurance are greater for high-risk people. In the case of voluntary health insurance (as in New Zealand currently), the people who buy health insurance are either: (1) the worried well and risk-averse healthy people; or (2) people at high risk of getting sick. The second group are of course quite expensive for insurers, so insurers rely on getting as many healthy people (who won't make claims) to sign up as possible. One way to do this is to lobby government to make health insurance more attractive in some way, such as making employer-funded schemes tax-advantageous to firms (which is what Styles is arguing for).
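To see why adverse selection matters so much to insurers, here is a minimal sketch of the classic premium spiral, with entirely made-up numbers: the insurer prices cover at the average expected cost of whoever is in the pool, the healthiest people then drop out, and next year's premium ratchets up.

```python
# Illustrative adverse-selection spiral: people buy cover only if the premium
# is below their own expected claims plus a (made-up) value they place on
# peace of mind. As premiums rise, the healthiest drop out first.
expected_claims = [200, 400, 600, 1000, 1500, 2500, 4000, 6000]  # per person, per year
risk_premium = 300        # extra each person would pay to be insured (assumed uniform)

premium = sum(expected_claims) / len(expected_claims)  # insurer starts by pricing the average
for round_ in range(1, 6):
    pool = [c for c in expected_claims if c + risk_premium >= premium]
    if not pool:
        print(f"Round {round_}: nobody left - the market unravels")
        break
    print(f"Round {round_}: premium ${premium:,.0f}, {len(pool)} of {len(expected_claims)} buy cover")
    premium = sum(pool) / len(pool)   # next year's premium reflects who actually bought
```

The insurer's alternative to this spiral is to get more healthy people into the pool, which is exactly why making health insurance more attractive through subsidies or tax advantages is so appealing to the industry.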

However, we need to see this for what it is: a self-serving lobby from a profit-motivated health insurance industry, which ought to be ignored by policy makers.

Wednesday 15 March 2017

Durban dodges a bullet by losing the Commonwealth Games

I read with interest this week that Durban has been stripped of the hosting rights for the 2022 Commonwealth Games. The New Zealand Herald reported on Tuesday:
Durban was stripped of the right to host the 2022 Commonwealth Games mainly because the South African government couldn't provide financial guarantees.
Also, other commitments the city made when it won the bid had still not been met nearly two years later.
Durban presented a revised budget and hosting proposal to the Commonwealth Games Federation over the weekend but the last-ditch effort to save Africa's first international multi-sport event wasn't enough.
"It is with disappointment that the detailed review has concluded that there is a significant departure from the undertakings provided in Durban's bid, and as a result a number of key obligations and commitments in areas such as governance, venues, funding and risk management/assurance have not been met," the CGF said in a statement.
The CGF was "actively exploring alternative options, including a potential replacement host," CGF president Louise Martin said.
Those of us who have read Andrew Zimbalist's excellent book "Circus Maximus: The Economic Gamble behind Hosting the Olympics and the World Cup" (which I reviewed here) will know that this outcome is probably a good thing for the government and the people of Durban. There is little evidence that there is any short-term or long-term positive impact on the economy of hosting a large event like the Olympics, Commonwealth Games, or FIFA World Cup.

However, I did read this paper recently, by Paul Dolan (London School of Economics) and others, which shows a short-term increase in subjective wellbeing (happiness) in London at the time of the Olympics, compared with Paris and Berlin. That increase in happiness was short-lived, though, and had disappeared within a year of the event.

So, perhaps the people of Durban will be less happy in 2022 than they would have been while hosting the Commonwealth Games, but by 2023 they will be just as happy having not hosted it. And they won't be saddled with a bunch of white elephant venues and a significant public debt to pay off.

[HT: Tim Harford for the Dolan et al. paper]

Tuesday 14 March 2017

Take your pick: Economists are dodgy researchers, or not

I've written a couple of times here about economics education and moral corruption (see here and here). If economists really were corrupting influences though, you would probably expect to see them engage in dodgy research practices like falsifying data. I was recently pointed to this 2014 paper by Sarah Necker (University of Freiburg) published in the journal Research Policy (sorry, I don't see an ungated version anywhere online).

In the paper, Necker reports on data collected from 426 members of the European Economic Association. She asked them about the justifiability of different questionable research practices, whether they had engaged in the practices themselves, and whether others in their department had done so. Here's what she found in terms of views on justifiability:
Economists clearly condemn behavior that misleads the scientific community or causes harm to careers. The least justifiable action is “copying work from others without citing.” Respondents unanimously (CI: 99–100%) agree that this behavior is unjustifiable. Fabricating or correcting data as well as excluding part of the data are rejected by at least 97% (CI: 96–99%). “Using tricks to increase t-values, R2, or other statistics” is rejected by 96% (CI: 94–98%), 93%(CI: 90–95%) consider “incorrectly giving a colleague co-authorship who has not worked on the paper” unjustifiable...
Strategic behavior in the publication process is also rejected but more accepted than practices applicable when analyzing data or writing papers. Citing strategically or maximizing the number of publications by slicing into the smallest publishable unit is rejected by 64% (CI: 60–69%). Complying with suggestions by referees even though one thinks they are wrong is considered unjustifiable by 61% (CI: 56–66%)...
So, these practices are all viewed as unjustifiable by the majority of respondents. Does that translate into behaviour? Necker reports:
The correction, fabrication, or partial exclusion of data, incorrect co-authorship, or copying of others’ work is admitted by 1–3.5%. The use of “tricks to increase t-values, R2, or other statistics” is reported by 7%. Having accepted or offered gifts in exchange for (co-)authorship, access to data, or promotion is admitted by 3%. Acceptance or offering of sex or money is reported by 1–2%. One percent admits to the simultaneous submission of manuscripts to journals. About one fifth admits to having refrained from citing others’ work that contradicted the own analysis or to having maximized the number of publications by slicing their work into the smallest publishable unit. Having at least once copied from their own previous work without citing is reported by 24% (CI: 20–28%). Even more admit to questionable practices of data analysis (32–38%), e.g., the “selective presentation of findings so that they confirm one’s argument.” Having complied with suggestions from referees despite having thought that they were wrong is reported by 39% (CI: 34–44%). Even 59% (CI: 55–64%) report that they have at least once cited strategically to increase the prospect of publishing their work.
According to their responses, 6.3% of the participants have never engaged in a practice rejected by at least a majority of peers.
You might think those rates are high, or low, depending on your priors. However, Necker notes that they are similar to those reported in a similar study of psychologists (see here for an ungated version of that work). Other results are noted as being similar to those for management or business scholars.

Necker then goes on to test whether these questionable behaviours are related to respondents' perceptions of the pressure to publish - that is, whether those facing greater publication pressure are more likely to engage in questionable research behaviours. Those results are not nearly as clear: they are for the most part statistically insignificant, most likely due to the relatively small sample size. So, although they provide a plausible narrative, I don't find them convincing.

However, the takeaway message here clearly depends on your own biases. Either economists are dodgy researchers, frequently engaging in questionable research practices ("only 6.3% have never engaged in a practice rejected by at least a majority of peers"), or they are no better or worse than other disciplines in this regard. Take your pick.

[HT: Bill Cochrane]

Sunday 12 March 2017

The irrationality of NFL play callers, part 2

A few weeks ago I wrote a post about the irrationality of NFL offensive play callers, specifically that they fail to adequately randomise their play choices, with the implication that defensive play callers should be able to (and do) exploit this for their own gain. Why would they do this? The Emara et al. paper (one of the two I used in the post) suggested:
Perhaps teams feel pressure not to repeat the play type on offense, in order to avoid criticism for being too “predictable” by fans, media, or executives who have difficulty detecting whether outcomes of a sequence are statistically independent. Further, perhaps this concern is sufficiently important so that teams accept the negative consequences that arise from the risk that the defense can detect a pattern in their mixing.
Which seems like a plausible suggestion. Last week I read this recent post by Jesse Galef on the same topic:
In football, it pays to be unpredictable (although the “wrong way touchdown” might be taking it a bit far.) If the other team picks up on an unintended pattern in your play calling, they can take advantage of it and adjust their strategy to counter yours. Coaches and their staff of coordinators are paid millions of dollars to call plays that maximize their team’s talent and exploit their opponent’s weaknesses.
That’s why it surprised Brian Burke, formerly of AdvancedNFLAnalytics.com (and now hired by ESPN) to see a peculiar trend: football teams seem to rush a remarkably high percent on 2nd and 10 compared to 2nd and 9 or 11.
What’s causing that?
Galef argues that there are two possibilities (note that the first one is similar to the suggestion by Emara et al.):
1. Coaches (like all humans) are bad at generating random sequences, and have a tendency to alternate too much when they’re trying to be genuinely random. Since 2nd and 10 is most likely the result of a 1st down pass, alternating would produce a high percent of 2nd down rushes.
2. Coaches are suffering from the ‘small sample fallacy’ and ‘recency bias’, overreacting to the result of the previous play. Since 2nd and 10 not only likely follows a pass, but a failed pass, coaches have an impulse to try the alternative without realizing they’re being predictable.
Galef then goes through some fairly pointy-headed methodological stuff, before arriving at his conclusion:
If their teams don’t get very far on 1st down, coaches are inclined to change their play call on 2nd down. But as a team gains more yards on 1st down, coaches are less and less inclined to switch. If the team got six yards, coaches rush about 57% of the time on 2nd down regardless of whether they ran or passed last play. And it actually reverses if you go beyond that – if the team gained more than six yards on 1st down, coaches have a tendency to repeat whatever just succeeded.
It sure looks like coaches are reacting to the previous play in a predictable Win-Stay Lose-Shift pattern...
All signs point to the recency bias being the primary culprit.
However, I'd still like to see some consideration of risk aversion here. Galef controlled for game situation and a bunch of other game- and team-level variables, but not for individual-level variables related to the coaches (he did control for quarterback accuracy, but as far as I can see that might be a team-level variable if the team has changed quarterback mid-season).

This is yet more evidence that there is an exploitable trend in NFL offensive play calling, but the reason underlying this trend is still not fully established. Defensive play callers need not care about the reasons why though - they should be adjusting their strategies now.
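To see how exploitable a win-stay lose-shift caller is, here is a minimal sketch of a simulation with made-up, stylised payoffs (not real NFL data): an offence that repeats after a successful play and switches after a failure, facing either a defence that guesses randomly or one that simply predicts the win-stay lose-shift choice.

```python
import random
random.seed(0)

# Stylised, made-up payoffs: expected yards for the offence by (play, defence guess).
YARDS = {("run", "run"): 3, ("run", "pass"): 6, ("pass", "pass"): 4, ("pass", "run"): 8}
SUCCESS = 5   # a play 'succeeds' if it gains at least this many yards

def simulate(defence, plays=100_000):
    last_play, last_success, total = "run", True, 0
    for _ in range(plays):
        # Win-stay lose-shift offence: repeat after success, switch after failure
        play = last_play if last_success else ("pass" if last_play == "run" else "run")
        guess = defence(last_play, last_success)
        gain = YARDS[(play, guess)] + random.gauss(0, 2)   # add some noise
        last_play, last_success, total = play, gain >= SUCCESS, total + gain
    return total / plays

random_defence = lambda last_play, last_success: random.choice(["run", "pass"])
# An exploiting defence simply predicts what a win-stay lose-shift caller will do
exploiting_defence = lambda last_play, last_success: (
    last_play if last_success else ("pass" if last_play == "run" else "run"))

print(f"Yards per play vs random defence:     {simulate(random_defence):.2f}")
print(f"Yards per play vs exploiting defence: {simulate(exploiting_defence):.2f}")
```

The gap between the two lines of output is the value a defensive coordinator leaves on the table by not exploiting the pattern.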

Tuesday 7 March 2017

Grade inflation is harming students' learning

I just finished reading a 2010 paper by Philip Babcock (University of California, Santa Barbara) published in Economic Inquiry (ungated earlier version here), which looked at the real costs of grade inflation. In the paper, Babcock makes a convincing argument that grade inflation involves a trade-off that ultimately harms students: in classes where grades are inflated, students spend less time studying. On the surface, that might seem like the least startling research conclusion ever, but actually it takes some unpicking.

Babcock uses data on course evaluations for "6,753 classes from 338,414 students taught by 1,568 instructors across all departments, offered in 12 quarters between Fall 2003 and Spring 2007" at the University of California, San Diego. Importantly, the dataset included data on the number of hours (per week) that students reported studying in each of their courses, and their expected grade. The average expected grade for each course as a whole is a measure of how the course is graded (he didn't have data on actual grades, but student decision-making is really about perceptions and expectations, so the expected grade data are the right data to use). The simplest analysis is just to graph the data - here is a plot of study time against the average expected grade:

Note the downward trend line, which nicely illustrates that students in classes with higher average expected grades study for fewer hours per week, on average. Babcock notes that this is a general result across all departments. For instance, here is the same graph for economics:

However, he doesn't stop there, applying regression models to control for characteristics of the course, but also instructor-specific effects, course-specific effects and any time trends. So, the regression results can be thought of as representing the answer to the question: if the same instructor, teaching the same course, had higher expected grades, what would be the effect on study time? Here is what he finds:
Holding fixed instructor and course, classes in which students expect higher grades are found to be classes in which students study significantly less. Results indicate that average study time would be about 50% lower in a class in which the average expected grade was an "A" than in the same course taught by the same instructor in which students expected a "C".
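For anyone curious what 'holding fixed instructor and course' looks like in practice, here is a minimal sketch of that kind of regression on synthetic data (my own illustration, not Babcock's code or data), with instructor and course fixed effects entered as dummy variables.

```python
# Illustrative only: synthetic data standing in for course evaluations.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 600
df = pd.DataFrame({
    "instructor": rng.integers(0, 30, n),           # 30 hypothetical instructors
    "course": rng.integers(0, 50, n),                # 50 hypothetical courses
    "avg_expected_grade": rng.uniform(2.0, 4.0, n),  # on a 0-4 GPA scale
})
# Build in a negative effect: one grade point higher -> about 3 fewer study hours
df["study_hours"] = 12 - 3 * df["avg_expected_grade"] + rng.normal(0, 2, n)

# Instructor and course fixed effects via categorical dummies
model = smf.ols("study_hours ~ avg_expected_grade + C(instructor) + C(course)",
                data=df).fit()
print(model.params["avg_expected_grade"])   # should recover roughly -3
```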
Obviously, a roughly 50 percent reduction in study time is quite a significant effect. And as I see it, it has two negative implications. First, students who study less will learn less. It really is that simple. Second, as Babcock notes in his paper, this isn't so much a story of 'grade inflation' as of 'grade compression'. Since the top grade is fixed as an A, if an A is easier to get, that compresses the distribution of grades. The problem here is that the signal that grades provide to employers becomes less valuable. If employers can no longer use grades to tell the top students from the not-quite-but-nearly-top students, then the value of the signal for top students is reduced (I've written about signalling in education before here). If grade compression continues, the problem gets even worse.

Of course, the incentives for teachers are all wrong, and Babcock demonstrates this as well. Students give better teaching evaluations to teachers when those teachers are teaching a course where the average expected grade is higher. So, if good teaching evaluations are valuable to the teacher (for promotion, tenure, salary increments, etc.) then there is incentive for them to inflate grades to make students happy and more likely to rate the teaching highly. On a related note, Babcock also finds that:
...even though lower grades are associated with large-enrollment courses, when the same course is taught by a more lenient instructor, significantly more students enrol.
My only gripe with the paper is pretty minor, and it is something that Babcock himself addressed (albeit briefly, in a footnote): the lack of consideration of general equilibrium effects. I've had a number of conversations with other lecturers (and students) about making additional resources available for students (past exam papers, worked examples, extra readings, this blog, etc.). The idea is that we put in this additional effort in order to help our students to pass our courses. However, the kicker is that if we as teachers put more effort in, then students might re-direct their own effort away from our courses and towards other courses where the effort required from them to achieve their desired grades is higher. I have only anecdotal evidence (from talking with students over a number of years) that this unintended consequence occurs, but it is worrying.

In the case of grade inflation, making it easier to get an A in our course may make it easier for the students to do better in other courses, since they re-direct efforts to the other courses they perceive as being more difficult. While it might make the impact of grade inflation higher (since average grades would also increase in courses where grades are not inflated), I don't think you could argue that this is a good outcome at all.

Overall, while this is based on a single study in a single institution, it is reasonably convincing (or maybe that is just confirmation bias?). Grade inflation is not good for students, even if it might be good for teachers.

Monday 6 March 2017

Cocaine makes a comeback in the U.S.

The Washington Post reports:
While much of the recent attention on drug abuse in the United States has focused on the heroin and opioid epidemic, cocaine has also been making a comeback. It appears to be a case of supply driving demand.
After years of falling output, the size of Colombia’s illegal coca crop has exploded since 2013, and the boom is starting to appear on U.S. streets...
Given that we are covering supply and demand in ECON100 at the moment, this seems like an appropriate time to look at what is happening here. There has been an increase in the supply of cocaine (a shift to the right, or down, of the supply curve for cocaine), as shown in the diagram below, from S0 to S1. This lowers the equilibrium price of cocaine (from P0 to P1), and increases the quantity sold and consumed (from Q0 to Q1). In other words, since cocaine is now cheaper, more people use it (rather than more expensive substitutes such as heroin or opioids).
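For ECON100 students who want to see the comparative statics numerically, here is a minimal sketch with made-up linear demand and supply curves: an increase in supply (modelled as a fall in the supply curve's intercept) lowers the equilibrium price and raises the equilibrium quantity.

```python
# Illustrative linear demand and supply: P = a - b*Q (demand), P = c + d*Q (supply).
# An increase in supply is modelled as a fall in the supply intercept c.
def equilibrium(a, b, c, d):
    q = (a - c) / (b + d)     # set demand price equal to supply price
    p = a - b * q
    return p, q

a, b, d = 150.0, 1.0, 0.5     # made-up demand intercept/slope and supply slope
p0, q0 = equilibrium(a, b, c=60.0, d=d)   # before the supply increase
p1, q1 = equilibrium(a, b, c=30.0, d=d)   # after: supply shifts right (intercept falls)

print(f"Before: price {p0:.1f}, quantity {q0:.1f}")
print(f"After:  price {p1:.1f}, quantity {q1:.1f}")
# Price falls and quantity rises, exactly the P0->P1 and Q0->Q1 movement described above.
```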

Why has supply increased? The Washington Post article explains:
The State Department report cites four major reasons for the sudden coca-growing binge by Colombian farmers.
The first is that FARC rebels appear to have encouraged farmers in areas under their control to plant as much coca as possible in preparation for the end of the war, “purportedly motivated by the belief that the Colombian government’s post-peace accord investment and subsidies will focus on regions with the greatest quantities of coca,” the report said.
At the same time, the government reduced eradication in those areas “to lower the risk of armed conflict” and create a favorable climate for the final peace settlement.
The Colombian government also ended aerial spraying with herbicides in favor of manual eradication. But when eradication brigades have arrived to tear out illegal crops, local farmers have blocked roads and found other ways to thwart the removal, including the placement of improvised explosive devices among the coca bushes.
The final factor is the Colombian government’s financial squeeze, “resulting in a 90 percent reduction in the number of manual eradicators in 2016 as compared to 2008,” according to the report.
Notice how all four of the factors listed above either lead to a greater supply of the crop getting to market, or lower the costs of supplying cocaine - either way, supply increases. However, it's likely that the change in price shown in the diagram above is somewhat exaggerated, in the reverse of the effect explained by Tom Wainwright in his excellent book, Narconomics (see my post here for details).

Sunday 5 March 2017

Beware bogus taxi surcharges when travelling on an expense account

In my ECON100 and ECON110 classes, we discuss moral hazard. In ECON110, we also discuss supplier-induced demand. So, I was interested to read this new paper (ungated earlier version here) by Loukas Balafoutas (University of Innsbruck), Rudolf Kerschbamer (University of Innsbruck), and Matthias Sutter (European University Institute), published in the journal The Economic Record.

In the paper, Balafoutas et al. look at moral hazard among taxi drivers. Recall that moral hazard is the tendency for a person who is imperfectly monitored to engage in dishonest or otherwise undesirable behaviour. That is, moral hazard is a problem of post-contractual opportunism, where people have the incentive to change their behaviour to take advantage of the terms of the agreement (or contract) to extract additional benefits for themselves, at the expense of the other party.

However, what the authors are looking at in this paper isn't your run-of-the-mill moral hazard. They distinguish it as what they call second-degree moral hazard:
As an illustration consider the market for health care services and assume that the consumer of the service – in this case, a patient – is fully insured and interacts with a seller of the service – in this case, a physician. Moral hazard implies that the patient may have incentives to demand more of the service than required (by asking for more numerous or more extensive tests or treatments), since he will not bear its costs. However, the behaviour of the physician may also be affected by the extent of the coverage: if the physician expects that the patient is not concerned about minimising costs, he may be more inclined to suggest or prescribe more expensive treatments. Notice that the two stories – which we will call first-degree moral hazard and second-degree moral hazard – are observationally equivalent in terms of final outcomes, in the sense that more extensive insurance coverage leads to higher expenditure, but the mechanisms are different. While first-degree moral hazard operates through the demand side, second-degree moral hazard increases expenditure through supplier-induced demand – the artificial increase in demand induced by the actions of the seller.
Supplier-induced demand occurs when there is asymmetric information about the necessity for services. In a city that is unfamiliar to the taxi passenger, they may not know the shortest route to their destination, whereas the taxi driver does. So, the taxi driver has an incentive to take the passenger on a longer (and more expensive) route. Balafoutas et al. note:
In the case of taxi rides in an unknown city, the service traded on the market is a credence good... meaning that an expert seller possesses superior information about the needs of the consumer. In particular, the driver knows the correct route to a destination while the consumer does not. This property of credence goods opens the door to different types of fraud: overtreatment occurs when the consumer receives more extensive treatment than what is necessary to meet his needs (with taxi rides, this amounts to a time-consuming detour); in the opposite case of undertreatment, the service provided is not enough to satisfy the consumer (i.e. he does not reach his destination); finally, in credence goods markets where the consumer is unable to observe the quality she has received, there might also be an overcharging incentive, meaning that the price charged by the seller is too high, given the service that has been provided.
The paper uses an interesting field experiment to tease out how (and by how much) taxi drivers engage in second-degree moral hazard behaviour. They used four research assistants (two male, two female), who each took 100 trips across Athens. All four of them went on each trip within a few minutes of each other (to ensure they faced similar traffic situations, etc.), and two of them (one male, one female) made it clear to the driver that their (the passenger's) employer was paying for the ride (which I guess was true - the researchers no doubt paid for these rides). The research assistants then recorded the price paid for the journey, plus measured the distance travelled using a portable GPS. This allowed the authors to compare rides between male and female passengers, as well as between the 'moral hazard treatment' and control. They found that:
our moral hazard manipulation has an economically pronounced and statistically significant positive effect on the likelihood and the amount of overcharging, with passengers in that treatment being about 17% more likely to pay higher-than-justified prices for a given ride. This also leads to significantly higher consumer expenditures in this treatment on average. At the same time, the rate of overtreatment (by taking time-consuming detours) does not differ across treatments. Hence, second-degree moral hazard does not increase the extent of overtreatment compared to the control, while it does increase the likelihood and the extent of overcharging.
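The headline result is essentially a difference in the rate of overcharging between the treatment and control rides. Here is a minimal sketch of how that kind of comparison can be tested, using made-up counts rather than the paper's actual data.

```python
# Illustrative two-proportion test on made-up numbers (not the paper's data):
# overcharged rides out of total rides, in the 'employer pays' treatment vs control.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

overcharged = np.array([70, 42])   # treatment, control (hypothetical counts)
rides = np.array([200, 200])

z, p_value = proportions_ztest(overcharged, rides)
rate_treat, rate_control = overcharged / rides
print(f"Overcharging: {rate_treat:.0%} (treatment) vs {rate_control:.0%} (control), p = {p_value:.3f}")
```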
The actual mechanism of the overcharging was interesting:
In the large majority of these cases (86 out of 112, or 76.8%) bogus surcharges were applied, namely higher-than-justified extras from and to the airport, the port, the railway station and the bus station. The second most frequent source of overcharging (14 cases) were manipulated taximeters or the use of the night tariff during daytime, while rounding-up (not tipping!) of the price (by more than 5%) accounted for the remaining twelve cases of overcharging.
I also found it interesting that in the control, female passengers were more likely to pay a higher fare than male passengers. However, in the moral hazard treatment (i.e. when passengers told the drivers their employer was paying) there was no gender difference in overcharging.

Overall, this is a really interesting use of a field experiment to tease out results that would not be easily possible in other ways. The only part of the paper that made me a little concerned was buried in a footnote:
We note that the sample initially consisted of 256 observations collected in 2013. The remaining 144 observations were collected in 2014 at the advice of the editor and referees and resulted in stronger statistical significance. Based on the initial sample alone, some of the results outlined in subsection 2.2 were only marginally significant.
Boosting the sample size in order to generate effects that are statistically significant is a trick that experimental psychologists have been known to use, and it shouldn't really be condoned - it's been implicated in the replication crisis that psychology is facing. At least the authors were up-front about it, and the results were published. But imagine if the additional data had made the results less statistically significant: what then? Not publish? Or continue to collect additional data until eventually they attain some statistical significance?

Anyway, that little gripe aside, I found this paper a good read, with important implications - one of which is that I will certainly be looking out for bogus taxi surcharges the next time I take a taxi from the airport in a strange city.

[HT: The Economist]

Saturday 4 March 2017

Book Review: Economix

Earlier this week I finished reading Economix: How Our Economy Works (and Doesn't Work) in Words and Pictures by Michael Goodwin. This book wasn't really what I expected, but that isn't necessarily a bad thing. I've previously read Grady Klein and Yoram Bauman's The Cartoon Introduction to Economics (both Volume I and Volume II), and was expecting a similar treatment of economic principles, illustrated with some mildly humorous cartoons. Instead, Goodwin takes the reader on a journey through the history of economic thought, from Jean-Baptiste Colbert and mercantilism in France in the 17th Century, through to Occupy Wall Street. The book is ably illustrated by Dan Burr.

One of my criteria for a good pop-economics book is whether or not there is anything in it that I can use in my ECON100 or ECON110 classes, and there were certainly a few bits that will be useful in that sense. I also found the discussion of the physiocrats in France useful for myself, along with the explanation of the centrality of the labour theory of value to Marxism, which I hadn't realised before. I guess that just reflects that I haven't deeply studied the history of economic thought myself.

The general reader will probably find this book interesting, but it is probably a bit America-centric for my tastes. To be fair, Goodwin makes this point early:
...while I tried to cover the whole world, I focused on the economy of the United States because I'm an American and that's the economy I live in.
Fair enough. Some may find the end of the book a little preachy, and if you have strong neoliberal beliefs you'll probably not appreciate many parts of the book. I found it to be a fair treatment, and there is nothing terribly controversial in it apart from perhaps one bit where Goodwin advocates a progressive revenue tax on corporations (which he argues would incentivise the corporations to split into smaller entities, limiting monopoly power).

If you're interested in the history of economic thought, but not sure where to start, this is probably a better place than most. There is also a good amount of additional content on Goodwin's website, economixcomix.com. I especially like this timeline of economists.

Wednesday 1 March 2017

Future life expectancy is good for NZ men, but be cautious

What are the limits of human life expectancy? I don't think any of us really know. Last month, I posted about an article in the journal Nature that claimed we were already at the limit by the mid-1990s. Now, almost at the other extreme, a new article by Vasilis Kontis (Imperial College London) and others, published in The Lancet (and which appears to be open access), projects larger-than-expected future increases in life expectancy.

Kontis et al. basically ran every major model type that is used to model age-specific death rates (21 models in all) across 35 industrialised (high-income) countries, and then took a weighted average of those models [*]. The modelling approach also allowed them to make probabilistic forecasts (which is something that Jacques Poot and I have been working on, in terms of population projections, for many years). They looked at life expectancy at birth, and life expectancy (remaining life years) at age 65. I'm just going to focus on the first of those. Here's what they found:
Taking model uncertainty into account, we project that life expectancy will increase in all of these 35 countries with a probability of at least 65% for women and 85% for men, although the increase will vary across countries. There is nonetheless a 35% probability that life expectancy will stagnate or decrease in Japanese women by 2030, followed by a 14% probability in Bulgarian men and 11% in Finnish women...
There is 90% probability that life expectancy at birth among South Korean women in 2030 will be higher than 86·7 years, the same as the highest life expectancy in the world in 2012, and a 57% probability that it will be higher than 90 years... a level that was considered virtually unattainable at the turn of the 21st century by some researchers.
So, they are offering better than even odds that life expectancy for South Korean women will exceed 90 years by 2030, a point that has been picked up in the media (see for example here and here). However, something that wasn't picked up (even by the NZ media) was the somewhat surprising projection for male life expectancy in New Zealand. Here's the relevant part of their Figure 3:


Male life expectancy in New Zealand in 2010 is already one of the highest in those 35 countries (ranking us sixth out of 35), but look at the left panel of the graph, which is their projection for 2030. Notice that, based on the median projection (the red dot), New Zealand pretty much retains the same ranking. However, notice also that the green smudge (the distribution of projected life expectancies) for New Zealand is much wider than for other countries (I guess they are much less certain about their projection for New Zealand). That leads to New Zealand having, in the right panel of the graph, a relatively high probability of holding the top ranking for male life expectancy in 2030. I must say I was a little surprised by this. Maybe good reason for men to stay in New Zealand?

There's a lot to commend in this research, and the Lancet article is not too mathy (but I wouldn't recommend reading the online appendix on that score). However, it isn't without its problems. The authors have done a good job of using multiple models and bringing them together using a weighted average.

However, one of the key problems with models is that they are less good at extrapolating beyond the range of data that are inputs into the model. And that is essentially unavoidable when it comes to projecting life expectancy. No industrialised country has ever had life expectancy as high as it is in those countries today, which makes projecting further future gains especially speculative. One of the big remaining questions in human biology is the limit to human lifespan, and this sort of trend extrapolation doesn't really help us to understand that. There may be biological limits to lifespan that we haven't approached yet, and maybe we won't even know we have approached them until we hit them. At which point, extrapolations of past trends will not be a good predictor of future gains in life expectancy.

The authors themselves note:
Early life expectancy gains in South Korea, which has the highest projected life expectancy, and previous to that in Japan, were driven by declines in deaths from infections in children and adults; more recent gains have been largely due to postponement of death from chronic diseases.
That's not just true in South Korea and Japan, but in all industrialised countries. For most of these countries, the future gains in life expectancy at birth that could arise from further reductions in infant and child mortality are limited. The low-hanging fruit of life expectancy gains have already been picked. Further increases in life expectancy now are most likely to arise through reductions in the 'stupid-young-male effect' (e.g. reductions in injury deaths among young adults). Remember that life expectancy at birth is essentially an average of lifespans across the whole birth cohort, so life-extending medical technologies that only benefit the very old will have relatively little effect on measured life expectancy at birth, compared with preventing deaths earlier in life.
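A small made-up example shows why: adding a few years to the longest lives barely shifts the cohort average, while preventing early-adult deaths shifts it much more.

```python
# Made-up ages at death for a small cohort (purely illustrative).
ages = [1, 22, 25, 55, 68, 74, 79, 82, 85, 88, 91, 94]
mean = lambda xs: sum(xs) / len(xs)

baseline = mean(ages)

# Scenario A: medical technology adds 3 years to everyone already aged 85+
scenario_a = mean([a + 3 if a >= 85 else a for a in ages])

# Scenario B: the two early-adult deaths (at 22 and 25) are prevented and those
# people instead live to the current average age at death
scenario_b = mean([baseline if a in (22, 25) else a for a in ages])

print(f"Baseline cohort life expectancy:  {baseline:.1f}")
print(f"A: +3 years for the very old:     {scenario_a:.1f}")
print(f"B: preventing early-adult deaths: {scenario_b:.1f}")
```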

Overall, it might be wise to be more cautious in interpreting these projected gains in life expectancy. For New Zealand men, it may be premature to be popping champagne in anticipation of our long-livedness.

*****

[*] What they actually did is called Bayesian model averaging, which essentially means that they weighted the models by how good they are at predicting actual data, with models that are better predictors receiving higher weights.
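As a rough illustration of the idea (not the authors' actual procedure), each model gets a weight that reflects how well it predicted held-out data, and the ensemble forecast is the weighted average of the individual forecasts. All of the numbers below are made up.

```python
import numpy as np

# Toy example of model averaging: three 'models' forecast life expectancy in 2030,
# and each is weighted by its predictive accuracy on held-out historical data.
# Numbers are entirely made up; the paper's actual approach is a proper Bayesian
# model average over 21 mortality models.
forecasts = np.array([85.2, 87.9, 90.4])          # each model's 2030 forecast
holdout_errors = np.array([0.9, 0.4, 1.6])        # RMSE on data withheld from fitting

# Better predictors (smaller errors) get larger weights
weights = np.exp(-holdout_errors ** 2)
weights /= weights.sum()

ensemble = np.dot(weights, forecasts)
print(f"Weights: {np.round(weights, 2)}  ->  ensemble forecast: {ensemble:.1f} years")
```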