Saturday, 29 April 2017

Why all students need to understand adverse selection and signalling

Of the economic concepts we cover in ECON100 and ECON110, adverse selection is one of the most deceptively difficult problems to explain. It is easy to understand that some people simply know some things that others don't (economists call that private information, and when there is private information we also say that there is information asymmetry). However, in order for there to be an adverse selection problem, the private information needs to lead to market failure in some way, and explaining the market failure is more difficult than explaining the private information. Not all private information leads to market failure, and the market failure is the reason why we need to have ways of dealing with the adverse selection problem. Since the problem stems from private information, solving an adverse selection problem involves revealing the private information to the uninformed party. When the informed party (the one that knows the private information) tries to credibly reveal that information, economists call that signalling.

Students are engaged in a sophisticated array of signalling, on multiple levels. It's not possible to avoid signalling in this case, since trying not to provide a signal is itself a signal. The problem that this signalling is trying to avoid stems from private information about the quality of the student - students know whether they are high quality (intelligent, hard-working, etc.), but employers don't. Employers want to hire high-quality applicants, but they can't easily tell them apart from the low-quality applicants. This presents a problem for the high-quality applicants too, since they want to distinguish themselves from the low-quality applicants to ensure that they get the job. In theory, this could lead the market to fail, but in reality the market has developed ways for this private information to be revealed.

One way this problem has been overcome is through job applicants credibly revealing their quality to prospective employers - that is, by job applicants providing a signal of their quality. In order for a signal to be effective, it must be costly (otherwise everyone, even those who are lower quality applicants, would provide the signal), and it must be costly in a way that makes it unattractive for the lower quality applicants to do so (such as being more costly for them to engage in).
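To see how those two conditions bind, here is a minimal sketch of the classic Spence signalling logic in Python (the wages and costs are entirely made up, purely for illustration):

```python
# Minimal sketch of the Spence signalling condition (illustrative numbers only).
# A signal separates high- from low-quality applicants when it is worth
# acquiring for the high type but not for the low type.

WAGE_WITH_SIGNAL = 70_000     # assumed wage paid to applicants with the qualification
WAGE_WITHOUT_SIGNAL = 45_000  # assumed wage paid to applicants without it

COST_OF_SIGNAL = {            # assumed cost of completing the qualification, by type
    "high_quality": 10_000,   # passes papers first time
    "low_quality": 35_000,    # repeats papers, or works much harder, to pass
}

def acquires_signal(applicant_type: str) -> bool:
    """An applicant acquires the signal only if the wage gain exceeds their cost."""
    wage_gain = WAGE_WITH_SIGNAL - WAGE_WITHOUT_SIGNAL  # 25,000 here
    return wage_gain > COST_OF_SIGNAL[applicant_type]

for applicant_type in COST_OF_SIGNAL:
    print(applicant_type, "acquires signal:", acquires_signal(applicant_type))
# high_quality acquires signal: True  (gain 25,000 > cost 10,000)
# low_quality acquires signal: False (gain 25,000 < cost 35,000)
# Because only high-quality applicants find the signal worthwhile, holding the
# qualification credibly reveals the private information.
```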

Qualifications (degrees, diplomas, etc.) provide an effective signal (they are costly, and more costly for lower quality applicants who may have to attempt papers multiple times in order to pass, or work much harder in order to pass). So by engaging in university-level study, students are providing a signal of their quality to future employers. The qualification signals to the employer that the student is high quality, since a low-quality applicant wouldn't have put in the hard work required to get the qualification. Qualifications confer what we call a sheepskin effect - they have value to the graduate over and above the explicit learning and the skills that the student has developed during their study.

However, there are actually multiple levels of signalling associated with university study. Employers are faced with many applicants who have similar qualifications, and it is difficult to distinguish who, among those with the qualification, is the better applicant. So, the choice of major provides an additional signal to employers. Some majors are clearly more difficult than others - students who complete a degree in a more difficult major are signalling to employers that they are higher-quality employees than students who complete a degree in an easier major. I leave it up to you to determine which majors might be easier, and which might be more difficult.

Within majors there is a further signal, which is the student's choice of papers. Taking economics as an example, econometrics is likely to be the most difficult paper that students will take. So, students who complete an economics major without completing an econometrics paper are signalling to employers that (among economics graduates) they are the lower-quality economics graduates. [*] I'm sure there are certain papers within other majors that are perceived as difficult and provide a similar signal for students taking those majors.

Within papers there is yet another signal, which is the grade the student receives. It is harder to get an A than to get a C, so the grade a student receives in any paper also provides a signal of student quality to employers. The saying goes that "C's get degrees", which may be true, but students with C's don't get their first choice of jobs (or at least, they have a lesser chance of getting the good jobs).

But there is still one more signal that students engage in, which isn't a signal to employers but a signal to their lecturers. Just like employers, lecturers don't know who the high-quality students are (remember, this is private information). So, how students engage in class, and how they perform in assessments, is a way for students to signal their quality to lecturers. Students who don't complete some assessment items, or who don't attend lectures or tutorials, or who don't complete online tests, and so on, are providing a signal to their lecturers - and it's not a signal of their high quality. Even avoiding a small piece of assessment, or an optional task in class, is a signal to the lecturer. And that signal may make the difference between an A and a B grade, or between a pass and a fail. Which in turn becomes a signal to employers, as noted above.

So, students are generating many signals (by completing a qualification, by their choice of major and individual papers to include in their qualification, by their grades in those papers, and by how they engage and perform within each paper). All of which means that every student needs to understand adverse selection and signalling. Otherwise, they might just end up providing the wrong signals.

*****

[*] Which is why I recommend to economics majors that they include econometrics in their programme of study, even though it is not compulsory.

Wednesday, 26 April 2017

This couldn't backfire, could it?... Possum bounty edition

Geoff Thomas wrote in the New Zealand Herald earlier this week:
We are going to get rid of all rats by 2050. Really? After killing all the rats in the mountains and forests and farms, how do they propose getting rid of the rats that live under most of the houses in New Zealand? Pet cats kill rats. They do a great job. But as long as you have pet cats, you will also have wild cats.
There is a partial solution which would tick a lot of boxes. Make the problem worth money.
When wild deer were being caught to stock the burgeoning deer farming industry in the 1960s and '70s you couldn't find a deer. A hind was bringing up to $2000 straight out of the bush and Kiwis came up with all sorts of ideas for live capture. They used helicopters, net guns, foot traps, fenced traps and all sorts of innovations, some of which didn't work very well. But the financial incentive ensured that deer were hard to find.
It has been reported that it costs something like $60 to kill a possum using aerial-spread 1080 poison. Whether it is $60 or less, the principle remains the same. If that was paid to hunters for every possum tail they produced, it would create employment in job-poor rural areas, encouraging youngsters to set traps and go out at night with a spotlight and a .22 rifle. Many do that now anyway.
Which should remind us of a famous story about cobras in Delhi that I wrote about earlier here:
The government was concerned about the number of snakes running wild (er... slithering wild) in the streets of Delhi. So, they struck on a plan to rid the city of snakes. By paying a bounty for every cobra killed, the ordinary people would kill the cobras and the rampant snakes would be less of a problem. And so it proved. Except, some enterprising locals realised that it was pretty dangerous to catch and kill wild cobras, and a lot safer and more profitable to simply breed their own cobras and kill their more docile ones to claim the bounty. Naturally, the government eventually became aware of this practice, and stopped paying the bounty. The local cobra breeders, now without a reason to keep their cobras, released them. Which made the problem of wild cobras even worse.
Replace cobras in the above story with possums, and Delhi with New Zealand, and you have Geoff Thomas's solution. Any entrepreneurial person would quickly realise that it is cheaper and easier to farm possums than to hunt them, and therefore much more profitable. This time really won't be different.

Sunday, 23 April 2017

Is inequality increasing?

My post a couple of days ago pointed out that people are more concerned about unfairness than about inequality. However, in case any of you are concerned about inequality, it pays to know a little more about it.

The Australian Economic Review has an excellent section in each issue titled "For the student". In a 2015 issue, Richard Pomfret (University of Adelaide) wrote an article for that section with the same title as this post. He didn't really answer the question with data, but it is nevertheless a very good review that outlines two main theories about the sources of inequality: (1) Thomas Piketty's assertion that inequality arises because the returns on capital exceed the growth rate of the economy; and (2) the view that inequality arises from technological change and globalisation. I encourage you to read it (if you have access - I can't see an ungated version anywhere). I found it the most accessible summary of Piketty's work I have read (noting that I haven't read Piketty's Capital in the Twenty-First Century myself).

Pomfret might not have answered his question for Australia, but in a recent paper published in the journal New Zealand Economic Papers (also unfortunately no ungated version I can see), Christopher Ball (Treasury) and John Creedy (Treasury, and Victoria University) do so for New Zealand. They use the Gini coefficient as their measure of inequality, and measure inequality based on market incomes (before taxes and transfers), disposable incomes (after taxes and transfers), and consumption expenditure. They find:
The Gini measure of market incomes saw a steady rise through the second half of the 1980s to the early 1990s, from about 0.4 to around 0.5. Subsequently, there has been a steady though less marked decline, with the exception of ‘spikes’ around 2001 and 2011. For disposable incomes, the systematic increase in the Gini measure does not appear to have started until the late 1980s, rising from about 0.27 to about 0.33 in the mid-1990s.
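As an aside, the Gini coefficient itself is straightforward to compute from individual income data. Here is a minimal sketch in Python, using made-up incomes purely for illustration:

```python
# Minimal sketch: computing a Gini coefficient from individual incomes.
# 0 = perfect equality; 1 = one person has everything. Illustrative data only.

def gini(incomes):
    """Gini coefficient via the standard closed form on sorted incomes."""
    x = sorted(incomes)
    n = len(x)
    weighted_sum = sum((i + 1) * xi for i, xi in enumerate(x))  # sum of rank * income
    return (2 * weighted_sum) / (n * sum(x)) - (n + 1) / n

print(gini([30_000, 40_000, 50_000, 60_000, 200_000]))  # about 0.38
```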
Nothing too surprising there for anyone who is also a reader of Eric Crampton's excellent blog (see here and here and here) - inequality increased during the reform period in the late 1980s and early 1990s, and then has remained fairly steady or declined since then. Here's their Figure 1, which shows the results over time in more detail:

The changes in taxes and benefits have interesting incentive effects, as Ball and Creedy note:
It appears that the 1980s reforms involving cuts in the top income tax rate along with benefit cuts and the ending of centralised wage setting are associated with increasing inequality. The spikes in the market and disposable income profiles from 2000 may also be associated with changes in top income tax rates. In the first case of an increase from 33 to 39 per cent announced in 2000 but effective in 2001, the anticipation of the rate increase could have led to a certain amount of income shifting into the year before the increase. Much of the shifting is likely to have been by those in higher income groups, and hence this contributes to the sudden increase in inequality, followed by a reduction. In the case of the 2010 reduction in the top rate, the opposite incentive effect operated.
The changes in inequality over time in New Zealand are interesting, and you can tell a plausible story about the relationship between those and changes in taxes and benefits based on the results of this paper (and especially the figure above). However, Ball and Creedy don't do so, instead concluding that:
...interesting questions about the precise causes of those changes remain a challenge for future research.
I'd say they have gotten us about 90% of the way there already.

Thursday, 20 April 2017

People want fairness, not equality

I was interested to read this new paper by Christina Starmans, Mark Sheskin, and Paul Bloom (all from Yale), which reviews a lot of recent research that demonstrates that people want fairness, not equality. This actually relates to a post I made last year on this topic, which funnily enough linked to this 2015 article in The Atlantic, by Paul Bloom. So, clearly this idea has been around for a while, and yet most commentators still seem to focus on inequality as a great evil. Anyway, coming back to the Starmans et al. paper, here's a little of what they say (I do recommend reading the whole article though, as it is very readable):
...when people are asked to distribute resources among a small number of people in a lab study, they insist on an exactly equal distribution. But when people are asked to distribute resources among a large group of people in the actual world, they reject an equal distribution, and prefer a certain extent of inequality. How can the strong preference for equality found in public policy discussion and laboratory studies coincide with the preference for societal inequality found in political and behavioural economic research?
We argue here that these two sets of findings can be reconciled through a surprising empirical claim: when the data are examined closely, it turns out that there is no evidence that people are actually concerned with economic inequality at all. Rather, they are bothered by something that is often confounded with inequality: economic unfairness.
One bit of that bears repeating (and in bold): it turns out that there is no evidence that people are actually concerned with economic inequality at all. What people really care about is ensuring that there is no unfairness. And that would explain the unusual differences between laboratory experiments and real-world observations, which Starmans et al. do a good job of explaining in their paper. I also like this bit in the conclusion:
Worries about inequality are conflated with worries about poverty, an erosion of basic rights, and—as we have focused on here—unfairness. If it’s true that inequality in itself isn’t really what is bothering people, then we might be better off by more carefully pulling apart these concerns, and shifting the focus to the problems that matter to us more.
This is a point I have made before - most of the arguments I have heard about why inequality is bad and should be addressed, are really arguments about why poverty is bad and should be addressed. People conflate inequality and poverty, when they are not the same thing at all. To see why, consider this thought experiment I do with my ECON110 students every year: Think about a problem that you associate with inequality. Would the problem be reduced by burning 10% of the wealth of all of the richest people? If the answer is yes, then the problem probably stems primarily from inequality. Otherwise it is more likely to be primarily a problem of poverty.

Now, if you are concerned about fairness you are very likely to be concerned about poverty. In most circumstances, it would be hard to argue that poverty is a fair outcome for the poor person. But that doesn't mean that all inequality is also unfair. In fact, equality may be seen as unfair, if it means that some people work harder than others to achieve the same outcome. And that was one of the points that Starmans et al. were making.

[HT: Berk Ozler at Development Impact, and Marginal Revolution]

Wednesday, 19 April 2017

If laptops are bad for student learning, maybe mobile phones are too

Last week I wrote a post about how laptop computers were bad for learning in the university context. Naturally, that probably makes you wonder about other devices. If laptops are bad because they are distracting for students, are tablets or mobile phones just as bad?

A recent paper by Louis-Philippe Beland (Louisiana State) and Richard Murphy (University of Texas at Austin), published in the journal Labour Economics (ungated earlier version here), provides some suggestive evidence about mobile phones, but in the context of secondary school. Specifically, the authors looked at data from students at secondary schools in four cities in the UK (Birmingham, London, Leicester, and Manchester) and at school mobile phone policies. They looked at how the implementation of a mobile phone ban at a school affected students' performance in the GCSE exams (which are standardised national exams).

Of course, the problem here is that schools will be more likely to implement a mobile phone ban if they believe that it will positively affect their students (including by increasing their learning), but Beland and Murphy rightly identify that this means that their results offer an upper bound on the effects of banning mobile phones from schools. It may not be a big issue though - only one of the 91 schools in their sample didn't implement a mobile phone ban at some point over the period they look at (2001-2011).
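For the methodologically curious, estimates like these typically come from a regression of test scores on a ban indicator with school and year fixed effects. Here is a stylised sketch in Python with simulated data - not the authors' actual specification, just an illustration of the idea:

```python
# Stylised sketch (not Beland and Murphy's actual specification): estimating the
# effect of a phone ban on test scores with school and year fixed effects.
# Simulated data; the 'true' effect of 0.06 standard deviations is an assumption.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
ban_year = {s: rng.integers(2002, 2011) for s in range(91)}  # 91 schools, as in the paper

rows = []
for s, b in ban_year.items():
    school_effect = rng.normal()
    for t in range(2001, 2012):
        ban = int(t >= b)  # ban in place from the school's adoption year onwards
        score = school_effect + 0.02 * (t - 2001) + 0.06 * ban + rng.normal()
        rows.append({"school": s, "year": t, "ban": ban, "score": score})

df = pd.DataFrame(rows)
fit = smf.ols("score ~ ban + C(school) + C(year)", data=df).fit()
print(fit.params["ban"])  # recovers roughly the assumed 0.06
```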

They find that:
...following a ban on phone use, student test scores improve by 6.41% of a standard deviation. This effect is driven by the most disadvantaged and underachieving pupils. Students in the lowest quintile of prior achievement gain 14.23% of a standard deviation, while students in the top quintile are neither positively nor negatively affected by a phone ban.
The stated effects are only marginally statistically significant, and the effect (0.06 standard deviations) is not huge (which is why I labelled this as 'suggestive evidence' above). Beland and Murphy undertake a battery of robustness checks though, which suggest the effect is real (if possibly over-stated, given the self-selection of schools into banning mobile phones). Some of the supplementary results are interesting in their own right though, such as:
The interaction of the ban with prior achievement is negative... implying that it is predominantly low-ability students who gain from a ban.
The high-ability students (as measured by their results in testing prior to starting secondary school) were not affected by the mobile phone bans, but the low-ability students were made better off. This is similar to the effects of laptops in university classes I blogged about last week. Also:
...the impact of the ban is mostly on language-based subjects (English and other subjects), with no impact on mathematics.
Are mobile phones more of a distraction in English classes than in maths? I'm not convinced. This result is also a bit strange:
We find that less strict bans of mobile phones are more effective in raising student test scores than bans that prohibit phones to be on school premises. Exploring the impact of strict and less strict bans by prior student achievement, we find that both are effective in raising the test scores of the lowest performing students, but again that less strict bans are more effective. 
It's hard to see how a weak ban on mobile phones (e.g. students may bring them to school, but they must be on silent mode) would have a greater positive impact on student outcomes than a stronger ban (e.g. students cannot bring mobile phones to school at all). Beland and Murphy suggest that a weak ban may require more teacher time to enforce. Maybe. Perhaps the English teachers are more vigilant at enforcing bans than maths teachers too?

So, this is clearly not the last word on this topic. The negative effect may be small and only marginally statistically significant, but some of the other results need further exploration. Laptops may be bad for learning, but for mobile phones the evidence is not so clear.

[Update]: Peter Lyons says something closely related in this Herald opinion piece.

Monday, 17 April 2017

Climate change and migration in poor and middle-income countries

I just read this 2016 article by Cristina Cattaneo (Fondazione ENI Enrico Mattei, Italy) and Giovanni Peri (UC Davis), published in the Journal of Development Economics (ungated earlier version here). Partway through reading the final published version, I realised that I had already read the earlier version a couple of years ago, so perhaps I should have said I just re-read this article. The article looks at how changes in temperature affect international migration (and later, urbanisation), and makes an interesting argument.

The authors suggest that higher temperatures reduce agricultural productivity, and hence reduce incomes in poor and middle-income countries. These lower incomes encourage more people to migrate to higher income (in this paper, OECD) countries, since the net gains (higher incomes, minus the monetary and non-monetary costs of moving) from migrating are now greater. However, there is a key difference between poor and middle-income countries, and that is the level of income (duh!) and hence savings, which people can use to pay the costs of migration. People in poor countries are less likely to have the necessary financial resources to migrate than are those in middle-income countries. When high temperatures lower incomes (and savings) in poor countries further, even fewer people from these countries would be able to migrate. So, the authors expect to see higher temperatures associated with more migration to rich countries from middle-income countries, but less migration to rich countries from poor countries. And indeed, they find that:
...in very poor countries increasing temperature leads to lower emigration and urbanization rates, while in middle-income countries it leads to larger rates. We also show that long-run temperature increase speeds the transition away from agriculture in middle-income countries. Conversely, it slows this transition in poor countries – worsening the poverty trap – as poor rural workers become less likely to move to cities or abroad. We also find, for middle-income countries, emigration induced by higher temperature is local and is associated with growth in average GDP per person, while the decline in emigration and urbanization in poor countries is associated with lower average GDP per person.
The idea for urbanisation is much the same. Rural folk in poor countries are less able to respond to increased temperatures (and reduced agricultural incomes) by moving to the city in search of work than are rural folk in middle-income countries.
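A toy model makes the asymmetry clearer. In the sketch below (my own illustrative numbers, not anything from the paper), migration happens only when the net gain is positive and the mover can cover the up-front cost:

```python
# Toy illustration (my own numbers, not Cattaneo and Peri's): a temperature shock
# cuts incomes by 20% everywhere, but migration requires both a positive net gain
# AND enough income to pay the up-front cost of moving.
import numpy as np

MIGRATION_COST = 5_000       # assumed up-front cost of moving
DESTINATION_INCOME = 20_000  # assumed income in a rich (OECD) country

def migration_rate(incomes):
    willing = (DESTINATION_INCOME - incomes) > MIGRATION_COST  # net gain is positive
    able = incomes >= MIGRATION_COST                           # can pay the up-front cost
    return (willing & able).mean()

rng = np.random.default_rng(1)
poor = rng.lognormal(np.log(4_000), 0.5, size=100_000)     # poor-country incomes
middle = rng.lognormal(np.log(12_000), 0.5, size=100_000)  # middle-income country

for label, incomes in [("poor country", poor), ("middle-income country", middle)]:
    print(f"{label}: {migration_rate(incomes):.2f} -> {migration_rate(0.8 * incomes):.2f}")
# The shock RAISES the migration rate in the middle-income country (more people
# now gain from moving, and most can still afford it), but LOWERS it in the poor
# country (fewer people can cover the up-front cost).
```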

These arguments do not extend to rich countries though, since agriculture is a much smaller share of national production in rich countries, and a much less important source of income for people, even in relatively rural areas. So, Cattaneo and Peri's results do not contradict mine, for example, where I find that the effects of climate change on internal migration in New Zealand are relatively small (see this post where I discuss my recent working paper on this).

Coming back to developing countries though, it is clear from Cattaneo and Peri's paper that climate change presents a particularly troubling potential poverty trap for the poorest countries.

Sunday, 16 April 2017

Book Review: Merchants of Doubt

I just finished reading the 2010 book by Naomi Oreskes and Erik M. Conway, Merchants of Doubt. The subtitle, "How a handful of scientists obscured the truth on issues from tobacco smoke to global warming", is a pretty fair summary of the book. The authors have completed what appears to be a very thorough investigation of several scientists whose names keep appearing as combative opponents of the broad consensus views of scientists. The authors summarise their findings on pages 6-7:
In case after case, Fred Singer, Fred Seitz, and a handful of other scientists joined forces with think tanks and private corporations to challenge scientific evidence on a host of contemporary issues. In the early years, much of the money for this effort came from the tobacco industry; in later years, it came from foundations, think tanks, and the fossil fuel industry. They claimed the link between smoking and cancer remained unproven. They insisted that scientists were mistaken about the risks and limitations of SDI [the Strategic Defense Initiative, or Star Wars]. They argued that acid rain was caused by volcanoes, and so was the ozone hole. They charged that the Environmental Protection Agency had rigged the science surrounding secondhand smoke. Most recently - over the course of nearly two decades and against the face of mounting evidence - they dismissed the reality of global warming. First they claimed there was none, then they claimed it was just natural variation, and then they claimed that even if it was happening and it was our fault, it didn't matter because we could just adapt to it. In case after case, they steadfastly denied the existence of scientific agreement, even though they, themselves, were pretty much the only ones who disagreed.
The way that these scientists managed to get their views across, and to create the appearance of a lack of consensus among scientists, was by exploiting the journalistic balance required under the 1949 Fairness Doctrine. Under the doctrine, journalists in the U.S. were required to dedicate airtime or column space to presenting issues in a balanced manner, such as by presenting both sides of an argument. This sounds good in theory, but when one side of the argument consists of hundreds (or thousands) of scientists and the other side consists of a handful of naysayers, it is hard to see how the doctrine does anything other than reverse the bias in favour of the minority. And it was effective. The authors note (p.32):
...their ability to invoke the Fairness Doctrine to obtain time and space for their views in the mainstream media was crucial to the impact of their efforts.
This allowed these scientists to create doubt in the minds of policy-makers and the public, making it difficult for these groups to assess the veracity of scientific claims (and counterclaims):
"Doubt is our product," ran the infamous memo written by one tobacco industry executive in 1969, "since it is the best means of competing with the 'body of fact' that exists in the minds of the general public". The industry defended its primary product - tobacco - by manufacturing something else: doubt about its harm. (p.34)
The book goes through a narrative that is mostly chronological, with chapters devoted to tobacco from the 1950s to the 1970s, strategic defense (the Star Wars programme in the 1980s), acid rain and the ozone hole in the 1980s and 1990s, secondhand smoke in the 1990s, global warming from the 1990s onwards, and the recent (at the time the book was written) re-opened debate about DDT (a pesticide banned in the United States in 1972). Oreskes and Conway conclude that:
The link that unites the tobacco industry, conservative think tanks, and the scientists in our story is the defense of the free market.
Throughout our story, the people involved demanded the right to be heard, insisting that we - the public - had the right to hear both sides and that the media had an obligation to present it. They insisted that this was only fair and democratic. But were they attempting to preserve democracy? No. The issue was not free speech; it was free markets. It was the appropriate role of government in monitoring the marketplace. It was regulation. (p.248)
That may well be the case, but I was surprised that the authors singled out Thomas Schelling as one of their bad guys (in the climate change chapter), for identifying the importance of uncertainty in climate projections. And I was even more surprised that they also took issue with William Nordhaus:
What could be done to stop climate change? According to Nordhaus, not much. The most effective action would be to impose a large permanent carbon tax, but that would be hard to implement and enforce. (pp.178-179)
Later on, Oreskes and Conway go on to say:
A handful of economists in the late 1960s had realized that free market economics, focused as it was on consumption growth, was inherently destructive to the natural environment and to the ecosystems on which we all depend. (p.183)
One of those economists? William Nordhaus, though the authors don't note this. [*]

That gripe aside, this book was a really timely read, given the post-truth world we are currently living in, where it seems that even the most robust scientific findings are made to look questionable. If this book had been written now, I wonder if this passage would even need to be re-written:
With the rise of radio, television, and now the Internet, it sometimes seems that anyone can have their opinion heard, quoted, and repeated, whether it is true or false, sensible or ridiculous, fair-minded or malicious. The Internet has created an information hall of mirrors, where any claim, no matter how preposterous, can be multiplied indefinitely. And on the Internet, disinformation never dies. (p.240)
For some more on this, I recommend this Tim Harford post (which, you will probably note, covers some of the same ground as the Merchants of Doubt book).

Overall, despite the minor gripe about Nordhaus and Schelling, I found this an excellent read. It also makes me think about the current situation in New Zealand with alcohol licensing and local alcohol policies, where the industry has funded some limited research. Food for thought.

*****

[*] This bit made me angry enough that the next book I am reading is Nordhaus's The Climate Casino. You can expect a review of that book in due course.

Saturday, 15 April 2017

Who are the global top 1 percent?

That is the title of a new paper by Sudhir Anand (University of Oxford) and Paul Segal (King's College London), and published in the journal World Development (ungated earlier version here). The paper answers the question in the title, and is based on a combination of household survey data and data from the World Top Incomes Database (one of the many famous outputs of Thomas Piketty, Emmanuel Saez, Tony Atkinson and others, and now called the World Wealth and Income Database). The paper is also an update of this earlier paper by the same authors.

There's lots of interesting material in the paper, but here are the headline results:
...the threshold for an individual to enter the global top 1% in 2012 is an annual income of about PPP $50,600 per capita household income, or PPP$202,000 for a family of four. We find that for many developed countries it includes the top 4–8% of their national income distribution. These income groups are likely to include senior professionals and some middle managers as well as business owners and ‘‘supermanagers”... Among developing countries, Brazil has the largest share of its own population in the global top 1%, where 1.5% of its national distribution is in that group...
An individual in the global top 0.1%, on the other hand, has a minimum of PPP$181,000 per capita household income, or about PPP$725,000 for a family of four. This comprises the top 1% in the US, and the top 0.3%-0.5% in Japan, Germany, France and the UK...
The threshold for an individual to enter the global top 10% in 2012 was about PPP$15,300 per capita household income, or PPP$61,000 for a family of four. This income level would not count as "rich" within a developed country: for most developed countries this group includes more than half their populations. For the US the top 60.4% of its population is in the global top 10%, and for Switzerland the corresponding figure is 71.2%.
Anand and Segal also look at changes in inequality over time:
The two decomposable measures, MLD and Theil T, show that within-country inequality was rising up to 2005 — which was offset by declining between-country inequality — but that from 2005 to 2012 even within country inequality declined...
The income shares of the top 10%, the top 1%, and the top 0.1% also rise and then decline, peaking in 2002 for the top 10% and in 2005 for the top 1% and the top 0.1%...
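For those unfamiliar with the jargon, 'decomposable' means that total inequality splits exactly into a within-country component and a between-country component. Here is a minimal sketch of the Theil T decomposition in Python, with made-up incomes:

```python
# Minimal sketch of why the Theil T index is 'decomposable': total inequality
# splits exactly into within-group and between-group parts. Made-up incomes only.
import numpy as np

def theil_t(x):
    s = np.asarray(x, dtype=float) / np.mean(x)
    return np.mean(s * np.log(s))

country_a = np.array([10_000.0, 20_000.0, 30_000.0])
country_b = np.array([40_000.0, 60_000.0, 110_000.0])
world = np.concatenate([country_a, country_b])
mu = world.mean()

# Each country's contribution is weighted by its population and income shares:
within = sum((len(c) / len(world)) * (c.mean() / mu) * theil_t(c)
             for c in (country_a, country_b))
between = sum((len(c) / len(world)) * (c.mean() / mu) * np.log(c.mean() / mu)
              for c in (country_a, country_b))

print(theil_t(world), within + between)  # the two match exactly
```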
I note that declining global inequality is consistent with other recent research (see for example my post here). Anand and Segal find that the turning point for global inequality was around 2005.

The focus of the paper overall is the increasing trend towards people in developing countries joining the global top one percent. Here's what they conclude:
The turning point for the participation of the emerging economies in the global income rich appears to have been around 2005, which mirrors our finding that the advanced economies’ share of WEF [MC: World Economic Forum] attendees peaked in 2006 and has been on a declining trend since then. Moreover, we find that global inequality starts to decline around the same time, and that top 1% income shares within countries start to decline also from 2005. This trend was no doubt sharpened by the global financial crisis in 2008, which is having a lasting effect of slow growth in the advanced countries. But many developing countries were already converging with the developed economies before that point. As long as emerging economies continue to grow faster than the developed countries — which seems likely for the near future — we can expect both trends to continue.
None of that should surprise us.

Friday, 14 April 2017

What to do about students buying essays?

I recently read this 2015 paper by Dan Rigby (University of Manchester), Michael Burton (University of Western Australia), Kelvin Balcombe (University of Reading), Ian Bateman (University of East Anglia), and Abay Mulatu (London Metropolitan Business School), published in the Journal of Economic Behavior & Organization (ungated here). I thought this was an interesting paper because it applied non-market valuation techniques to a good that is actually sold in markets - essays.

The authors use a specific non-market valuation technique that is called discrete choice modelling (which my colleague Riccardo Scarpa is a world-leading expert in, and which I have also been involved in for a couple of projects, including this one). Discrete choice modelling involves presenting the survey participants (in this case, 90 humanities and science students from three UK universities) with a number of hypothetical choices. Each choice involves a number of goods with different attributes (in this case, the attributes included the price of the essay, the quality of the essay in terms of the grade it would receive, the risk of being caught, and the penalty if caught), and often there is also the choice to buy nothing at all. The participants make several choices, which allows us to determine the implicit weighting the participants place on the different values of the attributes.
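To illustrate the mechanics (with my own made-up attribute weights, not Rigby et al.'s estimates), here is a minimal sketch of how a logit-style discrete choice model turns attributes into choice probabilities. Estimation simply runs this logic in reverse, recovering the weights from many observed choices:

```python
# Minimal sketch of discrete choice logic (illustrative weights, NOT estimates
# from Rigby et al.): utilities are weighted sums of attributes, and a logit
# model converts utilities into choice probabilities.
import numpy as np

BETA = {"price": -0.01, "grade_points": 0.8, "detection_risk": -3.0, "penalty": -0.5}

options = {
    "cheap_risky_essay": {"price": 50, "grade_points": 5, "detection_risk": 0.3, "penalty": 2},
    "dear_safe_essay":   {"price": 300, "grade_points": 7, "detection_risk": 0.1, "penalty": 2},
    "buy_nothing":       {"price": 0, "grade_points": 0, "detection_risk": 0.0, "penalty": 0},
}

utilities = np.array([sum(BETA[a] * v for a, v in attrs.items())
                      for attrs in options.values()])
probs = np.exp(utilities) / np.exp(utilities).sum()  # logit choice probabilities
for name, p in zip(options, probs):
    print(f"{name}: {p:.2f}")  # raising detection_risk or penalty shifts weight to buy_nothing
```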

In analysing the data, Rigby et al. use a latent class model. I won't go into the detail underlying this, but essentially it determines how many different types of decision-makers there are, with each type placing different weight on the attributes of the good (in this case, essays). They found that there were two types of students, corresponding to students who were very reluctant to buy essays, and those who were more willing to do so. They also found that:
...half of our subjects indicate a willingness to buy one or more essays in the hypothetical essay choice experiment. Students’ stated willingness to participate in the essay market, and their implicit valuation of purchased essays, vary with the characteristics of student and institutional environment. Risk preferring students, those for whom English is an additional language, and those expecting a lower grade are willing to pay more. Purchase likelihoods and essay valuations decline as the probability of cheats being detected, and the penalties if caught, increase.
There's probably nothing too surprising there. However, why is cheating through buying essays a problem? Because it reduces the signalling value of education. As I wrote in this 2014 post:
One of the key characteristics of a degree or diploma is the signal that it provides to prospective employers about the quality of the applicant for positions they have available. Employers don't know up front whether any particular applicant is good (intelligent, hard working, etc.) or not - there is asymmetric information, since each applicant knows their own quality. One way to overcome this problem is for the applicant to credibly reveal their quality to the prospective employer - that is, to provide a signal of their quality. In order for a signal to be effective, it must be costly (otherwise everyone, even those who are lower quality applicants, would provide the signal), and it must be more costly for the lower quality applicants. Qualifications (degrees, diplomas, etc.) provide an effective signal (costly, and more costly for lower quality applicants who may have to sit papers multiple times in order to pass, or work much harder in order to pass).
In the same way that qualifications are a signal, the grade students receive is also a signal of their quality, because it is harder (more costly, in terms of effort) to get an A grade than a C grade. However, if some students are cheating, then high grades are no longer as effective a signal to employers of students' quality. This is because it is no longer more costly for low-quality students to get an A grade, because any student can do so by buying an essay. This reduces the value of education for everyone.

The overall conclusion by Rigby et al. was, unsurprisingly, that if penalties are high enough, students will avoid buying essays. So, universities should be vigilant, and heavily penalise students who are caught cheating. That reduces the expected net benefit of cheating, and reduces the incentive to buy essays. Gary Becker would be proud.

However, an alternative is to make asymmetric information work for you. Most of the online markets where students buy essays simply link up a willing buyer with a willing essay-writer. The sites themselves mostly don't employ people to write essays directly (and those that do are pretty low quality). So, universities could combat this by flooding the online markets with low-quality rubbish essays. How would this work to reduce cheating?

Students can't be sure about the quality of any essay they buy. Essay writers can't easily prove to students that they will write a high-quality essay. So, there is asymmetric information, but is there adverse selection? I argue yes, since sellers of low-quality essays can take advantage of students' inability to distinguish between low- and high-quality essays.

Will the market fail? Students' willingness-to-pay for an essay is affected by the quality of the essay they expect (as per Rigby et al.'s results above). So, if universities flood the market with lots of low-quality rubbish essays, then students will start to expect lower quality essays and adjust their willingness-to-pay downwards, and may drop out of the market entirely (why bother trying to buy an essay if the quality is highly likely to be rubbish). Sites selling essays will make less money and begin to shut down, since they can't easily prove to students that their essays are high-quality. It probably won't make the market fail completely, [*] but it would reduce the problem and be extremely funny (and no doubt distressing for cheating students).
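Here is a toy simulation of that unraveling mechanism, under my own assumed numbers:

```python
# Toy simulation (my own assumed numbers) of how flooding the essay market with
# rubbish drags down buyers' willingness-to-pay until sellers can't cover costs.

SELLER_COST = 40           # assumed cost of producing a genuine essay
VALUE_OF_GOOD_ESSAY = 200  # assumed value of a high-quality essay to a cheating student

def market_outcome(share_rubbish):
    # Buyers can't tell essays apart, so they pay at most the EXPECTED value:
    willingness_to_pay = (1 - share_rubbish) * VALUE_OF_GOOD_ESSAY
    market_active = willingness_to_pay >= SELLER_COST  # sellers need to cover costs
    return willingness_to_pay, market_active

for share in [0.0, 0.5, 0.9]:
    wtp, active = market_outcome(share)
    print(f"rubbish share {share:.0%}: willingness-to-pay ${wtp:.0f}, market active: {active}")
# rubbish share 0%: willingness-to-pay $200, market active: True
# rubbish share 50%: willingness-to-pay $100, market active: True
# rubbish share 90%: willingness-to-pay $20, market active: False
```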

*****

[*] Readers of a certain vintage will remember the music sharing service Napster. In the last months before Napster was shut down, the music companies (I assume - who else would do this?) started flooding the service with fake MP3 files. This didn't work in terms of shutting Napster down, but it was pretty frustrating for users.

Thursday, 13 April 2017

Strong evidence that laptop use in lectures is bad for learning

A couple of years ago, I wrote a post about how laptop use in lectures was bad for student learning. That post was based on this article (ungated version here). Here's what I said then:
Mueller and Oppenheimer conducted three studies with students in experimental settings...
In terms of results, participants who took notes by hand wrote significantly less than those using laptops, and wrote fewer verbatim notes. There was no statistically significant difference in terms of factual-recall question performance, but students who took notes by hand did significantly better than laptop users on conceptual-application questions...
So, perhaps students would be better off without their laptops in class, even if they are using them diligently for note-taking rather than watching the NBA finals (as I saw one student doing in class last semester).
Now, a new paper by Richard Patterson (US Military Academy) and Robert Patterson (Westminster College), published in the journal Economics of Education Review (sorry, I don't see an ungated version anywhere), provides even stronger evidence that laptop use is bad for student learning. The authors use data from 5571 students from a private liberal arts college over the period 2013-2015.

What really sets this study apart is the care with which Patterson and Patterson design the study in order to identify the causal (rather than correlational) impacts of laptop use on student academic performance. The college they draw their sample from allows lecturers to decide whether to make laptops required in class, prohibited in class, or optional (neither required nor prohibited). Importantly, university policy requires all students to own a laptop. So there is no question of students choosing their courses based on whether laptops are required or not (and the authors confirm this with a student survey, in which only 4 percent mentioned that this was a factor in their course selection decision).

This is where the study gets very smart. The combination of the laptop policies means that students in classes where laptops are optional will be more likely to have them (and use them) if they have one or more other classes on the same day where laptops are required. Patterson and Patterson exploit this, and look at how a student's performance in one class is affected by the laptop policies in that student's other classes with lectures on the same day.

Laptop use in Class A where laptops are required shouldn't affect the student's performance in Class B where laptop use is optional, except through the fact that students will be more likely to have (and use) their laptop. Similarly, laptop non-use in Class C where laptops are prohibited shouldn't affect the student's performance in Class B, except through the fact that students will be less likely to have (and use) their laptop.
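For the methodologically inclined, this is an instrumental variables argument: the same-day policy shifts laptop use, but affects grades only through laptop use. Here is a stylised sketch in Python with simulated data - my own toy numbers, not the authors' model:

```python
# Stylised sketch of the identification logic (NOT Patterson and Patterson's
# actual model). Policies in other same-day classes act as an instrument for
# laptop use in the optional class. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
ability = rng.normal(size=n)                    # unobserved confounder
same_day_required = rng.integers(0, 2, size=n)  # instrument: laptop-required class that day
# Laptop use rises with the instrument, but also with ability:
laptop = (0.5 * same_day_required + 0.3 * ability + rng.normal(size=n) > 0.5).astype(float)
# Assume a true causal effect of laptop use on grades of -0.2 grade points:
grade = 3.0 - 0.2 * laptop + 0.5 * ability + rng.normal(scale=0.3, size=n)

# Naive OLS is biased, because ability drives both laptop use and grades:
ols = np.cov(laptop, grade)[0, 1] / np.var(laptop, ddof=1)
# Wald/IV estimate: reduced-form effect divided by the first-stage effect:
z1, z0 = same_day_required == 1, same_day_required == 0
iv = (grade[z1].mean() - grade[z0].mean()) / (laptop[z1].mean() - laptop[z0].mean())
print(f"OLS: {ols:.2f}, IV: {iv:.2f}")  # the IV estimate lands near -0.2; OLS does not
```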

This allows Patterson and Patterson to make pretty strong claims of causality in their results by looking at the students' results in classes where laptop use is optional and whether they have laptop-required or laptop-prohibited classes on the same day. They also conduct a bunch of robustness checks in their analysis that make their findings quite compelling.

In their main results, they find that:
...having a laptop-required class on the same day increased the probability that a student used a laptop in class by 20.6% or 14.2 percentage points (significant at the 1% level) and having a class that prohibited laptop use on the same day decreased the probability of using a laptop by 48.9% or 36.7 percentage points (significant at the 5% level)...
Our results suggest that computer use has a significant negative impact on course performance, on the scale of 0.14–0.37 grade points or 0.17–0.46 standard deviations... Additionally, we find evidence that computers have the most negative impact on male and low-performing students and in quantitative and major courses.
So, they found that laptop use makes students significantly worse off, and the effects are much larger for male students and for low-performing students, and in quantitative subjects (presumably including economics). So, perhaps we really should be banning laptops from lectures?

Tuesday, 11 April 2017

Right-leaning politicians are better looking, but we know less about right-leaning scholars

Two recently published research papers caught my attention. Both compared the relative attractiveness of people on the right and left of the political spectrum. One of the research papers was good, and one was decidedly less so.

Let's start with the good paper, which was written by Niclas Berggren and Henrik Jordahl (both of the Research Institute of Industrial Economics in Sweden) and Panu Poutvaara (LMU Munich), and published in the Journal of Public Economics (ungated earlier version here). In the paper, they compare the relative attractiveness of politicians on the right and on the left. Why would anyone care about this? The authors explain:
If one side of the political spectrum has a beauty advantage, it can expect greater electoral success and to have political decisions tilted in its favor. We put forward the hypothesis that politicians on the right look better, and that voters on the right value beauty more in a low-information setting. This is based on the observation that beautiful people earn more... and that people with higher expected lifetime income are relatively more opposed to redistribution...
This is a very nice study. The authors first demonstrate that politicians on the right are indeed more attractive than politicians on the left, using data from Australia, the European Union, Finland, and the United States. Then they:
...study beauty premia in municipal and parliamentary elections. The former can be regarded as low-information and the latter as high-information elections, where voters know little and reasonably much, respectively, about candidates. We show that in municipal elections, a beauty increase of one standard deviation attracts about 20% more votes for the average non-incumbent candidate on the right and about 8% more votes for the average non-incumbent candidate on the left. In the parliamentary election, the corresponding figure is about 14% for non-incumbent candidates on the left and right alike.
They argue, using a nice theoretical model, that these differences are based on two things: (1) attractiveness is itself valuable, and voters are more likely to vote for attractive candidates; and (2) attractiveness signals that politicians have views that are further to the right. This explains why the attractiveness premium is greater for politicians on the right in low-information settings (where both effects work in the same direction) than for politicians on the left (where the effects work in opposite directions, since left-preferring voters are more likely to see an attractive left candidate as being to the right of their views).

Finally, the authors confirm their results with an experiment:
Experimental election results confirm the observational findings from real elections. When matching candidates of similar age, the same gender and the opposite ideology in a random manner and asking respondents whom they would vote for solely on the basis of facial photographs (i.e., with low information), we find that candidates on the right win more often because they look better on average. Candidates on the right get a higher vote share, both from voters on the right and voters on the left, but with larger success among the former.
One of the cool things about the study is that the survey group who assessed the attractiveness of the politicians was different from the group who assessed whether each politician was on the left or the right, and different again from the participants in the experiment. This ensures there is no cross-contamination across the study (they also got Europeans to rate the American politicians, and vice versa). The conclusions - that politicians on the right are better looking, that attractiveness is a cue for voters as to a candidate's conservatism (in a low-information election), and that attractiveness confers an extra benefit for a right-leaning politician in a low-information setting - are all fairly robust.

On to the second, not-so-good paper, by Jan-Erik Lönnqvist (University of Helsinki), and published in the journal Personality and Individual Differences (sorry I don't see an ungated version anywhere). This follow-up study (Lönnqvist cites an earlier version of the Berggren et al. paper):
...sought to investigate whether the attractiveness advantage of the political Right is specific to politicians. To investigate this, the attractiveness of Right-leaning scholars (referring to people who professionally engage in mental labor, such as academics or writers) was compared to that of Left-leaning scholars.
The data collection was ok, although the subjective assessments of both the scholars' attractiveness and their political orientation were provided by the same respondents (five research assistants, compared with the hundreds of raters in the Berggren et al. study). However, that isn't the main problem with the study, which is this:
We first regressed the ideological tone of the magazine on ratings of attractiveness and perceived political orientation.
So, in all of Lönnqvist's regression models, actual political orientation is the dependent variable (the variable he is trying to explain), and subjective perception of political orientation is one of the explanatory variables. This creates two big problems that I can immediately see. The first is related to interpretation. I'll try to keep this from getting too pointy-headed, but if you have the actual variable on the LHS of your econometric model, and a subjective measure of the same variable on the RHS, you think you have this model:

[Actual Political Orientation] = f{[Perceived Political Orientation], other stuff}

What that equation says is that actual political orientation is a function of perceived political orientation and some other stuff (which includes attractiveness). But that equation is really just a rearranged version of this:

[Actual Political Orientation - Perceived Political Orientation] = f{other stuff}

So, really this is a model of deviations between actual and perceived political orientation (being a function of other stuff, including attractiveness). Keep that in mind when you read the results:
...perceived political orientation accurately predicted actual magazine-related political orientation. However, physical attractiveness did not.
Then Lönnqvist splits the attractiveness variable into attractiveness and grooming ("the extent to which the target person appeared to have prepared physically for the photograph") and finds:
Perceived political orientation still predicted magazine-related political orientation, but now also physical attractiveness and target grooming predicted magazine-related political orientation.
In other words, there was a negative correlation between attractiveness and right political ideology, and a positive correlation between grooming and right political ideology. Lönnqvist argues this means that left-leaning scholars are more attractive, while right-leaning scholars are better groomed.

However, go back to the equations above. What these results really mean is that the difference between actual and perceived political orientation is more negative for attractive scholars. This may be because attractive scholars are more left-leaning, or because the research assistants who did the rating thought that the attractive scholars were more to the right than they actually were! Notice that this result could easily just be in line with the Berggren et al. results above - people believe that better-looking people have a more right-leaning political orientation.

Similarly, the results demonstrate that the difference between actual and perceived political orientation is more positive for well-groomed scholars. This may be because well-groomed scholars are more right-leaning, or because the research assistants who did the rating thought that the well-groomed scholars were more to the left than they actually were.
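A quick simulation makes the problem concrete. In the toy setup below (entirely my own construction, not Lönnqvist's data), attractiveness is unrelated to actual orientation, but raters perceive attractive people as further to the right - and the regression duly returns a negative coefficient on attractiveness:

```python
# Toy simulation of the interpretation problem (my own construction). Here
# attractiveness has NO relationship with actual orientation, but raters
# perceive attractive people as more right-wing. Regressing actual orientation
# on perceived orientation plus attractiveness then yields a spurious negative
# attractiveness coefficient.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
attractiveness = rng.normal(size=n)
actual = rng.normal(size=n)  # true orientation: independent of looks by construction
perceived = actual + 0.5 * attractiveness + rng.normal(scale=0.5, size=n)  # rater bias

X = np.column_stack([np.ones(n), perceived, attractiveness])
coefs, *_ = np.linalg.lstsq(X, actual, rcond=None)
print(coefs)  # attractiveness coefficient is about -0.4, despite no true link
```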

So, the results of the second paper tell us very little about the attractiveness of scholars and their political ideology. At the very least, we'd need to see the results with the subjective rating of political orientation excluded from the models.

[HT: Weird Science on the NZ Herald, though I had seen the Berggren et al. paper somewhere earlier - I forget the source]

Monday, 10 April 2017

Teacher proposal tries to increase market power

As reported in the New Zealand Herald last week, the Education Council (the professional organisation for teachers) are floating a change that would require all prospective teachers to have a post-graduate qualification. Here's what was reported:
People wanting to become teachers would need to get a post-graduate qualification, under a change floated by the Education Council in a bid to raise the status of teaching.
The change would cover all teachers - early childhood, primary and secondary - and has the backing of universities.
First, it's not at all surprising that this proposal would have the backing of universities. If teachers require post-graduate qualifications, that means more people studying post-graduate qualifications (albeit probably not a whole lot more - see below), which means more fees for universities.

Second, it's really hard not to be cynical and see this as a simple way for existing teachers to increase their market power. If you raise the threshold for new graduates to become teachers, this creates a barrier to entry for graduates into the teaching market. [*] Essentially this restricts the supply of new teachers, because it would cost more to become a teacher (an additional year of student loans and foregone earnings, made even worse by student allowances not being available for post-graduate study). This would make alternative occupations (that are cheaper to get into) just a little bit more attractive. So, while this wouldn't stop everyone considering teaching as a career from following that path, at the margin it will be enough to dissuade some. Which is essentially what the PPTA says in the article:
However, the PPTA is unconvinced, saying teacher supply problems could worsen as a result.
Requiring all future teachers to get a post-graduate qualification would raise the bar for entry into the profession, and would likely be most keenly felt at the primary and ECE level.
Restricting competition makes sellers (in this case, sellers in the labour market for teachers) better off. It raises the bargaining power of the existing teachers, and when schools are looking for new teachers there are fewer options available, so they have to offer slightly higher salaries to attract applicants. Voila! Existing teachers are made better off, whether they are staying at the same school or moving between schools, at the expense of people who would have become teachers but for the higher barrier to entry.

The Education Council argues that this proposal would "raise the status of teaching". However, it's hard to see how it does that, other than by increasing teacher salaries at the margin. It may increase the quality of new teachers, to the extent that you believe a post-graduate qualification adds to their quality (dubious, when compared with the alternative of giving a graduate a year of actual teaching experience).

I'd argue that we probably need to go in precisely the opposite direction, especially for high school teachers of STEM subjects. When my father was at university, he taught chemistry at a high school in Auckland while studying for his Bachelor of Science. That gave him a little bit of free cash flow, and benefited the school by providing a cost-effective way of teaching a subject that was difficult to recruit quality teachers for (and it certainly hasn't gotten any better since then). Why not open up teaching in STEM subjects to final-year graduates (probably economics too), given the teacher shortages in those subjects? Who knows? Some of them may actually enjoy the experience enough to stay on and become teachers (which they wouldn't have done otherwise). Raising the bar for people to become teachers is ridiculous, when you can't fill the vacancies for quality teachers in STEM subjects (and other subjects) now.

*****

[*] Ok, there is already a barrier to entry because teachers require a qualification. However, this proposal raises the barrier to entry even higher.


Saturday, 8 April 2017

Masturbation and partnered sex: Substitutes or complements?

That is the title of a new paper by Mark Regnerus (University of Texas at Austin), Joseph Price and David Gordon (both Brigham Young University), published in the journal Archives of Sexual Behavior (I don't see an ungated version online). The title pretty much explains what the authors are trying to establish, using data on 15,738 adults (aged 18-60 years) from the 2014 Relationships in America study.

Essentially they are testing two competing models. The first is the compensatory model, which:
...holds that masturbation and paired sexual activity are inversely associated; that is, masturbation is an outlet for sexual energy when paired sexual activity is not possible, either due to lack of a partner or the unwillingness or inability of a partner to engage in sex as often as desired.
The compensatory model suggests that masturbation and partnered sex are substitutes. In contrast, the complementary model suggests that they are complements, i.e.
...that paired sex stimulated demand for additional sex and sexual activities, including masturbation.
Past studies have shown that men's behaviour is consistent with the compensatory model, while women's behaviour is consistent with the complementary model. However, Regnerus et al. show that it isn't quite that simple, and that sexual contentment matters. Their results showed that:
Among men who were content with their sexual frequency, we saw few discernable trends in the likelihood of masturbation based on recent sexual frequency... However, the pattern was different for men who were sexually discontented. Among them, the odds of recent masturbation among those who have had sex 2-3 times, or 4 or more times, in the past 2 weeks were significantly lower than those who have not had sex at all in the past 2 weeks.
For women, the pattern appeared to be reversed. The odds of recent masturbation among women who reported being content with their sexual frequency were more than twice as high if they had had sex four or more times when compared to those who not had [sic] any sex in the past 2 weeks... In this way, sex and masturbation again appeared complementary among them. Meanwhile, there was no discernable association, net of controls, between frequency of recent sex and masturbation for women who reported sexual discontentment.
They concluded that:
...the compensatory model modestly fits sexually unsatisfied men, and a complementary model fits sexually satisfied women.
The main downside of this study of course is that it was based on a cross-sectional sample, so the results are correlational, not causal. They had a number of control variables, including most of the obvious ones like whether people were partnered. However, they only ever look at differences between people, and it's possible that there is something systematically different between those who report masturbating and those who don't, which isn't captured by the variables in the model. Having longitudinal (panel) data would go some way (but not all the way!) towards solving this issue, since it would allow you to observe at least some of the people in the sample at times when they are content, and at other times when they are not, and to see whether that affects the likelihood of reporting masturbation. Still, for the moment this is the best study on the topic, and it provides a more nuanced picture than earlier studies.
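To see why panel data would help, here is a rough sketch - simulated data, and a simple linear 'within-person' comparison rather than whatever models the paper itself uses. A stable unobserved trait that raises both behaviours contaminates the between-person comparison; demeaning each person's observations makes the stable trait drop out, so only within-person changes drive the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_waves = 2000, 4
person = np.repeat(np.arange(n_people), n_waves)

# Simulated data (not the study's): a stable unobserved trait raises both
# sex frequency and masturbation, creating a spurious positive association.
trait = np.repeat(rng.normal(size=n_people), n_waves)
sex_freq = trait + rng.normal(size=n_people * n_waves)
true_effect = -0.3  # assume substitution within person, for illustration
masturbation = (true_effect * sex_freq + trait
                + rng.normal(size=n_people * n_waves))

def slope(x, y):
    """OLS slope of y on x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Pooled cross-sectional estimate: contaminated by the stable trait
print("pooled:", round(slope(sex_freq, masturbation), 2))  # ~ +0.2

# 'Fixed effects': demean within person, so the stable trait drops out
def within(v):
    means = np.bincount(person, weights=v) / n_waves
    return v - means[person]

print("within:", round(slope(within(sex_freq),
                             within(masturbation)), 2))    # ~ -0.3
```

Even here the within-person estimate only removes *stable* confounders, which is why panel data goes some way, but not all the way, towards a causal answer.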

[HT: Marginal Revolution]

Wednesday, 5 April 2017

The price tag for a top decile Auckland education is not $2m+

I meant to write a post on this a while back, when Corazon Miller wrote in the New Zealand Herald:
Parents wanting to send their children to some of Auckland's prestigious state schools may be forced to fork out more than $2m to buy a house in the zone, or more than $700 a week in rent - double the cost ten years ago.
Figures from property analysis site Relab.co.nz which analysed 33 Auckland school zones showed buying a home was more costly close to higher decile schools.
Topping the list were two decile nine school zones, Auckland Grammar and Epsom Girls Grammar - with median values topping $2m last year...
Relab marketing director Bill Ma said the figures showed parents what they should be budgeting for.
"The higher decile schools definitely come with a premium price."
Yes, homes in good school zones do sell at a premium for the school zone. But no, the premium is not the whole price. The reason is simple - if you weren't living in a house in the double Grammar zone, you'd have to be living somewhere else, and presumably that somewhere else is not free. To find out the actual premium for school zoning, you have to compare two otherwise-identical houses: one in the zone, and one outside it.

This relies on the concept of hedonic pricing - an idea that was first introduced by Irma Adelman, who sadly passed away in February. Hedonic pricing recognises that when you buy some (or most?) goods, you aren't so much buying a single item as a bundle of characteristics, and each of those characteristics has value. The value of the whole product is the sum of the values of the characteristics that make it up. For example, when you buy a house, you are buying its characteristics (number of bedrooms, number of bathrooms, floor area, land area, location, etc.). When you buy land, you are buying land area, soil quality, slope, location and access to amenities, and so on.
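In its simplest linear form (a standard textbook formulation, to be clear, not anything specific from Adelman's work), the hedonic model says the price of house $i$ is the sum of the implicit prices of its characteristics:

$$P_i = \beta_0 + \sum_{k=1}^{K} \beta_k x_{ik} + \varepsilon_i$$

where $x_{ik}$ is the quantity of characteristic $k$ (bedrooms, land area, school zone, and so on) that house $i$ has, and each coefficient $\beta_k$ is the implicit market price of that characteristic.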

In the case of Auckland, school zone is only one of many characteristics that make up the value of the house. And it is possible to separately identify and value those characteristics if you have good data, as my colleague John Gibson and his wife Geua Boe-Gibson did in this 2014 working paper using data on 8000 houses in Christchurch. They didn't look at school zones, but looked at the value of school outcomes (measured by NCEA pass rates), and found that a standard deviation increase in school performance raises house prices by 6.4%.
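As a rough sketch of how such an estimate works - with simulated data and invented coefficients, not Gibson and Boe-Gibson's actual model - a hedonic regression typically puts the log of the house price on the left-hand side, so a coefficient of about 0.064 on a standardised school-performance variable reads as 'one standard deviation better school, roughly 6.4% higher price':

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8000  # roughly the size of the Christchurch sample

# Simulated house characteristics - every coefficient below is invented
# for illustration, not taken from the working paper.
bedrooms = rng.integers(1, 6, size=n).astype(float)
land_m2 = rng.normal(600.0, 150.0, size=n)
school_z = rng.normal(size=n)  # school performance, in standard deviations

log_price = (12.5 + 0.10 * bedrooms + 0.0004 * land_m2
             + 0.064 * school_z + rng.normal(0.0, 0.2, size=n))

# Hedonic regression: OLS of log price on the bundle of characteristics
X = np.column_stack([np.ones(n), bedrooms, land_m2, school_z])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# With log price on the left, the ~0.064 coefficient on school_z means a
# one-standard-deviation better school adds roughly 6.4% to the price.
print(f"implicit price of school performance: {beta[3]:.3f}")
```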

So overall, the 'price tag for a top decile Auckland education' won't be $2m or more. If we (generously) assumed that the premium was the difference between the $2.09 million median in the Auckland Grammar school zone mentioned in the article above, and the $1.05 million median across Auckland as a whole (as of today), then the premium would be a little over $1 million. However, even that wouldn't be correct, since the median house in the Auckland Grammar zone is clearly not the same as the median house across all of Auckland, and it also has access to different amenities (besides the school zone). So the premium for the Auckland Grammar zone is likely to be substantially lower than $1 million.

Monday, 3 April 2017

Latest research seems to support the Kuznets hypothesis

Simon Kuznets won the Nobel Prize in Economics in 1971 for his work on economic growth, but he made a number of contributions to the discipline. One of these related to the empirical relationship between economic growth and inequality, the so-called Kuznets curve.

Kuznets hypothesised that, at low levels of development, inequality was relatively low. Then, as a country developed, the owners of capital would be the first to benefit because of the greater investment opportunities that the economic growth provided. This would lead to an increase in inequality associated with economic growth. Eventually though, at even greater levels of development the taxes paid by the capitalists would increase, leading to developments such as a welfare state, improved education and healthcare, all of which would improve the incomes of the poor. So, at higher levels of development, inequality would decrease with economic growth. This leads to the Kuznets curve:

[Figure: the Kuznets curve - an inverted U, with inequality on the vertical axis first rising and then falling as the level of development (income per capita) increases along the horizontal axis]
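In empirical work, the inverted U is commonly captured with a quadratic in log income per capita (a standard specification from the later literature, not Kuznets's own formula):

$$G_{it} = \alpha + \beta_1 \ln y_{it} + \beta_2 (\ln y_{it})^2 + \varepsilon_{it}, \quad \text{with } \beta_1 > 0 \text{ and } \beta_2 < 0$$

where $G_{it}$ is country $i$'s Gini coefficient in year $t$ and $y_{it}$ is its income per capita. With those signs, inequality peaks at $\ln y^* = -\beta_1 / (2\beta_2)$ and declines thereafter.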
The empirical support for the Kuznets hypothesis was drawn from the experience of western developed countries since the Industrial Revolution. However, the relevance of the experiences of western countries to the current development trajectories of developing countries is fairly limited, so the Kuznets hypothesis has been called into question. For instance, in China economic growth has been high for the last few decades, and yet to date inequality has shown little sign of declining.

However, in a recent Bloomberg View article, Noah Smith pointed to two new pieces of research that seem to provide new support for the Kuznets hypothesis. He wrote:
...recent evidence is coming down on the side of Kuznets.
In Latin America, inequality has been falling for over a decade. A recent study by economists Nora Lustig, Luis Felipe López-Calva and Eduardo Ortiz-Juarez found that almost all Latin American countries became more economically equal from 2002-2012...
Only in Honduras did inequality go up during this period...
Lustig and her colleagues found that government transfers and pensions accounted for between 21 and 26 percent of the decline in inequality. That’s important, but it’s far from the whole story. The biggest share of the improvement, by far, was caused by reductions in wage inequality. Lustig's group connected this to increasing levels of education -- in the 1990s, a lot more Latin Americans started going to school. Economic growth has been another reason for increasing wages...
As for China, there are signs that inequality there has peaked as well. A recent study by economists Ravi Kanbur, Yue Wang and Xiaobo Zhang combed through China’s notoriously murky data and found that the Gini coefficient declined to 0.495 in 2014 from 0.533 in 2010.  That's a high level of inequality by international standards, but a trend in the right direction.
The jury may be called back in on the Kuznets hypothesis. It seems it cannot quite be written off yet.

[HT: Marginal Revolution]