Thursday, 27 October 2016

Nurses and teachers should be paid more in Auckland

I was surprised to learn this week that nurses, teachers, and police officers get paid the same salary regardless of where they are employed. This came to my attention through stories like this one, about teachers and nurses leaving Auckland because of the high cost of living:
Teachers aren't the only ones struggling with Auckland's overheated housing market, as staff in other industries look to live and work in more affordable regions.
Mount Albert Grammar School in central Auckland is losing three of its young science teachers at the end of the year, with the cost of living in Auckland a significant factor, but the nursing industry is also feeling the pinch.
Both teachers and nurses get paid on the same pay-scale across New Zealand, with pay likely to go further in the regions than in Auckland - something New Zealand Nurses Organisation industrial adviser Lesley Harry said was pushing workers out of the city.
Why is it surprising to me that there are common national pay scales for nurses and for teachers, regardless of the location of their employment? Because of compensating differentials. As I noted in a post in 2013:
Some jobs have desirable characteristics, while other jobs have undesirable characteristics. Jobs with desirable characteristics attract more workers, increasing labour supply and lowering wages relative to jobs with undesirable characteristics. In other words, workers are compensated (with higher wages) for undertaking jobs that have undesirable characteristics. The standard example I use is welders - on oil rigs, welders can earn several times higher salaries than they can onshore. They are essentially being compensated for a number of undesirable characteristics of the job though - long hours, high stress environment, long periods away from family, and higher risk of death or injury.
Living in Auckland might not be a negative characteristic of a job in its own right (or maybe it is for some people), but a high cost of living is negative, as is a long (and costly, both in terms of money and in terms of time) commute to work. So, jobs in Auckland necessarily come with undesirable characteristics relative to otherwise-identical jobs in the regions, where the cost of living is lower and the commute to work is less arduous.

So, I would expect that nurses and teachers should be paid more to endure the high cost of living and long commutes associated with employment in Auckland. The fact that they're not is certainly going to lead to more of the outward migration described in the article (and other recent stories).

Sunday, 23 October 2016

How do assessment and grading practices at Waikato compare with U.S. universities?

Bill Walstad and Laurie Miller (both University of Nebraska-Lincoln) have a new paper in the latest issue of the Journal of Economic Education, summarising grading policies and practices across the U.S. (sorry, I don't see an ungated version anywhere). The data comes from a survey of 175 instructors of principles courses (whether microeconomics, macroeconomics, or combined).

The most interesting thing about this paper was the comparison with our two first-year economics papers at the University of Waikato. Now, we don't actually teach principles of microeconomics in a single paper. Instead, we have ECON100, which is essentially business economics, and ECON110, which is more of a survey course with a welfare economics flavour. However, both fill a similar role to principles courses as taught at U.S. institutions. So, in this post I look at how they compare.

First, Walstad and Miller look at average grades, and find that the average grade is a B-. Now, a direct comparison of grades between U.S. and New Zealand institutions is somewhat unhelpful, but it is interesting that the average grade for ECON100 is usually in the high B- range (on the Waikato grading scale), and ECON110 is usually in the mid-to-high B range. So at least on the surface, things appear similar there.

In terms of grading practices, Walstad and Miller note:
The grading policies that instructors adopt to determine a grade are quite different across instructors. The majority of instructors (74 percent) calculate grades based on an absolute standard such as the percentage of points earned in a course.
Count us in the majority for both ECON100 and ECON110. Next, Walstad and Miller look at grade adjustments. They note:
Regardless of whether an absolute or relative standard is used as the grading policy, student grades can be adjusted at the margin... An instructor can decide at the end of the course to give students bonus points to increase the class average or for meeting some requirement, such as having excellent attendance. The bonus adjustment, however, seems to be more the exception than standard practice because it is used by only 15 percent of instructors.
Again, we are in the majority here for both ECON100 and ECON110, but then:
Another type of positive adjustment is to increase a grade near a grade cutoff. This cutoff adjustment is more widely used than the bonus adjustment because while 13 percent say that they will increase a grade if it is very close to a cutoff, another 56 percent replied that maybe they would increase a grade.
This is something we often do. Given that marks are measured with some error, it makes sense to give students who are on the boundary of a grade the benefit of the doubt (in most cases - if a student wasn't attending class or missed handing in assessments, we are less inclined to move them over the grade boundary).

Next, Walstad and Miller look at extra credit:
What is more popular for increasing grades than bonus points or a cutoff bump among almost half of instructors (46 percent) is to give students extra credit for some type of activity or project. The ones most often given extra credit are for participating in an instructor-sanctioned event or activity outside of class (46 percent), such as attending a guest lecture on an economic topic. Also considered highly valuable is doing something extra for class such as writing something (35 percent); taking an extra quiz, homework, or class assignment (14 percent); or bringing new material to class (10 percent). Students also can be rewarded with extra credit points for good attendance (10 percent); contributing to a class discussion (6 percent); or participating in a class experiment, game, or project (5 percent)...
Among the 46 percent of instructors who use extra credit, the average allocation to the course grade is 4 percent, and the median is 2.5 percent, indicating that the percentage allocation for extra credit is positively skewed, but with large clumps of responses at 3 percent (29 percent) and 5 percent (20 percent). 
I have given extra credit in ECON110 for the last few years, both for attendance (at randomly-selected lectures) and for completing in-class experiments, exercises and short surveys (where the data will be used later in the same topic or a later topic). This semester in ECON100, we gave extra credit for the first time, for being in class for spot quizzes in randomly-selected lectures. In ECON110, extra credit could be worth up to 3 percent of a student's overall grade, and in ECON100 up to 2.5 percent. So, even though we are in the (large) minority here, both classes are around the median in terms of the weighting of extra credit in the overall grade.

Lastly, Walstad and Miller summarise the types of assessment used:
Exams constitute the largest component of a course grade (65 percent). The number of exams that are administered can range from as few as one to as many as six, but the majority of instructors give three exams, in which case the exam grade weights are 30, 30, and 40. When a final exam is given, it is most likely comprehensive (69 percent) rather than limited to the content covered since the previous exam (31 percent). The predominant type of questions on these exams is multiple-choice (56 percent) followed by numerical or graphical problems (21 percent) and short-answer questions (15 percent). Very few instructors (8 percent) allow students to retake an exam to improve their score, and if they do, the exam is a different one from what they previously took.
The other contributions to a course grade come from homework or problem sets (15 percent) and quizzes (10 percent). The majority of the type of items for homework or problem sets are numerical or graphical problems (49 percent) followed by multiple-choice items (26 percent) and short-answer questions (16 percent). By contrast, multiple-choice items are more likely to be used for quizzes (50 percent) than problems (18 percent) or short-answer questions (14 percent).
Exams here include tests, so ECON100 (80 percent of the grade from tests and exam, weighted 15+15+50) and ECON110 (60 percent from tests, weighted 30+30) are similar to the U.S. institutions. Both the ECON100 exam and the ECON110 final test are comprehensive. ECON100 is predominantly multiple-choice (60 percent), with the rest short-answer or graphical problems, while ECON110 has no multiple choice at all - it is entirely short-answer, numerical, or graphical problems. We don't allow students to retake an exam or test.

Where we differ most from the U.S. institutions is in the use of homework or problem sets. ECON100 doesn't use these at all, but does have quizzes (using the online testing system Aplia). ECON110 doesn't have quizzes but does have weekly assignments (similar to homework or problem sets, but more applied).

Overall though, to me it seems that in both ECON100 and ECON110 we are following current practice in U.S. institutions in our grading and assessment practices. So our students can feel pretty confident we are following best practice. Phew!

Friday, 21 October 2016

A cautionary tale on analysing classroom experiments

Back in June I wrote a post about this paper by Tisha Emerson and Linda English (both Baylor University) on classroom experiments. The takeaway message (at least for me) from the Emerson and English paper was that there is such a thing as too much of a good thing - there are diminishing marginal returns to classroom experiments, and the optimal number of experiments in a semester class is between five and eight.

Emerson and English have a companion paper published in the latest issue of the Journal of Economic Education, where they look at additional data from their students over the period 2002-2013 (sorry I don't see an ungated version anywhere). In this new paper, they slice and dice the data in a number of ways that differ from the AER paper (more on that in a moment). They find:
After controlling for student aptitude, educational background, and other student characteristics, we find a positive, statistically significant relationship between participation in experiments and positive learning. In other words, exposure to the experimental treatment is associated with students answering more questions correctly on the posttest (despite missing the questions initially on the pretest). We find no statistically significant difference between participation in experiments and negative learning (i.e., missing questions on the posttest that were answered correctly on the pretest). These results are consistent with many previous studies that found a positive connection between participation in experiments and overall student achievement.
Except those results aren't actually consistent with other studies, many of which find that classroom experiments have significant positive impacts on overall learning. The problem is the measure "positive learning". This counts the number of TUCE (Test of Understanding of College Economics) questions the students got wrong on the pre-test, but right on the post-test. The authors make a case for this positive learning measure as a preferred measure rather than the net effect on TUCE, but I don't buy it. Most teachers would be interested in the net, overall effect of classroom experiments on learning. If classroom experiments increase students' learning in one area, but reduce it in another, so that the overall effect is zero, then that is the important thing. Which means that "negative learning" (the number of TUCE questions the students got right on the pre-test, but wrong on the post-test) must also be counted. And while Emerson and English find no effect on negative learning, when they run the analysis on the net overall change in TUCE scores (which you can get by subtracting their negative learning measure from their positive learning measure), they find that classroom experiments are statistically insignificant. That is, there is no net effect of classroom experiments on students' performance on the TUCE.
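To make the distinction concrete, here is a minimal sketch of how the three measures relate (the data are made up; this is not the authors' code):

```python
# Hypothetical pre-test and post-test answers for one student (True = correct).
pre  = [True, False, False, True, True]
post = [True, True, False, False, True]

positive = sum((not p) and q for p, q in zip(pre, post))  # wrong -> right
negative = sum(p and (not q) for p, q in zip(pre, post))  # right -> wrong
net = positive - negative                                 # net change in TUCE score

print(positive, negative, net)  # 1 1 0: some 'positive learning', but no net gain
```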

Next, Emerson and English start to look at the relationship between various individual experiments and TUCE scores (both overall TUCE scores and scores for particular subsets of questions). They essentially run a bunch of regressions, where in each regression the dependent variable (positive or negative learning) is regressed against a dummy variable for participating in a given experiment, as well as a bunch of control variables. This part of the analysis is problematic because of the multiple comparisons problem - when you run dozens of regressions, you can expect one in ten of them to show your variable of interest is statistically significant (at the 10% level) simply by chance. The more regressions you run, the more of these 'pure chance' statistically significant findings you will observe.

Now, there are statistical adjustments you can make to the critical t-values for statistical significance. In the past, I'm as guilty as anyone of not making those adjustments when they may be necessary. At least I'm aware of it though. My rule of thumb in these cases where multiple comparisons might be an issue is that if there isn't some pattern to the results, then what you are observing is possibly not real at all and the results need to be treated with due caution. In this case, there isn't much of a pattern at all, and the experiments that show statistically significant results (especially those that are significant only at the 10% level) are showing effects that might not be 'real' (in the sense that they may simply be pure chance results).
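If you want to see the multiple comparisons problem in action, here is a quick simulation (a sketch of the general point only - it uses no real data):

```python
import numpy as np

rng = np.random.default_rng(42)
n_regressions, n_obs = 30, 200
false_positives = 0

for _ in range(n_regressions):
    # The treatment dummy and the outcome are independent by construction,
    # so any 'significant' relationship here is pure chance.
    treated = rng.integers(0, 2, n_obs).astype(bool)
    outcome = rng.normal(size=n_obs)
    diff = outcome[treated].mean() - outcome[~treated].mean()
    se = np.sqrt(outcome[treated].var(ddof=1) / treated.sum()
                 + outcome[~treated].var(ddof=1) / (~treated).sum())
    false_positives += abs(diff / se) > 1.645  # 10% two-sided critical value

print(f"{false_positives} of {n_regressions} spuriously significant at the 10% level")
# Expect about 3 of 30 by chance alone. A Bonferroni adjustment would test each
# regression at the 10/30 = 0.33 percent level instead, making false positives
# rare (at the cost of statistical power).
```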

So, my conclusion on this new Emerson and English paper is that not all classroom experiments are necessarily good for learning, and the overall impact might be neutral. Some experiments are better than others, so if you are limiting yourself to five (as per my previous post), this new article might help you select those that may work best (although it would be more helpful if they had been more specific about exactly which experiments they were using!).

Thursday, 20 October 2016

New research coming on surf rage in New Zealand

Last month I wrote a post about the escalation of surf gang violence:
Surf breaks are a classic common resource. They are rival (one surfer on the break reduces the amount of space available for other surfers), and non-excludable (it isn't easy to prevent surfers from paddling out to the break). The problem with common resources is that, because they are non-excludable (open access), they are over-consumed. In this case, there will be too many surfers competing to surf at the best spots.
The solution to the problem of common resources is to somehow convert them from open access to closed access. That is, to make them excludable somehow. And that's what the surf gangs do, by enforcing rules such as 'only locals can surf here'.
Now a Massey PhD student is starting a new study on 'surf rage' in New Zealand. The Bay of Plenty Times reports:
The surf at Mount Maunganui will be used as a location to explore surf rage - with locals saying it is real.
Massey University PhD student Jahn Gavala said surf rage, with surfers protecting their local surf and leading to intimidation and physical assault, was prevalent across New Zealand.
"People have ownership of, or mark certain spaces in the surf zones. They form packs of surfers. They use verbal intimidation, physical intimidation and the raging is being physically beaten up - boards broken, cars broken."
Mr Gavala planned to observe surfers at six top surf breaks including Mount Maunganui over summer.
Seems like a good excuse to hang out at the beach and call it research. On a more serious note though, I hope Gavala reads the extensive work of Elinor Ostrom on private solutions to common resource problems, of which surf rage is one example.

Wednesday, 19 October 2016

Brexit and the chocolate war

I've avoided adding to the sheer volume of stuff that's been written about Brexit. However, in this case I'm willing to make an exception. The New Zealand Herald recently ran a story about the reopening of the 'chocolate war':
A 30-year battle between Britain and the European Union over chocolate, which was settled by a court ruling only in 2003, could reopen when the UK quits the bloc, former Deputy Prime Minister Nick Clegg warned Monday.
British chocolate manufacturers fought for the right to sell chocolate containing vegetable fat, which their continental competitors said was not as pure as the products they were marketing and should be branded "vegelate" or "chocolate substitute."
In 2000 a compromise was reached to call it "family chocolate" and the European Court ordered Italy and Spain, the most vociferous opponents, to allow its sale three years later.
"The chocolate purists, I guarantee, will quite quickly start fiddling with the definition of chocolate to make it much more difficult for British exporters to export elsewhere in Europe," Clegg said after a speech in central London...
Arguments over "common definition" will sit alongside tariff barriers and customs controls as obstacles to British food and drink manufacturers if Britain leaves the EU single market, Clegg said as he introduced a report on the UK's 27 billion pound (NZ$46 billion) food and drink sector.
It seems somewhat obvious that Brexit will lead to an increase in trade barriers between Britain and the European Union. However, most people are concentrating on the implications in terms of tariffs (essentially, taxes on imports or exports that make traded goods more expensive).

Fewer people are considering the rise of non-tariff trade barriers. Non-tariff trade barriers exist where the government privileges local firms (or consumers) over the international market, but does so without direct intervention (such as tariffs or quotas). Because they don't involve an explicit tariff or quota, these trade barriers are somewhat hidden from view. However, having rules that prevent UK chocolate from being sold or marketed as chocolate in the European Union would certainly fit the definition, given that it would make it difficult (if not impossible) for British chocolate manufacturers to export to Europe (at least without renaming their products 'vegelate' - yuck!).

Other than the rekindling of the 'chocolate war', I wonder how many other non-tariff trade barriers will arise after Brexit is triggered?

Tuesday, 18 October 2016

Explaining changes in the price of chicken

I just love how the simple economics we teach in ECON100 and ECON110 can explain things we see in the newspaper. The simple workhorse model of supply and demand is a pretty useful tool for this. Take this article from the New Zealand Herald last week on chicken prices:
Enjoy cheap chicken prices while they last.
That's the message consumers can take from a sharebroker's report that says a glut in New Zealand's favourite meat will shortly come to an end.
Average prices for fresh chicken pieces were 9 per cent lower in August than in March, according to the First NZ Capital research.
And whole frozen chickens were 16 per cent cheaper.
NZ poultry production rose 11 per cent year-on-year in the 12 months to June 2016, to reach 210,000 tonnes, according to the report.
First NZ said "oversupply conditions" had resulted in a build-up of frozen chicken inventory.
But the glut is expected to recede in the next few months as operators adjust production, according to the report.
And here's the simple supply and demand model at work, in the figure below. In March, the market is operating with demand D0 and supply S0, with an equilibrium price of P0 and quantity Q0. Chicken production increases, shifting the supply curve to the right (to S1). The price of chicken falls to P1 (9 per cent lower than March, according to the quote above), while the quantity of chicken traded increases to Q1.
[Figure: the market for chicken - demand D0, supply shifts right from S0 to S1, price falls from P0 to P1, quantity traded rises from Q0 to Q1]
Then, "as operators adjust production" (by reducing supply back towards S0), the price of chicken can be expected to rise (back towards P0). Nice!

Sunday, 16 October 2016

Which asylum seekers do Europeans want?

The latest issue of Science has an interesting article by Kirk Bansak, Jens Hainmueller, and Dominik Hangartner (all Stanford; Hangartner is also at London School of Economics) on the topic of European attitudes towards asylum seekers (sorry I don't see an ungated version anywhere). What caught my attention was the method employed.

Most studies of attitudes to migrants (or refugees, or asylum seekers) would simply ask a straightforward question measured on a Likert scale. Bansak et al. instead use a conjoint experiment (very similar to discrete choice modelling, which I've written about before). They explain:
To provide such an assessment, we designed a conjoint experiment and embedded it in a large-scale online public opinion survey that we fielded in 15 European countries...
Conjoint experiments ask subjects to evaluate hypothetical profiles with multiple, randomly varied attributes and are widely used in marketing and, increasingly, in other social science fields to measure preferences and the relative importance of structural determinants of multidimensional decision-making... Specifically, we used a conjoint experiment to ask 18,000 European eligible voters to evaluate 180,000 profiles of asylum seekers that randomly varied on nine attributes that asylum experts and the previous literature have identified as potentially important... This design allows us to test which specific attributes generate public support for or opposition to allowing asylum seekers to stay in the host country and how this willingness varies across different groups of eligible voters, countries, and types of asylum seekers.
This is actually a very cool idea, and implemented in a very large sample size (conjoint experiments are more often run with samples in the hundreds, but here they have 18,000). The findings are many, and I encourage you to read the paper (if you have access). Here's what the authors say:
The results demonstrate that European voters do not treat all asylum seekers equally. Instead, the willingness to accept asylum seekers varies strongly with the specific characteristics of the claimant. In particular, preferences over asylum seekers appear to be structured by three main factors: economic considerations, humanitarian concerns, and anti-Muslim sentiment.
To summarise, they found that doctors, teachers, and accountants were more acceptable as asylum seekers than 'lower' occupations like cleaners, who were in turn more acceptable than the unemployed. Language skills were important, with much lower acceptance of asylum seekers who had 'broken' or no host-country language skills. Asylum seekers who applied because of political, religious, or ethnic persecution were much more acceptable than those who applied because of economic opportunities. The vulnerable (e.g. torture victims) were also more acceptable as asylum seekers. Religion mattered a lot - Christians were most acceptable, agnostics less so, and Muslims least of all. Female asylum seekers were preferred over males, and younger asylum seekers were preferred over older asylum seekers. Country of origin didn't appear to matter nearly as much as the other factors above.

The results (in terms of the factors associated with asylum seeker acceptability) didn't appear to differ much between the 15 countries included in the study, nor did they vary much by education (of the survey respondents), income, or age. Those might be the most surprising results of all.
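If you are wondering what the randomised profiles look like in practice, here is a minimal sketch of how they might be generated (the attributes below are an illustrative subset of the nine used in the paper; this is not the authors' code):

```python
import random

ATTRIBUTES = {
    "occupation": ["doctor", "teacher", "cleaner", "unemployed"],
    "language skills": ["fluent", "broken", "none"],
    "reason for migration": ["political persecution", "economic opportunity"],
    "religion": ["Christian", "agnostic", "Muslim"],
}

def random_profile(rng):
    """Draw one hypothetical asylum-seeker profile, each attribute independently."""
    return {attribute: rng.choice(levels) for attribute, levels in ATTRIBUTES.items()}

rng = random.Random(1)
for profile in (random_profile(rng) for _ in range(2)):
    print(profile)
# A respondent sees a pair of profiles side by side and chooses which (if either)
# should be allowed to stay; regressing choices on attribute dummies then recovers
# the effect of each attribute on acceptance.
```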

[HT: David McKenzie at Development Impact]

Saturday, 15 October 2016

Police are not winning the P war - they need to focus on demand

Just a quick follow-up on yesterday's post, where I reviewed the excellent Tom Wainwright book, "Narconomics: How to Run a Drug Cartel". Last week, the New Zealand Herald had a front page story about the drug (read: methamphetamine, or P) war in New Zealand:
Police Association president Greg O'Connor said despite several big drug busts in recent months, anecdotal evidence from front line officers suggested the country now had a greater problem with the drug than ever before...
Police announced yesterday they had seized $17 million worth of the drug following a seven-month investigation. And in June almost $500 million worth of meth was discovered in Kaitaia - the biggest P haul in New Zealand history.
But O'Connor said despite such significant stings, they seemingly had no impact on the price - or availability - of the drug.
"We've got a major issue," he said.
"We're having a second wave now."
"The first wave was at the end of the 90s. It sort of caught New Zealand by surprise - the policies were way behind."
For those of you who read yesterday's post, or my post from back in March, or this post from 2015, this should come as no surprise. When you target sellers, you may increase the price, which simply increases profits and encourages more sellers to step into the market. At least this point doesn't appear to be totally lost on the police, with both demand-side and supply-side policies featuring in their 'wish-list':
Asked whether we had made a dent in the war against P, O'Connor replied: "It doesn't appear so."
More rehabilitation services were needed for those battling meth addiction.
A shortage of organised crime policing, particularly in the provinces, was also a problem, he said. A police spokeswoman said law enforcement agencies worldwide were facing problems with meth.
"But stamping out meth is not police's job alone. It requires law enforcement and social agencies to work together. That's what we're doing under the Prime Minister's Meth Action Plan."
She also noted work around the Government's gang action plan aimed at targeting and dismantling gang activity.
"These are all valuable multi-agency tools that help us to combat meth in NZ. We've had some great results so far, but we recognise there's still more work to be done."
That additional work had best focus on the demand side of the market.

Friday, 14 October 2016

Book review - Narconomics

Back in March I promised a review of Tom Wainwright's new book, "Narconomics: How to Run a Drug Cartel". I finished reading it last week, and although I'm not sure that it has fully equipped me to run a drug cartel, it certainly contains lots of interesting parts. Below I share some of the highlights (at least, to me).

Chapter 1 discusses the supply chain for cocaine, and simply reiterates the futility of governments targeting supply in the war on drugs. Here is one bit:
Because cartels depend on coca leaf to make their cocaine, governments have targeted coca plantations as a means of cutting off the business at its source. Since the late 1980s, the coca-producing countries of South America, backed by money from the United States, have focused their counternarcotic efforts on finding and destroying illegal coca farms. The idea is a simple economic one: if you reduce the supply of a product, you increase its scarcity, driving up its price... Governments hope that by chipping away at the supply of coca, they will force up the price of the leaf, thereby raising the cost of making cocaine. As the price of cocaine rises, they reason, fewer people in the rich world will buy it.
Wainwright then points out the main flaws in this argument. First, this is a giant game of whack-a-mole. Governments target coca producers in Peru, and production simply moves across to Colombia. When coca producers are targeted in Colombia, they move back to Peru. And so on. Second, the drug cartels are monopsonies - buyers with substantial market power. It is local farmers who grow the coca (not the cartels themselves), and since the farmers can only sell their illegal coca crop to the cartels, the cartels are able to dictate the price. So, even if coca eradication efforts are successful, they don't much affect the price that the cartels pay for the raw product. Third, even if the price of the raw material increases, it will have almost no effect on the street price of cocaine. Wainwright notes that the markup on cocaine is more than 30,000 percent (from farm-gate price to street price). So, even if government efforts managed to treble the farm-gate price of coca, the street price of cocaine would increase by only 0.6 percent - a trivial change. The takeaway is something I've noted before - targeting demand is likely to be more effective than targeting supply.
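Wainwright's arithmetic is easy to verify; a sketch (normalising the farm-gate price to one):

```python
markup = 300.0                     # a 30,000 percent markup: street price is ~301x farm-gate
farm_gate = 1.0                    # normalise the farm-gate price to 1
street = farm_gate * (1 + markup)  # = 301

# Treble the farm-gate price, holding the cartel's other costs constant.
new_street = street + 2 * farm_gate
rise = 100 * (new_street - street) / street
print(f"street price rises by {rise:.2f}%")  # ~0.66% - a trivial change indeed
```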

The second chapter looks at competition and collusion in the drug supply chain, and has a really interesting bit on gang tattoos:
The defining feature of El Salvador's young mareros is their head-to-toe tattoos. Like Old Lin, nearly all gang members sport body art declaring their allegiance to either the Salvatrucha or Barrio 18... Once a young man has become a member and has gotten his body covered in Salvatrucha tattoos, defecting to join Barrio 18 is out of the question, and vice versa. Even leaving the mara to start a new, noncriminal career is virtually impossible, as employers tend to be perturbed by job candidates who show up for an interview with skulls and crossbones etched on their foreheads. In economic terms, this means that whereas Mexican gangbangers are highly footloose, liable to change sides to work for whichever cartel seems to be stronger or higher paying, the labor market for Salvadoran mareros is completely illiquid.
I see this as gang tattoos acting as a form of credible commitment by the mareros. In a simultaneous game, where the marero chooses whether to be loyal or not and the gang must decide whether to trust the marero or not, the marero can make a credible commitment to be loyal by covering themselves in tattoos. Note that this is also a form of signalling - revealing private information about their loyalty to the gang - as only the truly loyal would go to the trouble of getting head-to-toe tattoos.
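You can see the commitment logic in a toy 2x2 simultaneous game (all payoffs below are invented for illustration):

```python
# Payoffs are (marero, gang). Without tattoos, defecting to a rival while
# trusted is the marero's best outcome, so the gang cannot trust him.
def pure_nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a 2x2 simultaneous game."""
    equilibria = []
    for m in ("loyal", "defect"):
        for g in ("trust", "distrust"):
            pm, pg = payoffs[(m, g)]
            m_best = all(payoffs[(m2, g)][0] <= pm for m2 in ("loyal", "defect"))
            g_best = all(payoffs[(m, g2)][1] <= pg for g2 in ("trust", "distrust"))
            if m_best and g_best:
                equilibria.append((m, g))
    return equilibria

no_tattoos = {
    ("loyal", "trust"): (5, 5), ("loyal", "distrust"): (2, 2),
    ("defect", "trust"): (8, -5), ("defect", "distrust"): (3, 0),
}
tattoos = dict(no_tattoos)
# Head-to-toe tattoos destroy the outside option: no rival gang (or employer)
# will take the defector, so defection now carries a heavy payoff penalty.
tattoos[("defect", "trust")] = (-5, -5)
tattoos[("defect", "distrust")] = (-5, 0)

print(pure_nash_equilibria(no_tattoos))  # [('defect', 'distrust')]
print(pure_nash_equilibria(tattoos))     # [('loyal', 'trust')] - commitment pays
```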

Chapters 3 and 4 talk about the human resource management issues of cartels, and their corporate social responsibility activities (yes, you read that right), while Chapter 5 talks about international outsourcing (or offshoring) and Chapter 6 covers franchising. I didn't find too much of particular interest in those chapters, though the chapter on franchising did raise some questions for me about whether international terror groups are also undertaking a form of franchising.

Chapter 7 covers the legal highs industry, with particular reference to New Zealand, and Chapter 8 talks about digital disruption. In the latter chapter, I found the discussion of drugs as a 'network good' of interest. Network goods are goods that can only be bought or sold if you belong to a particular network. Here's one bit:
Under these conditions, life is good for the established dealer. A key feature of network markets is that they tend to work strongly in favor of incumbents, who have had time to build up the biggest and strongest networks. Picture the stable, longtime drug dealer, who has been supplying the same city for years. He knows the importers. He has a long list of clients. He may even have contacts in the police whom he pays to turn a blind eye to his business. Now picture the young upstart, someone who spots that the local market is uncompetitive, with watered-down drugs being sold at high prices. It ought to be easy to enter the market and win some business. But entering the drugs markets - a network economy - isn't so easy. Buying wholesale quantities of illegal drugs requires a rare set of high-level contacts. Selling them in smaller quantities requires a second, larger set of potential buyers. Without a network to buy from and sell to, the new dealer won't get far (and that is before even thinking about the possibility that the established dealer may not take kindly to someone else operating on his patch).
Of course, digital disruption means that whole new networks are being created online, and the chapter talks about the marketplaces on the 'dark web'. Chapter 9 talks about the diversification of the cartels, including from drug smuggling to people smuggling. Chapter 10 talks about the legalisation of cannabis in several U.S. states, and how that is affecting cartel business.

Wainwright concludes with what he sees as the main mistakes in official efforts to tackle the drugs industry: (1) the obsession with supply (see above); (2) saving money early on and paying for it later (prevention is much cheaper than cures, but cures win votes); (3) acting nationally against a global business (see the note on whack-a-mole above); (4) confusing prohibition with control (simply making something illegal is not a solution in and of itself).

Overall, I found this to be an excellent, well-researched book that maintained my interest throughout. I recommend it to anyone who wants to know more about the drugs trade, and how the economics (and business management) concepts we teach in business schools apply in that industry.

Tuesday, 11 October 2016

Nobel Prize for Oliver Hart and Bengt Holmström

The 2016 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (aka the Nobel Prize in Economics) has been awarded to Oliver Hart and Bengt Holmström, "for their contributions to contract theory". Excellent coverage of their contributions to economics is provided by Tyler Cowen (here for Hart, and here for Holmström). See also here for a more accessible summary of their work (also by Cowen), as well as this piece by Noah Smith. Note that much of their work is quite theoretical, and there are definite links to Jean Tirole (who won the prize in 2014).

My ECON100 and ECON110 students might recognise some of their work in our discussions of moral hazard, principal-agent problems, and performance pay. I just finished talking in the final ECON110 lecture about the links to the work of Nobel laureates in that paper - I wish I'd checked my emails sooner, so that I could have noted this award in that lecture!

Sunday, 9 October 2016

Stuck with indecision? Let the coin decide!

Quasi-rational decision makers are loss averse (we value losses more than we value equivalent gains). One of the interesting outcomes of loss aversion is status quo bias. Because changing something entails both a loss (we give up what we were doing before) and a gain (we start doing something new), the change has to make us much better off before we are willing to make the change. Status quo bias keeps us investing in projects that have little chance of success, and keeps us in unhappy relationships, horrible jobs, and so on.
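A back-of-the-envelope sketch shows how loss aversion generates this inertia (the loss-aversion coefficient of 2.25 is Tversky and Kahneman's classic estimate; the payoffs are invented):

```python
LAMBDA = 2.25  # losses loom about 2.25 times larger than equivalent gains

def perceived_value_of_change(gain, loss):
    """Perceived value of a change that gives up `loss` to obtain `gain`."""
    return gain - LAMBDA * loss

# A new job is objectively better: worth 10 in gains against 6 given up.
print(perceived_value_of_change(gain=10, loss=6))  # -3.5 < 0: change rejected
# Only when the gain exceeds 2.25x the loss does switching look worthwhile.
print(perceived_value_of_change(gain=14, loss=6))  # 0.5 > 0: change accepted
```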

So, what if we could overcome our indecisiveness by outsourcing the decision to a coin? Kind of like this:


Ok, maybe not quite like that. Tim Harford explains in a recent post:
The roll of a die or the toss of a coin can actually help us make better decisions.
There are two quite different reasons for this. The first is that by pre-committing to follow a random instruction, we can end up making decisions that we should have been making all along. The status quo has a strange hold over us. Stuck in a job that we dislike, or with a romantic partner who is anything but romantic, all too often we stick with the devil we know.
Deciding that “if the coin comes up heads, I’ll leave my boyfriend” may be the only way that some of us have to break through the inertia and make tough decisions. A 50 per cent chance of dumping the oaf is better than no chance at all.
Harford goes on to talk about this recent NBER Working Paper (ungated version here) by Freakonomics author Steven Levitt (University of Chicago). I've been meaning to write about this paper for some time - it was covered by Marginal Revolution, Jodi Beggs (Economists Do It With Models), and The Economist Free Exchange blog, all back in August, and has been on my must-read list since.

Why do a study that uses coin tosses to make decisions? Levitt explains in the paper:
What we really care about, however, is the impact on the marginal decision maker. It would not be surprising if getting a divorce would have a devastating impact on the inframarginal married person. A much more interesting question is whether divorce, ex post, will be the right choice for someone teetering on the edge of ending a relationship.
Even if one found such a group of individuals who are close to indifferent between remaining married and getting divorced, an ex post comparison of the happiness of those who do and do not make a change still would not have an easy causal interpretation, because the people who make a change will systematically differ from those who do not on many dimensions. To convincingly answer the question, a researcher would not only need to find large numbers of these marginal individuals, but also, through some sort of randomization, influence their important life choices.
That is what I do in this study. I created a website called FreakonomicsExperiments.com. On the website, individuals who are having a difficult time making a life decision are asked to answer a series of questions concerning the decision they are struggling with... One choice (e.g., “go on a diet”) is assigned to heads and the other choice (in this case “don’t go on a diet”) is assigned to tails. The outcome of the coin toss is randomized and the user is shown the outcome of the coin toss. The coin tossers are then re-surveyed two months and six months after the initial coin toss.
What Levitt finds is remarkable. First, people actually do follow the advice of the coin (in at least some cases). Those who flipped heads were 24.9 percentage points more likely to make a change than those who flipped tails (a statistically significant difference). And even better, those who changed were happier afterwards. Levitt notes:
when it comes to “important” decisions (e.g. job quitting, separating from your husband or wife), making a change appears to be not only correlated with increased self-reported happiness, but also causally related, especially six months after the coin toss. Those who were instructed by the coin toss to make a change were both more likely to make the change (as noted above) and, on average, report greater happiness on the follow-up surveys... Choices on “less important” decisions (e.g. dying hair, improving posture) do not generally have a measurable impact on later happiness.
How big was the impact on happiness? People who made a change reported happiness about 0.48 points higher (on a scale of 1 to 10), or about 0.2 standard deviations. When looking at individual decisions, the results are under-powered to find much but do show that the effects appear to be largest for job quitting and breaking up. So, the next time you are agonising over whether to end that relationship or quit that job, maybe let a coin decide.
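For the statistically inclined: because the coin toss is random but compliance with it is imperfect, the causal effect of making a change can be backed out by scaling the happiness difference between heads- and tails-flippers by the difference in their rates of change (a Wald/instrumental variables estimate). Here is a minimal sketch with invented numbers (Levitt's actual estimation is more involved):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
heads = rng.integers(0, 2, n)                         # the randomised coin toss
# Heads-flippers are about 25 percentage points more likely to make the change.
change = (rng.random(n) < 0.35 + 0.25 * heads).astype(int)
# Making the change raises happiness by 0.5 points (the 'true' effect here).
happiness = 5 + 0.5 * change + rng.normal(0, 2.4, n)

itt = happiness[heads == 1].mean() - happiness[heads == 0].mean()
first_stage = change[heads == 1].mean() - change[heads == 0].mean()
print(f"Wald/IV estimate: {itt / first_stage:.2f}")   # close to the true 0.5
```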

[HT: Marginal Revolution first, then others]

Saturday, 8 October 2016

Flipped classrooms work well for top students in economics

Earlier this year I blogged about two AER Papers and Proceedings papers that compared online only, blended (e.g. flipped) and traditional classes. Here's what I said then:
These two papers give quite complementary findings. The Swoboda and Feiler paper found a significant positive effect of their blended learning approach (compared with face-to-face), while Alpert et al. find no significant effect of blended learning...
Now I would really like to know what the distributional effects of blended learning are. My expectation is that it will probably work well for keen, motivated students, who are almost certain to watch the lectures before class. These are the students who currently read the textbook, and additional resources (like the lecturer's blog!). These students will likely benefit most from the change in approach, and gain significantly from the interactive learning in class, which is what the blended learning approach is designed to facilitate. The less-motivated students, who may watch some of the videos before class but often don't, will not benefit as much, or may actually be made worse off by the switch to blended learning.
Which brings me to this new paper by Rita Balaban and Donna Gilleskie (both UNC Chapel Hill), and Uyen Tran (University of Chicago), published in the latest issue of the Journal of Economic Education (sorry I don't see an ungated version anywhere). In the paper the authors compare a flipped classroom model (where students watch lectures online before attending class where more active learning approaches, such as problem-based learning, are employed) with a traditional lecture-based model. There were nearly 400 students in each semester. Unfortunately, the research design is not clean because the two models were employed in different semesters. The authors demonstrate that there are no observable differences between the students, but I would also be concerned that the lecturer (who is one of the co-authors) knew that the study was being undertaken during the second semester (when the blended learning approach was used), and put in greater effort (a genuine concern for any single-blinded trial).

Notwithstanding my concern about the single-blinded nature of the approach, the results are interesting and very positive for the blended learning approach:
The values of average percent correct among all common questions for the traditional and flipped classroom formats suggest that the course redesign led to a (statistically significant) 6.9-percentage-point improvement in student performance (i.e., a difference-in-means result).
Once they control for student characteristics, the results remain similar, with an overall increase of about one-half of a standard deviation. This is quite a large impact. When they disaggregate the results by question type, they find:
Our results indicate that the flipped classroom does not differently impact performance on knowledge questions (objective 1), which require memory, recognition, and recall. We find that the flipped format significantly improves performance on comprehension questions (objective 2) by one-quarter of a standard deviation...
With regard to performance on application questions (objective 3), the flipped classroom boosts performance by 0.74 standard deviations on average...
On analysis questions (objective 4), we find gains of 0.47 standard deviations. The ability to analyze involves differentiating, organizing, and attributing.
So, as one might expect, the main impacts are on students being better able to apply their learning (it's worth noting that there were only three questions in the exam in the knowledge domain, and only three questions in the analysis domain).

Finally, the authors looked at the results by performance level (using quantile regression). They find:
that the flipped classroom had slightly different effects on students (depending on their position within the performance distribution) for different types of questions. Overall, students in the top 25 percent of the distribution appear to benefit more from the flipped format than those below the 75th percentile, although all students benefit substantially.
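For readers unfamiliar with quantile regression, here is a minimal sketch of the idea, using invented data that mimics the pattern of bigger gains at the top (this is not the authors' data or code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800
flipped = rng.integers(0, 2, n)
ability = rng.normal(0, 10, n)
# Invented data: the flipped format helps everyone a little, top students more.
score = 70 + ability + flipped * (3 + 0.15 * np.clip(ability, 0, None))
df = pd.DataFrame({"score": score, "flipped": flipped})

for q in (0.25, 0.50, 0.75):
    fit = smf.quantreg("score ~ flipped", df).fit(q=q)
    print(f"estimated effect at the {q:.0%} quantile: {fit.params['flipped']:.2f}")
```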
Which brings me back to my initial disquiet about the flipped classroom model. This paper has done little to shake my view that it benefits the motivated top-performing students and could make things worse for the unmotivated marginal student. I wouldn't necessarily take this study as representative of the average university student either. The average exam result was 80.7 percent, which struck me as rather high until I looked at the average SAT scores for the sample, and found that the average student was in about the 90th percentile on the SAT. So, even the students at the bottom of this class are relatively good students in the overall scheme of things. To add to that, lecture attendance was over 90 percent on average. So not only are these mostly top-achieving students, they are well-motivated top-achieving students. Exactly the students I would expect to benefit from the flipped classroom. I guess I'm still waiting for the research that will convince me that the flipped classroom model will have positive outcomes for the marginal (or even the median) student that I teach.

Tuesday, 4 October 2016

Could your social media posts make insurance more expensive?

James from my ECON110 class pointed me to this insightful Tamsyn Parker article in the New Zealand Herald with the above title. Parker writes:
Could that Instagram image of you bungy jumping in your 20s result in having to pay higher insurance costs in the future? One insurance expert thinks so.
Michael Naylor, a senior lecturer in finance and insurance at Massey University, says people should expect insurers to mine their social media accounts in the future to determine how much they will charge for insurance premiums and if they will pay out on claims.
"People have to be aware everything they do on social media can be effectively public.
Why would insurance companies want to mine social media data to find out about us? It's because of the adverse selection problem. An adverse selection problem arises because the uninformed party (the insurer) cannot tell those with 'good' attributes (low-risk people) from those with 'bad' attributes (high-risk people). To minimise the risk to themselves of engaging in an unfavourable market transaction, it makes sense for the insurer to assume that everyone is high-risk. This leads to a pooling equilibrium - low-risk people are grouped together with the high-risk people and pay the same premium, because they can't easily differentiate themselves. This creates a problem if it causes the market to fail.

I've written about how the insurance market fails before (this comes from a post about health insurance, but it equally applies to accident insurance or life insurance - see also this post on adverse selection in life insurance):
In the case of insurance, the market failure may arise as follows (this explanation follows Stephen Landsburg's excellent book The Armchair Economist). Let's say you could rank every person from 1 to 10 in terms of risk (the least risky are 1's, and the most risky are 10's). The insurance company doesn't know who is high-risk or low-risk. Say that they price the premiums based on the 'average' risk ('5' perhaps). The low risk people (1's and 2's) would be paying too much for insurance relative to their risk, so they choose not to buy insurance. This raises the average risk of those who do buy insurance (to '6' perhaps). So, the insurance company has to raise premiums to compensate. This causes some of the medium risk people (3's and 4's) to drop out of the market. The average risk has gone up again, and so do the premiums. Eventually, either only high risk people (10's) buy insurance, or no one buys it at all. This is why we call the problem adverse selection - the insurance company would prefer to sell insurance to low risk people, but it's the high risk people who are most likely to buy. 
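The unravelling in that story is mechanical enough to simulate; a minimal sketch (risk types and premiums invented, and ignoring risk aversion, which in reality keeps some lower-risk people insured):

```python
# Risk types 1 (safest) to 10 (riskiest); expected claim cost = 10 * type.
pool = list(range(1, 11))

while True:
    premium = 10 * sum(pool) / len(pool)   # priced at the pool's average risk
    # Only those whose expected cost is at least the premium find insurance
    # worth buying; everyone less risky than that drops out.
    stay = [r for r in pool if 10 * r >= premium]
    print(f"premium {premium:.0f}: types {stay} buy insurance")
    if stay == pool:
        break   # no one else drops out - only the riskiest remain insured
    pool = stay
```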
In order to solve an adverse selection problem, the uninformed party can try to reveal the private information - we call this screening. Mining people's social media posts could reveal to the insurance companies which people are high-risk and which are low-risk. They can then separate the two groups, and we move from a pooling equilibrium to a separating equilibrium. High-risk people will pay higher premiums, and low-risk people will pay lower premiums. Or, as noted in the New Zealand Herald article linked above:
[Michael Naylor] predicts it will be one to three years away in New Zealand and says it may not come from existing insurers but new entrants to the market who will use personal data to cut insurance premiums for less risky customers.
That could leave old-style insurers with more risky customers and the prospect of rising premiums to cover their costs.
He says the change could have implications for people who seek adventure when younger and record it all on their social media pages.
"Of course the internet doesn't die."
They may have to prove they no longer undertake those activities to get insurance in the future or sign exclusion agreements meaning they won't be covered for certain activities, says Naylor.
So, if new insurers use social media mining to offer cheaper insurance to low-risk people, you can bet the large incumbent insurers will follow suit soon after, because insuring low-risk people is much more profitable than insuring high-risk people.

So for now, before you apply for life insurance (or health insurance, accident insurance, or even car insurance), it might be best to lock down your social media accounts. Or at least delete any references to the risky exploits of your youth.

If everyone locks down their social media accounts so that insurers can't access them, things start to get interesting. Will insurers simply ask to see your past social media posts? If I was an insurer, I would. People who say 'no' to such a request are more likely to be high-risk (because low-risk people would have nothing to hide), and the insurer could price their premiums accordingly. Essentially, low-risk people could signal that they are low risk by making their past social media posts available to their insurer, and reap the reward of a lower premium. In fact, low risk people could probably do this right now.

[HT: James from my ECON110 class]