Friday 7 July 2017

Subjecting teachers to a WMD

The New Zealand Initiative has a new report, entitled "Amplifying Excellence: Promoting Transparency, Professionalism and Support in Schools" (direct link to the report here). The New Zealand Herald reported on it yesterday, and focused on a point that drew my attention:
Teachers could be rated based on how much they lift student achievement, if a big-business think tank has its way.
The New Zealand Initiative, which says the combined revenue of its 54 member companies represents a quarter of the NZ economy, calls in a new report for better data to measure how good teachers are...
But the report does not take the next step, which teacher unions have feared, of linking teachers' performance directly to their pay.
"I don't say anything about what is happening to the teacher who is not performing well," said the report's author, Martine Udahemuka.
"First and foremost, it's to provide them with the support they need to become good teachers."
The report advocates for creating models of teacher value-added. Such a model would predict the achievement of each student based on their characteristics (e.g. age, ethnicity, parents' income, etc.) and would compare their actual achievement with the achievement predicted by the model. If a teacher on average raises their students' achievement above that predicted by the model, they would be considered a 'good' teacher, and if their students' achievement on average does not reach that predicted by the model, they would be considered a 'not-so-good' teacher.
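
To make the mechanics concrete, here is a rough sketch of what such a model might look like. The variable names, the simple linear specification, and the synthetic data are all my own assumptions for illustration; the report doesn't specify a model in this detail.

```python
# Minimal sketch of a value-added calculation (illustrative only; the column
# names and the linear model are assumptions, not the report's specification).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_teachers, class_size = 40, 25

# Synthetic data: one row per student, with a teacher id and background covariates.
df = pd.DataFrame({
    "teacher": np.repeat(np.arange(n_teachers), class_size),
    "prior_score": rng.normal(50, 10, n_teachers * class_size),
    "ses_index": rng.normal(0, 1, n_teachers * class_size),
})
df["score"] = 0.8 * df["prior_score"] + 3 * df["ses_index"] + rng.normal(0, 8, len(df))

# Step 1: predict each student's achievement from their characteristics.
model = smf.ols("score ~ prior_score + ses_index", data=df).fit()
df["residual"] = df["score"] - model.predict(df)

# Step 2: a teacher's 'value-added' is the mean prediction error (residual)
# across their students; positive = 'good', negative = 'not-so-good'.
value_added = df.groupby("teacher")["residual"].mean().sort_values()
print(value_added.head())
```

The key point is that each teacher's rating is just the average prediction error across their (roughly 25) students.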

On the surface, this sounds fairly benign and uses data in a more sophisticated way. However, hiding just below the surface is a serious problem with the advocated approach. This problem is highlighted by Cathy O'Neil in her book "Weapons of Math Destruction" (which I reviewed just last week).

Essentially, each teacher is being rated on the basis of 25 (or fewer!) data points. O'Neil argues that this isn't a robust basis for rating or ranking teachers, and she provides a couple of (albeit anecdotal) stories where excellent teachers were seriously under-rated by the model and lost to the system. She labels the teacher value-added model a Weapon of Math Destruction (WMD) because of the serious damage it can do to otherwise-good teachers' careers. Although the effects weren't all bad, as O'Neil observed:
After the shock of her firing, Sarah Wysocki was out of a job for only a few days. She had plenty of people, including her principal, to vouch for her as a teacher, and she promptly landed a position at a school in an affluent district in northern Virginia. So thanks to a highly questionable model, a poor school lost a good teacher, and a rich school, which didn't fire people on the basis of their students' scores, gained one.
I know that Udahemuka says "I don't say anything about what is happening to the teacher who is not performing well", but you would have to be incredibly naive to believe that the results of a teacher value-added model would not be used for performance management, and for hiring and firing decisions. The report acknowledges this:
As part of a supportive performance development system, principals could receive detailed feedback from the Ministry about class level performance to identify and share good practice, support performance improvement, and assign students to teachers best suited to their needs. All the decisionmaking would be at the school level.
Surely decision-making would include hiring and firing decisions.

Having said that, the biggest problem I can foresee with teacher value-added is if the model is used only to generate point estimates, with no consideration of the uncertainty around those point estimates. If each teacher fixed effect (assuming that is the approach adopted) is based on only 25 observations (a typical class size), the 95 percent confidence interval on those estimates is likely to be quite wide. I suspect that, apart from the seriously awful or the seriously superstar teachers, most teachers' effectiveness would be statistically indistinguishable from the mean. I'm speculating without having completed or seen the analysis, of course, but I think it would be fairly heroic to make serious decisions about effectiveness at the individual-teacher level on the basis of such a model.
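
To give a sense of scale, here's a quick back-of-envelope calculation (the residual standard deviation of 8 score points is an assumed number, purely for illustration):

```python
# How wide is the 95% CI on a teacher's mean residual with only 25 students?
import numpy as np
from scipy import stats

class_size = 25
residual_sd = 8.0   # assumed spread of student-level residuals, in score points

se = residual_sd / np.sqrt(class_size)
half_width = stats.t.ppf(0.975, class_size - 1) * se
print(f"95% CI half-width: +/- {half_width:.1f} score points")
# ~ +/- 3.3 points: unless a teacher shifts their class average by more than
# that, their estimated effect is indistinguishable from zero.
```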

It gets worse though. The report argues that you could derive teacher value-added not only for the class as a whole, but also for sub-groups within the class:
In its sector support role, the Ministry should provide data to schools that more accurately shows areas of strength and weakness so teachers can seek necessary support. For example, a teacher may be highly effective with native English speakers but not students with English as a second language.
Even if you believe that a robust model could be constructed on the basis of 25 data points per teacher, you must agree that going below that level (e.g. to just the ESOL students within a class) is ridiculous. And that's without even considering that these models are based on correlations and are not causal.
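
Repeating the back-of-envelope calculation from above for smaller sub-groups (again with assumed numbers) shows how quickly the estimates degrade:

```python
# The CI on a sub-group estimate widens as 1/sqrt(n), so estimates based on a
# handful of students are mostly noise (sub-group sizes here are assumptions).
import numpy as np
from scipy import stats

residual_sd = 8.0   # same assumed residual spread as before, in score points
for n in (25, 10, 5):
    half_width = stats.t.ppf(0.975, n - 1) * residual_sd / np.sqrt(n)
    print(f"n = {n:2d} students: 95% CI half-width ~ +/- {half_width:.1f} points")
# n = 25: ~3.3 points, n = 10: ~5.7, n = 5: ~9.9
```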

Another problem that isn't noted in the report is measurement. Unlike the U.S., New Zealand doesn't have a standardised testing system. Students select their NCEA subjects themselves, not all subjects have prerequisites (e.g. a student can take NCEA Level 3 Economics without Level 1 or Level 2), and not all subjects exist at all levels. On top of that, NCEA is based on achieving standards (or not) rather than on a grade distribution. So it isn't clear how you would measure value-added in the absence of some standardised test.

Not only is the teacher value-added approach advocated in the report problematic for the reasons noted above, but the empirical support for its effectiveness is also notably thin. Ironically, the examples that Cathy O'Neil uses in her book come from the Washington, D.C. model, which is the sole source of empirical support for teacher value-added that Udahemuka cites in her report. The local school example that the report highlights, Massey High School in Auckland, doesn't use a teacher value-added approach at all and is (according to the report) achieving excellent results. So the key New Zealand example demonstrates that teacher value-added isn't even necessary! This contrasts with other parts of the report, which are able to draw on a wider range of international examples of success.

The New Zealand Initiative may have the best intentions. Improving education quality is a laudable goal. But one of the key recommendations in this report seriously misses the mark.
