Monday, 9 March 2020

The New Zealand Initiative's new student success measure goes both too far and not far enough

The New Zealand Initiative released a new research note today, by Joel Hernandez. It's part of an ongoing series of research they've been conducting into secondary school performance in New Zealand (see also my earlier post on their work here). In this new research note, Hernandez looks at how well students perform at each level of NCEA and in University Entrance. Specifically, he looks at schools' relative performance (that is, their ranking compared with other schools) on a raw performance measure, and on a measure that has been adjusted for a bunch of student-level (family background) and school-level characteristics. The rationale for adjusting the measure is straightforward. As Hernandez notes:
...a more complex statistical model is required to separate the contribution of family background from the contribution of each school. Without it, the Ministry, principals and parents cannot identify a school’s true performance.
The adjusted measure assesses how well each school is doing compared with what would be expected given the family background of its students, and then ranks schools on that basis. Schools that do better than expected will rank higher on the adjusted measure than schools that do worse than expected. That all seems fair enough.
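The research note doesn't include code, but the underlying idea is easy to sketch. Here is a minimal illustration in Python of how a raw ranking and an adjusted (residual-based) ranking might be constructed. The file name, column names, and the simple regression specification are all my own hypothetical stand-ins, not Hernandez's actual model (which draws on much richer IDI microdata):

```python
# A minimal sketch of a raw vs. adjusted (value-added) school ranking.
# All column names and the data file are hypothetical illustrations.
import pandas as pd
import statsmodels.formula.api as smf

students = pd.read_csv("students.csv")  # one row per student

# Raw ranking: schools ordered by unadjusted NCEA pass rate.
raw_rank = (students.groupby("school")["passed_ncea2"]
            .mean().rank(ascending=False))

# Adjusted ranking: regress the outcome on family-background controls,
# then average each school's residuals. A school whose students do
# better than the controls predict gets a positive average residual.
model = smf.ols(
    "passed_ncea2 ~ parental_income + parental_education + ethnicity",
    data=students,
).fit()
students["residual"] = model.resid
adjusted_rank = (students.groupby("school")["residual"]
                 .mean().rank(ascending=False))

comparison = pd.DataFrame({"raw": raw_rank, "adjusted": adjusted_rank})
print(comparison.sort_values("adjusted"))
```

Schools that climb the table between the "raw" and "adjusted" columns are the ones doing better than their student intake would predict.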

The research note then presents case studies for three schools, and shows that their rankings differ (in some cases quite substantially) between the raw measure and the adjusted measure. The implication is that ranking schools on the basis of raw NCEA pass rates (or similar measures) does a poor job of capturing school quality, given the body of students that each school actually has. No arguments here. Hernandez then concludes that the adjusted measure has value for the Ministry of Education, school principals, Boards of Trustees, and the public (that is, parents). I would have to disagree - giving this information to parents would not be a good thing. To see why, let's consider the decision-making process for parents.

Parents, to the extent they are able given other constraints, are looking to provide their children with the best education possible. However, information about school quality is seriously incomplete - most parents probably have little idea whether each available school is good, bad, or indifferent. Even the schools themselves may not know how the quality of the education they provide compares with that of other schools. So parents rightly look for some signal of quality - something that plausibly separates good schools from not-so-good schools. Many parents currently use schools' decile ratings for this purpose - they assume that schools with a higher decile rating are higher quality schools.

However, the decile rating is not a measure of school quality - it is a measure of the socio-economic background of the families of the students who attend the school. At the risk of over-simplification, students from more affluent families tend to go to higher decile schools. So, parents clamour to get their children into higher decile schools, which drives up house prices in the zones for those schools. That makes it even more difficult for the average family to afford to send their children to a high decile school. All of that happens in spite of the fact that the decile rating conveys little information about the quality of the education a school provides, because that's not what it was designed to measure.

This gap between what decile ratings measure and what parents want to know is the problem that the New Zealand Initiative's new measure was designed to address. And it does, but not necessarily in an entirely helpful way. The adjusted measure that Hernandez uses essentially captures the average 'value added' that the school provides to its students - it is a measure of how much better (or worse) off they are by attending that school, relative to what would be expected. It essentially assumes that, having controlled for family background, all students receive the same (average) benefit from attending that school. That creates two problems.

First, it may actually make the issue of sorting even worse than before. At the moment, many parents select a school for their children based on the decile ratings of the available schools. If the new measure of school quality is released to the public, parents will have some extra information on which to base their choice of school. They might look among the higher decile schools, find those with the best measured quality, and then try to select into those. This will simply drive demand for those schools even higher than before. Previously, all high decile schools were treated similarly. The new measure will mean that not all high decile schools are treated the same - some will become much more attractive than others. And the reverse will be true of low decile schools, especially those that perform poorly on the measure of school quality. In this way, the measure goes too far (if released to the public). Hernandez argues that the measure is "not designed to create new league tables", but that view is seriously naive.

Second, the measure doesn't really provide what parents are actually looking for. Parents want to know which school is going to provide their child with the best quality education. Taking the measure at face value, an uninitiated parent might conclude that a high quality score means the same thing for every student at the school. But it doesn't - school quality in this measure is based on a comparison of how well students fare relative to what is expected given their characteristics. It captures the average effect on students who have the average characteristics of students at that school. Since no student actually has the average characteristics of the students at their school, this measure doesn't strictly apply to any individual student. So, just because a school adds a lot of value for its average student, it doesn't mean that it adds the same value for all of its students. Parents should really want a measure of value added for students with the same characteristics as their child, not the average. In this way, the adjusted measure doesn't go far enough.
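To see what 'going far enough' might look like, here is a hypothetical extension of the earlier sketch, in which each school's value-added is allowed to vary with a student characteristic, so that a parent could (in principle) ask about a child like theirs. Again, the school names, column names, and specification are my own illustrative assumptions, not anything from the research note:

```python
# Sketch: letting a school's value-added vary with student
# characteristics, rather than assuming one average effect per school.
# All names and data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

students = pd.read_csv("students.csv")  # one row per student

# Interacting the school indicator with parental income gives each
# school its own intercept and its own income slope.
het_model = smf.ols(
    "passed_ncea2 ~ C(school) * parental_income + parental_education",
    data=students,
).fit()

# Predicted outcome for one hypothetical child at two (hypothetical)
# schools - the comparison a parent would actually care about.
child = {"parental_income": 45000, "parental_education": 12}
for school in ["School A", "School B"]:
    row = pd.DataFrame([{**child, "school": school}])
    print(school, float(het_model.predict(row).iloc[0]))
```

Under a model like this, the 'best' school can differ from one child to the next, which is exactly the point - a single school-level average hides that variation.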

Neither of those issues is a knock on the quality work that Hernandez has done. I think it's important. I'm just not sure that releasing this information to the public is necessarily a good thing. The implications need to be carefully thought through first.

Even releasing the results to the schools might not be a good thing. We already see schools trumpeting their raw NCEA performance on the big noticeboards at the school gates, soon after the results are released. I can imagine that schools that do well on this measure would want to publicise their results as widely as possible ("Look at us! We're a high quality school"), while schools that don't do so well would want to keep their results quiet. If you doubt this, consider Hernandez's research note - the one school that did better than expected on the adjusted measure is named in the research note, while the other two schools (which did worse than expected) did not want their identities revealed.

It is good to see that this research has moved on from a consideration of teacher value-added, and is now looking at the school level rather than the teacher level. This measure would provide good data to the Ministry of Education on which schools are performing well and which are not. It could easily be incorporated into ERO reports if desired (noting the caveats above on releasing results to schools).

This approach also has potentially wider application. I wonder whether it would be possible to look at subject rankings (rather than school rankings), or rankings based on grouping particular standards - that is, an analysis at the subject or standard level, rather than the school level. That might provide some interesting results in terms of revising the NCEA curriculum itself.
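As a rough indication of what that might look like, the residual-averaging sketch above extends naturally to grouping by subject (or standard) rather than by school alone. Everything here is again a hypothetical stand-in:

```python
# Sketch: the same residual-based ranking idea, but at the
# school-by-subject level. Hypothetical file and column names;
# assumes one row per student per standard attempted.
import pandas as pd
import statsmodels.formula.api as smf

results = pd.read_csv("standard_results.csv")

model = smf.ols(
    "achieved ~ parental_income + parental_education",
    data=results,
).fit()
results["residual"] = model.resid

# Average residual by school and subject: positive values flag
# school-subject combinations doing better than expected.
subject_va = (results.groupby(["school", "subject"])["residual"]
              .mean().sort_values(ascending=False))
print(subject_va.head(10))
```

Patterns at that level - say, a subject that underperforms expectations across many schools - would speak to the curriculum and standards themselves, not just to individual schools.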

[HT: New Zealand Herald]

2 comments:

  1. We're not operating in a vacuum. The alternative to better measures of school performance, which show that most schools perform comparably to each other and which stop punishing lower decile schools for things outside of their control, is the current NCEA tables or decile as a proxy for quality.

    If Boards of Trustees had these kinds of reports, they could better govern their schools. They could tell whether some new initiative started showing results, or not. A persistent feature of poorly performing schools is Boards without the nous to hold principals to account; these reports would help them to see what's really going on.

    The gains from being able to better govern our schools seem large. And to the extent that they're used to create league tables, they'd quickly show that schools are far closer to each other than people had previously thought.

    Replies
    1. I agree that the information is important and valuable. We disagree on how widely it should be shared. I would set the threshold at the Ministry level, at least initially. But I can see a case for sharing with schools, and I agree there is value to be had there, especially if non-publication is a condition attached to the results going to schools.
