Thursday, 15 July 2021

The PBRF has served its purpose and it's time for a re-think

New Zealand's Performance-Based Research Fund (PBRF) has been going through a review over the last year or so (see here, or read the review's discussion document here). For those of you not in the know, the PBRF allocates a share of government funding for universities to each university on the basis of its research performance. It is essentially a ranking exercise, undertaken by evaluating individual researchers (which is quite different from the UK or Australia, where the unit of assessment is the department), and then aggregating up to disciplines and to each university as a whole. It provides legitimacy to the claim that Waikato is number 1 in economics.

With the review underway, this article by Bob Buckle, John Creedy, and Ashley Ball (all Victoria University of Wellington), published in the journal Australian Economic Review (ungated earlier version here), is particularly timely. They look at how the three previous full PBRF rounds (in 2003, 2012, and 2018, ignoring the partial assessment round in 2006) affected New Zealand universities. Specifically, they examine the incentive effects, noting that:

A university can improve its research quality in three ways, although strong constraints are placed on the changes that can be made. Changes in average quality depend on the exits and entries of individuals (to and from other universities in New Zealand, or international movements), and the extent to which remaining individuals can improve their measured quality. A university can also influence its average quality by changing its discipline composition in favour of higher-quality groups.

They use confidentialised data from the Tertiary Education Commission on every researcher included in the three PBRF rounds to date, and focus on changes between 2003 and 2012, and between 2012 and 2018. Overall, they find some positive impacts on research quality:

There was a rise in the proportion of As and Bs in both periods... the proportion of A‐quality researchers increased by a factor of 2.5 between 2003 and 2018: from 6.5 per cent in 2003 to 13.3 per cent in 2012 to 16.4 per cent in 2018. The proportion of Bs increased by a factor of 1.5: from 25.9 per cent in 2003 to 39.6 per cent in 2012 and to 41 per cent in 2018.

A-quality researchers are world-class in their fields, so an increase in the number of researchers in that category is clearly a good thing. However, it does matter how the universities got there, and on this point Buckle et al. find that:

The net impact of exits on the AQSs [Average Quality Scores] is positive for all universities and disciplines in both periods; the net impact of entrants is always negative; and the net impact of QTs [Quality Transformations] is always positive.

In other words, the academics who exited the industry had lower average quality than those who stayed, new entrants (often early-career researchers) had lower average quality than those already in the universities, and those who stayed improved their measured quality. Turning to whether the overall improvement in measured quality owed more to changes in individual researcher quality within disciplines, or to changes in the composition of disciplines, Buckle et al. find that:

For all universities combined, for 2003–12, the decomposition method found that all the increase in the AQS came from researcher quality changes. During the second period, the contribution arising from quality improvement was by far the dominant influence: indeed, the overall AQS would have been 3 per cent higher in the absence of any change in the overall discipline composition.

In other words, most of the improvement in the AQS came from improvements in individual researchers' quality scores, not from universities making opportunistic changes to their disciplinary structures.
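To make these two margins of change concrete, here is a toy sketch in Python. The numbers are invented purely for illustration (they are not the PBRF data, and this is not the exact decomposition method Buckle et al. use), but it shows how an average quality score can rise when below-average researchers exit and the stayers improve, even though entrants score below the existing average, and how a simple shift-share-style split separates within-discipline quality changes from changes in discipline composition.

# Toy illustration only: invented numbers, not the PBRF data or
# the exact decomposition used by Buckle et al.

# --- 1. Exits, entrants, and quality transformations (QTs) ---
stayers_before = [2.0, 4.0, 6.0]   # quality scores of researchers who stay
exiters        = [1.0, 1.5]        # below-average researchers who leave
entrants       = [2.5, 3.0]        # new hires, below the existing average
stayers_after  = [3.0, 4.5, 6.5]   # stayers improve their measured quality

def mean(xs):
    return sum(xs) / len(xs)

aqs_before = mean(stayers_before + exiters)   # 2.90
aqs_after  = mean(stayers_after + entrants)   # 3.90
print(f"AQS before: {aqs_before:.2f}, after: {aqs_after:.2f}")
# Exits of low scorers and QTs push the average up; entrants pull it down.

# --- 2. Within-discipline quality vs. discipline composition ---
# (staff share, average quality) for two hypothetical disciplines
before = {"econ": (0.5, 3.0), "other": (0.5, 5.0)}
after  = {"econ": (0.4, 4.0), "other": (0.6, 5.5)}

aqs0 = sum(w * q for w, q in before.values())   # 4.0
aqs1 = sum(w * q for w, q in after.values())    # 4.9

# Shift-share split: hold shares at their initial values to isolate the
# quality effect, and hold quality at its final values to isolate the
# composition effect; the two parts sum exactly to the total change.
quality_effect     = sum(before[d][0] * (after[d][1] - before[d][1]) for d in before)
composition_effect = sum((after[d][0] - before[d][0]) * after[d][1] for d in before)

print(f"Change in AQS: {aqs1 - aqs0:.2f}")                       # 0.90
print(f"  from quality changes:     {quality_effect:.2f}")       # 0.75
print(f"  from composition changes: {composition_effect:.2f}")   # 0.15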

So, overall, the PBRF has had some good effects, and avoided the worst potential incentive effects for universities (at least at the disciplinary level). However, there are limits to how far those changes can be pushed. As Buckle et al. note in their conclusion:

The substantial reduction in the rate of quality improvement during the period 2012–18, compared with the earlier period 2003–12, suggests some streamlining of the process may be warranted, particularly in view of the high compliance and administrative costs. Furthermore, the major contribution to the average quality improvement in all universities and disciplines has resulted from the large number of exits of lower‐quality researchers. This also suggests that the extensive process, and information required, used to distinguish among higher‐quality categories is no longer necessary.

The PBRF is immensely time-consuming and costly, both to individual researchers, who need to spend a lot of time preparing and polishing their portfolios, and to universities. Researcher quality has improved tremendously in New Zealand since the PBRF was introduced in the early 2000s. The low-hanging fruit has clearly been picked, and the non-performing researchers are mostly gone. It is time to reconsider whether the PBRF remains useful, especially as an assessment of individual researcher quality. Buckle et al. stopped short of conducting a full cost-benefit analysis of the PBRF system, but it is time that someone followed through and completed that exercise.
