Wednesday 28 December 2022

We really need to move to research grant lotteries

The premier source of competitive research funding in New Zealand is the Royal Society of New Zealand's Marsden Fund, which "supports excellence in science, engineering, maths, social sciences and the humanities in New Zealand by providing grants for investigator-initiated research". Researchers submit proposals, which are assessed on the quality of the proposed research and research team, with typically fewer than 15 percent of proposals funded each year. There is a two-round selection process. In the first round, an expert panel screens short proposals (one page plus CVs and some additional details), with a small number invited to progress to the second round. In the second round, each full proposal (around five or six pages plus CVs and additional details) is assessed by several international referees, who each score the proposal. The proposals are then reviewed and ranked by the expert panel, with the top-ranked proposals being funded. Typically, about half of the proposals that go through to the second stage are ultimately funded. [*] That long selection process raises two obvious questions: (1) how well are the top proposals selected in this process; and (2) what is the consequence of being funded for the researchers involved?

Those are essentially the research questions addressed in this 2018 article by Jason Gush (Royal Society of New Zealand), Adam Jaffe (Motu Research), Victoria Larsen (University of Otago), and Athene Laws (Motu Research), published in the journal New Zealand Economic Papers (ungated earlier version here). They used data on 1263 proposals that made it to the second round between 2003 and 2008.

In terms of the second research question, their simplest test for an effect of funding on research output, ignoring any selection bias, finds that:

...funding is associated with an increase in publications of about 6% and citations about 12% relative to what would have been predicted based on previous performance.

So, that suggests that being successfully funded is associated with greater research output in the future, which is what you would hope (although in other results, Gush et al. find no evidence that funding generates 'home runs', in terms of very highly cited research). The next question is how much of that is actually selection bias (better researchers, who would have published more anyway, being the ones who get funded). In this case:

The surprising result... is that the coefficient on scaled rank is negative. This means that, controlling for the other regressors - including the effect of the funding itself - proposal teams that were highly ranked by the RSNZ panels actually performed worse than those that were ranked lower. Specifically, because the rank is scaled so that it is roughly one for the best-ranked proposal and zero for the worst, the coefficient of -.2 to -.3 means that the worst ranked proposal team got 20%-30% more output than the best team, after controlling for all other attributes, including previous performance.
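To unpack that coefficient, here is a quick back-of-the-envelope reading (my own calculation, not the paper's code), assuming the log-linear specification that the quoted percentage interpretation implies. Note that exponentiating the coefficient exactly gives slightly larger gaps than the rough 20-30 percent reading:

```python
import math

# In a log-linear model of research output,
#   log(output) = ... + beta * scaled_rank + ...,
# scaled_rank is ~1 for the best-ranked proposal and ~0 for the worst.
for beta in (-0.2, -0.3):
    ratio = math.exp(beta)  # best-ranked output relative to worst-ranked
    print(f"beta = {beta:+.1f}: best-ranked team produces {ratio:.2f}x "
          f"the worst-ranked team's output "
          f"(worst-ranked gets ~{1 / ratio - 1:.0%} more)")
```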

Yikes! The selection process appears to actively select against the best research teams. However, when Gush et al. change the model specification (to a counts-based model), they find that:

...the negative effect of scaled rank appears to be concentrated among the unfunded proposals, although there is still no evidence of the expected positive effect even for the funded proposals.

So, there isn't negative selection among the proposals that are actually funded, which should be a bit of a relief. However, there isn't positive selection either. You could do nearly as well by simply selecting proposals at random (which earlier research has suggested might not be too bad an option). Gush et al. conclude that:

Given the significant time and resources that both researchers and the RSNZ devote to the second-round selection ranking, its apparent ineffectiveness in predicting bibliometric outcomes suggests that the Fund could benefit from review of its selection processes.

That isn't far from what I have suggested before. Rather than expending substantial time, effort, and resources on a review process that is no better than chance at selecting the best research proposals, we should run a research grant lottery instead.
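To make that concrete, here is a minimal sketch of what such a two-stage lottery might look like (my own illustration, not an actual Marsden Fund process; the function name, scores, and thresholds are all hypothetical): cull the proposals that fail a basic quality screen, then draw the winners at random from the remaining pool.

```python
import random

def grant_lottery(proposals, min_score, budget, seed=None):
    """Two-stage lottery. proposals is a list of (proposal_id, screening_score) tuples."""
    rng = random.Random(seed)
    # Stage 1: cull proposals that are clearly below par.
    eligible = [pid for pid, score in proposals if score >= min_score]
    # Stage 2: fund a random draw from the remaining pool.
    return rng.sample(eligible, k=min(budget, len(eligible)))

# Hypothetical example: ten proposals, screen out scores below 3, fund two.
pool = [(f"P{i}", score) for i, score in
        enumerate([5, 2, 4, 3, 1, 5, 4, 2, 3, 4], start=1)]
print(grant_lottery(pool, min_score=3, budget=2, seed=42))
```

The appeal of this design is that reviewers only need to agree on which proposals are clearly below par, a much easier task than producing the fine-grained ranking that, on the evidence above, carries no predictive value anyway.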


*****

[*] In the interests of full disclosure, I had an unsuccessful proposal in the second round of the Marsden Fund this year. You are welcome to interpret this entire post as a gripe against the system that thwarted my latest proposal. However, you should first note that I have been successful in other rounds, and with other funders, and I've written on this topic before. And I've been on the other side of funding decisions, having served on panels for the Health Research Council for the last two years (my experience there suggests that we probably could have done just as well by randomising, after culling the few proposals that were clearly below par).
