The last month has been a pretty topsy-turvy one for the Canadian Institutes of Health Research. After it rolled out the first round of applications for its new operating grant competition, termed “project grants,” it was all set to deliver evidence that its new systems of financial allocation and peer review were superior to the previous ones. I was definitely a supporter of these bold changes when they were announced and have re-posted a snippet here:
“Let me applaud the CIHR for making some bold moves on the grant evaluation process – I hope they adopt them all. Doing the bulk of the work remotely and incorporating Internet-based discussions is clearly a positive step forward for sparing precious time and energy. Triaging applications at early stages will have two positive outcomes – letting unsuccessful applicants move on to alternate sources more quickly and saving their time by not making them prepare full applications. Exactly what criteria will help a grant avoid the chopping block was not made clear, and this will likely be met with some growing pains in the first few competitions…”
I wish that last statement wasn’t so correct, because from what I have heard, this first round of applications and reviews was a complete gong show. In particular, the remote peer review process was an epic failure – CIHR opted to host an online discussion forum where reviewers logged in and commented, with the discussion moderated by virtual chairs. Madness ensued – swathes of missing reviews, poor virtual chairing, and massive inconsistency in the evaluation of grants. Holly Witteman compiled a delightful three-day summary which inspired the title of my post for its ridiculousness. What a mess.
One of the people I spoke with when this was occurring was Professor Bill Stanford, and he shared some chilling tales, summed up most depressingly with “We did what we could but the best grants will likely not be funded. It was a complete crapshoot.” A particularly disheartening aspect of the new review system was the poor quality of the reviews, even though CIHR had promised that all grants, particularly those of an interdisciplinary nature, would get reviewers with expertise matched to the grant rather than simply the best available reviewers within the panel to which the grant was assigned.
“In my group of 14 grants, not only did three reviewers never submit reviews, we had reviewers with no expertise in the field (e.g., clinical epidemiologists and bioinformaticians reviewing a basic science zebrafish functional genomics grant?!) who still turned in reviews, often praising poorly designed studies or slamming grants for ‘overpromising the health significance of the project.’ Unlike in face-to-face meetings, the reviewers could simply stop responding to questions and suggestions to find consensus scores. Look, I couldn’t review a clinical epidemiology grant. In some cases it was not the reviewers’ faults, just the fault of CIHR.”
To top it all off, CIHR had a glorious technical glitch resulting in some reviewers’ identities being inappropriately shared. A rough month to say the least.
And so scientists stood up and shouted – most prominent amongst them Jim Woodgett (on Twitter) and the collection of CIHR university delegates who wrote to the CIHR about the botched process. This galvanized scientists to demand change, and CIHR capitulated, promising to reinstate face-to-face peer review – although CIHR President Alain Beaudet made it sound more like a joint solution:
“It’s a day to rejoice, a day we’ve achieved something together,” he said. He commended the scientists for their collective uprising.
Obviously things could have been better, but in the wake of this hubbub of media attention, I fear that the scientific community has missed the real point here – CIHR is the major health funding agency in Canada and its grant success rate was just 13 percent. I’d be willing to bet that one of the big reasons people are upset is that their labs and their colleagues’ labs are not getting urgently needed funding. To be fair, we actually have no idea if the decisions made in this round are any better or worse than those of historic rounds because the projects haven’t been undertaken yet. (N.B. I remind readers of an earlier post on the inability of peer review to distinguish between the top 3rd and 20th percentiles.) Changing the manner in which things are reviewed will not actually help scientists get more money into their labs.
At the end of the day, there is probably some merit in reducing the amount of face-to-face peer review, since getting hundreds of scientists to travel to central locations for grant evaluation is extremely expensive. Effective triaging by remote review seems a reasonable solution (as is done by our neighbours to the south at the NIH, who cull ~50 percent of grants prior to panel discussion). In this instance, CIHR’s big mistake was removing the “live” component – video conferencing (or even teleconferencing!) tools would have been much more effective, and numerous funding agencies already operate in such a manner (e.g., SSHRC does teleconference reviews for a number of its panels).
Peer review mechanics aside, I hope that scientists in Canada are clever enough to look beyond the debacle of this year’s peer review, which obviously failed, and see that this isn’t the core problem with the new system – the problem is that scientists are struggling to keep the lights on in their labs because the operating grants are getting smaller and more difficult to get. I certainly hope this new(ish) government and its two science-related ministers are listening – 13 percent is just not good enough when hundreds of fundable projects (according to reviewers) just don’t make the financial cutoff.