The Black Hole

The CIHR gong show – a ridiculous state of affairs

BY DAVID KENT | JUL 20 2016

The last month has been a pretty topsy-turvy one for the Canadian Institutes of Health Research. After it rolled out the first round of applications for its new operating grant competition, termed “project grants,” it was all set to deliver evidence that its new systems of financial allocation and peer review were superior to the previous system. I was definitely a supporter of these bold changes when they were announced and have re-posted a snippet here:

“Let me applaud the CIHR for making some bold moves on the grant evaluation process – I hope they adopt them all. Doing the bulk of the work remotely and incorporating Internet-based discussions is clearly a positive step forward for sparing precious time and energy. Triaging applications at early stages will have two positive outcomes – letting unsuccessful applicants move on to alternate sources more quickly and saving their time by not making them prepare full applications. Exactly what criteria will help a grant avoid the chopping block was not made clear, and this will likely be met with some growing pains in the first few competitions…”

I wish that last statement wasn’t so correct, because from what I have heard, this first round of applications and reviews was a complete gong show. In particular, the remote peer review process was an epic failure – CIHR opted to host an online discussion forum where reviewers logged in and commented, with the discussion moderated by virtual chairs. Madness ensued – swathes of missing reviews, poor virtual chairing, and massive inconsistency in the evaluation of grants. Holly Witteman compiled a delightful three-day summary whose ridiculousness inspired the title of this post. What a mess.

One of the people I spoke with while this was unfolding was Professor Bill Stanford, and he shared some chilling tales, summed up most depressingly with “We did what we could but the best grants will likely not be funded. It was a complete crapshoot.” A particularly disheartening aspect of the new review system was the poor quality of the reviews. CIHR had promised that all grants, particularly those of an interdisciplinary nature, would get reviewers with expertise matched to the grant, rather than simply the best reviewers within the panel to which the grant was assigned – a promise that, by Prof. Stanford’s account, was not kept.

“In my group of 14 grants, not only did three reviewers never submit reviews, we had reviewers with no expertise in the field (e.g., clinical epidemiologists and bioinformaticians reviewing a basic science zebrafish functional genomics grant?!) who still turned in reviews, often praising poorly designed studies or slamming grants for ‘overpromising the health significance of the project.’ Unlike in face-to-face meetings, the reviewers could simply stop responding to questions and suggestions to find consensus scores. Look, I couldn’t review a clinical epidemiology grant. In some cases it was not the reviewers’ fault, just the fault of CIHR.”

To top it all off, CIHR had a glorious technical glitch resulting in some reviewers’ identities being inappropriately shared. A rough month to say the least.

And so scientists stood up and shouted – most prominent amongst them Jim Woodgett (on Twitter) and the collection of CIHR university delegates who wrote to the CIHR about this botched process. This galvanized scientists to demand change, and CIHR capitulated, promising to reinstate face-to-face peer review – although CIHR President Alain Beaudet made it sound more like a joint solution:

“It’s a day to rejoice, a day we’ve achieved something together,” he said. He commended the scientists for their collective uprising.

Obviously things could have been better, but in the wake of this hubbub of media attention, I fear that the scientific community has missed the real point here – CIHR is the major health funding agency in Canada and its grant success rate was just 13 percent. I’d be willing to bet that one of the big reasons people are upset is that their labs and their colleagues’ labs are not getting urgently needed funding. To be fair, we actually have no idea whether the decisions made in this round are any better or worse than those of historic rounds, because the projects haven’t been undertaken yet. (N.B. I remind readers of an earlier post on the inability of peer review to distinguish between the top 3rd and 20th percentiles.) Changing the manner in which things are reviewed will not actually help scientists get more money into their labs.

At the end of the day, there is probably some merit in reducing the amount of face-to-face peer review, since getting hundreds of scientists to travel to central locations for grant evaluation is extremely expensive. Effective triaging by remote review seems a reasonable solution (as is done by our neighbours to the south at the NIH, who cull ~50 percent of grants prior to panel discussion). In this instance, CIHR’s big mistake was removing the “live” component – video conferencing (or even teleconferencing!) tools would have been much more effective, and there are already numerous funding agencies that operate in such a manner (e.g., SSHRC does teleconference reviews for a number of its panels).

Peer review mechanics aside, I hope that scientists in Canada are clever enough to look beyond the debacle of this year’s peer review, which obviously failed, and see that this isn’t the core problem with the new system – the problem is that scientists are struggling to keep the lights on in their labs because operating grants are getting smaller and more difficult to get. I certainly hope this new(ish) government and its two science-related ministers are listening – 13 percent is just not good enough when hundreds of fundable projects (according to reviewers) just don’t make the financial cutoff.

ABOUT DAVID KENT
David Kent
Dr. David Kent is a principal investigator at the York Biomedical Research Institute at the University of York, York, UK. He trained at Western University and the University of British Columbia before spending 10 years at the University of Cambridge, UK where he ran his research group until 2019. His laboratory's research focuses on the fundamental biology of blood stem cells and how changes in their regulation lead to cancers. David has a long history of public engagement and outreach including the creation of The Black Hole in 2009.
COMMENTS

  1. Jim Woodgett / July 20, 2016 at 12:42

    David, please define “extremely expensive”. The cost of the previous open competition panels was around $2 million/year. Not chump change, but compared to the amount of money at stake ($500 million in grant funding), I’d say that is a pretty good expense to ensure the best quality of review possible. Pretty much every other funding agency does its grant adjudication that way – including the NIH which, after a presentation of the virtual system by CIHR, reassured its research community that it was sticking to the gold standard of face-to-face reviews.

    The “travel expense” was offered up by CIHR as one of the main justifications for the change to virtual review; in the light of this utter shambles, it was clearly a price that should have been willingly paid 10X over. Penny wise, pound foolish.

    I’ve also linked (below) to the open letter signed by 1300 Canadian investigators.

    • David Kent / July 20, 2016 at 15:15

      Hi Jim,

      $2 million is another 4-6 “fundable, but not funded” grants – and Canada is no small country, so travel time and environmental impacts are also larger. Flying/housing scientists from across the country should be reserved for the hard decisions (e.g., have a bigger triage, collect numerous virtual reviews and let a leaner face-to-face committee discuss the grants in the middle zone). I realise you have heaps more experience than I do on review panels, but in my limited experience, it is usually quite straightforward to identify the bottom 50% and the top 10%… the hard stuff is in between.

      All that said, if face-to-face is going to be the way, why not fly out the applicants to defend their grants? Surely this would do an even better job of finding out which grants pass muster.

      But, I do agree with you and others that the way this competition was carried out was extremely short-sighted. I commend you and others for holding CIHR to task; scientists need to do more to help fund the best research across the country.

      • David Kent / July 20, 2016 at 15:17

        PS – there are probably a dozen other pots of $2 million at CIHR that could be re-directed with even more haste!

      • Jim Woodgett / July 21, 2016 at 02:28

        I think you are completely off-side on this issue, David. If there is no confidence in the peer review process, then $2 million is money extremely well spent. The point of face-to-face peer review is to hold the reviewers accountable for their reviews. People do not wish to be shown up for doing a poor job in their reviews. This accountability was entirely missing in the virtual scheme. If you want evidence, I am sure applicants would be happy to share their reviews. Most of the ones I’ve seen are embarrassing.

        Fly out the applicants? How on earth would that make any sense? There were over 3000 applicants! One idea I do support is to allow applicants to rebut a review before it is discussed (as is done in some jurisdictions) but you are now being absurd. As I said, penny wise.

        • David Kent / July 21, 2016 at 05:10

          Jim,

          While I do agree with your point Re: holding to account, I think you are conflating a few issues:

          1) holding to account – this requires a live, discussion-based element amongst the peer reviewers. This year’s process lacked this and it failed. That doesn’t mean that teleconference and/or videoconference options would not have worked.
          2) the effectiveness of small semi-expert panels compared to larger panels spending less time but with more specialised expertise.

          On the first, I’m guessing we could agree that it hasn’t been tested yet (certainly not by CIHR). Where we may not agree is whether or not it should be tested.

          On the second, I would remind you that the purpose is to make a reasonable funding decision, not to bring every single reviewer onside – a smaller, less specialised panel should be able to distinguish a good review from a bad one. This gets even easier if there are 5-6 reviewers and an outlier review can be evaluated as such (either discarded or recognised as making points that others missed).

          Many agencies over here in Europe use the latter approach – written reviews from 5-6 experts per grant are requested and then a less specialised panel uses these reviews to make decisions by informing their discussions. But the panels are small as a result and involve fewer reviewers (who almost all travel shorter distances).

          I bring up the distance thing again because it’s not difficult to imagine CIHR preferentially pulling reviewers who are cheaper to transport/house. Do a fair proportion come in from Newfoundland or Saskatchewan for example? Virtual review removes the potential regional bias.

          Finally, I’ve often wondered how much of a live in-person discussion is dominated by 1-2 more extroverted reviewers who end up getting their ‘decision’ rather than the best decision. I would love to hear comments from other readers, either here or by email, who may have experienced this. If the eLife article re: f2f peer review being unable to distinguish 3rd from 20th percentile grants is to be believed, then it seems to me that the biggest tip of the scales might actually be who is in the room fighting for a particular grant… But that’s a whole other issue.

          So where do we stand? It seems reasonable to me (your comments are definitely welcome on this!) that the following happens:

          1) grants are sent to 5-6 proper experts (both inside and outside Canada) and a 1-2 page report plus score is requested.
          2) these reports are compiled and the scores assessed – anything that tallies to a score in the lower 50th percentile is chucked out.
          3) everything else is ranked by average score, and standard deviation is also reported so divergent views are picked up.
          4) reviews are sent to those in the top 50th and a 1 page response is encouraged for the panel to assess.
          5) a small panel meets face-to-face to discuss those rankings and 1-pagers and moves grants up/down based on that discussion (this would require justification by the panel).

          Seem alright? It preserves the initial intent of CIHR (cost reduction, broader review) and allows f2f discussion of those on the bubble.

          • Jim Woodgett / July 21, 2016 at 12:44

            If you had been reading the multitude of reports, you’d have seen that CIHR was incapable of even getting 4 reviews for many of the applications. Many had 3, some 2 and several 1. Getting 5 or 6 will require a very special type of magical weed. Seriously, the quality and number of reviewers were two of the major failings. So, to be clear, your proposal is about as useful as a chocolate fireguard in the practical world of 3500 or more grants needing to be assessed.

            I have few kind words for CIHR but I would never accuse them of saving airfares by not flying in people from the Maritimes, Prairies or West Coast. As a previous panel Chair, I suggested reviewers based on their expertise and CIHR did their very best to get them to agree to serve. There was never a question of distance or geographical bias. That would have, rightly, caused a revolt across the land. Seriously, have you ever looked into any of this? I look forward to your next column in which you describe a Virtual House of Commons, which has the advantage of saving airfares and accommodation (including residences) and prevents accidental elbowing to boot.

            Synchronous videoconferencing is an option; however, CIHR has repeatedly stated it is incapable of supporting this. Their communications capabilities stop at WebEx. If you have the appropriate skillz, there may be an opportunity. That would certainly be preferable to teleconference and infinitely better than asynchronous discussion, but still a major step back from face-to-face meetings. Did I mention the social benefits of mentoring young scientists, after-hours discussions, you know, science communication… Maybe all scientific conferences should be virtual.

            Lastly, domination of personalities at a panel can be an issue. It’s quite easily dealt with, as most of the people in the room are adults and there to do a good job. By contrast, virtual reviewing can allow your inner Trump to expose itself with essentially no consequence. There are lots of inner Trumps, it turns out.

            By the way, the asynchronous nature of this discussion (requiring approval of comments) is a perfect representation of the failed Project Scheme – except that you have managed to engage more than one comment. For this, I award you the designation of CIHR Virtual Chair.

            Peer review is an imperfect process. It’s hard to get right but ridiculously easy to totally screw up. CIHR tried to remove all human elements through structured processes, virtuality and statistical analysis. This is akin to someone putting a value on a Mozart symphony by counting the notes.

          • David Kent / July 21, 2016 at 13:24

            Jim, I’m not going to stoop to making personal attacks but I will wade in on a few of the points again:

            1) If CIHR refuses to support a virtual live discussion then it makes sense for them to return to f2f peer review – they’ve agreed to this, as far as my reading goes. My point was simply that they should re-consider it in light of the epic failure that this round of reviewing was.

            2) Even with 3 reviewers, it still makes sense to do a massive triage – but why is it impossible to find as many reviewers as other organisations across the world do? The bigger issue here is that there are 3800 applications – the UK equivalent (the MRC) has only 1800 across its set of programmes, and it has a bigger budget and more scientists to fund. Multiple applications from the same groups will inevitably cause this kind of impractical burden. One possible solution would be to make the awards bigger so they stretch further.

            3) re: geography – point taken. It wasn’t wise of me to include it, it distracted from the discussion more than anything.

            4) mentoring – I can see the benefit but there are many ways to interact with more experienced scientists. Obviously, a useful side benefit, but tangential at best I think.

          • Jim Woodgett / July 21, 2016 at 23:02

            I’ll conclude with a couple of observations. In the last project scheme competition there were just over 3800 applications from just over 3000 individual investigators. This was a larger number of applications than typical because there wasn’t a competition run in the Fall of 2015. But as you can see, most (>80%) submitted a single application. Most of the rest submitted 2, etc. This was not indulgence. Indeed, there is a lesson here. Most of the applicants applied BECAUSE they had lost confidence in the reliability of the adjudication process. This is a useful warning for other agencies contemplating a similar path. It is a path to ruin.

            Secondly, we have a huge problem in career mentorship and succession. Our research funding systems derive an inherent benefit from nurturing the next generation. This is not tangential in any way. If there is a lack of confidence in the future, or a disproportionate advantage for those at the more established end, we kill our pipeline. If you have time, check out the presentation of CIHR leadership to the NIH Center for Scientific Review. It is a good example of one funding agency rejecting the fallacies of another. Unfortunately, the issues predicted by the questioners are only too real for well over 3,000 Canadian scientists.

          • David Kent / July 22, 2016 at 10:27

            Jim – thanks for the link – the Q&A was particularly enlightening and I’d love to hear the answers to the same questions now. Also, thanks for all of the other opinions and resources. I’m sure readers appreciate it, I certainly did.

            Regarding mentorship – I completely agree about the need for it (and succession planning) – two areas that have been majorly lacking unless you happen to run into a nurturing training environment (Toronto has several, and a few others exist across the country in the biomedical field). One massive problem in my eyes is that many postdocs find themselves without disinterested mentors (i.e., they lack people to give them an honest opinion about their prospects) – we’ve suggested that departments could provide second mentors to postdocs (similar to how a thesis committee exists for students), optional but available as a first step.

  2. Donald Forsdyke / July 20, 2016 at 15:21

    CANADA’S FUNDAMENTAL SCIENCE REVIEW

    Yes, “Changing the manner in which things are reviewed will not actually help scientists get more money into their labs.” It was a potentially useful exercise in deckchair rearranging, while not attending to the real issues. To remedy this, there is now a blue-ribbon panel engaged in “Canada’s Fundamental Science Review.” The USA has an analogous exercise (http://rescuingbiomedicalresearch.org/the-problem/). The panel will, in the fullness of time, issue a report. Each generation of researchers seems to think the problem is unique to their time. But decade after decade we have had such reports. Decade after decade the real issues have been skirted.

    In the 1990s a group of Canadian researchers formed CARRF (the Canadian Association for Responsible Research Funding). We took time off from our research to study the subject, had formal meetings, wrote papers (many of which were published), and addressed the Canadian Standing Committee on Industry, Science and Technology (http://post.queensu.ca/~forsdyke/peerrev2.htm#CARRF ).

    So that future researchers would not have to retrace our path, in 1998 I set up Webpages to accumulate information in the area (http://post.queensu.ca/~forsdyke/peerrev.htm). I even wrote a book (“Tomorrow’s Cures Today? How to Reform the Health Research System,” 2000; http://post.queensu.ca/~forsdyke/peerrev.htm#Book ).

    Alas, CARRF has faded. Its founders have retired or died. But our work remains. Yet the webpage announcing “Canada’s Fundamental Science Review” (http://www.sciencereview.ca/eic/site/059.nsf/eng/home) lists, in the section on “Related Information,” neither the Webpage nor my book on the Canadian peer review system. It seems inconceivable that Canadian research authorities should be unaware of this extensive study (and, indeed, much other related matter). Perhaps the omission is an oversight. Perhaps, having assessed it, the authorities believe it can offer little to the contemporary debate. It would be nice to know.

  3. WA Cupples / July 20, 2016 at 17:00

    As you note, the original concept, with electronic review of, successively, the IDEA and feasibility, followed by committee review of fundable applications, had considerable attractiveness. In particular, it promised to force reviewers to address the IDEA explicitly, which most Canadian reviewers are loath to do (based on too many years on HSFC, CIHR and NHLBI panels). However, as implemented, the new process recreated all the worst features of the old system without offering any advantages. Furthermore, matching of reviewers to applicants (based on expertise) is a joke in the Canadian context. The pool is simply too small, so one has to “write to a really smart 4th year undergraduate”. Actually, that’s not bad advice anywhere. Yes, the success rate was 13%, but that too was predictable from previous funding rates and from the pent-up demand resulting from two cancelled competitions. Finally, I certainly agree that peer review is a blunt instrument, and I suspect that one should not expect any such process to do more than differentiate excellent from good and good from bad.

  4. Michael Hendricks / July 20, 2016 at 21:59

    In a few months, 13% success rates are going to look like paradise, barring a funding windfall. The same budget that paid for the Spring 2016 Project round has to pay for the next TWO rounds: Fall 2016 and Spring 2017.

    The major strain on the Project grants compared to the old OOGP system (where funding rates had already been falling for years) is that nearly half the funds have been siphoned off to large, 7-year Foundation grants for a small number (~125/yr) of mostly senior scientists. While some form of this program could make sense at some point, putting a large proportion of funding into boutique mechanisms (as well as other proliferating earmarked special programs) is short-sighted and damaging under current funding realities.

  5. Michael Kolios / July 21, 2016 at 07:47

    I have reviewed for both the NIH and CIHR (foundation stage II) in parallel in the past few months, and have seen the contrast.

    The new CIHR peer review system should in principle do well, were it not that a) it is based on assumptions about the number and conduct of expert reviewers that did not materialize, and b) Canada is too small a community to find 5 experts (who are not in a conflict of interest) to do expert review of a grant.

    What the people who designed the system did not take into account (as much as they should have) was what I have for a while called “the sociobiology of peer review”. When you are face-to-face in front of your peers, and have to summarize the strengths and weaknesses of an application within a short time period, you usually have to prepare well – you’re held accountable for your opinions, live and in front of your peers. You do not want to be perceived as not having done your job, or as being uninformed. So there is pressure to get it right and do your due diligence and preparation.

    In the new peer review system, I witnessed behavioral phenotypes that I had never seen in a face-to-face review: people not submitting reviews (at all, or on time), almost no discussion in certain instances, conflicting opinions discussed very superficially (if discussed at all), and a process that was generally inefficient at reaching scientific consensus on technical matters of approach and feasibility. In principle, there should have been a vigorous and thoughtful discussion online, but the realities of expert reviewers’ busy schedules, different time zones, and the difficulty and inefficiency of explaining complex concepts online clearly demonstrated how difficult this type of review is to realize. Couple that with odd statements/opinions from people not in the field (due to the limited number of expert reviewers) going unscrutinized (due to the inefficiency noted above), and you get a dynamic that I have not seen before.

    Yes, sometimes there are dominant “extroverted reviewers”, and it might seem that they dominate the conversation. However, it has been my experience that even in the presence of such reviewers, the wisdom of the expert crowd that eventually scores the applications gets it right most of the time. Moreover, more often than not, these dominant reviewers get called out when making statements that stretch the truth or are based more on opinion than on fact. They then acquire a reputation that moderates their impact on such panels, so it is a bit self-correcting.

    The 13% success rate should be de-coupled from the peer review question. It matters in that, when the success rate is so low (almost 9 out of 10 not funded!), faith in the peer review system becomes critical. Absolutely critical. Applicants will work hard and play by the rules if they think that hard work and being a good citizen pays off, and that the system is as meritocratic as humans can make such a system. We all realize this is a human endeavor, and we cannot always get it right. But if applicants think there is too much random chance, then all kinds of odd behaviors emerge, none good for the future of science.

    The success rate problem is of a different nature and kind.

    Having said that, there seems to be a convergence on a CIHR peer review system that is some type of hybrid of ideas that might work out well. But it is up to us as a community to show the leadership required to get this right.

    • David Kent / July 22, 2016 at 10:36

      Thanks for these comments, Michael – nice to hear about the head-to-head comparison between NIH and CIHR from a second perspective.

      I agree that these are different issues, although as you point out, they end up putting strain on each other (bad review = big problems when rates are so low). Hopefully CIHR can identify a way to get more expert reviewers from other countries to reduce the local strain while increasing numbers. Also, fingers crossed that application numbers go down after this blip of missing competitions.
