For those who have not yet heard, the CIHR plans to make major changes to their funding mechanism for health scientists. Last week, at the height of summer vacation, the CIHR released a “What CIHR heard” document that summarizes the feedback they received on the proposed changes.
At first, I was simply going to pick out the items I thought were of highest relevance to early career researchers, but as I perused the document, I noticed something quite interesting. At the end of every survey question, the CIHR lists the groups of researchers (organized by field and seniority) who agree or disagree most strongly with each of the proposed changes, and in every instance of disagreement one category (out of 12) is always listed: Senior Biomedical (Basic) Researchers. It might be unfair to suggest it, but as this represents the most entrenched set of researchers (those who have been in the system the longest and belong to the biggest CIHR research tier), it stands to reason that maybe, just maybe, they are trying to protect the system that keeps them at the top with the most resources.
Below I have pulled out their levels of disagreement with each proposed change:
- Integrated Knowledge Translation (46%)
- Multi-phased competition process (57%)
- Application Focused Review (25%)
- Use of Structured Review Criteria (35%)
- Remote Screening Process (49%)
- College of Reviewers (37%)
Overall, the picture is quite different, with many groups (especially in younger researchers and those in applied research areas) showing much stronger support for the proposed reforms. It seems to me that those who want things to change the most are those who do not have access to as many resources and hope that CIHR’s reforms will help redistribute the wealth.
In the very comprehensive report, numerous ideas, complaints, and comments were provided, some of which I would like to highlight:
Too many eggs in one basket – the proposed structure states that an investigator who already holds a program grant cannot apply for a second grant (program or project). This prompted many concerns about what happens in the case of poor renewal rates (especially during the transition period from multiple grants to single grants) and whether it would stymie collaboration between labs that each already hold a program grant (i.e., they would be unable to apply for a joint grant). This was one of the main points we raised in our analysis as well, and I think the CIHR needs to think hard about the best solution to this concern.
Incentives – some respondents thought that providing incentives for reviewers would encourage more qualified reviewers to participate. Suggestions included honoraria, deadline extensions on one’s own applications, and increased value or duration of one’s own grant. Personally, I do not think incentives are the way to go – it seems to me that the privilege of reviewing the country’s best research and the interactions with other reviewers are reward enough. However, I could support penalizing investigators who do not participate in peer review (e.g., ineligibility to apply for future grants, limited access to funding, etc.).
Fear of the multi-phased process – one of the items we most strongly supported was the blinded first stage of project grants, which aims to cull poor proposals even when they come from excellent scientists. If you write a poor proposal, you don’t deserve to get funded, no matter who you are. Support was not as strong amongst the CIHR’s respondents, who registered concerns about the inability to assess applications anonymously and an increase in re-submissions of proposals that didn’t make the cut. I wonder where our readers stand?
Many hands make the labour light – another item that provoked some concern was reviewer fatigue from having more reviewers (5-8) on every application. Although each of these assessments is intended to take less time, some respondents believed that spending less time per application would lead to poor decisions. I disagree. To emphasize this point, I will draw an analogy that scientists should appreciate. If presented with the following two sets of data, which do you have more confidence in?
- A well-respected, well-resourced lab publishes a very detailed study that identifies Protein X as a marker of advanced colon cancer.
- 10 distinct groups from different institutions across the world all identify Protein Y as a marker of advanced colon cancer. The data in each individual study are not as detailed or exhaustive as the first study.
If you had to choose one to add to your diagnostic tool for colon cancer, which would you choose? Does reproducibility across many groups outweigh the strength of the single, detailed study?
This seems to be the rationale for increasing the number of reviewers per application. The CIHR is asking for less detailed, less rigorous reviews of a less comprehensive proposal. If one reviewer misses the point or over-hypes an application, their impact is curbed by the additional reviewers. If a grant gets past everybody, chances are it has some merit. The subsequent in-depth review is meant to clean up the remaining issues and fund (or not fund) based on merit.
Overall, the CIHR has a lot to think about. The report they produced does not suggest much about what the outcomes will be, but does say that final decisions will be released in the autumn. We’ll stay tuned and be sure to inform our readers of its release.