“Fundable, but not funded.”
This is one of the most heartbreaking phrases a scientist can receive in their inbox: it tells them that an organization judged their work to be of sufficient merit to warrant funding, but that other research proposals were rated higher. Bad luck.
We have written before about the arbitrary nature of funding decisions that hover around the funded/not-funded line, and one study of grant outcomes even suggested that it might be worth randomly assigning grants at a certain stage to save time and resources, with no impact on the quality of research funded. The current process is clearly a problem, with many scientists pouring so much time into crafting grant proposals that end up fundable but not funded. Why can’t there be a better system?
Enter the “holding pen”
Major national and international funding agencies typically invest considerable resources in recruiting peer reviewers and coordinating panels to evaluate grant proposals. Often more than five expert reviewers are engaged to provide opinions (my last evaluated proposal had eight), and they are generally good at picking out the main strengths and weaknesses of a particular grant. These reviews are passed on to a panel of scientists who make the final decision on what does and does not get funded, and the system churns on. A scientist in receipt of the bad news then generally takes the grant to another funding organization in hopes of getting luckier the second time around.
The model I imagine is that a major organization like the National Institutes of Health (NIH) or Medical Research Council (MRC) makes its reviewer comments available to other funding organizations for grants that “just miss” getting funded. Applicants could be asked whether they would like their application and comments passed on to third-party organizations (there could even be a system for incorporating short “rebuttals”). Approved organizations could log onto a secure system and review the proposals and reviewer/panel comments in the areas they wish to fund, allowing a charity focused on a specific field (e.g., blood cancers) to look through the list of nearly funded proposals and say “we’ll fund that.” In the process, these smaller organizations would avoid the costly and time-consuming work of organizing their own peer review panels. If grants were structured as related work packages, a charity could even opt to fund part of a study rather than the whole, where the full costs did not align with its typical award sizes. In many cases, it would also increase the number of experts assessing a grant for a smaller organization (e.g., five or six reviewers versus the two or three more common with smaller groups). Win, win, win.
There is precedent for this sort of structure: the same process is happening for an increasing number of academic papers. Many scientists aim for a high-impact journal with an initial submission and then move down the ladder after (sometimes quite numerous) letters of rejection, a process that can take months or even years before a paper is finally accepted. One key development that has improved timelines is “fast-track” review, where a journal assesses a paper alongside the reviewer comments from another journal and decides whether to publish it. This is especially popular when journals are in the same “family” (e.g., Nature Communications, Cell Reports, and Blood Advances are linked to Nature, Cell, and Blood, respectively). It saves enormously on obtaining additional reviews that would often point out the same issues as the originals. From the reviewer’s perspective, an equivalent system for grants would reduce the overall number of review requests, and it would avoid the same grant being reviewed by the same person for two different funders, or a sub-par grant being reviewed dozens of times across larger and smaller organizations.
I’m not suggesting that this be rolled out as the only way smaller organizations decide what to fund – panels and reviewers at larger organizations like the NIH and MRC have specific priorities in mind that may not align with those of an organization combing through the proposals bounced into the “holding pen.” It could act as an option to fast-track some funding decisions, and in cases where priorities do align, it could become the sole mechanism of review, saving charities and foundations substantial time and money that they could use to raise additional funds or support more grants (or both!). These sorts of money- and time-saving ideas are exactly what organizations should be exploring in the post-COVID-19 world, when a number of charities have struggled to maintain their incomes amid cancelled events and shifted priorities.
Are there pitfalls to major granting organizations sharing peer review comments with charities and other approved funders in a safe and secure manner? Are grant review criteria across organizations so different that reviewers would evaluate the same project very differently? As always, we’d be delighted to hear what readers think – let us know in the comments or by email at email@example.com.