The Black Hole

Could fake grants help clean up the peer review process?

By committing to this exercise, we might better understand why and how some grants get funded despite fatal flaws.

BY DAVID KENT | NOV 07 2018

My office mate and I were discussing the apparent randomness of grant assessment the other day. We traded stories of amazing grants that got shafted and horrible grants that somehow got cut a cheque for hundreds of thousands of dollars. In academia, we all have these tales, but funnily enough we don’t always agree on what “amazing” and “horrible” are. So how can we possibly improve the situation?

In the face of such subjectivity, we wondered (we are scientists after all!) whether we could objectify the problem – could we create a black and white decision in the world of grant writing? The result of our chat – fake grants.

What would it look like?

The fake grant would need to be believable (excellent, even), but would also need to contain some fatal flaws that should be obvious to anyone who reads the grant properly. The flaws could range from the simple (work already done by others, but not mentioned) to the technical (100-fold too much of something in an experimental protocol) to conceptual inconsistencies (stating in one place that if A happens then B must be true, and elsewhere that if A happens then B must not be true). One can imagine dozens of ways to sabotage a grant, and a few trial runs should produce something that strikes the right balance between believability and critical mistakes.

How would it work?

This would depend on the granting agency’s current peer review system. For smaller agencies, where reviewers only receive one or two grants to assess, introducing fake grants doesn’t seem particularly efficient. It would be more useful to roll this out at national and international granting agencies, where the impact and utility of fake grants could be assessed systematically. As a reviewer, you might receive eight or 10 proposals, one of which could be a fake grant. You would change nothing about the way you review, but perhaps an option to flag a proposal as a “fake grant” could be included to fully assess the utility of the system (side bonus question: how many real grants would get flagged as fake?).

Big brother is watching

Just like we all slow down when driving past a police car or a speed camera, the fear of being caught can be quite effective in altering behaviour. If you thought that one of the 10 grants in your pile might be a fake grant, you wouldn’t want to miss it. In the first instance, fake grants would need to be reasonably common to instill this awareness. After this initial phase, the number of fake grants could be brought down so as to not waste too much reviewer time.

The tricky part: finding volunteers

Sadly, unless reviewers were blinded to researcher identity, it would be nearly impossible to also have a fake applicant. Academics are very findable people (websites, papers, etc.) and research fields are often small enough that you would know the person didn’t really exist. Blinding grants would be an excellent solution, but in its absence each agency would need a few heavy-hitting volunteers to agree to write a fake grant – the backbone of the grant could be largely similar to others they have written (or perhaps submitted the year prior), but key changes would need to be made to introduce the fatal flaws discussed above. The ideal volunteers would be senior, established researchers who would typically be inclined to receive favourable reviews based on track record and research environment.

Final thoughts

At worst, this relatively low-cost, easy-to-roll-out idea could confirm that a grant review process is robust and that all fake grants are identified and dismissed – it might annoy some reviewers to know they invested time in a fake grant, but that would be a small price to pay. At best, the process could help identify core problems in the review process – why and how do some grants make it through despite fatal flaws? Once we understand the problem, we can better design strategies to fix it – fake grants might just be an easy way to start.

ABOUT DAVID KENT
David Kent
Dr. David Kent is a principal investigator at the York Biomedical Research Institute at the University of York, York, UK. He trained at Western University and the University of British Columbia before spending 10 years at the University of Cambridge, UK where he ran his research group until 2019. His laboratory's research focuses on the fundamental biology of blood stem cells and how changes in their regulation lead to cancers. David has a long history of public engagement and outreach including the creation of The Black Hole in 2009.
COMMENTS

  1. Andrew Gow / November 7, 2018 at 13:52

    This is a truly ridiculous proposal — almost as ridiculous as the current system for allocating grants! I would refuse to act as a reviewer, and especially to act as a fake applicant. I am sure most colleagues with half a grain of ethical sense would also refuse. As one recent study showed, simply allocating funding equally among all eligible researchers could produce equal or even better outcomes by slashing administration costs and empowering academics to simply get on with the work we were trained and have been accredited to do: https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0183967#pone-0183967-t001

    • David Kent / November 8, 2018 at 04:50

      Hi Andrew – I hear what you are saying and I’m not suggesting widespread use of these – something like a frequency of 1 in 200 would likely do the trick in a large enough grant competition like the ones administered by the Tri-Council. The vast majority of reviewers would never see a fake proposal and the agency could still learn a lot from the exercise.

      Thanks for bringing the PLoS study to our readers’ attention – another interesting study came out of eLife which suggested that peer review was very effective for identifying the top 3-5% of grants, but pretty rubbish beyond that. They also concluded that beyond that first 3-5%, strategic and/or random distribution would potentially be more effective (https://universityaffairs.ca/opinion/the-black-hole/peer-review-as-a-poor-predictor-of-impact/)

  2. Antony Puddephatt / November 7, 2018 at 14:51

    I have to agree with Andrew that it is not the best idea. We just had the “Sokal squared” hoax, which called into question the validity of what the authors termed “grievance based” cultural studies, as 7 of 20 fake (and fairly absurd) papers were accepted by leading humanities journals. The outcry that followed tended toward wide agreement that the research was fundamentally unethical, probably never went before an institutional REB for approval, and so should not even exist. I actually think hoaxes like these can be somewhat telling about institutional problems in academia, so their results should not be brushed aside entirely, and they may even serve an interesting intellectual function, prompting us to reflect and help improve things. The Sokal hoax from 20 years ago, for example, led to very interesting discussions of the use and validity of postmodern scholarship, and rich reflections on the connection of knowledge to politics and progress. Still, studies like this are probably too unethical, and too much of a waste of people’s intellectual time, to be something to plan for going forward, so our ethics boards would not, and should not, allow it.

  3. James Deaville / November 7, 2018 at 17:27

    Above and beyond the comments already expressed, I don’t like the use of the adjective “fake,” which valorizes the current tendency to dismiss anything disagreeable or oppositional as “fake.”

  4. Brian Cox / November 8, 2018 at 07:11

    I admire the proposal for its scientific merits. By putting in a set of doped standards, you could identify applications that align with the doped standard in a much more quantitative manner. However, I don’t believe this would actually fix the problem with the peer-review process. The true problem with peer review is its complexity. Grants that often involve multidisciplinary research are extremely difficult to review, and assembling a panel of reviewers who can speak to each of the different areas within a grant is very challenging. Additionally, the size and scope of the grants make reviewing a long and daunting task, and the sheer number of grants one must review on granting panels leads to burnout and an inability to coherently evaluate each grant. What we really need is a system that does not involve a lengthy process of prospective granting, where the applicant tries to predict the future and convince the reviewer that their version of the future is valid enough to warrant money. What this leads to is a constant demand for more pilot data, or data to prove an aim of the proposal. But where is the money, in the meantime, for a project which has no base funding? Several of my friends at the NIH and at the CIHR joke that we have a research reimbursement program, not a grant program, as so much work needs to be completed before you can be funded. To make a truly quantitative metric, we unfortunately need to apply business rules. We need to develop a retrospective granting system. In this system one evaluates a set of quantifiable metrics, such as number of students trained, number of papers published, number of lectures invited, or other metrics that can be field specific. There would no longer be a proposal, rather a short summary of what you have accomplished.

  5. Bryan D Fantie / November 9, 2018 at 17:17

    Could we please stop calling grant proposals “grants”? People write proposals, funding entities provide grants.

    Thank you…I feel better now.
