The Black Hole

We need to examine the dark corners of peer review

BY DAVID KENT | SEP 30 2015

One of the best peer-review processes I have experienced is one that didn’t give me a great outcome. I was applying for one of my first research grants with the U.K. Medical Research Council and did not get an interview based on the reviews of six experts, each of whom had read my research proposal and submitted their comments. The scores were out of 6 (1 = poor, 6 = exceptional) and I received the following results: 6, 5, 5, 5, 4, 1. Heartbroken not to get an interview, I started asking questions about the person who scored me a 1 – somebody really did not want me to get that money.

However, when I looked more closely at the “6” score, I realized that this reviewer had done just as poor a job of reading my proposal, and that this should have elicited a similar, albeit opposite, reaction: “to whom should I be sending a bottle of champagne?” It turns out that the MRC panel (according to the feedback they sent) was more inclined to look at the median of these reviews, and my proposal was just on the wrong side of the fence in this instance – part of me still blames the “1”, but one must move on.

Why am I telling you this story? Because the vast majority of big decisions (papers, grants, fellowships, etc.) do not go through such a robust review process – rather, they rely on fewer reviewers/editors and do not give any access to the “panel discussions” that are held. Yet the stakes are incredibly high – we’re talking about making and breaking careers, disbursing hundreds of millions of dollars in research funding, and (by extension) directing which research gets translated into real benefits to society. Should all of this be left to a system with so much subjectivity? How can we improve what we don’t understand?

One thing that became abundantly clear from my MRC story is that the variability of peer review is quite startling. A random selection of two reviewers from my six experts could have produced anything from a 1+4 to a 5+6. Some might say that you may as well not even send it for review. The MRC recruited six reviewers, allowing them to see which reviews were outliers (e.g., the 6 and the 1). However, this sort of process is a tall order and the supply of peer reviewers would be quickly exhausted, so I can appreciate the difficulties in applying this across the board to all peer review.
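The variability point can be made concrete with a few lines of Python – an illustrative sketch using the six scores quoted above, not anything the MRC actually computes:

```python
from itertools import combinations

# The six reviewer scores quoted above (1 = poor, 6 = exceptional)
scores = [6, 5, 5, 5, 4, 1]

# Every possible two-reviewer panel drawn from those six experts
pair_averages = sorted(sum(pair) / 2 for pair in combinations(scores, 2))

# The same proposal could have averaged anywhere from 2.5 (the 1 and the 4)
# to 5.5 (a 5 and the 6), depending purely on which two reviewers were picked
print(f"{len(pair_averages)} possible panels, "
      f"averages from {pair_averages[0]} to {pair_averages[-1]}")
```

With only two reviewers, the same proposal spans nearly the full scoring range; recruiting six reviewers is what lets the outliers show up as outliers.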

Another of the main components of how a scientist is evaluated within these panels is their track record (e.g., where they have published in the past), and this brings me to the darkest of corners within peer review – the process of evaluating a research manuscript at an academic journal. Papers are triaged by an editor and, if successful, are sent to two or three experts for assessment. The assessment returns and all manner of hell breaks loose. In some cases, it is very clear – unanimous rejection of a poor manuscript is relatively common. However, darkness in the form of subjectivity then creeps in: the reviewers do not agree on the paper’s merits and the editor’s opinion factors in. Which reviewer counts for more? What have they said in the “private comments” to the editor? What has the editorial board discussed? eLife has a process for this which I think is great, but it still doesn’t solve all the problems.

What is forgotten (or at least not discussed much) is that a journal editor chooses who to send the paper to. Conceivably, they know who is a hard reviewer, they know who is an easy ride, and they can select based on their preference. Some editors are non-confrontational and will select a hard reviewer in order to give them ammunition to reject a paper; other editors have favourites within the scientific research community and will always send their papers to soft or friendly reviewers. I’ve heard a lot about particular journals and particular practices, but I also know several journal editors with very high standards for their profession – of course these differences exist, but how can we manage them?

We have three main problems:

  1. supply of reviewers
  2. variability of reviews
  3. subjectivity of editors

For the first, it has been suggested (and undertaken by some granting agencies) to pay people for peer review. I have no doubt that this would increase the number of peer reviewers; however, I’m really not convinced it would increase the quality and, in fact, I think it would more likely do the opposite.

For the second, personally, I think the best way is to unmask the reviewer, making them accountable for the review they put on paper. Many of my peers think this is an unreasonable request to make of reviewers and that it would put an even bigger strain on the supply of reviewers. Perhaps, but one can dream, right? A happy medium could be to publish the names of the reviewers on successful grants/papers. It doesn’t solve the “1 score” problem, but it would do something to address the “6 score” since their name would be forever associated with a paper, fellowship or grant that was successful (i.e., has their seal of approval).

The third is the most difficult because it tries to tackle a very powerful establishment: the academic publishing system. Editors are not accountable to anybody except within their own journal, which operates for the success of that journal. This can result in horrendous practices to a) increase a journal’s impact factor; b) delay publication of work against the reviewers’ recommendations; or c) skew the evaluation process by selective peer reviewer assignment. One idea that has intrigued me lately would be to have a scoring system (perhaps similar to the MRC process of 1-6) where papers with the highest scores are published and, if there is a discrepancy (e.g., a lower-scored paper gets published instead of a higher-scored one), a formal editorial assessment would need to accompany that paper to explain why it was still published.

Overall, it seems that the scientific community tolerates all of these problems, mostly because those that benefit are the ones that already have significant power. In the eyes of this young researcher who needs to spend the next few decades inside the system, peer review is the sector of academic science most in urgent need of serious restructuring.

ABOUT DAVID KENT
Dr. David Kent is a principal investigator at the York Biomedical Research Institute at the University of York, York, UK. He trained at Western University and the University of British Columbia before spending 10 years at the University of Cambridge, UK where he ran his research group until 2019. His laboratory's research focuses on the fundamental biology of blood stem cells and how changes in their regulation lead to cancers. David has a long history of public engagement and outreach including the creation of The Black Hole in 2009.
COMMENTS

  1. Susan Rvachew / October 2, 2015 at 09:47

    If I understand your story correctly you did not get the grant because the median of your scores (5) was below the cut-off. So in this case only applicants with a median score of 6 would be eligible. And you call this the best system? I don’t think that the problem is with the peer review system; the problem is the shortage of research funds relative to the number of people competing for them. When the cut-off scores are that high and the success rates are as low as they are right now there is no possibility of achieving score variances that are “low enough” – many perfectly good proposals must be cut out and so the system is designed to yield arbitrary results by necessity. Focusing on the “1” score cannot be healthy. People keep suggesting solutions that are focused on the reviewers even though this is not the locus of the problem. Increasing the workload of the reviewers (which is just one side effect of the excessive competition in the system currently) is not going to improve the quality of the reviews.

    • David Kent / October 7, 2015 at 16:42

      Hi Susan – I guess my median was 4.5 officially and people who had 5, 5.5, or 6 would have gone forward. Perhaps all the 4.5s were discussed at the panel and some made it while others did not. I don’t know the exact process, but while I didn’t get the funding, I could see much more about the process and understand the criticism – an answer of “good job, but could have been better”.
      That said, I completely agree that increasing the workload is not a productive way forward – the quality will (and has probably already) suffered.
      Ideally, I would unmask the reviewers – make people stand behind their opinion… for every person you might offend, there will be several others who agree with you and will score you extra points for integrity and high quality reviewing. I don’t understand why more researchers are hesitant to unveil the dirty secrets of peer review – if you think a paper or grant has a fundamental flaw, then say so. If not, then don’t be so careless with your words and your review.
      It will take an incredible amount of effort to change things, but hopefully the current strain on the system can be used to achieve a positive outcome (improved review and improved career progression).
      Thanks for your comments!
      Dave

  2. Prof. Jack M. Miller / October 7, 2015 at 15:31

    Just a comment on both journals and granting agencies. Some of each will, in the case of disagreement between reviewers, send the manuscript or application out to a third or whatever referee to adjudicate the conflicting reviews. I served for 5+ years on a panel that only met (in conference call) to deal with grants on which the referees were split. Interestingly we were a multidisciplinary panel and as often as not it was the person most remote from the field that provided the best evaluation. I have also adjudicated many journal papers where the referees were split. Many Chemistry journals have this policy. It is over 50 years since I submitted my first manuscript as a graduate student. You win some and lose some. If you are convinced you are right send it to another journal. I once had 2 papers rejected by 2 international journals. Other than making the obvious minor corrections suggested before their rejection, I simply turned the 2 papers around to the opposite journals and they were accepted without any comments.

    • David Kent / October 7, 2015 at 16:35

      Hi Jack – to me, your experience (the turning around of a paper without changes to different journals) underscores one of the fundamental problems with the system – surely there should be consistency? What an incredible waste of time (both author and reviewer) if the whole process boils down to “getting lucky” with the reviewers… I think more transparency is the way to go, if those reviewers had assigned their names to the reviews, would they be more consistent?

  3. Donald Forsdyke / October 7, 2015 at 16:04

    This is the standard letter we have read hundreds of times over decades. Just change a few details and one could very well write the letter using a template form. It often comes from people at an early stage of their career. Sometimes it comes from the survivors – those who have long managed to beat the system, as in the recent turbulence in the USA.
    It will no longer do to just get the problem off your chest and then go back to the laboratory. Many of us took a lot of ‘time off’ in the latter decades of the 20th century to try to get some reforms. Sadly, to no avail. For more, please check out some of the ideas in my webpage.

    • David Kent / October 7, 2015 at 16:31

      Hi Donald,
      First off, thanks for the comment – it’s important to get these comments from colleagues who have seen and heard the stories over the years. I also had a browse through your site; many similar themes indeed. What I wasn’t clear on was what the actual solution would be – how would you suggest we go about making the process better? For every needle in the haystack (i.e., a brilliant researcher who does not get funded) I would assume that there are dozens if not hundreds of sub-par ideas/researchers who have not been funded as well – how does one identify the needle?
      That said, I completely agree with the few baskets, many eggs argument you put forward – this is very common in life sciences research and often results in multiple funding agencies funding the same idea(s) because they are fashionable and utilise the most recent sexy technologies. The number of useless genome sequencing projects stands out as a clear example.
      I was interested to see the outline of your book – Jonathan and I are just now embarking on a book of our own based on the Black Hole writings and other bits and bobs. If you have any advice, we’d certainly appreciate it (contact@scienceadvocacy.org).
      Again, thanks for the comments.
      Dave

  4. Reuben Kaufman / October 8, 2015 at 13:39

    I do frequent reviews for journals and occasional ones for granting agencies. I always compose my reviews as if my identity were to be revealed, even though I know it won’t be. So I really don’t think the quality of MY reviews would change one way or t’other. However, it would be interesting to know the extent to which other reviewers feel the same about revealing their identity. I would hope it would be the (vast?) majority; although it would be naive to think it to be 100%! After all, one has an ethical responsibility here. Aren’t the (vast?) majority of us ethical? That said, I agree 100% that the reviewer’s identity should be revealed. Perhaps there’s a research project here in determining the statistics on how anonymity of the review process really does affect the nature/quality of the reviews….. I volunteer to be a reviewer for such a project! 🙂