
Let’s remove editorial subjectivity from peer review

In the spirit of openness, we need a quantifiable, binding scoring system.

By DAVID KENT | JUL 20 2015

The process of peer review (and its consequent inequities) is a perennial issue on this site, and the solutions we have previously pitched have been dismissed with words like “impossible,” “unreasonable” and “idealistic.” In this article, I aim for a micro-revolution: something that is easy to implement and delivers more objectivity to a largely subjective process. I want to see a binding scoring system in peer review.

For non-academics reading along, it is important to highlight why peer review is so important to get right. It determines the disbursement of billions of dollars internationally, it determines which scientists’ careers progress and which come to an abrupt halt, and it is the theoretical linchpin of maintaining high-quality, unbiased research. Peer review is a core component of research centre renewals, academic promotion, article/book publishing, research grant awarding, and even conference presentations – in short, it has the power to direct all aspects of the research we undertake, and it has fundamental problems.

Over the last decade I have been involved in numerous peer-review decisions for my own grants, papers, conferences, etc. Sometimes, you get terrible reviews that doom your chances and other times you get a free pass through to publication or pots of money because of a sympathetic ear. Sometimes, three reviews at one journal are completely different from three reviews at another similarly impactful journal.

The current system for journal articles tends to have categorical values for peer reviewers – “minor revisions,” “major revisions” and “reject” – but the individual assigning those categories has no way of quantifying how close they were to changing their mind. I want to see this quantified, putting more power into the hands of the peer reviewers: if something is terrible, give it a score of 1 to emphasize how bad it was, making it difficult for the editor to overturn, and have the journal publish the reviewer scores.

As a short aside, one of the most satisfying procedural peer-review processes that I have experienced was for something I was not actually awarded – a Medical Research Council Career Development Award. Months after the application, I received a notification that I would not be interviewed, accompanied by six reviews with individual scores. Did I agree with the reviews? No, not really – I received both a “1” (lowest) and a “6” (highest) in addition to 4s and 5s. But did I respect the process? Absolutely.

Some journals have begun to publish the actual reviews themselves (and this is great!), but at a minimum the journal should have to publish the number of reviewers and their scores on the final review – that way, readers can assess the journal for its consistency and even compare their own article scores to know whether they have been unfairly treated. As scientists, we are constantly asked to be open source, open access, and generally open – why not ask the same of the journals?

In my opinion, the editor’s main job comes before peer review – they make the assessment of “should this work prove scientifically sound and novel, we would be keen to publish it in our journal.” If the reviews come back with low scores, the journal should be compelled not to publish; if they come back with unanimously high scores, the journal should be compelled to publish.

I would also suggest that some element of this is binding (with journals setting the bounds and giving clear instructions). I realize that this may put unfair power into the hands of unscrupulous peer reviewers so I would also suggest that a maximum or minimum score would trigger an extra set of questions requiring additional justification (e.g., fundamental scientific flaws or field-changing study). This would allow an editor to consult further and have clear justification for an over-ruling of other reviewers.
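To make the proposal concrete, the decision rule sketched in the last two paragraphs could be expressed as a simple triage procedure. This is an illustrative sketch only: the 1–6 scale echoes the MRC example above, but the specific cut-offs and the `triage` function are my assumptions, not values from any journal's policy.

```python
def triage(scores, justifications=None):
    """Return an editorial decision from reviewer scores (1 = worst, 6 = best).

    Per the proposal, an extreme score (1 or 6) only counts as binding when
    the reviewer has supplied extra justification (e.g., "fundamental
    scientific flaws" or "field-changing study"); otherwise the editor
    consults further before any binding outcome applies.
    """
    justifications = justifications or {}
    for i, s in enumerate(scores):
        if s in (1, 6) and i not in justifications:
            return "consult editor"  # extreme score lacks required justification
    if all(s >= 5 for s in scores):
        return "accept"          # unanimously high: journal compelled to publish
    if all(s <= 2 for s in scores):
        return "reject"          # unanimously low: journal compelled not to publish
    return "editor decides"      # mixed scores: normal editorial discretion

print(triage([5, 6, 5], justifications={1: "field-changing study"}))  # accept
print(triage([1, 2, 2]))  # consult editor (the "1" lacks justification)
print(triage([3, 5, 4]))  # editor decides
```

The point of the extra-justification branch is that it preserves a documented escape hatch: an editor can still overrule, but only against a written record rather than an unexplained number.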

While I don’t think we can change everything at once, I do think that individual journals, granting agencies, and academics can adopt small policies like quantitative scoring that help the field remove some of the subjectivity from the process, which is chewing up careers and unfairly pushing bad science through on the back of a single (often very loud) opinion.

David Kent
David Kent is a group leader at the University of Cambridge in the Cambridge Stem Cell Institute. His laboratory's research focuses on fate choice in single blood stem cells and how changes in their regulation lead to cancers. David is currently the Stem Cell Institute’s Public Engagement Champion and has a long history of public engagement and outreach including the creation of The Black Hole in 2009.