One of the best peer-review processes I have experienced is one from which I didn't get a great outcome. I was applying for one of my first research grants with the U.K. Medical Research Council (MRC) and did not get an interview, based on the reviews of six experts who had each read my research proposal and submitted their comments. The scores were out of 6 (1 = poor, 6 = exceptional) and I received the following results: 6, 5, 5, 5, 4, 1. Heartbroken not to get an interview, I started asking questions about the person who scored me a 1 – somebody really did not want me to get that money.
However, when I looked more closely at the "6" score, I realized that they'd done just as poor a job of reading my proposal, and this should have elicited a similar, albeit opposite, reaction: "to whom should I be sending a bottle of champagne?" It turns out that the MRC panel (according to the feedback they sent) were more inclined to look at the median of these reviews, and my proposal was just on the wrong side of the fence in this instance – part of me still blames the "1", but one must move on.
Why am I telling you this story? It's because the vast majority of big decisions (papers, grants, fellowships, etc.) do not have such a robust review process – rather, they rely on fewer reviewers/editors and give no access to the "panel discussions" that are held. Yet the stakes are incredibly high – we're talking about making and breaking careers, disbursing hundreds of millions of dollars in research funding, and (by extension) directing which research gets translated into real benefits to society. Should all of this be left to a system with such a high degree of subjectivity? How can we improve what we don't understand?
One thing that became abundantly clear from my MRC story is that the variability of peer review is quite startling. A random selection of two reviewers from my six experts could have produced anything from 1+4 to 5+6. Some might say you may as well not send it for review at all. By recruiting six reviewers, the MRC could see which reviews were outliers (e.g., the 6 and the 1). However, this sort of process is a tall order and would quickly exhaust the supply of peer reviewers, so I can appreciate the difficulty of applying it across the board to all peer review.
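To make that variability concrete, here is a minimal Python sketch – the six scores are from my application, everything else is purely illustrative – that enumerates every two-reviewer panel that could have been drawn from my reviews:

```python
import itertools
import statistics

# The six reviewer scores from my MRC application (1 = poor, 6 = exceptional).
scores = [6, 5, 5, 5, 4, 1]

# Every pair of reviewers a two-reviewer process could have drawn.
pairs = list(itertools.combinations(scores, 2))
means = [statistics.mean(pair) for pair in pairs]

print(f"{len(pairs)} possible two-reviewer panels")
print(f"panel averages range from {min(means)} to {max(means)}")
print(f"median of all six scores: {statistics.median(scores)}")
```

Fifteen possible panels, with averages anywhere from 2.5 to 5.5 – recruiting six reviewers and taking the median (here, 5) is precisely what blunts the influence of a single outlier.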
Another main component of how a scientist is evaluated within these panels is their track record (e.g., where they have published in the past), and this brings me to the darkest of corners within peer review – the process of evaluating a research manuscript at an academic journal. Papers are triaged by an editor and, if successful, are sent to two or three experts for assessment. The assessments return and all manner of hell breaks loose. In some cases, it is very clear – unanimous rejection of a poor manuscript is relatively common. However, darkness in the form of subjectivity then creeps in: the reviewers do not agree on the paper's merits and the editor's opinion factors in. Which reviewer counts for more? What have they said in the "private comments" to the editor? What has the editorial board discussed? eLife has a process for this that I think is great, but it still doesn't solve all the problems.
What is forgotten (or at least not discussed much) is that a journal editor chooses whom to send the paper to. Conceivably, they know who is a hard reviewer and who is an easy ride, and they can select based on their preference. Some editors are non-confrontational and will select a hard reviewer to give themselves ammunition to reject a paper; other editors have favourites within the scientific research community and will always send their papers to soft or friendly reviewers. I've heard a lot about particular journals and particular practices, but I also know several journal editors with very high standards for their profession – of course these differences exist, but how can we manage them?
We have three main problems:
- supply of reviewers
- variability of reviews
- subjectivity of editors
For the first, it has been suggested (and undertaken by some granting agencies) that people be paid for peer review. I have no doubt this would increase the number of peer reviewers; however, I'm really not convinced it would increase the quality of reviews – in fact, I think it would more likely do the opposite.
For the second, I personally think the best way is to unmask the reviewer, making them accountable for the review they put on paper. Many of my peers think this is an unreasonable request to make of reviewers and would put an even bigger strain on the supply of reviewers. Perhaps, but one can dream, right? A happy medium could be to publish the names of the reviewers on successful grants/papers. It doesn't solve the "1 score" problem, but it would do something to address the "6 score", since the reviewer's name would be forever associated with a paper, fellowship or grant that was successful (i.e., would carry their seal of approval).
The third is the most difficult because it tries to tackle a very powerful establishment: the academic publishing system. Editors are not accountable to anybody except within their own journal, which operates for the success of that journal. This can result in horrendous practices: a) inflating a journal's impact factor; b) delaying publication of work against the reviewers' recommendations; or c) skewing the evaluation process through selective reviewer assignment. One idea that has intrigued me lately would be a scoring system (perhaps similar to the MRC's 1–6 scale) in which the highest-scoring papers are published, and any discrepancy (e.g., a lower-scored paper being published ahead of a higher-scored one) would have to be accompanied by a formal editorial assessment explaining why it was still published.
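Here is a minimal sketch of how that rule could work, assuming each paper carries a median reviewer score; the `Paper` structure and the example data are entirely invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    median_score: float  # e.g., the median of reviewer scores on a 1-6 scale
    published: bool

def papers_needing_editorial_note(papers: list[Paper]) -> list[Paper]:
    """Return published papers that scored below some unpublished paper.

    Under the proposed rule, each paper returned here would have to be
    accompanied by a formal editorial assessment explaining the decision.
    """
    best_unpublished = max(
        (p.median_score for p in papers if not p.published), default=None
    )
    if best_unpublished is None:
        return []
    return [p for p in papers if p.published and p.median_score < best_unpublished]

# Entirely invented example data:
submissions = [
    Paper("A", 5.5, published=True),
    Paper("B", 4.0, published=True),   # flagged: outscored by rejected paper C
    Paper("C", 5.0, published=False),
]
for p in papers_needing_editorial_note(submissions):
    print(f"'{p.title}' requires a formal editorial assessment")
```

The point is not the code itself but the accountability it encodes: the editor keeps their discretion, but exercising it against the scores leaves a public, written trace.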
Overall, it seems that the scientific community tolerates all of these problems, mostly because those who benefit are the ones who already hold significant power. In the eyes of this young researcher, who needs to spend the next few decades inside the system, peer review is the part of academic science most urgently in need of serious restructuring.