A number of weeks back I reviewed a paper for the Journal of Experimental Medicine (JExpMed), and shortly after submitting I received an email saying that “Reviewer collaboration” had been started. After opening the email, it was apparent that the journal was allowing a two-day window for each peer reviewer to read the comments of the other reviewers and, if they so desired, to make changes to their initial review. All of this took place prior to communicating the decision to the study authors. I typically like to read other reviewers’ comments to see how mine stack up against those from around the world, so I obviously had a flick through – though I did not end up making any changes to my review. I then began to wonder whether people took advantage of this window and whether it resulted in major changes, so I decided to reach out to one of the higher-ups at JExpMed to ask why they’d initiated such a program in the first place and, after the first six months, how it was going.
Dr. Teodoro Pulvirenti, executive editor of JExpMed, was pleased to be contacted about the initiative and we arranged a Skype chat to discuss reviewer collaboration from the journal’s perspective. Several interesting items emerged:
First and foremost, it seems that JExpMed is interested in improving the quality and effectiveness of peer review – Dr. Pulvirenti said that the journal recognized the numerous imperfections in the system and was entering a phase of experimentation to figure out ways of improving it without placing too great a burden on the reviewers themselves. Considering the expanding burden of peer review across the biomedical sciences, this is a very laudable goal. Taking advantage of “reviewer collaboration” is incredibly easy, so from that point of view, it’s a great thing to try.
Secondly, it turns out that JExpMed doesn’t do a whole lot in the way of internal monitoring of how frequently reviews are updated or changed, which I found a little disappointing. It would be incredibly interesting to know how often reviewers use the feature, whether it homogenizes reviews, whether authors get defended from “unfair” comments, whether it results in increased reviewer/author satisfaction, and whether papers have a shorter or longer time to print as a result.
JExpMed is not the only journal trying new things and other journals have tried to tackle the problem of peer review in a variety of ways. The most elaborate of these seems to be the method employed by eLife where reviewers consult with one another in an open online consultation. The editor then writes a single letter with a single set of comments for the authors to receive. This process ensures that a consensus is reached, but it is quite labour-intensive (for the reviewers and the editor). eLife is heavily invested in trying to assess its peer review experiments, and a nice summary of the benefits was written last year, including the statistic that ~70 percent of papers only go through one round of peer review (not bad!).
Faculty of 1000 Research also takes a unique approach – authors submit an article to “the community” and it is reviewed rapidly without the (sometimes heavy-handed) steer of an editor (academic or professional). Rather, the articles are reviewed openly by members of the academic community, and new versions that address the concerns are uploaded in response to reviewer comments. I’ve not seen as much in the way of assessment here – it would be interesting to know what author and reviewer feedback is like.
The final example I’ll give is EMBO Journal which takes a totally different approach. They don’t change anything about the process itself, but they believe that the reviews are an important part of the academic discussion. So they simply put them online with the paper – see an example file here. This requires very little in the way of setup and rolling out – and to be honest, what could be wrong with publishing the reviews?
Overall, the JExpMed and EMBO Journal approaches seem like a happy medium – straightforward, non-burdensome, and most in line with the traditional peer review process. Making reviews open and sharing reviewer comments early are things that all journals could do instantly without very much effort, and therefore they represent an easy (and potentially doable!) change. But as many scientists know, the peer review system is really messed up, so perhaps more revolutionary approaches like those of eLife and F1000 Research are required – it will be interesting to see a thorough analysis of authors’ and reviewers’ opinions on such approaches. However, other journals have not been rushing to adopt these policies, which to me means that they’ve either not been well-received or not been well-promoted… either way, the community needs to engage with these experiments so we can fix the process that governs who gets to do science in the future.
The team at Journal of Experimental Medicine is contacting its partners to further explore monitoring reviewer collaboration. We are committed to improving our processes, and we thank you for your input, David!
Rory Williams, Communications
Journal of Experimental Medicine, Rockefeller University Press
Fantastic Rory – we’ll definitely look forward to hearing more about it!
Science (and its sister journals) has a process very similar to JExpMed’s, which they term “cross-review”. From the website:
“Science now gives the opportunity for cross-review. Once all reviews are in, you will be invited to read the other reviews and make additional comments within two business days. Cross-review is not required. If we do not receive comments we will proceed based on the reviews in hand. In cases where the decision is clear we may proceed without cross-review, in these cases you will be sent the other reviews for your information.”
When assessing an Olympic event (say diving), the judges independently display their scores and, at least when I last watched, maverick scores that depart from the rest are excluded. In a way, this is how journal peer review operates, since an editor is likely to be influenced by the consensus view. However, the author has the benefit of the maverick viewpoint and, indeed, may argue for it in his/her response. By manufacturing consensus in advance of informing the author, something important may be lost.
Enjoyed this article!
My research concerns climates in the geological past, so it is far removed from biomedicine, but I have had some parallel experiences as an author, reviewer, and associate editor for two journals.
A few of the journals I have published in, or reviewed for, employ the online open review model, e.g. Climate of the Past https://www.climate-of-the-past.net/peer_review/interactive_review_process.html
I also publish in / review / associate editor for several journals that use the traditional anonymous by default peer review system.
Over my 30-year publishing history I have seen various abuses of the traditional model, from editorial heavy-handedness to unethical reviewers and authors. Arising from those experiences – as an author, reviewer, and associate editor – I find the open online review model to be the fairest and most ethical.
A related but separate topic is that of author contribution statements. I increasingly like these in all the roles I have filled.
See also https://www.sciencemag.org/news/2017/09/few-authors-choose-anonymous-peer-review-massive-study-nature-journals-shows