A colleague (thanks Elisa!) tipped me off to a new development at one of the most prestigious scientific journals: at some point over the last year, Nature has begun to reveal the identities of some reviewers for its published letters and research articles. This is a big step toward transparency and openness in peer review, and I applaud the journal for taking this first step. There is obviously much more that can be done, but just like open access, peer review transparency is a “journey”, not an “event”.
I thought I would try a few quick experiments to see what kind of data might be available, so I dug out the last year’s worth of Nature articles with reviewer identities in my field (39 by my count) and had a closer look. I want to put a firm disclaimer out there before rolling on with this post: this is a small, selective analysis in one field of research, restricted to a tiny number of papers. That said, there are some interesting things to note from my poking around that I hope will stimulate a proper study.
1) The gender gap is alive and well
One of the most striking things about the reviewer list is that men dominate the inner circles of peer review at this leading journal. Over 85 percent of the listed reviewers of Nature papers in my field were men, and if multiple reviews from the same reviewer are counted, the total exceeds 90 percent.
To be fair, since not all reviewer identities have been revealed, it is possible women preferentially choose to remain anonymous (or men preferentially want to see their names in black and white).
N.B. I was recently chatting to a colleague who has a policy of suggesting at least one female reviewer for every paper they submit. This seems an easy first step that every scientist could take and I hope readers will begin to do the same for papers, grants, conference panels, etc.
2) Some reviewers are on speed dial
Another observation from my tiny study of 39 papers was that some reviewers are grossly over-represented. For example, three of the reviewers had reviewed three to four papers apiece, meaning that each reviewer’s seal of approval is on ~10 percent of the sampled papers appearing in Nature. This is an even more staggering single-person contribution when you consider how many papers might have also been rejected by these same reviewers.
To my knowledge, Nature doesn’t reveal its “reviewed paper” success rate, but it does indicate that only eight percent of submitted papers are accepted. Even if you assume that 75 percent are rejected without review, that would mean either that these reviewers each handled 10-15 papers for Nature in 2017, or that the papers they review are more likely than average to be accepted. Either way you shake it, I can see several interesting future analyses of how peer review really works…
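The back-of-envelope arithmetic above can be sketched in a few lines. Note the assumptions: the 8 percent acceptance rate is Nature’s stated figure, but the 75 percent desk-rejection rate is my guess, as in the text.

```python
# Back-of-envelope estimate of reviewer workload at Nature.
# Assumptions: 8% of all submissions are accepted (Nature's stated rate);
# 75% of submissions are rejected without review (my assumption).
acceptance_rate = 0.08
desk_reject_rate = 0.75

reviewed_fraction = 1 - desk_reject_rate  # 25% of submissions go out to review
# Acceptance rate among papers that actually reach reviewers: ~32%
accept_rate_of_reviewed = acceptance_rate / reviewed_fraction

# A reviewer whose name appears on 3-4 accepted papers would, at that
# per-reviewed-paper acceptance rate, have handled roughly:
for accepted in (3, 4):
    handled = accepted / accept_rate_of_reviewed
    print(f"{accepted} accepted papers -> ~{handled:.1f} papers reviewed")
```

So a reviewer with three to four accepted papers would have seen roughly 9-13 manuscripts under these assumptions, in the same ballpark as the 10-15 figure above; the alternative explanation is that their reviews simply lead to acceptance more often than average.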
3) Indirect back-scratching?
One of the more cynical thoughts I had while going through this mini-experiment was whether reviewers might stand to gain considerable influence by having their names listed as reviewers. If so, this could alter a reviewer’s ability to assess a paper objectively (since they could benefit from a return favour). Similarly, if you were one of the “three to four published papers per year” reviewers, it would stand to reason that people hoping to publish in Nature might try to curry favour with you.
4) Flagrant fouls on conflict of interest
Two upsetting things came up in the 39 papers I analysed. The first is that two people who did postdoctoral fellowships in the same lab at the same time could review each other’s papers without it being considered a conflict.
The second (and, in my opinion, more serious) issue is that one paper was reviewed by two reviewers, each of whom had recently published with one or both of the senior authors whose work was under review – and by recently, I mean in 2012, 2013, and 2015 (i.e., proper collaborators). How this could be considered impartial expert review is beyond me, and if it is commonplace, the practice needs to be identified and halted.
These criticisms and potential drawbacks aside, the movement toward open and transparent peer review is a good one, and it is great that Nature is (at least partially) shedding some light on how its review process works. I hope that 2018 brings more steps in the right direction and look forward to the analyses of Nature reviewer datasets.
If our readers want to try a similar experiment in their own field, we would love to see the results – is the gender gap as prominent? Are there networks of people who are the power-brokers of peer review? Are there more flagrant fouls that crop up in your analyses? We’d love to hear from you in the comments below.