Earlier this month I was evaluating scientific abstracts for an international stem cell conference (ISSCR). For readers unfamiliar with the process, experts from around the world are assigned to evaluate the work of their peers and help select which scientists are asked to present at the meeting. This is typically done by a panel: average scores are calculated, and the best abstracts earn invitations to give talks or present posters. The incredible thing about this particular set of abstracts was that after I’d logged on and printed them out to begin my evaluation, I noticed there was no way of identifying who the scientists were or where they worked, meaning I couldn’t let any biases creep into my evaluation. Amazing, right? That’s the way it should be.
I spent the rest of my afternoon smugly sipping coffee, enjoying my membership in such a progressive society. After my task was complete, I logged back on to enter the scores, and my heart sank as I saw that next to each abstract score box was the authorship list and the institution – how very disappointing. I’m not writing this article to shame the ISSCR; they are only doing what every other organization undertaking peer review does. In fact, they’re doing it better by having the default print option forgo the names of the scientists. Rather, I hope to encourage the people running these societies (and funding agencies) to consider blinding as a legitimate tactic to ensure fairness. Why shouldn’t these decisions be based solely on the science?
The benefits of blinding:
- Scientists in smaller institutions get evaluated for their work, not their location
- Peer reviewers do not need to struggle with evaluating colleagues on a personal level
- Peer reviewers cannot determine if they have a conflict of interest
- Reviewers cannot make an assessment based on the previous track record of the scientific group
The solution seems perfectly obvious to me, and I hope that scientific societies and granting organizations take note – keep the blindfold on until after the science has been evaluated. In the case of abstracts, you could collect the scores and, once they’ve been submitted, display the author list and ask: “Do you have a conflict with any of these authors?” and “Are there any scores you would like to change? If so, please detail why in the box below.” I’d bet most boxes would be left empty, since most reasons for changing an evaluation would not be legitimate.
Strategies like this will help the scientific community break down the (sometimes disgusting!) nepotism that goes hand in hand with artificially building up people’s careers, and refocus efforts on the research rather than on the person doing it. I’m confident we’ll all be rewarded come conference time, when we see exciting new work from groups we’ve never heard of rather than the same old stuff from the same old institutions. Bring on the change.