Opening up peer review

As journals test the waters of open peer review, authors and editors remain divided over the merits of tinkering with a tried-and-true system.

by Rosanna Tamburri

Illustration by Alberto Antoniazzi.

Academics, as the saying goes, must publish or perish. And part and parcel of scholarly publishing is peer review, a centuries-old custom that’s undergoing some revolutionary experiments in the Internet age. As social media become more entrenched in daily life, academic journals and scholars are beginning to test the boundaries of the traditional review model and explore the merits of online “open peer review” – so far with mixed results.

In an open peer-review process, articles and manuscripts are posted online where everyone is invited to weigh in; the identities of the authors and reviewers are not concealed. This contrasts with the traditional peer-review system, where a manuscript for a journal article or book typically is subjected to the scrutiny of leading experts in the field. Chosen by the editor, these experts usually remain anonymous and help decide a manuscript’s fate. Their decision acts as a quality control mechanism, giving published articles a stamp of approval.

In 2010, Shakespeare Quarterly, a prominent U.S. journal of literary studies published by Johns Hopkins University Press, became one of the first humanities journals to conduct an open peer-review trial. For eight weeks, the journal posted several articles on MediaCommons Press, a scholarly website. Katherine Rowe, guest editor of the special SQ edition, declares the experiment “an ecstatic success.” About 40 participants, including a filmmaker and a media historian, posted more than 350 comments. The contributors were mostly academics but represented “a much wider sampling of scholarly expertise than we would have gotten had we gone through the traditional process,” she says.

Alan Galey, assistant professor at the University of Toronto’s faculty of information, was one of the authors who participated in the experiment, but not without some initial misgivings. Dr. Galey isn’t a big user of social media – he doesn’t have a Facebook page or Twitter account – but what really worried him was the type of comments he might receive: “You can either get flame wars or people hanging back.”

Eventually he was won over by the journal’s prestige, and in the end he was pleased with the “range and depth” of the feedback and the way the process unfolded publicly and in real time. “It was like a public conversation,” he says, similar to one you’d have around a conference table.

The editors, too, worried at the outset that a lack of anonymity might inhibit frankness among reviewers. That didn’t happen, but they confronted other challenges. For one thing, the process was more time-consuming for editors, authors and reviewers, says Dr. Rowe. For smaller journals, the increased workload could be a deal-breaker. The experiment also failed to attract junior scholars, whose participation the editors had hoped for. One of its goals was to expand the ranks of potential referees, to bring new perspectives to the field and to help editors recruit reviewers. “But unsurprisingly,” explains Dr. Rowe, an English professor at Bryn Mawr College in Pennsylvania, “few junior scholars wanted to comment and respond critically in public to a senior scholar’s essay. … That’s a future task that needs to be grappled with.”

On the other hand, she says, the traditional review model has drawbacks, too. Some say the conventional process can lead to conformity. Dr. Rowe says it tends to reference existing knowledge rather than new or emerging thought, and has a conservative effect on scholarship. The secrecy of closed reviews is supposed to allow referees to comment frankly without fear of repercussions, but in small fields with few researchers, it’s still possible for authors and referees to identify one another.

Anonymity also means reviewers get little credit for their work, a factor that makes it tough to recruit referees. And, as anyone who has gone through the process will attest, anonymity guarantees neither fairness nor objectivity on the part of the reviewers. “Our traditional review processes are not a gold standard,” she says. “They are a good enough, sustainable model.”

It’s partly in response to these criticisms that journals have begun trying out alternate forms of peer review. While some of the most recent experiments with open review have been in the humanities, the concept was embraced much earlier by the science community. In 2006, the prestigious journal Nature launched a trial of open peer review – with disappointing results. Few authors agreed to participate, and just half of those who did received critiques. The editors found most of the comments lacked substance and were of little use in making publishing decisions.

The physics community adopted the practice – albeit unofficially – earlier still. Since the early 1990s, physicists have posted pre-published drafts of their research papers on arXiv, an online repository of scholarly articles now hosted by Cornell University. Rob Myers, senior faculty member at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, says the idea behind arXiv was to make research findings available to a broad community as quickly as possible, rather than have researchers wait for months for the findings to appear in a journal. “It really wasn’t a question about peer review,” he explains. ArXiv doesn’t include an online forum for commentary, but it is common practice for readers to e-mail their feedback to authors.

It may be that certain fields or subfields are better suited to open review than others. Gunther Eysenbach, editor of the Journal of Medical Internet Research, a Canada-based, online and open-access journal, says its readers have readily accepted open review. “Our audience is very tech-savvy and intrinsically interested in this kind of experiment.”

Dr. Eysenbach says the journal adopted the system in 2009 to expand its pool of referees and make its articles accessible to lay readers. In some ways the experiment has fallen short of its goals: most of the self-selected reviewers are still academics, and sometimes those who aren’t don’t fully grasp what’s required of a review.

But overall the experiment has been a success, and the journal intends to continue the practice, mainly because authors seem to prefer it (see “Comparing open vs. closed reviews” at the bottom of the page). Another benefit has been that the number of self-selected referees helps Dr. Eysenbach gauge reader interest in the topics the journal covers. “We will never fully replace the role of the editor in assigning reviewers,” says Dr. Eysenbach, an associate professor in the department of health policy, management and evaluation at U of T. “But we can complement it.”

In the humanities, the success of the Shakespeare Quarterly experiment has spurred other journals to follow suit. Postmedieval, a quarterly published by Palgrave Macmillan, conducted an open peer review last year with similarly positive results. Jen Boyle, assistant professor of English at Coastal Carolina University and guest editor of the edition, was pleased that so many commentators took part and with the quality of their feedback. But she admits the comments lacked the critical heft of private reviews. “It’s not that there weren’t any negative comments,” she says. “They just weren’t quite as feisty as you’d see in a private peer review.”

Canadian scholars are also testing the waters. Researchers at the University of Victoria’s Electronic Textual Cultures Laboratory recently posted on Wikibooks (a Wikipedia sister site) an electronic version of the Devonshire Manuscript, a collection of 16th-century English verse. The manuscript contains an introduction, biographies and genealogical tables, all of which can be edited and commented on by readers.

“We thought we might try to engage not only traditional academic sensibilities in review of materials but also the general public, especially what are increasingly becoming known as citizen scholars,” explains Ray Siemens, head of the lab. “By opening up in this way, we have many more reviewers than we might have otherwise, which should lead, I think, to a better end-product,” he says. The materials will remain on Wikibooks for a year and then the revised manuscript will go through a traditional peer-review process before it’s published.

Despite such recent innovations, open review trials remain less common in Canada than in the United States. A number of Canadian journal editors said they were hesitant about trying it, in part because they weren’t sure how it would affect their funding.

Gisèle Yasmeen, vice-president, research, at the Social Sciences and Humanities Research Council, says the council’s policies don’t prohibit open peer review. “We’re agnostic on that,” notes Dr. Yasmeen. “What we ask them to do is to demonstrate how they do the review and how they assure quality. We don’t dictate how it should or should not be done.”

Still, the debate over the merits of open review continues among scholars and editors alike. Stevan Harnad, who holds the Canada Research Chair in cognitive science at Université du Québec à Montréal and is a prominent advocate of open access, remains strongly critical of open peer review. He says the traditional peer-review system, while imperfect, has a proven track record for producing quality results. Qualified referees “not only protect the quality of the research, they also protect researchers from unfiltered content,” he says. “If you take research seriously, it needs to be vetted by qualified experts before it’s put out as being ready for consumption.”

A former editor of two international journals, Dr. Harnad would like to see more journals adopt the practice of open peer commentary. In this model, articles that have been vetted through the classical peer-review process and published are subsequently posted online and open for comment. This, Dr. Harnad says, can be an important supplement to closed review and can mitigate some of its drawbacks.

Whatever the pros and cons, open review represents an important “philosophical change” in the history of peer review and, in particular, in the definition of a peer, says Kathleen Fitzpatrick, professor of media studies at Pomona College in Claremont, California, and author of Planned Obsolescence: Publishing, Technology and the Future of the Academy. In modern times, a peer has been someone with the necessary credentials to make evaluative judgments about a colleague’s research. In the social networking age, the role of the peer is being supplanted by that of the community, says Dr. Fitzpatrick, who’s leading a study on open peer review funded by the Andrew W. Mellon Foundation.

The difficulty is that with so much more being published, the number of manuscripts can easily outstrip the ability of reviewers to evaluate them. To be sustainable, the system may need some type of “pay to play” arrangement in which authors would commit to reviewing a certain number of papers in order to publish within a community, she says.

What ultimately will decide the fate of open review is whether it can provide the same degree of quality assurance on a sustained basis that the classical model delivers. Dr. Siemens, at UVic’s Electronic Textual Cultures Lab, notes that peer review has continually evolved and has adapted to meet the needs of diverse fields of study. Making the leap to the electronic age seems the next logical step. That’s not to say that the traditional blind review will fall by the wayside, but the next generation of scholars, those who are at ease with social media and open technologies, will be more inclined to give it a try. “Ultimately,” he predicts, “that’s the direction we’re headed.”

Rosanna Tamburri, based in Toronto, is a freelance journalist who primarily covers postsecondary education.

Comparing open vs. closed reviews

The Journal of Medical Internet Research uses a mix of open and closed peer reviews to adjudicate the articles it publishes. Editor Gunther Eysenbach and journal authors rate the quality of reviews on a scale of one to five, with one the poorest quality and five the best.

So how do the two processes stack up? It depends who you ask. The editor gives a slight advantage to closed reviews. His mean rating for open reviews was 3.9, compared to 4.4 for closed reviews. But the authors seemed to prefer open reviews, giving them a mean rating of 4.3 compared to 3.9.

Maybe that’s because self-selected referees are more interested in the articles they review and offer more constructive criticism, he speculates. Or maybe their reviews are simply more encouraging. Either way, he adds, the journal strives to provide authors with constructive feedback that will help bring their manuscripts to publishable form, and it’s clear that open reviewing fits the bill.

Roots of peer review

Peer review is believed to date back to the mid-18th century when the Royal Society of London created a “Committee of Papers” to oversee the review of articles in its journal, Philosophical Transactions. The process we use now came along much later. It wasn’t until the mid-20th century that academic journals like Science adopted the formal, blind peer-review process still in use today. In a single-blind review, the manuscript’s author doesn’t know who the reviewers are. In a double-blind review, the identity of the author and reviewers are both concealed.


Comments on this Article

OPEN PEER REVIEW: A SUPPLEMENT, NOT A SUBSTITUTE

Some references for what I suggested:

Harnad, S. (ed.) (1982) Peer Commentary on Peer Review: A Case Study in Scientific Quality Control. New York: Cambridge University Press.

Harnad, S. (1985) Rational disagreement in peer review. Science, Technology and Human Values, 10: 55-62. http://cogprints.org/2128/

Harnad, S. (1990) Scholarly skywriting and the prepublication continuum of scientific inquiry. Psychological Science, 1: 342-343 (reprinted in Current Contents, 45: 9-13, November 11, 1991). http://cogprints.org/1581/

Harnad, S. (1996) Implementing peer review on the net: Scientific quality control in scholarly electronic journals. In: Peek, R. & Newby, G. (eds.) Scholarly Publishing: The Electronic Frontier. Cambridge, MA: MIT Press, pp. 103-118. http://cogprints.org/1692/

Harnad, S. (1997) Learned inquiry and the net: The role of peer review, peer commentary and copyright. Learned Publishing, 11(4): 283-292. http://cogprints.org/1694/

Harnad, S. (1998) The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive, 5 (2000); and in Shatz, B. (ed.) (2004) Peer Review: A Critical Inquiry. Rowman & Littlefield, pp. 235-242. http://cogprints.org/1646/

Harnad, S. (2008) Flight-test before you fly. Comment on “A XXI-century alternative to XX-century peer review”. Real-World Economics Review, 47 (3 October 2008): 252-253. http://www.paecon.net/PAEReview/issue47/CommentsIettoGillies47.pdf

Harnad, S. (2009) The PostGutenberg open access journal. In: Cope, B. & Phillips, A. (eds.) The Future of the Academic Journal. Chandos. http://eprints.ecs.soton.ac.uk/15617/

Harnad, S. (2010) No-fault peer review charges: The price of selectivity need not be access denied or delayed. D-Lib Magazine, 16(7/8). http://eprints.ecs.soton.ac.uk/21348/

Posted by Stevan Harnad, Mar 16, 2012 9:34 AM