In my opinion

The problem with scientific publishing

The referee system needs to be changed, and the solution is simple.

BY ERIC HEHNER | MAY 21 2014

The publishing industry developed in a very different technological age. In the past, scholarly publishing was complicated, and scientists needed all the expertise and resources of a publishing company to publish their work. In the scholarly world, publishing encompassed everything from soliciting submissions to assessing and editing them, commissioning artwork, handling the production process from printing to binding to distribution, and keeping track of financial and legal details.

Times have changed. Today, publishing is easy. Almost everyone does their own typesetting, finds or creates their own artwork and diagrams, does their own spell-checking, and puts their papers on their own website where anyone in the world can find them and read them. There’s no need for printing, binding, warehousing and physical distribution. There are still some useful services that publishers perform, from deciding which submissions are worth publishing, to making editorial suggestions and protecting copyright.

Perhaps the most important remaining function of scientific publishers is to decide which books and papers are factually sound and sufficiently well-written to be worth publishing. The publisher cannot afford to pay the costs of editing, printing and distributing books and journals that will not be bought, so this role is essential for them. University hiring, tenure and promotions committees judge scientists based on how many books they have published and how many papers they have in high-quality journals; thus the publishers determine the scientific merit and career advancement of academic scientists. Scientists gain some assurance that a published paper is worth reading because publishers separate those worth publishing from the junk. No one has the time to read through all the junk to find the good papers.

But in my opinion, publishers do a miserably poor job of selecting the best books and papers from among those submitted. The reason is the system of refereeing. Among the many problems associated with refereeing are these:

Refereeing is voluntary. There is no compensation, not even academic credit. Some scientists don’t do their share, leaving the more conscientious scientists to do more than their share.

Refereeing is anonymous. Since refereeing is both anonymous and uncredited, a referee may not give a paper the time needed to do a good job. A referee who does a good job suffers by taking time away from their paid, credited activities; a referee who does a bad job suffers no ill consequences.

Referees may have a conflict of interest. Referees often have their own research territory and results to protect, and may give a negative report to competing or conflicting research in their own self-interest.

The referees available may not include the best choices for the subject area. A scarcity of willing, qualified referees too often means that fewer than three referees determine the paper’s fate. Moreover, in a narrow specialty, when the author sends their rejected paper to another journal, the probability is quite high that the same referee will be used.

Many people are gaming the system. The importance of publication count to academic scientists causes large groups of them to add each other as authors to all their papers. There are authors who haven’t even read the papers they are supposed to have written. Some university administrations count a full credit, rather than a fractional credit, for a fractional contribution. But any publication count, even with fractions, is a very poor measure of a scientist’s contribution because it does not consider the quality of the papers.

The solution to all these problems is easy. First, let everyone publish their books and papers on their own website; they do anyway. No other form of publication is necessary.

Second, they can send their (electronic) books and papers to an archive service; its only purpose is to date and store books and papers in case anyone needs to prove that a book or paper said some particular thing at some particular time. (Perhaps an archival web-crawler does that automatically.)

Finally, there must be independent reviewers who act like book reviewers and movie reviewers in the newspaper. Reviewers would not be anonymous; they would sign their names to their reviews and build their reputations on the honesty and thoroughness of those reviews. They would be paid by selling their reviews to subscribers. Authors and anyone else could send them papers in the hope of having them reviewed, but with no hope of influencing the review. A scientist would gain academic credit by getting good reviews from reputable reviewers.

Scientists are paid by their companies, universities or government departments, and publishing is part of the job they’re paid to do. Scientists want their papers to be read, and payment isn’t a factor for most of them. It is not in scientists’ interest, or in the interest of science generally, to reduce the potential readership to those who are willing to pay.

The publishers’ interest and the scientists’ interest conflict. But publishers, who were essential to scientific publication in the past, are now superfluous, clinging to territory they once owned. Copyright protects publishers; it too should disappear. There is no reason to restrict copying of published scientific works, nor is it possible to do so. We just need to insist, by law, that an author is always given credit for their work.

Scientific publishing companies are a relic of the past. At present, they are an irritant. I would not mourn their disappearance in the future.

Dr. Hehner is professor emeritus at University of Toronto, with 40 years of experience in scientific publication as author, editor and referee.


COMMENTS

  1. Mark Mercer / May 21, 2014 at 09:38

    Very good!

  2. Eric / May 21, 2014 at 15:57

    Some quite insightful observations on the peer-review process!

    (And no, I wasn’t paid to write this comment.)

  3. Ernie / May 21, 2014 at 19:10

    Many good ideas, but I would disagree with the need to select only the “best” articles. In many cases, a mediocre article may have the key bit of information you need. I would even like to have access to summaries of things that didn’t work and could not be published. I think a more open publishing system as described in this piece could provide access to this type of information, and that would be of great value. For example, as a chemist I could spend a lot of time on a synthesis that never works and as such is not publishable. Someone else could then attempt the same synthesis, wasting time that could have been saved if they had access to a description of my failed attempt.

    It’s not about the “best” article, it’s about finding the information you need.

  4. Andrew Park / May 22, 2014 at 12:29

    In recent years, I have seen a growth in the “nastiness quotient” of reviews. That is, some reviewers seem to confuse the role of peer review with blocking a publication, rather than offering constructive suggestions for improvement. And they are not shy about getting personal with their comments. I have asked colleagues and many concur with this perception.

    It could be that these reviewers are simply inexperienced and have been given no guidance on how to review a paper. It could be that a request for a review comes in to a professor, who then passes it on to their grad student.

    What is the solution? Publishing first and criticizing later has some appeal. But a double-blind process, with anonymity for both reviewer and author, is also appealing, since we know for certain that author ID has an impact on acceptance rates.

  5. Andrew Gow / May 22, 2014 at 15:29

    A radical solution to a vexed problem. Very tempting, in fact… but also very flawed! Free-market forces (private, for-profit reviewing) are not going to produce better reviewing outcomes than the existing system, I’m afraid. Maybe a rotating body of reviewers selected by their peers in each discipline for a set period would be a better solution?
