The Black Hole

Impact factor ‘eligibility window’ skews the system

BY DAVID KENT | SEP 05 2013

For the past year, I have sat on the publications committee of a society-run journal, and in the journal's quest to improve its impact factor (IF), it became clear to me that one of the system's dark secrets is the "window of IF eligibility." It single-handedly disadvantages journals whose science stands the test of time and favours journals that run speedy public relations campaigns.

For those not aware of it, a journal’s IF is based on two numbers for year X:

  1. The number of times articles published in the two years prior to year X are cited during year X
  2. The number of citable articles published in the two years prior to year X

The IF is simply the first number divided by the second.
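To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The counts are invented for a hypothetical journal, not real data:

```python
# Minimal sketch of the impact factor calculation for year X = 2013.
# All counts below are made up for illustration.

citations_in_2013 = {2011: 450, 2012: 380}  # citations received in 2013 to each year's articles
citable_items     = {2011: 120, 2012: 110}  # citable articles published in each prior year

numerator   = sum(citations_in_2013.values())  # 830 citations in year X
denominator = sum(citable_items.values())      # 230 citable articles

impact_factor = numerator / denominator
print(f"2013 impact factor: {impact_factor:.2f}")  # 830 / 230 = 3.61
```

Note that a citation arriving in 2014 to a 2011 article contributes nothing here: it falls outside the window entirely.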

This means that citations to articles published more than two years before the year being evaluated count for nothing in the IF world. The IF metric significantly impacts the careers and fundability of scientists, but the practice of counting only the first two years post-publication is damaging for several reasons.

No benefit to standing the test of time

Not many people regularly scroll through the tables of contents of journals with an IF of 3, so when very good papers are published in low-ranking journals, it sometimes takes a little time for them to get on people's radar. Most scientists would agree that a paper needs to stand the test of time, and indeed many papers in low-IF journals do exactly that, but they do not end up helping the IF of those journals.

A case in point: a 2003 paper in my field published in Experimental Hematology (IF < 3) was cited 6.5 times per year during the eligibility window and an average of 10 times per year since. By contrast, a paper on the same topic published in the same year in Nature Immunology (IF > 20) was cited 17 times per year during the eligibility window and an average of 11 times per year subsequently. The papers end up in the same place (i.e., cited about 10 times per year), but they differ dramatically in their contributions to their journals' IFs (17 vs. 6.5).

Bias against poorly promoted journals

Big journals have big public relations teams: they issue big press releases, and they carry an already big name. Obviously, articles that get quick publicity start accruing citations earlier than papers that aren't read immediately.

Artificial bias toward being trendy

Surely, in the age of reddit, Twitter and things "going viral," we can appreciate that trendiness does not equate to importance. However, our metrics for evaluating scientists rely almost entirely on how trendy their research is (or at least on how trendy the journal they've published in is), not on how important or well executed it is. This translates into a culture that rewards buzzwords and style over substance.

Moving forward

So, what can we do to improve the situation? Scientists could start picking up the tables of contents of low-IF journals (unlikely), or we could actually read beyond the abstract to see whether the paper we're citing actually proves its point and hasn't been published somewhere else (also unlikely). I think the easiest way is to measure articles (and journals) by average citations per year since publication. I'd love to see what proportion of "high impact" papers crash and burn after the first few years post-publication. It would not surprise me if the number was very large.
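For what it's worth, the metric I'm proposing is just as easy to compute as the IF itself. A rough sketch, using citation numbers that loosely mirror the two 2003 papers discussed above (the records themselves are invented; in practice they would come from a citation database):

```python
# Rough sketch of "average citations per year since publication."
# Paper records are hypothetical, chosen to echo the 2003 example above.

current_year = 2013

papers = [
    {"title": "Experimental Hematology paper", "year": 2003, "total_citations": 100},
    {"title": "Nature Immunology paper",       "year": 2003, "total_citations": 110},
]

for paper in papers:
    years_since_publication = current_year - paper["year"]
    avg_per_year = paper["total_citations"] / years_since_publication
    print(f'{paper["title"]}: {avg_per_year:.1f} citations/year')
    # Both papers land around 10-11 citations/year, despite the
    # enormous gap between their journals' impact factors.
```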

ABOUT DAVID KENT
David Kent
Dr. David Kent is a principal investigator at the York Biomedical Research Institute at the University of York, York, UK. He trained at Western University and the University of British Columbia before spending 10 years at the University of Cambridge, UK where he ran his research group until 2019. His laboratory's research focuses on the fundamental biology of blood stem cells and how changes in their regulation lead to cancers. David has a long history of public engagement and outreach including the creation of The Black Hole in 2009.
COMMENTS

  1. Dean / September 5, 2013 at 11:46

    Dave

Adoption of services such as Mendeley could help. As more scientists "like" papers in less prestigious journals and those likes are broadcast, the papers may get picked up more broadly by the community.

    Of course that may lead to questions about science and social media . . .
