The Black Hole

New Metrics for Assessing Scientists: Let’s Accessorize

BY DAVID KENT | FEB 07 2011

Quick Hit: Paul Krzyzanowski, another postdoctoral fellow who writes for the Stem Cell Network blog, just posted The underused academic, in which he discusses funding levels of Canadian postdoctoral fellows and the balance between funding international imports and Canadian exports.

(see also: New Metrics for Assessing Scientists: Collaboration Networks)
While fashion analogies are typically not my strong suit, I think there is some utility here. Just as an outfit is not simply a top and a bottom, a research publication has many different qualities (brand, fit, trend-setting, etc.). And just as a specific Gucci or Armani outfit can be a terrible waste, a complete farce, or a bad idea, so too can a single Nature or Science article.

One thing we haven’t done enough of in our evaluation of publications, though, is studying the accessory bits in the same depth as the fashion analysts do. This means treating publication records a little more qualitatively, in the hope of identifying the best scientists to fund and to continue leading research. Here, I identify some of these bells and whistles, along with some clever ways that have been devised to quantify them:

  1. Influence of work – There are a number of ways to imagine this, but the simplest is certainly to ask how many citations a particular paper receives. When trying to assess a particular scientist, however, the number of years and the number of papers need to be taken into account, so most people would suggest an average number of citations per paper. Perhaps the most innovative evaluation tool I have seen recently is the H-index, created in 2005, which discourages multiple low-impact publications by decreasing their relative value. In essence, a scientist’s “H-index” is the largest number h such that they have h publications with at least h citations each (e.g., a scientist with 8 papers, only 4 of which have been cited at least 4 times, has H = 4); a short code sketch after this list shows the calculation.
  2. Reputation amongst the community – From a publication perspective, this seems intuitively difficult to assess; however, there have been attempts, such as the Faculty of 1000, which is essentially post-publication peer review by experts in the field. Its library of evaluations currently covers more than 100,000 papers, and it asks experts in the field to rate a paper’s impact in simple categories (recommended, must read, and exceptional). It has very well-respected scientists as “Faculty Heads”, including Julian Davies (UBC) and Tony Pawson (UofT).
  3. Field of (crushed) dreams – In these “impact factor”-dominated times, it is crucial to recognize that fields such as mine (stem cells and medical research) are at a massive advantage when it comes to impact factors. Not only do articles from the well-established fields find their way into top-tier journals more often, but their specialised journals are also rated higher in terms of impact factor simply because the fields are popular (i.e., more articles are written, so more citations are expected). Maybe someone could come up with an elaborate normalisation system that pulls medical science back down from the stratosphere (a toy version of the idea is sketched after this list). Until then, equal credit needs to be given to top-level specialised journals (e.g., Blood vs. Experimental Eye Research) to recognize top-tier work from equivalently able/intelligent researchers.
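
To make the H-index (item 1) concrete, here is a minimal sketch of the calculation in Python. This is my illustration, not part of the original post or any official tool; the function name and citation counts are invented.

```python
def h_index(citations):
    """Return the H-index: the largest h such that at least h papers
    have h or more citations each (a sketch of the 2005 definition)."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # this paper still supports an index of `rank`
            h = rank
        else:
            break
    return h

# The example from the post: 8 papers, 4 of which have at least 4 citations.
print(h_index([10, 8, 5, 4, 3, 2, 1, 0]))  # -> 4
```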
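And here is a toy version of the field-normalisation idea from item 3, again only a sketch of one possible approach: divide a paper’s citations by the average citations per paper in its field, so a paper performing at twice its field’s average scores the same whether the field is citation-rich or citation-poor. The field averages below are invented for illustration.

```python
# Hypothetical field averages (citations per paper); real values would
# have to come from a bibliometric database.
FIELD_AVERAGE = {"stem cells": 30.0, "eye research": 6.0}

def normalised_impact(citations, field):
    """Citations relative to the field average: 1.0 means 'typical for
    the field', regardless of how citation-rich the field is."""
    return citations / FIELD_AVERAGE[field]

# Papers with very different raw counts can have the same normalised impact:
print(normalised_impact(60, "stem cells"))   # -> 2.0
print(normalised_impact(12, "eye research")) # -> 2.0
```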

In the end, the fashion gurus (Gucci, Armani, etc.) gain their longevity from producing consistently good work with the occasional splash of the exceptional, but one-hit wonders (my fashion knowledge is at its real limits here) such as Matthew Williamson and his “Boho” look can nail a big “look” and then fade away. Hopefully, science funding bodies will be wary of applicants whose records rest on a single Nature or Science paper and will make sure that a consistent track record and reputation are intact as well.

ABOUT DAVID KENT
Dr. David Kent is a principal investigator at the York Biomedical Research Institute at the University of York, York, UK. He trained at Western University and the University of British Columbia before spending 10 years at the University of Cambridge, UK, where he ran his research group until 2019. His laboratory's research focuses on the fundamental biology of blood stem cells and how changes in their regulation lead to cancers. David has a long history of public engagement and outreach, including the creation of The Black Hole in 2009.
COMMENTS

  1. SB / February 8, 2011 at 06:16

    I love metrics! So was very excited to see this post.
    However, I’m not sure I agree that funding bodies should be wary of what you’ve described as “one hit wonders”! First off, I doubt that anyone in a position to apply for grants would have one C/N/S paper listed as their sole contribution to science. But I can imagine that some investigators working in emerging fields might have a very small number of publications, albeit with extremely high impact, as their “track record”. Asking people to have lots of solid pubs as well as a few outstanding ones might tilt the playing field towards those working in already established fields, and away from researchers who pick truly novel, groundbreaking projects, the kind that might lead to one super high impact pub every 5 years (the effect being especially pronounced on young investigators).
    Whether we should even be funding very high-risk, exploratory research is a question to be debated, but it’s worth considering its potential to shed light on some of the basic models we have to describe the world.

  2. Dave / February 8, 2011 at 13:37

    Hi Sonja,
    As always, great comments! I guess I was being a little extreme to make the point regarding one hit wonders (though there are definitely people who get interviews/jobs off the strength of one such paper). Mostly I’m trying to encourage the idea that a consistent track record of good science is something we should consider funding a little more reliably. I just think that when someone’s CV has a big paper, it needs to be accompanied by co-authored publications (especially in fields like medical science, which are increasingly reliant on big team science) and/or other first-authorship papers in decent journals. Furthermore, the difference between a Nature paper and a top-tier specialized journal often has more to do with the non-science aspects of the paper (popularity, politics, biased peer review, home university, etc.) and can lead to poor judgment about the research potential of an individual.
    All that being said though – I have to agree with you that big papers are big things and most of the time, they deserve substantial credit. Nature and Science are reliable indicators of important topics in many fields and will continue to be that. This type of assessment is really just asking “does that hat go well with that dress” (i.e.: a secondary assessment that will enhance the evaluation). Occasionally, one would hope that this type of secondary assessment would work in favour of the consistent track record instead of being blinded by the big lights of Nature and Science.
    Finally, your comments around “funding very high-risk, exploratory research” are definitely important. It has been debated extensively and will continue to be a theme that crops up in an age where everything seems to be attached to a “deliverable” – more will certainly be said on this sort of issue on this site.

  3. Erika / February 17, 2011 at 20:31

    Great article Dave.
    My field (comparative animal physiology) produces very few Science and Nature or other high impact factor pubs. Our “premier” journal has an impact factor of ~2.7. As a result, people tend to publish as many articles as possible, rather than fewer articles of high quality. I think this is problematic for (at least) 2 reasons. First, too many papers drone on and on for pages about a single result. The ability to write clearly and succinctly is undervalued. And second, when a few experiments are published together in a single high-quality article, the scope, importance and relevance of the work can really be appreciated.
    I applaud the idea of better evaluating publications, rather than simply counting the overall number (my field) or the number of high-impact (medical field) publications.

