Quick Hit: Paul Krzyzanowski, another postdoctoral fellow who writes for the Stem Cell Network blog, just posted The underused academic, in which he discusses funding levels of Canadian postdoctoral fellows and the balance between funding international imports and Canadian exports.
New Metrics for Assessing Scientists: Let’s Accessorize
(see also: New Metrics for Assessing Scientists: Collaboration Networks)
While fashion analogies are typically not my strong suit, I think there is some utility here. Just as an outfit is not simply a top and a bottom, a research publication also has many different qualities (brand, fit, trendsetting, etc.). Similarly, just as a specific Gucci or Armani outfit can be a terrible waste, a complete farce, or a bad idea, so too can a single Nature or Science article.
One thing we haven’t done enough of in our evaluation of publications, though, is studying the accessory bits in the same depth as the fashion analysts do. This means treating publication records a little more qualitatively, in hopes of identifying the best scientists to fund and keep leading research. Here, I identify some of these bells and whistles, along with some clever ways that have been contrived to quantify them:
- Influence of work – There are a number of ways to imagine this, but the simplest is certainly to ask how many citations a particular paper receives. When trying to assess a particular scientist, however, the number of years and the number of papers need to be taken into account, so most people would suggest an average number of citations per paper. Perhaps the most innovative evaluation tool I have seen recently is the H-index, created in 2005, which discourages multiple low-impact publications by decreasing their relative value. In essence, a scientist’s “H-index” is the largest number of publications, H, that have each been cited at least H times (e.g.: of 8 papers, only 4 have been cited 4 or more times, so H = 4; a short code sketch follows this list).
- Reputation amongst the community – From a publication perspective, this seems intuitively difficult to assess; however, there have been attempts such as the Faculty of 1000, which is essentially post-publication peer review by experts in the field. Their library of evaluations currently covers over 100,000 papers, and it measures the impact of a paper in simple categories (recommended, must read, and exceptional). It has highly respected scientists as “Faculty Heads”, including Julian Davies (UBC) and Tony Pawson (UofT). A hypothetical sketch of how such ratings might be tallied also follows this list.
- Field of (crushed) dreams – In these “impact factor” dominated times, it is crucial to recognize that fields such as mine (stem cells and medical research) are at a massive advantage when it comes to impact factors. Not only do articles from well-established fields find themselves in top-tier journals more often, but their specialised journals are also rated higher in terms of impact factor simply because they are popular (i.e.: more articles are written, so more citations are expected). Maybe someone could come up with an elaborate normalisation system that pulls medical science back down from the stratosphere; a crude sketch of the idea follows this list. Until then, equal credit needs to be given to top-level specialised journals (e.g.: Blood vs. Experimental Eye Research) to recognize top-tier work from equally able researchers.
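To make the H-index calculation concrete, here is a minimal sketch in Python; the citation counts are invented for illustration, and the logic simply finds the largest rank at which a paper still has at least that many citations.

```python
def h_index(citations):
    """Largest H such that H papers have at least H citations each."""
    h = 0
    # Rank papers from most to least cited; H grows while the citation
    # count at each rank keeps up with the rank itself.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example from the text: 8 papers, only 4 cited 4+ times, so H = 4.
print(h_index([10, 8, 6, 4, 2, 1, 1, 0]))  # 4
```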
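For the reputation metric, here is a purely hypothetical way to roll expert evaluations into a single number; the category weights are my own invention, not Faculty of 1000’s actual scoring scheme.

```python
# Hypothetical weights for the three evaluation categories; these do
# not reproduce F1000's real scoring system.
RATING_WEIGHTS = {"recommended": 1, "must read": 2, "exceptional": 3}

def reputation_score(evaluations):
    """Sum the weights of all expert ratings a paper has received."""
    return sum(RATING_WEIGHTS[rating] for rating in evaluations)

# Three experts weigh in on one paper:
print(reputation_score(["recommended", "must read", "exceptional"]))  # 6
```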
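And the field-normalisation idea can be sketched crudely: divide a paper’s citations by the average for its field, so that crowded fields like medicine don’t automatically dominate. The field averages below are invented numbers, not real bibliometric data.

```python
# Invented field averages, for illustration only.
FIELD_AVG_CITATIONS = {"hematology": 30.0, "vision science": 8.0}

def normalised_impact(citations, field):
    """Citations relative to the field average; 1.0 means field-typical."""
    return citations / FIELD_AVG_CITATIONS[field]

# A 40-citation Blood paper vs. a 12-citation Experimental Eye Research
# paper: the smaller-field paper scores higher once normalised.
print(normalised_impact(40, "hematology"))      # ~1.33
print(normalised_impact(12, "vision science"))  # 1.5
```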
In the end, the fashion gurus (Gucci, Armani, etc.) earn their longevity by producing consistently good work with the occasional splash of the exceptional, but one-hit wonders (my fashion knowledge is at its real limits here; e.g.: Matthew Williamson and his “Boho” look) can nail a big “look” and then fade away. Hopefully, science funding bodies will be wary of applicants whose one hit was a Nature or Science paper and make sure that a consistent track record and reputation are intact as well.