The Black Hole

UBC tops in Canada? Rimouski 7th in sciences? New metrics for measuring research

BY DAVID KENT | SEP 04 2012

A report released last week by Toronto-based Higher Education Strategy Associates came as a surprise. Measuring Academic Research in Canada: Field-Normalized Academic Rankings 2012 uses self-described “alternative metrics” to measure research strength at Canadian universities, combining several measures (h-index, research funding, etc.) with what the authors term field normalization in order to remove the biases created by high-citation, money-consuming fields (medicine, physics, etc.). It is well worth a read, and while much further work clearly needs to be done to parse out exactly how to quantify research strength, the introduction of new metrics in science is something we have banged on about over the last several years, and it is very nice to see some of the surprises that ensue from such novel assessment tools.
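To make field normalization concrete, here is a minimal sketch in Python of the general idea; it is not the report’s actual formula, and the institutions, fields and numbers are purely illustrative. The raw score an institution earns in a field is divided by the average for that field, so that 1.0 always means “at the field average” and fields with very different citation and funding cultures can be compared on the same scale.

```python
# A minimal sketch of field normalization (illustrative only, not HESA's actual
# method). Idea: divide each institution's raw score in a field by the average
# for that field, so high-citation fields do not dominate the comparison.

# Hypothetical raw scores (e.g., average h-index per faculty member).
raw_scores = {
    "University A": {"medicine": 24.0, "mathematics": 8.0},
    "University B": {"medicine": 12.0, "mathematics": 12.0},
}

def field_averages(scores):
    """Average raw score for each field across all institutions."""
    totals, counts = {}, {}
    for fields in scores.values():
        for field, value in fields.items():
            totals[field] = totals.get(field, 0.0) + value
            counts[field] = counts.get(field, 0) + 1
    return {field: totals[field] / counts[field] for field in totals}

def field_normalize(scores):
    """Express each score as a multiple of its field's average (1.0 = average)."""
    averages = field_averages(scores)
    return {
        inst: {field: value / averages[field] for field, value in fields.items()}
        for inst, fields in scores.items()
    }

for inst, fields in field_normalize(raw_scores).items():
    print(inst, {field: round(value, 2) for field, value in fields.items()})
# University A {'medicine': 1.33, 'mathematics': 0.8}
# University B {'medicine': 0.67, 'mathematics': 1.2}
```

Read this way, the 1.05 vs. 0.99 and 1.27 vs. 0.59 figures quoted below are multiples of the relevant field averages, which is what makes the gap between impact and funding in the UQTR–Laval comparison stand out.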

The result is that UBC comes out number one in both sciences/engineering and social sciences/humanities. Perhaps more surprisingly, Université du Québec à Rimouski is ranked 7th overall in sciences/engineering. The study does not include the medical sciences (the authors cite university-affiliated institutions as a source of data-crunching headaches), so there are clearly some missing elements in the overall interpretation of this report. What is abundantly clear, however, is that the current metrics for assessing a university’s research strength deserve to be challenged.

Why do I say that? First, a lot of interesting material comes out of this method of assessing research strength. One example highlighted by the authors is the comparison of the Université du Québec à Trois-Rivières (UQTR) with Laval in the sciences/engineering:

Trois-Rivières’ field-normalized H-index scores in NSERC disciplines are slightly better than Laval’s (1.05 vs. 0.99), yet Laval receives field-normalized granting council funding at well over twice UQTR’s rate (1.27 vs. 0.59). It is not at all clear why this happens.

If this trend continued over multiple years, it would seem reasonable for UQTR to be funded at a much higher rate in future competitions, if research impact is what the granting councils wish to support.

Finally, while I think the authors might lose a little credibility with the sarcasm that concludes the report, their words are worth sharing, since the intention of the report is to flag alternative metrics as useful for identifying institutions that are punching above their weight:

… if one can be bothered to look behind the big, ugly institution-wide aggregates that have become the norm in Canadian research metrics, one can find some little clusters of excellence across the country that are deserving of greater recognition.

Perhaps some schools are coasting on an undeserved reputation while others struggle to be recognized in the shadow of the U-15? In any case, we will certainly be curious to see the follow-up data from future years.

ABOUT DAVID KENT
Dr. David Kent is a principal investigator at the York Biomedical Research Institute at the University of York, York, UK. He trained at Western University and the University of British Columbia before spending 10 years at the University of Cambridge, UK, where he ran his research group until 2019. His laboratory’s research focuses on the fundamental biology of blood stem cells and how changes in their regulation lead to cancers. David has a long history of public engagement and outreach, including the creation of The Black Hole in 2009.
COMMENTS

  1. Reuben Kaufman / September 6, 2012 at 00:41

    See my comment to the related article in UA this month about this inane ranking exercise. Dr. Kent: I too am a biologist, and I ask you to reflect that you probably have much higher standards of scholarship when you publish on your research than do the authors of most ranking studies. There are probably fairly objective criteria for evaluating simple things like automobiles, TV sets, and mobile phone providers. There is no meaningful way to rank complex institutions.

  2. Dave K / September 6, 2012 at 09:00

    Hi Dr. Kaufman,

    Thanks for your comment on the post – I’ve not seen the comment from you on the related post (these are moderated separately from mine and it might still be in the queue) but I thought I should respond in any case.

    I completely agree that this study falls short in many respects (the affiliated-institute exclusion, the exclusion of medical science as a category, the use of Google Scholar, the arbitrary weighting, etc.). However, it applies its criteria to all institutions equally and introduces important metrics to decision makers (h-index, field normalization, etc.), so for that reason I thought it useful to highlight here.

    As for your comments concerning the meaningful ranking of institutions, I agree that it is more difficult and possibly not objective, but I do not think this means that we should not try at all.

    As biologists, we often create similar criteria (some objective, some not) for evaluating things like the effect of removing gene X or adding compound Y, and we do this in (arguably) much more complex systems than universities. Just have a look through any number of neurobiology or behavioural biology studies that establish scales of performance to chart recovery or progress.

    While I do try to hold my own work to the highest of standards, I think we would be foolish to assume that our experiments and tools for assessment are completely objective.

    However, I do agree with you overall that this is not a peer-reviewed social science paper and that it would hold substantially more weight if it were. It does at least start the conversation about such things as strong research clusters at smaller schools and field normalization.

    Again – thank you for your comment, I do hope we will hear more from you in the future!

  3. Reuben Kaufman / September 6, 2012 at 14:27

    Hi Dave,

    Yes, cellular systems are complex, but we usually do run reasonable controls, or at least arrive at conclusions that are objective enough in light of our controls.

    Perhaps you’ve seen my diatribe on the other article by now…. Maybe I was a bit over the top, and could have used more temperate language, but this whole field of “ranking” irks me.

    Surely there is a way to highlight the “strong academic areas” of various universities, without ranking them. That would serve students much better in the long run.

    “If your interest is in academic domain X, here is a list of universities [nationally] that have a strong reputation in that domain.”

    I agree, not 100% objective, of course not, but at least not giving the false impression that as you go down the ranks from 1 to 2 to 3 to X, you are getting progressively “worse” in your choice of universities. That is a terrible model to be promoting.

    Cheers,
    Reuben

  4. Dave K / September 7, 2012 at 06:05

    Hi Reuben,
    Thanks again for your comments. I agree that rankings can unfairly denigrate a school and its accomplishments, and I would happily support the kind of system you propose (highlighting strong areas/departments at particular schools). The compounding factor is that the difference between placements (e.g., 9th and 11th) is much more psychological than actual: one school is “top 10” and the other is not, when the distinction can sometimes come down to “how many trees are on campus” (not that trees aren’t important, but you get my point, I imagine).

    I look forward to hearing more from you in the future, thanks for reading.

    • Reuben Kaufman / September 7, 2012 at 14:49

      Hello again Dave,

      I remember that, years ago, Maclean’s magazine used as one indicator for evaluating a university library the number of holdings divided by the number of students. On that metric, the library at Brandon University in Manitoba ranked higher than the library at Harvard University. I don’t know either library, but I’m sure you’ll agree that using the number of holdings per student as a metric for a library makes absolutely no sense. There were/are other nonsense metrics used in the Maclean’s rankings.

      I don’t blame Maclean’s: they are in the business of selling magazines, and their November issue is a goldmine. I blame our senior university administrators for supplying data that they know are worthless. That’s the travesty.

      Have we exhausted this issue yet?

      Cheers,
      Reuben
