New Canadian research ranking study reveals a few surprises

Some small universities rank higher than much larger research-intensive schools.

BY ROSANNA TAMBURRI | AUG 31 2012

University rankings have become something of a sport in higher education circles, and institutions alternately love and hate them. Now, a new report by Higher Education Strategy Associates (HESA), a Toronto consulting firm, provides a different way of measuring research strength at Canadian universities.

Rather than using stand-alone publication or citation counts, as other measures do, the HESA index combines individual researchers’ H-index scores with the grants they receive from federal granting agencies. The H-index measures a scholar’s productivity and impact, based on both the number of papers published and how often those papers are cited.
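
For readers unfamiliar with the metric, here is a minimal sketch of how an h-index can be computed from a researcher’s citation counts. It follows Hirsch’s standard definition and is not drawn from the HESA report itself.

```python
def h_index(citations):
    """Return the largest h such that the researcher has at least
    h papers with at least h citations each (Hirsch's definition)."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```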

The problem with traditional research rankings, argue the study’s authors, Paul Jarvey and Alex Usher, is that they favour institutions that are strong in disciplines where researchers tend to publish more and receive larger grants. Physicists, for example, tend to publish and cite each other more often than historians do, and also receive larger grants, the study noted.

The HESA index tries to eliminate these biases and “ensure that schools do not receive an undue advantage simply by being particularly good in a few disciplines with high publication and citation cultures.”
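
The article does not reproduce the report’s exact formula, but the general idea of field normalization can be sketched as follows: express each researcher’s raw metric relative to the average for their discipline before averaging by institution. The field averages and the function below are illustrative assumptions, not HESA’s actual method.

```python
from collections import defaultdict

def field_normalized_scores(researchers, field_means):
    """Average each researcher's raw metric, divided by the mean for
    their discipline, over all researchers at the same institution.
    `researchers` is a list of (institution, field, raw_score) tuples;
    `field_means` maps each field to its mean raw score nationally."""
    totals, counts = defaultdict(float), defaultdict(int)
    for institution, field, raw in researchers:
        totals[institution] += raw / field_means[field]
        counts[institution] += 1
    return {inst: totals[inst] / counts[inst] for inst in totals}

# A physicist with an h-index of 30 in a field averaging 30 counts the
# same as a historian with an h-index of 10 in a field averaging 10.
sample = [("U1", "physics", 30), ("U2", "history", 10)]
print(field_normalized_scores(sample, {"physics": 30.0, "history": 10.0}))
# -> {'U1': 1.0, 'U2': 1.0}
```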

“These aren’t simply about raw money and publication totals,” wrote Mr. Usher in his daily blog. “Our methods help to correct some of the field biases of normal research rankings.”

As well as providing an overall ranking, the study, “Measuring Academic Research in Canada: Field Normalized Academic Rankings 2012,” divides the data into two categories: natural sciences and engineering, and social sciences and humanities. It excluded medical and health-related disciplines because the authors said they couldn’t distinguish between university researchers and those on staff at hospitals. They acknowledged that this was detrimental to the rankings of some schools, particularly the University of Toronto, which would likely have scored higher had the data been included.

Not surprisingly, big research-intensive institutions including the University of British Columbia, McGill University and U of T ranked highest overall. UBC came out on top in both categories: science and engineering, and social sciences and humanities. In science and engineering it was followed closely by Université de Montréal and U of T; the University of Ottawa and McGill placed fourth and fifth respectively. In the social sciences and humanities, UBC was followed by McGill, U of T, and the University of Alberta. (Where an institution, such as U of T, had multiple campuses of sufficient size, they were ranked separately.)

But there were some notable exceptions too. Université du Québec à Rimouski scored exceptionally well in the sciences and engineering category, coming in seventh place and ahead of such heavyweights as the University of Waterloo, Université Laval, Alberta, McMaster and Western. Rimouski’s ranking reflects the high productivity of its marine sciences researchers, the study noted.

Another outlier was the University of Guelph, which ranked fifth in the social sciences and humanities category despite its reputation for being stronger in science disciplines. Trent University also stood out, performing the best among small universities because its professors had good publication records. Simon Fraser University ranked among the top 10 in both categories (sixth in sciences and engineering and 10th in social sciences and humanities) ahead of many leading research-intensive universities.

The study commented that among the schools with substantial research presences in the natural sciences and engineering, Laval, Ottawa, Calgary, Saskatchewan, Sherbrooke, Guelph and Alberta all receive substantially more money in granting council funds on a field-normalized basis than one would expect given their bibliometric performance. Conversely, Simon Fraser, Concordia, York, Manitoba, Trent, Toronto (St. George), UQAM and Memorial all receive substantially less funding than expected, given their bibliometric performance.

In the social sciences and humanities, the study said that McGill, Laval, Guelph, Alberta, Montreal and McMaster receive substantially more money in granting council funds on a field-normalized basis than one would expect, given bibliometric performance. And Queen’s, Trent and Toronto (Mississauga) receive substantially less.

Overall, Ontario institutions, although “they’re funded abysmally,” ranked highly because they perform substantially better on publication measures than anyone else in the country, Mr. Usher said. The report noted that Quebec institutions fared worse in the social sciences and humanities because research papers written in French are cited less often. This doesn’t apply in engineering and the sciences because researchers in those areas, including francophones, publish in English.

“Perhaps the main thing that has been learned in this exercise is that stripping away the effects of institutional size and field-normalizing bibliometrics and grant awards results in a slightly different picture of university performance than what we are used to,” the study concluded. “If one can be bothered to look behind the big, ugly institution-wide aggregates that have become the norm in Canadian research metrics, one can find some little clusters of excellence across the country that are deserving of greater recognition.”

HESA spent eight months compiling a citation database containing publication records for about 50,000 Canadian researchers and derived an H-index score for each using the Google Scholar database. It also used grants from federal granting councils to calculate an average grant per professor.
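
How the bibliometric and granting-council components are weighted against each other is not spelled out in the article. The sketch below simply assumes an equal-weight average of the two field-normalized components, purely for illustration.

```python
def combined_index(norm_h, norm_grants, weight_h=0.5):
    """Blend a field-normalized h-index component with a field-normalized
    grants-per-professor component. The 50/50 weighting is an assumption
    made for this sketch; the report's actual weighting isn't stated here."""
    return weight_h * norm_h + (1 - weight_h) * norm_grants

# An institution 20% above the field-normalized average on publications
# but 10% below it on granting-council funding:
print(combined_index(1.2, 0.9))  # -> 1.05 (slightly above average overall)
```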

Ian Clark, professor at U of T’s School of Public Policy and Governance, said gathering data on so many researchers was “an impressive feat” which no one else has done. “The reason that I find the HESA technique so intriguing is that it builds up an institutional result from professor level data,” Dr. Clark said, calling it a superior methodology. He believes the data will be of interest to provosts, deans and department heads who want to see how their department stacks up against others in the country. “I think that is something that is not available currently,” he said. “Despite what everybody tries to tell themselves, most people are fascinated by rankings.”

Glen Jones, Ontario Research Chair in Postsecondary Education Policy and Measurement at U of T’s Ontario Institute for Studies in Education, said there are some drawbacks to the study. For one thing, it relies on university websites to count faculty, but, he said, what universities do and don’t put on a website and how they account for faculty vary considerably. That’s why the authors weren’t able to include medical researchers, and the difficulty extends to other fields too. Another downside, he noted, is that the study takes into account only research funding from federal granting agencies, which works against fields with a history of strong industry and commercial support. Nonetheless, said Dr. Jones, there’s probably no better data available that the authors could have used. “We’re all looking for better rankings. We’re all looking for different metrics,” he said. The HESA study makes some headway in this area and certainly “holds some potential.”

COMMENTS

  1. Alex Stewart / September 2, 2012 at 12:28

    The h-index is a flawed measure of publication impact. It is insensitive both to very highly cited works and to large numbers of modestly cited works. Therefore, two scholars can have the same h-index even though only one of them has some very highly cited works, and one of them (possibly the same one) also has many more respectable publications. This flaw is scarcely a secret in bibliometric research. It is also not speculative. For example, in our College one professor has only twice the (Publish or Perish) h-index of another, yet has more than 20 times the number of citations.

  2. Paul Brant / September 4, 2012 at 21:34

    I am confused. They used university websites to get a list of the 50,000 full-time professors conducting research at Canadian universities. However, StatsCan reports that there are fewer than 45,000 full-time faculty (and that includes medical faculty, who were not included in this study). Doesn’t this suggest a huge problem in terms of accuracy? If the number of faculty included here is off by 10-15%, how can we take this study seriously?

  3. Dr. RTFM / September 5, 2012 at 12:13

    It’s worse than Paul Brant describes, since it deals with “the average professor” as if this is all that matters in a research institution.

  4. David Greenwood / September 5, 2012 at 12:40

    Yes, metrics such as the H-index are flawed, as are the metrics used in the popular university rankings seen in Canadian media, such as the Maclean’s and Globe and Mail university rankings. But whether we (academics) like it or not, the decision makers, whether within our institutions or in provincial or federal governments, as well as students choosing a grad school or undergraduate degree, funding agencies and any other ‘consumers’ of such rankings, do take notice.

    The advantage of this research performance ranking, flawed as it may be, is that HESA derived their own data, although I question the use of Google Scholar – why not Web of Science and/or Scopus? The rankings used in some other surveys (as cited above) rely on data supplied by the institutions, or voluntary surveys of academics, administrators and others, and so are highly qualitative and subjective.

    A possible flaw (at least for my small, primarily undergraduate university) is that some of my colleagues are in the unfortunate habit of listing their affiliations on publications not as their place of employment, but rather as the more prestigious research centre (at another university) where they are adjuncts and host their graduate students. This practice, which in the case of some small universities (such as Brandon) is specific to a single discipline within the natural sciences with high research productivity, may serve to lower the scores for the institution unless (as noted in their methodology) they searched by listed professors and included records listing the ‘other institution’. Maybe surveys like this one will encourage clarity in declarations of affiliations.

    On the plus side, this survey dispels the myth (at least in MB) that a small university like Brandon (and institutions like it in other provinces) doesn’t have a pool of active researchers who publish and who get funding. But there is room for improvement.

  5. Reuben Kaufman / September 6, 2012 at 00:27

    Like all other ranking exercises, what’s the @#%$^ point?

    We all know that all ranking exercises are far worse than just “flawed”. So it’s unconscionable to feed the public garbage that they then depend on to make important life choices.

    Oh well, I suppose it doesn’t really matter: There clearly is no meaningful difference among the 10 “top” universities as far as a student looking for somewhere to study is concerned. It’s just that the lack of academic integrity irks me no end. No reputable journal would accept this type of garbage, yet we are responsible for keeping the industry alive; this is much to our shame. Will we ever learn?
