In my opinion

University rankings: The Emperor has at least some clothes

BY KIM BLANK | NOV 30 2016

Yves Gingras, in his recent opinion article, argues against the general validity of academic rankings. He writes that they do not tell the whole story and should not be taken seriously by “well-educated academics.” This begs at least one question: What exactly should be taken seriously as we assess or scrutinize universities and colleges for their relative quality?

Yes, there’s no doubt that universities that get decent rankings often parade their high placement and may try to use it to their advantage. Why not? And, there’s also no doubt that universities with low rankings find ways to at least look good – a snazzy but friendly website is generally the way, along with a sloganized discourse (a.k.a. branding) that promotes great experience, great ambiance, a place that really cares, blah, blah, blah.

Sometimes gross hyperbole spills over from the communications consultants. For example, at my own institution – the University of Victoria, which is a pretty good place – the singular up-front claim on its home page promotes UVic as “Canada’s most extraordinary academic environment” that will “provide an Edge that can’t be found anywhere else.” Double sigh. The very best institutions generally don’t need to promote much beyond the obvious, though uncertain economic conditions over the last decade or so have sometimes brought out the worst of the best.

The supposedly undiscoverable or misleading truth about the quality of universities is actually not that hard to figure out, with or without those apparently anxiety-inducing rankings. Yes, there are all kinds of factors that can be statistically stirred into those formal assessments, like citations (or citation impact), peer-reviewed publications, student/faculty ratios, diversity, international research collaboration, research income, financial stability, patent filing and (the dreaded) reputation.

But let us boil it down to the obvious without these variously and at times arbitrarily weighted measures: better universities have better facilities (libraries, labs, classrooms, computing facilities, etc.), better faculty (check their credentials, the importance of their research, their Nobel Prizes), better students (acceptance rates are a very obvious indicator), better student support and even, yes, better administrators. Most of this should be fairly recognizable for “well-educated academics,” and even for prospective students, their parents and the public.

Maybe something else is lurking behind these apparent fears of rankings. In the academy, we don’t like to speak about the very obvious qualitative distance between the top and bottom academic institutions. We are, on the surface, basically polite and well-meaning sods. But, when speaking of other institutions, we can also be exceedingly defensive and, at moments, a little passive-aggressive. We don’t want to face certain truths: that, for example, the vast majority of academically oriented PhDs are not good enough and will never be good enough to get a job at, say, Princeton; and the vast majority of students are not smart enough to get into Yale.

Would it really be a terrible thing to note that the University of Toronto, for example, is much better than any number of medium-sized, regional universities when it seems to be perfectly obvious, ranking system or not? Hard work is not the same as excellent work; participation is not the same as performance; and limitations are often limitations, no matter what dream you follow.

Let’s be honest here: not everyone is good enough to play for the Montreal Canadiens, no matter how hard they try; not everyone can hit a 90-m.p.h. curve ball; not everyone can write a discipline-changing book or think of and carry out a ground-breaking study; and not everyone is a brilliant communicator or teacher.

The ranking systems are not perfect, but neither are they, as Professor Gingras writes, fully “illusory.” It is not hard to figure out which are the superior schools, and these places almost always appear toward the top of most ranking systems, despite their diverse and at times quirky methodologies.

I confess I have not read Professor Gingras’s new book about research evaluation, published by the MIT Press. MIT, of course, is one of the top universities in the world, and the rankings say so. Presumably he knew something about its quality as an elite university and the advantages of being associated with it.

Kim Blank is a professor of English at the University of Victoria.

COMMENTS

  1. Reuben Kaufman / November 30, 2016 at 17:27

    Kim,

    There is nothing at all wrong with evaluating university quality. The toxic aspect is trying to reduce each metric to a single number, taking the average, and then ranking universities individually. Nobody involved with university life can believe that the first-ranked university (in any league) is objectively better than the 2nd, which is better than the 3rd, etc. Then there is the usual problem with interpreting any given metric. If university A spends more per capita on “student services” than does university B (and the difference is often trivial), does it really mean that student services are in any sense “better” at university A than at B, or should it be interpreted as university B having greater efficiency in its use of funds? Or (more probably, IMO) does it have no substantial meaning at all?

    I think it would be marginally better to give universities some sort of letter grade based on a suite of metrics, so that there is no attempt to distinguish among the dozens which get an A grade, or among the hundreds that get a B grade, etc. Or maybe categories like “Excellent,” “Very good,” etc.

    But even the latter has problems. One’s experience at any university is affected by so many issues that can’t even be measured. E.g., what if a student is particularly interested in tick biology? Should they try to get into a top-rated school, or should they try to get into the university where the tick biology program is very strong, even if that university has a much lower overall rank? Basically, universal rankings, no matter how popular, have no value other than giving employment to the people who administer them.

    I remember reading about secondary school rankings throughout Britain some years ago, where thousands of schools are ranked. There was one school in London, a Jewish religious school, it so happens, that was very near the top rank within its league. But one year, the upper-grade students decided not to answer the question on Shakespeare in the literature exam; this was because of a perception that Shakespeare was more anti-Semitic than other writers of the time (yeah, sure!). The headmaster of the school praised them for their stance. However, this resulted in a slight reduction in the average grade for that one course in the curriculum. The newspaper in which I read the story stated that the school tumbled HUNDREDS OF SLOTS in their national ranking that year! Hundreds of slots because a dozen students didn’t answer one question in one course!

    I doubt you could find a better example than that to show how ridiculous these metrics are in the end.

  2. Marc Spooner / December 1, 2016 at 16:25

    I find it interesting that the article uses sporting analogies.

    There are world-class researchers at every Canadian university, just as there are mediocre ones at the leading American ones mentioned.

    Above all else, the politics of evidence, it appears to me, plays the most significant role in which approaches and forms of scholarship are top-rated and which are sometimes overlooked altogether.

    To play your game with sporting analogies: even the best hockey and baseball players are drawn from the most unlikely of places, just as the Montreal Canadiens or the Edmonton Oilers have had their share of unskilled enforcers.

    Rankings are misleading, toxic, and coercive to the degree they are worshiped.

    Respectfully,

    Marc

  3. Bob Lane / December 2, 2016 at 13:22

    You mean “raises the question” don’t you, Kim?
