
The trouble with university rankings

It’s Maclean’s magazine time again.

by Paul Axelrod


Buyer beware!

Maclean’s magazine’s annual rankings of Canadian universities have hit the stands, providing readers with simplified, easy-to-digest ratings of some four dozen institutions (only about half of the total number of degree-granting institutions in Canada). As Maclean’s best-selling issue, the rankings edition obviously matters in the magazine marketplace. Complaints by academics about the meaning and value of the survey will be dismissed (as usual) by Maclean’s, but the skeptics should still have their say.

Here are six reasons (unranked) for questioning the usefulness of this ritual grading exercise:

1. In a frenetic world where consumers face information overload and have too little time to explore important issues in depth, the Maclean’s survey panders to the craving for instant, simplistic and formulaic answers to complex questions.

2. University experiences are highly individualistic. Students are unlikely to have entirely positive or negative encounters at their institution of choice, however that institution is ranked by Maclean’s. Some courses will be stimulating, others less so. Some professors will be accessible and engaging, others will be distant and possibly condescending. Students will have both exhilarating social lives and relational disappointments. No university has a monopoly on one type of academic (and life) experience or the other.


3. Similarly, universities comprise scores of academic offerings of varying character and quality. One institution’s faculty of arts may include departments whose educational cultures and teaching practices differ tremendously. Aspiring university students (and their families) need to do their homework – they should look closely at the specific programs that interest them and ignore Maclean’s aggregate bottom line. It tells them nothing about an individual university’s component academic strengths, or about an institution’s fit with a student’s interests and abilities.

4. It is legitimate for Maclean’s, and other inquisitors, to ask questions about and comment on any aspect of university life. How big are the classes, how many books are in the library, how many professors have PhDs, who gives out the most scholarships? But to grade each of these variables and then total the scores is a deeply flawed evaluation process. As David Naylor, president of the highly ranked University of Toronto, pointed out in 2006, “I learned to be wary of aggregate rankings of institutions. Imagine a hospital that was superb at heart surgery but had a mediocre obstetrics program. The combined rating for these two programs would be useless for heart patients and expectant women alike. It’s much the same when complex universities are reduced to a single score.”

5. There are many aspects of university life that Maclean’s could measure, but never has, that would very likely change the overall rankings. For example, how do our institutions grapple with the important issues of cultural diversity and community outreach? Surely, this matters in a country with such a rich racial, ethnic and religious mix. The current rankings issue, however, does have a startling lead article on the perceived over-representation of “Asians” in the Canadian student population. This is entirely reminiscent of the fears expressed in the 1920s and ’30s about the growing presence of Jews in our universities, which were followed by the introduction of ethnic (and racial) admission quotas on a number of campuses. Maclean’s certainly doesn’t advocate new quotas, but the article can only encourage those who do. Furthermore, the vast majority of students identified as “Asian” in the article are in fact Canadian, and they will rightly feel diminished by this incendiary piece. Foreign students still comprise a relatively small proportion of the overall university-student population.

6. These formulaic ranking schemes are hardly objective. Their values and priorities (i.e. what the authors think matters most) are embedded in the questions they ask. Like Maclean’s, they often rely heavily on “reputational” surveys, which are simply a way of quantifying impressionistic and often ill-informed opinions about an institution’s performance or status.

However superficial and defective they are, there clearly is no end to the kinds of ranking exercises that now pervade all levels of education. Read them, enjoy them (or denounce them), but don’t rely on them in the important exercise of planning one’s educational future.

Paul Axelrod is a professor in the faculty of education at York University and the author of several books on the history and policy development of postsecondary education.


Comments on this Article

For an aspiring graduate student or postdoctoral fellow, it is the mentor rather than the university 'brand' that WILL make the biggest impact. I have been fortunate to work for excellent research mentors during my training, and the rankings of the institution have had only a modest effect. Maclean's rankings may sell more copies (totally understandable from the publisher's point of view) but will provide precious little regarding career advancement.

Posted by S Chakrabarti, Dec 9, 2010 1:52 PM

Professor Axelrod uses "superficial" and "defective" to describe the Maclean's annual rankings of Canadian universities, and I am inclined to agree. Firstly, given the variation and differentiation among and within Canadian universities (i.e., courses, programs), simplistic comparisons will not suffice. There is so much differentiation among institutions WITHIN provinces that measuring, for example, monetary awards given to students across provinces will depend on how much support each province's government gives to its institutions. Also, is institution size (i.e., population) taken into account? Furthermore, if these rankings are published annually, they provide only a snapshot of universities and their activities (with whatever formulaic measures they employ) and may not consider changes that occur within these institutions throughout the year as the "data" are gathered. How are these data gathered in the first place? Perhaps what is needed are transparent explanations of how data are accumulated, inclusion of caveats about the research's limitations, and an outlet for public feedback. With the above in mind, perhaps Maclean's ranking system would acquire more legitimacy.

Posted by Lourdes Villamor, Dec 2, 2010 2:15 AM
