The trouble with university rankings
It’s Maclean’s magazine time again.
Maclean’s magazine’s annual rankings of Canadian universities have hit the stands, providing readers with simplified, easy-to-digest ratings of some four dozen institutions (only about half of the total number of degree-granting institutions in Canada). As Maclean’s best-selling issue, the rankings edition obviously matters in the magazine marketplace. Complaints by academics about the meaning and value of the survey will be dismissed (as usual) by Maclean’s, but the skeptics should still have their say.
Here are six reasons (unranked) for questioning the usefulness of this ritual grading exercise:
1. In a frenetic world where consumers face information overload and have too little time to explore important issues in depth, the Maclean’s survey panders to the craving for instant, simplistic and formulaic answers to complex questions.
2. University experiences are highly individualistic. Students are unlikely to have entirely positive or negative encounters at their institution of choice, however that institution is ranked by Maclean’s. Some courses will be stimulating, others less so. Some professors will be accessible and engaging, others will be distant and possibly condescending. Students will have both exhilarating social lives and relational disappointments. No university has a monopoly on one type of academic (and life) experience or the other.
3. Similarly, universities comprise scores of academic offerings of varying character and quality. One institution’s faculty of arts may include departments whose educational cultures and teaching practices differ tremendously. Aspiring university students (and their families) need to do their homework – they should look closely at the specific programs that interest them and ignore Maclean’s aggregate bottom line. It tells them nothing about an individual university’s component academic strengths, or about an institution’s fit with a student’s interests and abilities.
4. It is legitimate for Maclean’s, and other inquisitors, to ask questions about and comment on any aspect of university life. How big are the classes, how many books are in the library, how many professors have PhDs, who gives out the most scholarships? But to grade each of these variables and then total the scores is a deeply flawed evaluation process. As David Naylor, president of the highly ranked University of Toronto, pointed out in 2006, “I learned to be wary of aggregate rankings of institutions. Imagine a hospital that was superb at heart surgery but had a mediocre obstetrics program. The combined rating for these two programs would be useless for heart patients and expectant women alike. It’s much the same when complex universities are reduced to a single score.”
5. There are many aspects of university life that Maclean’s could measure, but never has, that would very likely change the overall rankings. For example, how do our institutions grapple with the important issues of cultural diversity and community outreach? Surely, this matters in a country with such a rich racial, ethnic and religious mix. The current rankings issue, however, does have a startling lead article on the perceived over-representation of “Asians” in the Canadian student population. This is entirely reminiscent of the fears expressed in the 1920s and ’30s about the growing presence of Jews in our universities, which was followed by the introduction of ethnic (and racial) admission quotas on a number of campuses. Maclean’s certainly doesn’t advocate new quotas, but the article can only encourage those who do. Furthermore, the vast majority of students identified as “Asian” in the article are actually Canadian, and they will rightly feel diminished by this incendiary piece. Foreign students still comprise a relatively small proportion of the overall university-student population.
6. These formulaic ranking schemes are hardly objective. Their values and priorities (i.e. what the authors think matters most) are embedded in the questions they ask. Like Maclean’s, they often rely heavily on “reputational” surveys, which are simply a way of quantifying impressionistic and often ill-informed opinions about an institution’s performance or status.
However superficial and defective they may be, there clearly is no end to the kinds of ranking exercises that now pervade all levels of education. Read them, enjoy them (or denounce them), but don’t rely on them in the important exercise of planning one’s educational future.
Paul Axelrod is a professor in the faculty of education at York University and the author of several books on the history and policy development of postsecondary education.