In my opinion

University rankings – a game of snakes and ladders

When it comes to public talk of rankings, it’s a matter of playing at games to gain reputation and engage in advocacy.

BY GARY BARRON | NOV 12 2014

Cries of disappointment and alarm have echoed across Canada in response to the release of Times Higher Education's World University Rankings this fall. Such dismay stands in sharp contrast to the shouts of celebration heard coast to coast when the QS World University Rankings were released only weeks earlier.

Most of Canada’s universities fell in the THE rankings and rose in the QS rankings. If rankings measure how good universities are, then why do Canada’s universities climb in one ranking, but slide down in the other?

The primary reason is that each ranker has a different approach to constructing its rankings, so each ranking emphasizes different aspects of university activity. These methodological differences allow advocates to pick and choose which ranking to use to promote a cause, and which strategy to adopt when a ranking shows them slipping. Such games are much like playing the children's board game snakes and ladders, where one can choose ladders to get ahead of competitors, or find oneself on a snake falling to the bottom of the board.

When it comes to the QS and THE rankings, each has a different approach, though both claim to measure the level of world-class excellence that universities achieve. For QS, 50 percent of the measures relate to a university's reputation among academic experts and employers. In the THE rankings, 33 percent of a university's overall score is based on reputation. This measure is primarily a matter of brand recognition: if your name is out there and people recognize it, you will likely score higher on reputation surveys.

The next largest piece of each ranking is research publication citations, worth 20 percent in QS and 30 percent in THE. Each ranking uses a different source to calculate its citation scores: QS uses Elsevier's Scopus database, and THE uses Thomson Reuters' Web of Science. Scopus has 8,432 unique titles in its database across a broader range of disciplines, journals and books, compared to Web of Science's 934 unique titles. This means that universities with a strong emphasis on natural sciences, medicine and related fields will likely rank more highly in THE than in QS, since Web of Science's narrower coverage favours those fields. More comprehensive universities are at a disadvantage because their social sciences and humanities produce outputs that such databases do not easily capture: books, works of art, community engagement, or social and economic impact.

The THE rankings also measure the amount of industry income received by universities, giving an advantage to technology-focused universities and disadvantaging those engaged in other types of activities.

There are other pieces to each ranking methodology, but I think I have made my point: each ranking measures universities differently, allowing a single university to do well in one ranking and very poorly in another. There is no true measure of universities and any such system is always arbitrary. Anyone can easily create an alternative.
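To see how easily different weightings can flip an ordering, here is a minimal sketch in Python using two hypothetical universities and two made-up weighting schemes that loosely echo the reputation-heavy and citation-heavy emphases described above. The institutions, indicator scores and weights are invented for illustration; they are not the actual QS or THE data or formulas.

```python
# Illustrative only: invented universities, scores and weights,
# not the real QS or THE methodology.
indicators = {
    "University A": {"reputation": 90, "citations": 60, "other": 70},
    "University B": {"reputation": 70, "citations": 85, "other": 70},
}

# Two hypothetical weighting schemes over the same indicators.
reputation_heavy = {"reputation": 0.50, "citations": 0.20, "other": 0.30}
citation_heavy = {"reputation": 0.33, "citations": 0.30, "other": 0.37}

def composite(scores, weights):
    """Weighted sum of indicator scores."""
    return sum(scores[k] * weights[k] for k in weights)

for label, weights in [("reputation-heavy", reputation_heavy),
                       ("citation-heavy", citation_heavy)]:
    ranked = sorted(indicators,
                    key=lambda u: composite(indicators[u], weights),
                    reverse=True)
    print(label, "order:", ranked)

# reputation-heavy order: ['University A', 'University B']
# citation-heavy order: ['University B', 'University A']
```

The same two institutions trade places simply because the weights change, which is the point: the ordering is a product of the ranker's preferences, not of some underlying truth about quality.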

The outcome of such diverse ranking methods is that people can pick and choose between rankings to use as evidence for arguments that suit their needs. When Canadian universities recently fell in the Times Higher Education rankings, that outcome was used to argue that Canada's universities are in trouble and that Canadians must take action to transform their universities and related policy.

Even more interesting is how individual universities play this game of snakes and ladders to argue that their schools are not failing. McMaster University, referring to the QS, THE and Academic Ranking of World Universities, argued that readers who look at all of the rankings can see that McMaster is generally doing well. The University of Alberta's president argued in favour of the QS rankings (in which the U of A rose) and against the THE rankings (in which it fell), on the basis that the QS method is better than the THE method.

In reality, both methods are arbitrary and arguments aimed at research methods are attempts to couch matters of preference as matters of science. For any given question there are some methods that are better than others, but when it comes to public talk of rankings, methods are a matter of playing at games to gain reputation and engage in advocacy.

The temptation to play the rankings game is very strong. A good argument against using any of them would target the heart of the matter: rankings are based on each system’s definition of what a university should be, not on any specific university mission. Rather than do the work of climbing ladders and risk a slide down windy-snakey paths, universities can choose not to play such games at all.

Gary Barron is a PhD candidate in sociology at the University of Alberta. His research profile can be found at http://www.gbkb.ca/profile.

COMMENTS

  1. Sheri O / November 12, 2014 at 09:17

    University rankings are an opportunity for publications, like Maclean's here in Canada, to capture an audience, sell some magazines, and serve the public good. Ranking methodologies of course differ in their weightings, but publications that rank universities shine a light where there is none.
    Universities market themselves, cultivate their brands, advertise, and actively recruit new students with these efforts. Students and society need some other measures by which to make decisions. We don't know how, or even whether, universities go about ensuring quality and living up to their marketing promises. At least we have the rankings. The dearth of comparative information available to students and the public means that other enterprising publications can fill the void.

  2. Gary / November 13, 2014 at 09:45

    Greetings Sheri,
    I agree rankings are an efficient means of sharing information about institutions that are too often at a distance from the people interested in them. You merely need to glance down a vertical list to know which university is positioned where. But that does not mean they are a good source of knowledge or useful for each individual's purposes, particularly not for undergraduate students. For example, I am not aware of any ranking that has an effective measure of the student educational process or experience. Instead, if they try to measure teaching and learning at all, class sizes or professor/student ratios are used as a proxy. Such proxies do not measure learning and are rife with problems. Faculty/student ratios, for example, are problematic because universities often categorize and report faculty differently, so there is a real problem with knowing which university reported only its full-time teaching staff, which reported all research and teaching staff, and so on. There are also ways to game data on class sizes and the like. Paul Axelrod at York University recently wrote a good piece about rankings and warned students to just ignore them.

    Most rankings focus their measures on research, so that may make them a bit more useful for graduate students, but choosing where to do an MA or PhD based solely on rankings is a mistake. The decision should be made in consultation with the department, with potential supervisors, and by contacting former students of potential supervisors. Knowing what you're getting into with a supervisor is probably what determines whether a student has a successful and rewarding graduate experience or drops out of a program depressed and defeated (I could write pages about this, but will leave it here).

    Research measures, such as citations, are rife with problems as well. Richard Holmes operates a "University Rankings Watch" blog and recently wrote a great piece that critiques citation methodology in rankings. Suffice it to say, citations are not a straightforward measure of research impact, excellence, or influence. They may also reflect a scientific community's indictment of a professor, or research done at a different university on the other side of the globe, and so on: http://www.universityworldnews.com/article.php?story=20141105105941193

    Malcolm Gladwell recently wrote a great piece arguing persuasively that most measures in rankings can actually be reduced to measures of wealth. That leaves many more questions for us to ask about universities, the process of education, and the role of rankings, and it certainly does not mean rankings are valid, good, useful, or necessary.
    http://www.newyorker.com/magazine/2011/02/14/the-order-of-things

    But the purpose of this piece was not to deal with all of the detailed issues with rankings. I'm not wholeheartedly against them, but they are misused and misunderstood. Here I only wanted to point out that what we see in the media when rankings are released each fall are games being played by people with particular interests, managing reputations and doing related work. Such work often attempts to use scientific language and method as rhetoric. Rankings may use scientific techniques, but they are fully and completely determined by people with particular preferences, and the public or advocates of various sorts can pick and choose among rankings for their own purposes. University administrators can choose to use whichever rankings they like, or choose to ignore rankings altogether; there is nothing inherently wrong with either option, but there are consequences associated with each. If students choose to use rankings, I would hope they inform themselves about what their rankings of choice are actually measuring, and also speak to people at the universities that interest them, to friends, to family, and so on.
