Cries of disappointment and alarm have echoed across Canada in response to the release of Times Higher Education's World University Rankings this fall. Such dismay marks a sharp contrast from the shouts of celebration heard coast to coast when the QS World University Rankings were released only weeks earlier.
Most of Canada’s universities fell in the THE rankings and rose in the QS rankings. If rankings measure how good universities are, then why do Canada’s universities climb in one ranking, but slide down in the other?
The primary reason is that each ranker takes a different approach to building its ranking, so each emphasizes different aspects of university activity. These methodological differences allow advocates to pick and choose which ranking to cite when promoting a cause, and which strategies to adopt when dealing with the rankings in which they appear to be falling. Such games resemble the children's board game snakes and ladders, where one can climb a ladder to get ahead of competitors, or land on a snake and slide to the bottom of the board.
When it comes to the QS and THE rankings, each takes a different approach, though both claim to measure the level of world-class excellence that universities achieve. For QS, 50 percent of the measures relate to a university's reputation among academic experts and employers. In the THE rankings, 33 percent of a university's overall score is based on reputation. This measure is primarily a matter of brand recognition: if your name is out there and people recognize it, you will likely score higher on reputation surveys.
The next largest piece of each ranking is research publication citations: 20 percent in QS and 30 percent in THE. Each ranking uses a different source to calculate its citation scores. QS uses Elsevier's Scopus database, and THE uses Thomson Reuters' Web of Science. Scopus has 8,432 unique titles in its database across a broader range of disciplines, journals, and books, compared to Web of Science's 934 unique titles. This means that universities with a strong emphasis on natural sciences, medicine and related fields will likely rank more highly in THE than in QS. More comprehensive universities are disadvantaged because much of their social sciences and humanities output, such as books, works of art, community engagement, or social and economic impact, is not easily captured by these databases.
The THE rankings also measure the amount of industry income received by universities, providing an advantage to technology-focused universities and disadvantaging those engaged in other types of activities.
There are other pieces to each ranking methodology, but I think I have made my point: each ranking measures universities differently, allowing a single university to do well in one ranking and very poorly in another. There is no true measure of universities and any such system is always arbitrary. Anyone can easily create an alternative.
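To make the arithmetic concrete, here is a minimal sketch using entirely hypothetical universities, component scores and weights (loosely echoing the reputation-heavy versus citation-heavy splits described above, and not either ranker's actual formula). It shows how the same two institutions can swap places simply because the weights change.

```python
# A toy calculation with made-up numbers, not either ranker's real formula.
# Two hypothetical universities are scored on two components, then ranked
# under two hypothetical weighting schemes.

universities = {
    "University A": {"reputation": 90, "citations": 60},  # strong brand recognition
    "University B": {"reputation": 65, "citations": 95},  # strong citation record
}

weighting_schemes = {
    "reputation-heavy scheme": {"reputation": 0.50, "citations": 0.20},
    "citation-heavy scheme":   {"reputation": 0.33, "citations": 0.30},
}

for scheme_name, weights in weighting_schemes.items():
    # Weighted sum of components for each university under this scheme.
    scores = {
        name: sum(weights[c] * values[c] for c in weights)
        for name, values in universities.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    print(f"{scheme_name}: {ranked[0]} ranks first ({scores})")
```

Under these made-up numbers, University A comes out ahead when reputation carries half the weight, and University B comes out ahead when citations count for more: the institutions have not changed, only the definition of excellence has.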
The outcome of such diverse ranking methods is that people can pick and choose between rankings to use as evidence in arguments that suit their needs. When Canadian universities recently fell in the Times Higher Education rankings, that outcome was used to argue that Canada's universities are in trouble and that Canadians must take action to transform their universities and related policy (see this example and this one).
Even more interesting is how individual universities play this game of snakes and ladders to argue that their schools are not failing. McMaster University, referring to the QS, THE and Academic Ranking of World Universities, argued that readers who look across all of the rankings can see that McMaster is generally doing well. The University of Alberta's president argued in favour of the QS rankings (in which U of A rose) and against the THE rankings (in which U of A fell) on the basis that the QS method is better than the THE method.
In reality, both methods are arbitrary, and arguments about research methods are attempts to couch matters of preference as matters of science. For any given question some methods are better than others, but when it comes to public talk of rankings, methods become pieces in a game played to gain reputation and engage in advocacy.
The temptation to play the rankings game is very strong. A good argument against using any of them targets the heart of the matter: rankings are based on each system's definition of what a university should be, not on any specific university's mission. Rather than do the work of climbing ladders only to risk sliding down a snake, universities can choose not to play such games at all.
Gary Barron is a PhD candidate in sociology at the University of Alberta. His research profile can be found at http://www.gbkb.ca/profile.