Speculative Diction

Rank and File

BY MELONIE FULLICK | OCT 16 2014

With the new academic year comes a new round of headlines and tweets promoting the Times Higher Education (THE) rankings results for 2014. Each year, alongside the THE, other prominent international rankings such as the QS World University Rankings and the Academic Ranking of World Universities (ARWU) are part of the same cycle. There are also national rankings like those produced by Maclean’s and the US News and World Report, as well as specialty rankings – the THE, for one, engages in a kind of salami-slicing of its data to produce a number of themed rankings throughout the year. These include separate rankings on reputation, internationalization, universities in BRIC nations, and universities under 50 years old.

The major rankings’ results aren’t usually much of a surprise because the same universities tend to dominate consistently. In fact, if there were really significant short-term changes, it would likely indicate an alteration to (or problem with) the methodology, since changes to quality and reputation tend to happen over long periods. This means the rankings providers must find a spin on each year’s results that generates interest by making it sound like something dramatic is happening. For example, this year the THE has announced that Asian universities are (still) displacing those in Europe and North America, due to stagnating or reduced government funding in the latter regions. As others have discussed in more detail than I can offer here, the actual difference in position for these universities may or may not be significant; but that’s not really the point when coverage requires a strong narrative to drive attention to the new results.

Media themes relating to rankings results differ around the world, depending on what’s happening locally and how the rankings turn out for those areas. Canada’s story this year is that its universities are dropping in the THE rankings. When I saw this news on Twitter, I joked that soon we’d see a headline or article claiming that Canadian rankings “failure” should be a reason to change the way universities are funded – little did I know, this exact argument had already appeared the same day in the Globe and Mail (and it came from Phil Baty). The reason all this is predictable is that it’s the argument used by the U15 group, and a drop in the rankings helps make the case for pursuing “excellence” through changes to funding. On the other hand, when Canadian universities are doing well in the rankings, this is seen as proof that they could compete on the same field as the Harvards and Oxfords of the world – but only if they receive the right support.

The U15’s argument is a reminder that the use of rankings as a justification for particular funding strategies is closely tied to the argument for building “world-class research universities”, which present themselves as engines of national economic development. High-ranking Canadian universities aren’t alone in this; we can observe a kind of revolt of the elite universities (sorry, Lasch…), where institutions that already lobby together nationally are now forming international networks and sharing strategies across borders. The Australian example is instructive, since the Group of Eight universities seem to have been pushing particularly hard for systemic changes that would presumably privilege their end of the market (and using rankings results as part of their argument).

The ways in which rankings influence the global conversation about university quality are troubling not just because they are fodder for some questionable recommendations, but because this is another example of something I discussed in my posts on PhDs and “student engagement” – where the ranking itself, not the quality it represents, becomes the justification for various kinds of change. This isn’t about quality per se, but more about rankings results being embedded in a competitive system where their symbolic value becomes tied to the allocation of real resources. That’s certainly what we see with the Canadian argument, where rankings results are used to justify the targeting of resources to universities that have a chance at being world-class, instead of wasting scarce resources on those that are unlikely ever to be truly “competitive”.

What’s being mobilized in this rhetoric is the spectre that haunts every striving university, i.e. the condition of its international competition. There’s a perpetual emphasis on being ever-closer to the top of the pile – in comparison to other institutions – as if this is the most meaningful outcome for everyone, as if these assessments reflect the kind of quality that inherently matters most. Rankings too often assist in producing and reinforcing the same hierarchies of merit that they appear merely to describe.

But if you make the system into a kind of game, you should expect participating agents to try to game the system. Misreporting of data is one way we’ve seen this happen; another is simply not rewarding the work that won’t “count”; another is skewing the relevant variables through other dodgy practices. Yet another sure way to beat the system is to throw a lot of money around (as we see in this direct dispatch from Captain Obvious). Thus the representation of quality can be altered by rigging the performance outputs; a high ranking may signify “quality”, but the signal isn’t without noise.

The notion of quality that’s invoked is also different for each ranking, and of course there have been many criticisms of the methodologies employed. One recent government-commissioned Norwegian study of the ARWU and THE rankings concluded that they “say almost nothing about education.” Meanwhile, most universities around the world are not represented in the major rankings’ hierarchies. If rankings are tools for “consumer information”, surely it’s still a relatively elite group of consumers who are making use of them.

That won’t stop new players with new methodologies from entering the market (the rankings market, that is). LinkedIn has just produced its own ranking system based on user data, while US News will shortly release its “Best Global Universities” ranking. Another recent entrant – and one that stands out for its difference – is U-Multirank, which seeks to address the lack of diversity (of institutions, of measures, etc.) as well as the over-emphasis on hierarchization by taking “a user-driven, multi-dimensional approach comparing different kinds of institutions across a range of activities”.

Because universities are now assumed to compete in a global market, rankings aren’t just comparisons – they’re comparisons in the service of particular goals. I think ultimately our goals need to relate more to the pursuit of a kind of quality that serves the many roles universities play. This should encompass local as well as international contexts and a diverse array of “outputs”, rather than merely the most prestigious or most readily measurable ones. And if government funding to universities is to be changed or increased, it shouldn’t be primarily in the service of the few institutions that can align themselves with the narrow parameters of “excellence” favoured by the major international rankings agencies.

ABOUT MELONIE FULLICK
Melonie Fullick is a PhD candidate at York University. The topic of her dissertation is Canadian post-secondary education policy and its effects on the institutional environment in universities.