The Black Hole

Is there an effective way to evaluate universities that take a unique approach to education and social impact?

A potentially interesting approach would be to consider substantially less standardization in the metrics that assess academic institutions.

BY DAVID KENT | APR 23 2020

When we evaluate academic institutions at a national or international level, we seem to have worked out a system that recognizes that not all universities aspire to become the 27th best Oxford in the world. Some universities are recognized for research excellence, others for specific specialist research topics, others for teaching excellence and others for graduate versus undergraduate programs. Embracing this diversity in how research institutions are organized and the values they promote is incredibly important to help develop the full range of academic productivity that can improve our society. That said, we do seem to limit ourselves to thinking about research and teaching when it comes to universities, and I’ve always wondered whether this really reflects the overall value of an academic institution.

Where do we get a sense of a university’s commitment to and leadership in specific areas of national or international importance (e.g., sustainability/climate change)? What about universities that promote public engagement among their researchers? Are these areas part of teaching or research? A university at the leading edge of public engagement would almost certainly be harshly penalized in a ranking system for having fewer research grants/papers, so how does it make the business case for investing in such areas? In fact, our obsession with being evaluated (and performing well in said evaluation) can be very distracting and may not actually lead to better performance by those climbing the tables, especially if it simply becomes a tick-box exercise.

There are entire research fields – and commercial enterprises – dedicated to understanding and designing how best to measure research and academic institutions. I will not purport to be an expert in evaluation and would happily hear from experts in these areas, especially if any of the comments to follow are way off base. My comments come from the perspective of someone who has recently become part of what is being evaluated and are observational at best. Nonetheless, I think there are several areas worth highlighting in these early exposures to how the United Kingdom evaluates its institutions that might be pertinent to Canada as well.

First, some quick information – the U.K. undertakes a national research evaluation, the Research Excellence Framework (currently dubbed REF 2021), approximately every five to 10 years that greatly informs how much funding each institution obtains from the government. There is also a more recently established Teaching Excellence Framework (TEF) that evaluates teaching at U.K. institutions. These frameworks are not straightforward and take months to years of preparation before a final submission is made. This level of evaluation comes at a price and some aspects are rarely discussed. Three key areas are worth a mention:

1) An incredibly high administrative burden – legions of administrators are hired to help navigate the evaluation metrics once they are established, and still others to ensure that a particular institution’s battle plan is executed. At Cambridge alone, there are 16 central administrative positions dedicated to REF Management and REF Impact. This does not include the significant amount of departmental administrator time nor the time of academics enlisted to run through mock REF evaluations, etc. It is a very expensive endeavour.

(Side note: the institutions sufficiently resourced to wage this battle will almost certainly come out on top with no necessary correlation to having the best research, teaching, etc.)

2) Strain on academic research/teaching – despite their best intentions, the administrative squad described above cannot do all the jobs themselves, and significant numbers of academic staff are required to undertake substantial roles in the process. These jobs range from collating data and evaluating colleagues’ work to developing exemplar “Impact” case studies and even running full-fledged mock REF competitions to assess how well a department might perform on current data. All of this time takes away from actually completing research and teaching. Again, institutions that can better support their academic staff to reduce this burden will come out on top.

3) Unwanted streamlining of the university sector – perhaps the most difficult to monitor is the impact such evaluation exercises have on the decisions a university makes with respect to who it hires, what it builds, and how it positions itself internally and externally. If the boxes need to be ticked to get future monies in, then universities will strive to tick those boxes. If a program, person, or investment doesn’t align with one or more of those boxes, it risks getting cut – especially in a year when the evaluation is pending. In dual teaching/research institutions, this has major implications for how people are evaluated – research grants, papers and patents can suddenly become more valuable than rigorous teaching and training.

If we return to my original question of how we can find an effective way to evaluate universities that take a unique approach to education and social impact, there are several possibilities. One mechanism would be to incorporate even more strands of potentially evaluable activity in any such exercise – this would likely cost more and require even more administrative time to get it right (or wrong!). A second possibility would be to opt for more breadth and less depth across definable spaces that reach beyond teaching and research to give a more overall assessment of a university and its role in society. Finally, a potentially interesting approach would be to consider substantially less standardization in the metrics that assess academic institutions – fewer boxes to fill and fewer rules around those boxes to allow each university to make its own case for why its strengths are important to invest in.

Any evaluation framework will require an assessment of raw academic research productivity and teaching quality – these are the bread and butter of academic researchers. There is a specific need to assess whether researchers are producing respected work in their fields and whether students are being educated to a high standard. Beyond this, though, there is such incredible potential for diversity in how universities push the boundaries of education and research, and the current system (in the U.K. at least!) seems to shunt universities onto one big ladder without valuing these different approaches – and it seems to cost an enormous amount of time and money to do so. It may be time for a change.

ABOUT DAVID KENT
Dr. David Kent is a principal investigator at the York Biomedical Research Institute at the University of York, York, UK. He trained at Western University and the University of British Columbia before spending 10 years at the University of Cambridge, UK where he ran his research group until 2019. His laboratory's research focuses on the fundamental biology of blood stem cells and how changes in their regulation lead to cancers. David has a long history of public engagement and outreach including the creation of The Black Hole in 2009.