Recently the Higher Education Quality Council of Ontario (HEQCO) released a report (PDF) on a study “designed to measure the teaching loads of faculty members in the Ontario university system and the relationship of this variable to others, such as research output and salary.” The study, comprising 10 of Ontario’s 20 publicly funded universities, looked at faculty teaching in three disciplines (economics, chemistry and philosophy). Results were compared separately for each discipline and institution, as was teaching activity at different levels (assistant, associate and full professor). The study is framed in these terms: “the rationale for analyzing teaching loads was to inform the discussion about opportunities for greater differentiation and productivity in the Ontario university system.” It seems that a goal was to show whether an uneven spread in teaching exists, and to highlight where and how teaching loads could be increased to improve “productivity.”
As is clear from some of the responses it’s received, this study prods at a sore spot (the quest to translate faculty work into measurable units) with what seems like a blunt instrument – but why is that the case? In this post, rather than succinctly analyze the methodology or the policy implications, I want to make a few comments about the context of this paper, and what this context tells us about its construction, its interpretation and its use. Firstly:
Measurement of what? Alex Usher pointed out on Twitter that the study is not designed to measure “how much” research professors do, but merely whether they are “research-active” at all, in comparison to how many courses they are teaching. Even this measure leaves out all activity other than tri-council grants and peer-reviewed journal articles. There’s also (as critics have pointed out) no attempt to show anything other than the number of courses taught, so the differing amount of work per course is left out entirely, as are other non-course-related activities that provide support for student learning. Isn’t it important to make a distinction between the number of classes taught and the actual time spent on teaching work?
Massification and public opinion. The focus on dealing with teaching “load” highlights one of the ongoing problems of massification. One reason that expansion of enrolments is a political issue is because more people now have personal experience with the university; they are therefore more likely to have stakes in the discussion and in its material outcomes. We see an example of this when Margaret Wente in her column plays for populism by referencing what parents expect from universities, i.e. that their primary role is to teach students. If most people see universities as teaching-focussed institutions, this expectation must be addressed and it becomes more visible in rhetorical and political strategy. Hence we also see the Ontario Confederation of University Faculty Associations and the Council of Ontario Universities developing communications campaigns that seek to change public perceptions through highlighting the value of university research. Which also brings us conveniently to the next issue:
The professional prioritization of research in academe. Do we have a problem with an unequal spread of “teaching loads?” If we do, then surely part of the issue is that teaching is seen as a “load” to begin with. When we consider what is most valued in the academic career ladder, unfortunately this makes sense. While the public expects teaching to be valued, research is what still generates the most prestige, as is clear from the hierarchies and divisions of labour in academic work. This is why it’s so interesting that the Ontario government’s differentiation discussion often seems to operate within a kind of vacuum where teaching and research are seen as “different but equal” in the academic economy.
Missing data. In this Maclean’s article, we’re told that this is the “best [HEQCO] could do with the data available.” That highlights the old problem that many higher ed researchers in Canada could tell you about: we so often just don’t have any numbers to work with. But in this case, HEQCO’s paper states that university administrations do have much more detailed data, but that it’s not publicly available. The implication is that institutions aren’t willing to share the numbers, and that if they would provide the information in detail, the government could do a better job of policymaking. So are universities refusing to participate in this policy-making process, and if so, why is that the case?
Calculating knowledge. Admittedly my bias is that I do qualitative research, partly because I think it’s important to question where numbers come from and what they actually tell us. The HEQCO report provides an interesting example wherein what can’t be measured, is not “seen” even though it may be acknowledged as absent: “We recognize that faculty members have other teaching responsibilities besides credit courses, such as unassigned courses, preparing for lectures, office hours, student advisement, and undergraduate and graduate student supervision. Information on these activities is not publicly available and rarely even measured, and is therefore not included in our construction of faculty workload.”
So there’s a lot missing. But even if there weren’t, what does “productivity” mean in the context of university teaching, and can all of the contributing factors be captured in an ever-widening net of numbers? This is not the goal of HEQCO’s report, but I think it’s an important question to ask in the face of the possible assumption that “more (quantitative) data” would be the answer to the problems highlighted in this study. There are things that can’t be shown with “more data”; what are they, and what will count as data? I think there’s a connection here to the current and growing obsession with “big data” as the answer to big problems, including of course the lack of efficiency of the learning process.
In a case like this where so much information is not available, we get a limited understanding of what’s going on. The resulting image of “productivity” also reminds me of this excellent post by Kate Bowles wherein she discusses the unseen, unappreciated work that is necessary for a university to be “productive” in the ways that can be measured and slotted into existing models. This is a qualitative gap I think we should address. HEQCO’s study is not disingenuous – its limitations are pretty thoroughly described throughout – but we’re still presented with specific recommendations, such as having research-inactive faculty teach 50 percent more courses.
I think an important question is how much influence this kind of research has on PSE policy, given the amount of data (both quantitative and qualitative) that aren’t included. The self-affirming logic of the various differentiation reports has the air of a foregone conclusion; since Ontario’s universities are already differentiated, then it’s “natural” for government policy to reflect that fact and to create greater efficiencies by doing so. Provision of data is framed as an opportunity for participation in governance, but for universities, handing over more detailed information may be perceived as a loss of autonomy in decision-making. This is the context of governance in which HEQCO’s report was produced, and it will be interesting to see if the research ends up being revised and expanded (as I’d say it needs to be), or if there’s any attempt to implement the recommendations.