From the admin chair

We need a more complex model of faculty productivity

Back to the drawing board.

BY MAUREEN MANCUSO | NOV 06 2013

In my last column, I argued that, like it or not, we as faculty members need to participate in and lead the effort to define and measure our own productivity. The alternative is to allow others to determine the tools and scales of measurement, and try to live with the inevitable distortions, oversimplifications and skewed incentives that result.

The first step is to insist that productivity measurements acknowledge all three types of faculty effort: teaching, research and service. Attempts to assess productivity that do not account for all three categories will be misguided and inaccurate; worse, such unbalanced metrics will in turn create perverse incentives that undervalue and thus undermine the types of effort that are overlooked.

But measuring the productivity of people who don’t produce tangible goods will never be straightforward, because the outputs are so varied in form and manifestation, and even in timeframe: the full impact of what faculty “produce” may not be felt for years as their students, projects and contributions mature and evolve in the cumulative world of knowledge. The temptation is always to measure what can most easily be counted; we must supplement the obvious numbers with assessments of what really matters.

In research, publications are easy to count, and so are research dollars. All other things being equal, more publications and more funding suggest greater impact, accomplishment and “productivity.” But all other things are never equal: publications vary in quality and impact, and funding varies in availability and competitiveness across disciplines and along the theoretical/applied spectrum.

Moreover, measuring an input like funding ignores what the funding led to – what did it produce? Constructs like the H-index can help fine-tune assessment of research impact, but in some fields research outputs take other forms, less amenable to pure citation analysis: from patents, disclosures and partnerships to paintings, plays and poems. We must also account for the significant productive effort that goes into supervision and training of graduate students.
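
To make that construct concrete: a researcher’s H-index is the largest number h such that h of their publications have each been cited at least h times. A minimal sketch of the calculation in Python, using hypothetical citation counts purely for illustration:

# Compute an h-index from a list of per-publication citation counts.
def h_index(citations):
    # Rank publications from most to least cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        # h is the largest rank whose publication still has at least
        # that many citations.
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical example: six papers cited 10, 8, 5, 4, 3 and 1 times.
print(h_index([10, 8, 5, 4, 3, 1]))  # prints 4: four papers with at least 4 citations each

Even so, a single citation-based number like this says nothing about the quality of what is cited, let alone about outputs that are never cited at all.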

Teaching has several simple-but-incomplete metrics: student-faculty ratios assume that each instructor has only a fixed amount of “teaching” to divide among learners; higher graduation rates can reflect lowered standards as easily as increased student success; student course evaluations can devolve into a thinly veiled popularity contest. There are nuggets of utility in numbers like these, but to characterize (and incentivize) truly productive teaching we ought to recognize outputs like student engagement, learning outcomes assessments and curriculum innovations (which, like foundational research, have effects that grow over time). We should also capture the impact of efforts outside the classroom, like thesis and project supervision, experiential and service-learning activities, and the informal but productive interaction that makes up unassigned teaching.

Service doesn’t mean only internal committee work. Faculty members engage in community service, both local and global. As specialists in their fields, they advise governments, industry, non-profits and other institutions, and they serve their disciplines by reviewing or editing the “productions” of their colleagues.

These and other aspects of productivity require interpretation, and some rely on self-reporting. Informal activities are always harder to quantify than formal ones, but we still could identify proxies that at least take those otherwise overlooked efforts into account.

In my experience, this struggle between objectivity and interpretation, between reductive quantification and subjective characterization of quality, is very familiar in peer review and tenure-and-promotion assessment. Those processes are far from perfect, but we do as a profession aspire to assess the whole faculty member, and to balance output numbers with more ineffable evaluations.

I’m advocating that, instead of relying on a few easy-to-acquire but easy-to-misinterpret numbers, we commit to building a more complex model of faculty productivity – one that incorporates what is good about how we already assess one another for career progress but is also accessible to external stakeholders demanding numbers. Instead of endlessly arguing over the implications of incomplete and misleading simple counts and averages, let’s have the argument up front about what goes into a few composite numbers that, in the end, we can accept as an approximate representation of productivity. Then we can turn our attention to the comparatively simple chore of improving what lies beneath.

ABOUT MAUREEN MANCUSO
Maureen Mancuso is provost and vice-president, academic, at the University of Guelph and a 3M National Teaching Fellow.