
Mental health in academe: let’s get our stories straight

Posted on November 21, 2014


On Monday while scrolling idly through my Twitter feed, I learned that the Guardian’s Higher Education Network in the U.K. has been nominated for an award for its coverage of mental health in academe.

The Guardian’s recent coverage of this topic started with an anonymously written article on March 1, 2014, which discussed the “culture of acceptance” of mental health problems in academe. The author argued that depression and anxiety due to overwork are normalized in academic contexts, even as mental health problems are still stigmatized. The article received a lot of attention (i.e. a lot of pageviews); apparently seizing on this, the Guardian then published a series of columns and articles (and even ran a survey) on the same theme.

The Guardian’s oeuvre on mental health is an example of the unevenness of coverage on this topic: at its best, it brings attention to important issues that are having direct effects on students and staff in higher education. At its worst (perhaps in the name of “balance”?) it’s superficial and even offensive, reiterating familiar perspectives rather than challenging mainstream views.

Some of the problems we see in the Guardian series, covered in more depth in this excellent post by @JessicaRdctd, include:

  • Conflation of short-term stress effects with mental health issues, and of the so-called “everyday stresses” that students experience and need to learn to deal with (by being “resilient”), with the more extreme problems that some students face;
  • “Advice column” pieces that provide tactics for students to fend off stress and its effects, tactics which may or may not be applicable or accessible to readers;
  • Focusing on personality traits like perfectionism, and related attitudes such as having high expectations;
  • Telling students to access services or “just ask for help” when these things may not be options (or may in fact cause more problems);
  • Assuming that the discussion has been happening nowhere other than in the Guardian – and accordingly, avoiding dialogue or exchange with people who responded directly to the articles in their blogs or on Twitter.

To be clear, the Guardian isn’t the only publication where these themes can be found. For example, Times Higher Education has a piece in which “unrealistic expectations” are cited as a key cause of anxiety. Science gives us an example of the emphasis on personality traits like perfectionism, and individual choices. Nature actually solicited input from @Zaranosaur – who set up the website PhDisabled – then ignored all her comments and constructed a narrative about the benefits of available supports. Here in Canada, Maclean’s published an article that refutes one unfortunate argument with another, and a blog post that ’splains mental health to us so we’ll know what’s a “real” problem and what isn’t; the Globe and Mail’s Margaret Wente chipped in with (surprise!) “university’s not meant to be easy”.

The point here is that articles that get lots of attention are likely to spawn more articles – or an entire series, as in this case. But the coverage is shaped as much by the need for attention as by consideration of the voices being heard; this in turn is driven by the needs of advertisers. Clickbait coverage of mental health is more likely to dramatize, and simultaneously trivialize, the problems under discussion, and to amplify themes that already dominate rather than challenging or bringing nuance to them. I know that some might argue any coverage is good coverage, but personally I take the view that if media attention is reinforcing negative and unhelpful stereotypes about mental illness, this is detrimental, even if it is “provoking a debate”. We also need to remember that these narratives can have an effect on policy and practice.

Mental health at universities is far from being a new issue, but it’s a potent one, as I discovered myself when I wrote a blog post about it three years ago. That post now has 90 comments (usually I see five or ten at most) and still receives more pageviews than anything else I’ve written. Many of the commenters agreed that there’s a “culture of silence” that exacerbates all these problems. The message I got from this was that not only is mental health a huge concern for academics, it’s also one they feel they can’t discuss.

Another thing I learned from the response to my post is that in a lot of media coverage, there’s a focus on individual factors (and individuals’ responsibilities) while institutional and societal problems that affect mental health are downplayed or ignored.

Like the larger society in which it’s situated, the university as an institution already treats students as exceptional if they don’t fit a certain model. This goes not only for those with disabilities and chronic illnesses but also for those with financial problems, with dependents, or without the cultural/educational capital to effectively navigate the bureaucracy. The assumed (ideal?) student, the one who’s the baseline on which the institution models everything else, is one who therefore doesn’t need anything “extra” from the university. This means that while services may be available, students who do have these needs are expected to perform the additional work involved in managing their relationship with the institution. As others have pointed out, self-advocating around mental health, chronic illness and disabilities is ongoing work and it compounds the everyday difficulty of accessing education.

While the system changes slowly we need ways to get by within it, and that’s where coping tactics help. But coverage that emphasizes “what YOU can do” to take the edge off your stress can also de-emphasize the question of why you (and so many others) might be stressed in the first place. Reiterating the need for techniques of self-management also serves to reinforce the idea that you should be able to deal with your problems through such techniques – but what if you can’t? What if feeling “responsible” also makes you feel guilty for needing more help? If you’re assuming it’s all your own fault because you don’t know how to cope – that too is a problem, and it’s one that helps exacerbate the silence that’s already so destructive. Even the responsibility for “reaching out” is a burden when there is nowhere to turn for support.

We need to get by in the short-term, but without parallel critiques and activism around the institutional factors involved, nothing will change in the future. We need to get past the undue emphasis on personal mitigation of systemic factors; and since those factors exacerbate existing inequalities as well, we need to be paying attention to this as an issue of social justice. The “privatization of stress” is political.

For more nuanced commentary on all that I’ve discussed in this post, I would recommend a site like PhDisabled, where these issues are being tackled head-on in a critical and thoughtful way by people who have experienced them firsthand. Contributors bring a wide range of critical viewpoints, and there’s also an emphasis on practical suggestions and solutions for increasing accessibility in academe, which is a task not only for those with disabilities but for the institution and all its members. This is part of how the work of change happens: through collaboration, support, and knowledge-sharing. As Naomi Lawson Jacobs writes, problems will be solved through talking more, not less, about the institutional factors involved.

Representations matter. Representations of disabilities matter because they shape people’s perceptions of what kinds of disabilities are recognized as legitimate, which ones carry the most stigma, and who is “deserving” of help. Representations of mental illness matter because in the media we’re still seeing ignorant and damaging stereotypes being presented regularly. All these things translate into material outcomes, through their effects on decisions about whether and how events, buildings, and programs will be made accessible. This is why I agree with Sara Ahmed that being a critical “killjoy” is important when it comes to addressing institutional and societal responsibility. If we don’t, the words might end up left unsaid, and that would be much more of a problem.

The times, they are (always) a-changin’

Posted on November 6, 2014


Reading through the higher ed news over the weekend, I came across an item that began with the line, “The newspaper and book businesses have been transformed in recent years. But not education.” That illustrious beginning (as well as the rest of the article) points to a problem with coverage of higher education: the way the issue of change is addressed.

Partly because education is concerned with the future, we also frame it as a means of change; it’s seen as a way to change the future by changing or shaping people into different people (or better yet, by getting them to change themselves), and a way to change society such that it reflects specific ideals or goals. In this way education is a form of governance of the future, which is probably why (tech) futurologists target it so often.

Educational institutions, too, are expected to change themselves so they can somehow be one step ahead of (or just catch up with) where people already are. Resistance to change is presented as resistance to what is natural and inevitable, like fighting a rising tide or an avalanche (yes, these are the same metaphors used in MOOC-hype articles – no coincidence). Universities are depicted as recalcitrant in the face of changing external circumstances, the latest of which is the ascent of the digital, as we see in this article by Paul Rigg. A Darwinian narrative – universities must adapt or face extinction! – is a common one in education futurology. Predictions of imminent doom are too often the stock in trade of those who want to peddle their own visions for how education should “evolve”.

The framing of education and change in these terms frustrates me because the issue of changing higher ed institutions (or making them change) is the focus of my research. My logic was this: if we’re going to have all these expectations, if the university is going to be required to change, and if critics are saying that education isn’t responding quickly enough to societal needs, then surely we should try to understand why? What does it mean for universities to change, and what does that process look like (in some of its myriad forms, at least)? Are there dynamics and patterns that we can identify, and if so, how do they play out?

I think it’s important to ask these tricky questions. Who gets to “make things happen”, and does that relate only to their role in the organization? If one person is effective and another is not, what’s the difference in how they’re thinking or acting? What does “effective” look like? How can we see “what happened” anyway? Where is the institutional memory, in documents or in people or both? What happens when there is no documentation – how much responsibility is there for the organization to document itself? Then there’s the issue of credibility: everyone has a story about events. Which perspective is “valid” and which is not?

I chose to create a case study because I wanted to get at these questions within the scope of a single institution. I’ve assembled thousands of bits of text for this purpose, ranging from mainstream media and student press articles to meeting minutes, internal and external public relations materials, newsletters, reports and financial documents, proposals, anything and everything I could find, alongside a series of interviews.

I’m always looking for traces of decisions, the small trail that marks their progress, not just the most obvious (and usually sanctioned) representations of them. Experience has taught me that most of this journey is off the record, the building blocks hidden by the smooth render of institutional words. Gossip, for example, is not just unverified or salacious personal information that’s passed around informally; it’s a form of talk that helps people to construct a version of the truth, which in turn affects how they respond to situations, and becomes part of the ongoing communicative context. It’s part of the unofficial record, made up of stories and anecdotes and what “everyone knows” (tacit knowledge).

It’s those kinds of organizational knowledge and communication that are the hardest to uncover – the stories and rumours and jokes and relationships that generate new dynamics or reinforce old ones, the politicking and alliance-building that can escalate into toxicity. In other words, informal aspects of organizational culture that are part of the context that both reflects and shapes the official records.

This is what I mean when I argue that communication is an inherent aspect of governance, not merely a representation of governance practices that have already been put to work (though it can be this too). Language is not only a way of describing, but also a means of framing, imagining, and limiting the ways we might think about an issue.

None of this complexity is reflected in the ongoing, shallow narrative of universities as institutions that simply have not changed, either over the course of the last century or even since Medieval times. Even if you want to focus in on “the lecture” as a relic of primitive past pedagogy that survives to this day, you’re still ignoring all the other, creative work that’s happening in postsecondary teaching and learning. And to support a change to innovative pedagogical forms, you’re going to need to know what makes those work (or prevents them from working) at the micro-level; even the greatest plan could be derailed by factors that aren’t immediately evident, or are not contained within the scope of the plan. So how do we make plans and policies that work in context?

As for the articles I cited above by Levine and Rigg, the very problems they decry reflect the ongoing process of change that has occurred in universities, and both articles use many of the standard tropes employed in deontic rhetoric about preferred change. These include applying the popular machine metaphor (“broken” and “repair”); making false comparisons to the music industry, bookstores, news media, or other “disrupted” sectors; using the word “consumers” interchangeably with “students”; invoking “digital natives”, “millennials”, and a generation gap; and framing the current situation as a “crisis” (if there’s a crisis, it’s been ongoing in different forms for a long time).

Universities already have changed, over the decades and centuries. It’s just that they’ve never changed enough for the present moment. Thus we keep charging them with the task of changing more, to adapt to circumstance in particular ways, to adopt one form or another, for a broadening range of purposes. Ironically, we’re also asking universities, as educational institutions, to produce the future for which they are accused of being so ill prepared (or the future that will destroy them). But based on the research I’ve done so far, I’d say the question is not whether universities will change – since this is ongoing – but what those changes will look like, how they will happen, and whose needs they will serve best.

Thoughts on beginning blogging

Posted on October 23, 2014


I’m taking some time out in this post to talk a bit self-indulgently about my approach to blogging – partly because I’m still asked pretty regularly about how and why I got started with writing a blog. I suppose it seems a bit random; this still isn’t really a common activity in academic circles, so it’s not something most people think about doing. On a related note, I’m genuinely surprised to find that this blog is a finalist (for the third time, even!) in its category in the Canadian Online Publishing Awards. So that’s prodded me to ponder how I ended up “here”, as it were.

This is how blogging began for me: because of having participated in Twitter chats and read others’ blog posts, I knew there was a conversation going on beyond my “local” academic environment. I wanted to participate in that conversation because I felt it was relevant to my research, and I knew I could contribute commentary that exceeded several 140-character tweets. Blogging seemed like the logical next step, and there were plenty of good examples to follow. All this happened as a response to something ongoing and fairly nebulous, rather than as part of any deliberate strategy.

At the time, I had not thought about the “risks” of having a public space where I discussed academic issues, nor the perception of blogging within academic culture (all that came later!). It was just something I wanted to do because it was a way of sharing ideas with others who might be interested, and it was accessible to me, and relatively quick. It seemed like I could jump right into the fray just as other bloggers had done.

So now I’ve been writing these posts for over four years. Before and during that process, I’m lucky to have been motivated, provoked, encouraged, inspired, promoted and supported by a lot of different people who have participated in that bigger (higher education) conversation through blogs and Twitter, and some of whom I’ve been lucky enough to collaborate with and/or meet in person. I’m extremely grateful to have been a part of all this. Here are a few of them (shout-outs ahoy):

Back to the award nomination, briefly: the reason I’m surprised by it is that I know the supposed “rules” for finding an audience via blogging, and I know I’m not following most of them. Short posts (around 700-750 words is considered ideal) produced regularly (every day is ideal, but several times a week will do). Focus on one or two main points and flesh them out clearly. Make the points consistently relevant to the audience you seek. If you can send posts out via some kind of daily or weekly email list, that’s even better. Share each post through all your social media profiles; add the request to “please share”. Ask readers a question at the end of the post, soliciting responses; engage with readers in the comments.

My posts, on the other hand, reach 1,500 words on a regular basis. I struggle to post regularly, because I keep wanting to think a bit more and add a few more words. I often cover a lot of different points in a single post (though I try to tie them all together coherently). I like sharing posts through social media, but I don’t like tweeting something repeatedly or bothering people with requests. I’m no good at responding to blog comments, because each one of them makes me want to write a whole other essay, and eventually I run out of hours in the day. I don’t tend to write in a deliberately “punchy” way, and I’m not into invoking dramatic metaphors or superlatives or the language of exaggeration, or hitting people repeatedly over the head with a blunt argument (see what I did there!).

The posts I write have actually gotten longer over time, and that’s partly because blogging (like other forms of writing) changes as the writing becomes more about building on what you’ve already said. It’s boring to simply repeat oneself, and personally I want to go somewhere new with the analysis, so this affects how I approach each topic. In that sense, blogging’s been great for research ideas because it’s a way to push myself in directions I might otherwise avoid. I still keep a virtual scrapbook full of snippets of ideas that I’ve thrown into documents “for later”.

The length and relative infrequency of my posts are part of why I no longer get annoyed when I receive “you didn’t include the point I think is important” comments. It’s a blog post, and it can’t cover everything – there is “more” to every issue and I have to be OK with drawing a line somewhere. On a related note, blogging teaches you that you have to have some faith in your own arguments, and the flip side of that is learning when and how to stop caring about every person who disagrees with you and/or criticizes you – because someone always will. That doesn’t mean ignoring other people’s comments or critiques, it just means that the point of writing is not to try to please everyone or agree with everyone, since that only leads to paralysis.

After all that, I’m really thankful to find that some people are still reading, commenting on, and sharing my posts. Below I’ve provided a bit of a recap of past work – a list of some of the posts I’ve written since I last took a look back (which was more than 2 years ago). These posts reflect a number of themes that I’ve seen recurring in media coverage and public debates about higher education, and in my own research on universities and organizational change:

Lastly, I want to offer my thanks to University Affairs for giving me the opportunity to share my posts in this space, with additional kudos to UA’s skilled and long-suffering editors – Peggy Berkowitz, Léo Charbonneau, and especially Tara Siebarth – since between them they have proofread every one of these blog posts, and have been very supportive of my writing. I’m looking forward to another year of blogging and the surprises it always brings.

Rank and File

Posted on October 16, 2014


With the new academic year comes a new round of headlines and tweets promoting the Times Higher Education (THE) rankings results for 2014; and each year, along with THE, other prominent international rankings such as the QS World University Rankings and the Academic Ranking of World Universities (ARWU) are part of the same cycle. There are also national rankings like those produced by Maclean’s and the US News and World Report, as well as speciality rankings – the THE, for one, engages in a kind of salami-slicing of its data to produce a number of themed rankings throughout the year. These include separate rankings on reputation, internationalization, universities in BRIC nations, and universities under 50 years old.

The major rankings’ results aren’t usually much of a surprise because the same universities tend to dominate consistently. In fact if there were really significant short-term changes, it would likely indicate an alteration to (or problem with) the methodology, since such changes to quality and reputation tend to happen over long periods. This means the rankings providers must find a spin on each year’s results that generates interest by making it sound like there’s something dramatic happening. For example, this year the THE has announced that Asian universities are displacing those in Europe and North America (still), due to stagnating or reduced government funding in the latter regions. As others have discussed in more detail than I can offer, the actual difference in position for these universities may or may not be significant; but that’s not really the point when coverage requires a strong narrative to help drive attention to the new results.

Media themes relating to rankings results are different around the world, depending on what’s happening locally and what the results of the rankings are for those areas. Canada’s story this year is that its universities are dropping in the (THE) rankings. When I saw this news on Twitter, I joked that soon we’d see a headline or article claiming that Canadian rankings “failure” should be a reason to change the way universities are funded – little did I know, this exact argument had already appeared the same day in the Globe and Mail (and it came from Phil Baty). The reason all this is predictable is that it’s the argument used by the U15 group, and a drop in the rankings helps make the case for pursuing “excellence” through changes to funding. On the other hand when Canadian universities are doing well in the rankings, this is seen as proof that such universities could compete on the same field as the Harvards and Oxfords of the world – but only if they receive the right support.

The U15’s argument is a reminder that the use of rankings as a justification for particular funding strategies is closely tied to the argument for building “world class research universities”, which present themselves as engines of national economic development. High-ranking Canadian universities aren’t alone in this; we can observe a kind of revolt of the elite universities (sorry, Lasch…), where institutions that already lobby together nationally are now forming international networks and sharing strategies across borders. The Australian example is instructive, since the Group of Eight universities seem to have been pushing particularly hard for systemic changes that would presumably privilege their end of the market (and using rankings results as part of their argument).

The ways in which rankings influence the global conversation about university quality are troubling not just because they are fodder for some questionable recommendations, but because this is another example of something I discussed in my posts on PhDs and “student engagement” – where the ranking itself, not the quality it represents, becomes the justification for various kinds of change. This isn’t about quality per se, but more about rankings results being embedded in a competitive system where their symbolic value becomes tied to the allocation of real resources. That’s certainly what we see with the Canadian argument, where rankings results are used to justify the targeting of resources to universities that have a chance at being world-class, instead of wasting scarce resources on those that are unlikely ever to be truly “competitive”.

What’s being mobilized in this rhetoric is the spectre that haunts every striving university, i.e. the condition of its international competition. There’s a perpetual emphasis on being ever-closer to the top of the pile – in comparison to other institutions – as if this is the most meaningful outcome for everyone, as if these assessments reflect the kind of quality that inherently matters most. Rankings too often assist in producing and reinforcing the same hierarchies of merit that they appear merely to describe.

But if you make the system into a kind of game, you should expect participating agents to try to game the system. Misreporting of data is one way we’ve seen this happening; or simply not rewarding the work that won’t “count”; or skewing of the relevant variables through other dodgy practices. Yet another sure way to beat the system is to throw a lot of money around (as we see in this direct dispatch from Captain Obvious). Thus the representation of quality can be altered through rigging the performance outputs; a high ranking may signify “quality” but the signal isn’t without noise.

The notion of quality that’s invoked is also different for each ranking, and of course there have been many criticisms of the methodologies employed. One recent government-commissioned Norwegian study of the ARWU and THE rankings concluded that they “say almost nothing about education.” Meanwhile, most universities around the world are not represented in the major rankings’ hierarchies. If rankings are tools for “consumer information”, surely it’s still a relatively elite group of consumers who are making use of them.

That won’t stop new players with new methodologies from entering the market (the rankings market, that is). LinkedIn has just produced its own ranking system based on user data, while the US News will shortly release its “Best Global Universities” ranking. Another recent entrant – and one that stands out for its difference – is U Multirank, which seeks to address problems with the lack of diversity (of institutions, of measures, etc.) as well as the over-emphasis on hierarchization, by taking “a user-driven, multi-dimensional approach comparing different kinds of institutions across a range of activities”.

Because universities are now assumed to compete in a global market, rankings aren’t just comparisons – they’re comparisons in the service of particular goals. I think ultimately our goals need to be related more to the pursuit of a kind of quality that serves the many roles universities play. This should relate to local as well as international contexts and to a diverse array of “outputs”, rather than merely the most prestigious or most readily measurable ones. And if government funding to universities is to be changed or increased, then it shouldn’t be primarily in the service of the few institutions that can align themselves with narrow parameters of “excellence” favoured by major international rankings agencies.

Milking the crowd

Posted on September 30, 2014


You may have seen some of the articles on “crowdfunding” that have been bouncing around over the past couple of years. They’re generally positive accounts of researchers tapping into this newfound source of cash that enables them to work on projects that wouldn’t be eligible for support from the usual agencies (such as the Tri-Council here in Canada).

Crowdfunding (as opposed to crowdsourcing) involves the use of (usually) online tools to solicit funds for a project, from the “crowd”, i.e. from as many people as possible. It’s true enough that there have been some remarkable successes, which is part of what’s feeding into the growing popularity of the practice. But as usual, I’m going to be a party pooper (sorry/not sorry!) and argue that this new form of funding is a mixed blessing. And since the only criticisms I’ve seen thus far have tended to focus on the lack of prestige attached to crowdfunding (when compared to competitive grants), I think it’s time to play devil’s advocate and dig into this a bit.

To start, I think the issue isn’t (just) whether or not crowdfunding will become more popular, but rather why this might or might not happen, and what the consequences might be if it did. So what “problem” does crowdfunding solve, and how does this relate to the primarily positive attention that it receives?

I think it’s easy to see where the impetus comes from. Funding is distributed in and among research areas within universities and other research organizations, in particular ways. The agencies responsible for this funding can set the priorities, and if one’s work does not match these, the opportunities are reduced. If the number of applicants increases over time without a corresponding rise in research dollars, this too affects each applicant’s chances. Crowdfunding offers an “alternative” to this cycle, bypassing the agencies completely and soliciting support directly from the (online) masses. If the bulk of research funding is already competitive and the chances of winning are low, then why not turn to the public for support, especially when new online tools can enable the process?

In this sense, crowdfunding devolves decision-making about funding allocations, removing the organizational filter and leaving the decision to the “crowd” – it’s another form of marketization. This is important because it seems to address a primary problem of governance: the distribution of scarce funds, which has been managed more and more through competitive mechanisms.

Crowdfunding removes the need for academic peer review, which can, in a sense, be freeing. But do non-expert publics have the same priorities as peers in one’s field? While the needs of these publics are important, their members aren’t generally equipped to make the same kind of decision as someone who has trained for years in a particular field. As much as I don’t believe in any kind of pure meritocracy, I still wonder under these circumstances – what kind of research gets funded? For example while a disease like cancer “is a problem for everybody”, what about diseases that aren’t? We might also see the opposite problem – where issues that do affect many people are not “marketable” enough to attract the funding they require.

Directly related to this is the nature of the communication required. A change to the presumed audience also means changes to the nature of the “pitch” and to the means of its delivery. It could be a problem that the best PR campaign, and not necessarily the most innovative or important research, wins the day. While I agree that researchers should learn to communicate better with different publics, I don’t think this should be framed primarily through requests for donations. There’s also the question of whether broader audiences can be reached over time through proprietary platforms including widely used social media tools, such as Twitter and Facebook. Given the decay of “organic reach” on Facebook (as an example), unless your campaign catches on and spreads beyond your personal network there, you’ll likely be paying for ad space anyway.

I was reminded of all this recently when I saw an article from the Guardian UK about TED talks. The author describes a funding pitch where an astrophysicist was told he should be “more like Malcolm Gladwell”. However you want to interpret that, I think it raises a real problem with the idea of “pitching” specialized research in this way. The author argues that “astrophysics run on the model of American Idol is a recipe for civilizational disaster”; he could have been talking about crowdfunding. After all, you can just as well earn $55,000 for a potato salad party as for a crucial research initiative – that’s the Internet for you.

As Canadian scientist Jim Woodgett notes, even in the regular funding system “an investment in the future…can be a tough sell”; and governments are becoming less willing to back projects that have no solid outcomes attached. Will the “crowd” be willing to step in and assume the risks?

Grant-writing for competitive government funding is known to be time-consuming; crowdfunding, especially if it becomes used more widely, would also take time, not only at the outset where new skills must be learned but also on an ongoing basis. If every project requires a publicity campaign to extract money from the public directly, this will take time out of research work. Will more specialist staff need to be hired to help with, or take over, the communications aspects of these campaigns as they become more institutionalised? This process of seeking funding, in itself, could easily be spun into yet another industry. Will smaller, cheaper campaigns be able to match the successes of those that hire professionals to fine-tune the process?

What might be some longer-term consequences of crowdfunding as a means of supporting research? Not only does further privatization of funding change the process and nature of research, it could easily be used by the government as a justification for cuts to research funding. After all, if the money can come directly from the public, why waste scarce government funds unnecessarily? While this might sound like paranoia, we’re already seeing crowdfunding being used not just to launch small, unusual or one-off projects, but to generate what I’d argue is the kind of ongoing support that governments should provide (for example, research on social and environmental issues).

In crowdfunding campaigns, “rewards” are offered in return for donations; for one University of Alberta team, “Giving $25 will get donors a shout out on Twitter or Facebook, giving $100 will get a shout out with the donor’s name on a one-centimetre square chip inside the satellite” with the possibility of adding more names as your donation increases; “$1,000 will get a company logo engraved inside the satellite”. This is really not much different from the named buildings already so common on campus, yet it isn’t being decried in the same way. Rather, it’s presented as an opportunity for entrepreneurialism, a means of filling in the gaps left by the government.

It’s also important to remember that the larger public is not an unlimited source of additional cash, and some may feel that the tax dollars they already contribute should be enough. Crowdfunding for research is a philanthropic venture; when research funding becomes another charity, as we already see with massive campaigns for breast cancer (“Pinktober”) and other causes, how long before the public cannot give anything more? How much money do people have available for such donations – how far can you get by asking friends and family to boost your work? Will people still have money to give, if and when crowdfunding campaigns become more widespread?

Funding has to be sought out somehow, and most available funding is competitive in some way; the competition excludes much of the work that could be done. From that work, we will never know what insights and innovations might have emerged, and what new ideas could have turned out to be crucial to the development of a field. But it’s still important to consider the possible long-term systemic effects of any form of funding.

Yes, there are positive aspects of crowdfunding, but since these already receive so much attention, I’m hoping I’ve been able to convince you that there’s a flip-side to the deal. A focus only on the campaigns that succeed is unhelpful because it creates a false impression of feasibility where there may be none. Expressing these critiques is not about siding with elitism or specialization, or denying that we must learn to better communicate with non-specialist publics about the work that researchers do. It’s a serious consideration of the funding landscape and of the long-term shape of research support – because at the heart of this is the crucial issue of how we decide what we need to know, who is allowed to find out about it, and how that endeavour will be undertaken.

Student engagement and the PhD, part 2

Posted on September 3, 2014


This post is the second part of a longer piece that addresses the issue of student engagement in the context of doctoral education. You can read the first part here.

—————————————————————————————————

I think one of the biggest challenges of education policy is that we’re trying to get things to happen on purpose that often seem to happen by accident. Sometimes it’s as if the more we try to pin down and reproduce the “right” results, the further we get from allowing learning to happen. But there are a lot of different and useful ways of approaching this problem. After all, it isn’t really “by accident” that these things happened. There was (and is) an environment produced by choices that shaped who could participate, and how, in the university’s activities. Bearing that in mind, there are a few ideas I would like to throw into the mix.

Culture. The issue of student engagement raises the perennial problem of policy and culture. These elements need to complement one another, but what is it that steers organizational change? We face a chicken-and-egg situation, a question of what comes first, and therefore, how we should put decisions into practice. Culture is more difficult to change, but also more pervasive and effective. More than just a mission statement, it’s enacted by every person participating in the institution or in the learning experience itself. Difficult as it is, without a culture-based, long-term approach the university is a lot less likely to create lasting relationships with students/alumni, with employees, and with other groups and communities of which it is a part. That’s why for every change we want to make, we should be asking “how does the existing culture support this change (or not)? Does the change support the kind of culture we want?” This isn’t a call for homogeneity; goals can be interpreted in diverse ways in a cultural context where we acknowledge some common ground that is not imposed “from above”. I’d also argue that cultural change brings a focus on process and environment rather than solely on outcomes.

Holism. It’s important to think about how goals aren’t necessarily achieved one by one through specific programs that target isolated facets of achievement; change happens holistically, even as each step we take contributes to its form and practice. We’re trying to cover a lot of bases in university education, and some of them seem contradictory. Education itself is a complex process, and fragmenting the process can, ironically, make it less “efficient” at achieving the real goal. For example, specialization of services in the university makes things easier to manage from within, but it can make life difficult for students (and discourage them from using said services when they’re available). We’re shooting at moving targets, since the needs of students are constantly changing, and there are also many paths to the same goals. We need to achieve multiple goals simultaneously rather than focusing only on targeted policies. This requires thinking about how the entire learning environment contributes, not just the elements that are most directly and obviously relevant.

For doctoral education this is a salient point because we cannot simply keep expanding the number of bases we try to cover during the degree – not without affecting times to completion. But if we’re more thoughtful and deliberate about the process, and the different goals, needs and proclivities of PhD students, we can address this problem. There is no one solution, but we can push for a more iterative approach, where students think and talk about their intellectual development and the possibilities for their futures, making decisions along the way instead of sticking to a predetermined or assumed path. Being “engaged” in various (academic & non-academic) communities can help students learn not just how academe operates and what faculty work looks like, but also what’s “out there” in terms of other kinds of careers and mentors. Trusting relationships are key to this process, but we cannot assume the right conditions will simply develop in the absence of explicit support.

Openness. Can universities as organizations become more “open” or porous in ways that will facilitate better relationships (and further engagement) not only with their students, but also with other groups, both internal and external? Is this a part of the answer – and if so, what would facilitate it? If the university itself is to become more open, this will be a process facilitated by communication with students and with other publics and participants as well. I believe the current ways in which universities treat communication are revealing of assumptions about organizational control. There is a boundary between what happens in the academic organization and what happens elsewhere, but this is a negotiable boundary; in changing it, we see organizational relationships change in ways that are perceived as a loss of control. Because such control was an illusion at the outset, we need to face the issue and develop a different approach.

Returning to the example of the doctorate, I think the main reason that student engagement isn’t discussed with regards to PhDs is that they’re assumed to be “engaged” enough already, by the (academic) parameters of the description. We see terms like “integration” used to refer to belonging and participation, but they don’t capture the broader scope (and diversity) of learning and experience that takes place informally in both academic and non-academic settings, and feeds into doctoral students’ choices and outcomes.

To re-state: students can learn about future possibilities, and make contributions in the present, by being engaged with/in these different communities. For example, there’s the community of the university, through which students can come to understand academe in general as an institution – and because scholars play an active role in governance, this is even more important. Then there’s the community of the discipline, wherein students learn the norms and values of the research community; there’s the larger community of academe that’s also global (if variable across jurisdictions). Lastly, there are communities in non-academic contexts, which can comprise specialists and/or non-specialists. We’re increasingly seeing calls for PhDs in particular to make connections between their research and non-academic publics. There’s a wide array of possibilities here, and in too many cases they’re still not being explored enough (or coming up at all) during the PhD program.

While navigating all this, doctoral students are dealing with the fact that some kinds of work are valued in the academic environment while others are not. But after attaining this high level of expertise, do we not have a responsibility to engage with publics beyond those limited to academic contexts? And don’t we want to better know the scope of what is possible with a PhD?

One implication here is that the task of supervision cannot and must not be a “black box”, a process where the student is encouraged to rely heavily upon one person to provide them the guidance, information, and input required to make good academic and professional decisions. Formal institutional reward systems may need to change for this problem to be addressed. A more collective approach to responsibility is crucial because even in terms of preparation for academic careers, the doctorate is often lacking – and it’s unreasonable to expect supervisors to know about non-academic careers when they’ve likely never had one. Students need support not only from a strong peer culture but also from a culture of collaboration in professional development.

As I’ve argued before, the university isn’t apart from and/or better than “the real world”. Its boundaries are of our own making, and they can be re-made if we so choose. And let’s be honest: getting involved with our education as a part of that world can be painful, angering, difficult, troubling, just as much as it’s rewarding and productive. Just knowing about what’s going on every day and knowing how little control you have over so much of what happens – if you care about that, it hurts. Then, if we speak up about the things that matter to us, the response isn’t always positive (and in fact it can be punishing). When we see problems and try to make changes happen, we might find ourselves stuck without knowing what to do next, frustrated but in need of perseverance. We may see, not answers, but more questions.

All this reminds me of these words from Buddhist teacher Thích Nhất Hạnh: “Once the door of awareness has been opened, you cannot close it.” But I think, of course, this is a good thing, and I think that in spite of the difficulty we should show students how to find that door – then encourage them to open it and walk through.

Student engagement and the PhD, part 1

Posted on September 2, 2014


This post is an expanded version of a keynote talk that I presented on August 26, 2014 at the 11th Annual Workshop on Higher Education Reform, at Memorial University in St. John’s, Newfoundland. The post is in two parts, because it’s quite long and I’ve expanded on every point; but hopefully it’s worth reading to the end.

—————————————————————————————————

The theme I’m taking up here is that of student engagement within and beyond the university, and as such, I found it hard to approach the topic for a few reasons. One is that I have a lot of trouble separating “student engagement” from learning and from the university experience overall; I think it’s just the integral thing that has to happen in all aspects of university life, if students are to have an education at all. Another reason is that the research literature on student engagement is heavily focused on undergraduate students, but my research only includes PhD students in the analysis (for reasons that make sense in the context of the work I’m doing). Lastly, I’m skeptical about the systems of assessment that are applied once we standardize and reify “engagement” as a thing to be catalogued, described, measured, and compared (more on this below). For that reason I want to stress that I take a pretty broad view of the meaning of the term.

I’ve been interested in how knowledge connects to the world, for better or worse, for a long time. My interests didn’t always play out in ways that were viewed positively by others, but the need was there, and I remember a few moments that highlight how this directly shaped and drove my educational and professional choices.

In the first year of my communication studies degree, I learned that a favourite prof might not be back to teach more courses, because she was a contract professor and couldn’t be certain that she’d be re-hired. I couldn’t understand why someone who did a good job would not also have a secure position. This taught me to pay attention to (and care about) the context in which my education was happening – the university.

The year after that, I became a teaching assistant. As an undergraduate myself, I knew what made me bored in a tutorial, and I knew I wanted to make things interesting and accessible for the students in my group. But almost from the first moment in the classroom, I realized there was a divide I had to bridge: I didn’t know what it was, or how it worked, but no one was as keen as me – and I didn’t know where to start with that. I just knew it had to happen, because my enthusiasm wasn’t enough for them. I had to learn about the practice of education.

Lastly, because a lot of people ask me about this, I’ll mention how I started this blog. It’s because I realised through using Twitter that there was a whole other conversation going on about the things I’d been studying, and that it was a lot bigger than a single department or institution. I saw that there was a way I could join that conversation, so I created a blog where I could participate in more than 140 characters at a time. Through watching what happened when we critiqued education in these venues, I was able to pay more attention to the context of knowledge and who gets to “produce” it.

So is that what student engagement looks like in practice? Being more “engaged” in the narrower sense contributes to academic outcomes, which is a big part of why the issue has been recognized as a legitimate object of research. But this is only one version of what it means to be engaged with one’s own education, and my point is that we have to look beyond stereotypical markers of academic achievement.

I don’t think student engagement can be seen as a distinct element of the university’s mission. Why? Because it’s integral to everything happening at an educational institution. If we can say that the university has a mission, it’s more than an academic one – and more than the “production” and “transfer” of knowledge. It’s also more than an economic mission, unless we think the only point of knowledge is its contribution in that regard. We live in a world that needs people to care about it, and they need not only to care, but to be equipped to do something with that. I believe those goals are mutually reinforcing, as is their inculcation at every level of the education system.

There are of course contextual factors that make some approaches easier to pursue than others. Over the past 70 years or so, the institutional growth and fragmentation of many universities – their increased size and complexity – has made it more difficult to imagine common elements of an organizational culture, and harder for students to find a place and find their way within academic systems. At the same time, some elements of governance have become more centralized, and it can be harder for students to make real contributions to the process. They may be spending less time in the campus community, working more hours at jobs to provide funds for education expenses. For doctoral students there’s also the element of intense competition for the “right” jobs, which can affect the peer dynamic and fuel the pursuit of only the most strategic activities.

Related to increasing expenses are the external demands for accountability that come not only from governments but also from parents and students, and other groups that have interests in the outcomes of the work universities do. Students are implicitly encouraged to see education as a private good that leads to economic security, while governments want “human capital” and innovation, and businesses demand “job-ready” graduates with more directly applicable skills.

In this (political) context of increased competition, privatization, and accountability, the data that universities produce are not neutral. Particularly when they’re tied to resource allocation, these data become tools of comparison that shift our attention to specific, measurable objects, such that those objects – rather than the goals we have in mind when we imagine the purpose of the university – become the focus of (and reward for) our efforts. Transparency in itself is not the issue, but rather the technocratic systems we construct that too often come to take their points of reference as entirely internal. Just as “getting things done” isn’t the same as “productivity” that is positioned on a scale of assessment, learning is not the same as “learning outcomes”; and student engagement is not the same as an increased NSSE score. But if these scores are what we reward, they become the drivers of our decisions.

The epitome of the technocratic approach is when we see technology being proposed as the panacea to systemic problems through education (a recurring historical theme), with one of the best recent examples provided by the MOOC frenzy that peaked around 2012-2013. Yes, MOOCs were supposed to increase student engagement, too; and they were positioned against the boring lecture as a means of highlighting the inferiority of existing educational models.

A related theme is that of the use of learning analytics (and/or “big data”) as the answer to discovering how and why students actually learn, a problem that has surely been at the heart of a century of pedagogical theory and education policy. Tech development is of course heavily affected by the same political economic context as education; the demand for scale, efficiency, competitiveness, and demonstrated outcomes. Needless to say I’m far from convinced that the magic tech bullet will be any more successful in the future; and technocratic policies, injected into institutional environments in order to quickly solve narrowly-defined problems, are really no better.

“Student engagement” is also an accessibility issue. It’s clear that some forms of involvement in education and related activities are tokens for the entitled, used as ways of indicating their eligibility for scarce resources. This is true from primary school to the PhD level, where we see how cultural and economic capital operate to aid the replication of privilege. The “new super people” referred to by the New York Times are examples of the process at work at the undergraduate level, but more generally, “merit” is constructed and recognized in particular ways that benefit those who had more to work with at the outset. While undergrads can invest in tutoring to beef up their test scores and pad their CVs with juicy extracurriculars, PhD students with funding and mentorship can access prestigious conferences, build more influential networks, pay for coaching services, and spend more time focussed on professionally rewarding activities instead of scrabbling for opportunities. We also have to question which voices are seen as worth hearing, what activity will “count” on one’s academic scorecard, and who gets to feel “safe” enough to speak up at all, since the risks are unevenly distributed.

Next up: In the second part of this post, I’ll be looking at student engagement in terms of some principles and approaches for reframing doctoral education.

An update on tools of the trade

Posted on August 11, 2014


Periodically I try to share some of the tools I’m using in the process of researching, writing, presenting, and muddling about online; I’ve noticed that frequently there are things I’m taking for granted that others don’t know anything about (and could really use!). This time around I’m focusing on tools I’ve worked with in various ways for academic and other research over the past year. I think this is about as close as I’ll get to doing a “back to school” post, especially since it’s not even mid-August. Enjoy – and I hope you find something new and helpful…

Topsy. If you’re doing any kind of research involving social media and especially Twitter, Topsy is invaluable for its capacity to pull up tweets all the way back to 2006. That’s right – while tools like Storify and indeed Twitter’s own search fall down at anything beyond a month back, Topsy excels, allowing you to find (very) old tweets and to go to the originals by clicking on the timestamp. You can search posts within a specific range (down to the hour). However, without a Pro account there are limits, including searches that end at the 10th page of results; I haven’t signed up for this one, but I’m hoping to do so eventually (it would be worth it).

Below: a Topsy search within a specific time-frame shows the earliest #PhDchat tweets, from 2010.

Diigo. Even though I’ve probably discussed it before, Diigo makes the list again because it’s a tool that’s constantly being tweaked – in a good way; it’s a social bookmarking tool with a lot of heft. For example, they’ve introduced a new feature called “research mode” where the bookmarking tool will automagically add the same set of selected tags to each link you save in a session. I upgraded to Diigo Pro quite a while ago and I have no regrets, since I do a lot of research online and being able to cache and tag (and add highlights and notes to) news articles and other posts is extremely helpful for this.

I should mention that many people use Evernote in a similar way, but I haven’t really dug into it because the other methods I use have been sufficient. At a certain point in one’s work it also takes so long to figure out and set up a new system that sticking with the current one is a better idea (this is why I never fully got into using Scrivener, though it’s a great tool).

Camscanner. For my research I'm looking at a lot of documents: everything from meeting minutes to articles in student newspapers to marketing materials and more, and they're a lot more useful to me in digital form. Camscanner is an app I added to my Android phone about a year ago (it's available for iThings too), and which I used to digitize documents until I was able to buy a decent scanner earlier this year. It allows you to use your phone's built-in camera as a scanner; you simply photograph the documents, and the app converts them into PDFs. Image quality tends to be excellent, and you can add multiple pages as one document.

Because I have a lot of pages to deal with, I've moved on to a scanner with an automatic document feeder, and Adobe Acrobat Pro, which lets you apply OCR to the scanned pages (making them searchable) and combine them into larger documents with sections. But there will always be times when taking a photo of something is the most convenient (or only possible) way, and Camscanner serves that purpose perfectly.
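A hedged aside for anyone who'd rather not buy Acrobat: the open-source Tesseract engine can also turn scanned images into searchable, text-extractable documents. The snippet below is a minimal sketch only (not my actual workflow), assuming you've installed Tesseract along with the pytesseract and Pillow packages; the filename is a made-up example.

    import pytesseract          # pip install pytesseract (also needs the Tesseract binary installed)
    from PIL import Image       # pip install pillow

    # Hypothetical filename - replace with one of your own phone-camera scans.
    page = Image.open("meeting_minutes_p1.jpg")

    # Pull the plain text out of the scanned image...
    text = pytesseract.image_to_string(page)
    print(text[:500])

    # ...or write out a searchable PDF instead (requires Tesseract 3.03 or newer).
    pdf_bytes = pytesseract.image_to_pdf_or_hocr(page, extension="pdf")
    with open("meeting_minutes_p1.pdf", "wb") as f:
        f.write(pdf_bytes)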

Timeglider. I use timelines quite a lot because they help me to put together a visual “story” of what’s happened in a particular period of policy change, or to show events unfolding in a case study. There are quite a few timeline tools floating around, but I’ve been using Timeglider because it allows for visual elements that differentiate the events, including changing the size of an event to match its importance. Timeglider has a free trial but to get the most out of it, you’ll need to upgrade; since I’m using it quite a bit, I decided to go with the $5 per month account. The other tool I would suggest is Timeline JS, which is free and has a nice clean layout and a simple process.

Below: a section of a timeline in progress, showing PSE policy changes in Canada with a focus on Ontario. Coloured threads show the length of provincial and federal government mandates and periods of policy implementation.
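(An aside for the code-inclined: this isn't how Timeglider or Timeline JS work, but if you just want a quick static version of the same idea, a few lines of Python with matplotlib can rough out coloured "threads" for mandate periods with individual events marked above them. All the dates and labels below are invented, purely for illustration.)

    import matplotlib.pyplot as plt
    import matplotlib.dates as mdates
    from datetime import date

    # Made-up mandate periods and events, purely for illustration.
    mandates = [
        ("Provincial government", date(2003, 10, 2), date(2007, 10, 10), "tab:blue"),
        ("Federal government", date(2006, 2, 6), date(2008, 10, 14), "tab:orange"),
    ]
    events = [
        (date(2005, 2, 1), "Policy framework announced"),
        (date(2007, 9, 4), "Tuition framework revised"),
    ]

    fig, ax = plt.subplots(figsize=(9, 2.5))

    # Each mandate becomes a coloured horizontal "thread".
    for row, (label, start, end, colour) in enumerate(mandates):
        ax.plot([mdates.date2num(start), mdates.date2num(end)], [row, row],
                color=colour, linewidth=6, solid_capstyle="butt", label=label)

    # Individual events are marked above the threads.
    for when, label in events:
        x = mdates.date2num(when)
        ax.plot(x, len(mandates), marker="v", color="black")
        ax.text(x, len(mandates) + 0.15, label, rotation=20, fontsize=8)

    ax.set_ylim(-1, len(mandates) + 1.5)
    ax.set_yticks([])
    ax.xaxis.set_major_locator(mdates.YearLocator())
    ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y"))
    ax.legend(loc="lower right", fontsize=8)
    fig.tight_layout()
    plt.show()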

f5. If you’re like me and you can’t afford to pay someone to transcribe your interviews, you’ll need to make the task as easy on yourself as possible. I chose f5, one of many programs that allow you to import an audio or video file and create a linked text file for the transcription. What I’ve found very useful about f5 is that it inserts timecodes automatically as you transcribe – so instead of just showing turn-taking, the transcript also shows you how long each person spoke for and when they said what. If you want to go back to an earlier part of the interview, you can click on the timecode. You can also slow down the recording without much vocal distortion. f5 is free, and available for Mac and Windows; it works with or without a footswitch.

Dedoose. This is a qualitative research tool that’s been recommended to me by trusty colleagues, but I haven’t tried it out yet. Since the reviews seem overwhelmingly positive, I thought I’d add it here as an alternative to more well-known software like NVivo (which is now available for Mac as well as Windows). Each of these comes with a cost – Dedoose requires a subscription fee that adds up to $100 per year at the student rate; NVivo has more options, including a per-semester rate of $60. But if you have the budget for it and you want to try something more comprehensive, these two are worth looking into for a start.

Zotero. I finally bit the bullet and picked a reference management tool, though right now I really only use it to organize sources since I already have a system for placing citations (I complete them as I write). Zotero helps with keeping track of the various sources I’ve collected in my online travels, as it were. If I find a good paper but don’t have time to log in to the library page, search for it and download it, I save it using Zotero’s browser add-on (for Chrome). If I need to access my references from another computer, I can either sign in online or install Zotero and sync it with my account. There’s also an organization system that includes tagging and folders, and you can place the same item in multiple folders. I was pretty pleased to learn that labels can also be colour coded (hurrah for visual cues!).
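One more aside for the scripting-inclined: a synced Zotero library can also be read through Zotero's web API, which is handy if you ever want to pull your sources into another tool. The sketch below assumes the API's current JSON (v3) behaviour, and the user ID and key shown are placeholders you'd swap for your own (both can be generated from your zotero.org account settings).

    import requests

    # Placeholders - substitute your own values from your zotero.org account settings.
    ZOTERO_USER_ID = "123456"
    ZOTERO_API_KEY = "your-api-key"

    # Fetch the 25 most recently added top-level items from the synced library.
    resp = requests.get(
        f"https://api.zotero.org/users/{ZOTERO_USER_ID}/items/top",
        headers={"Zotero-API-Key": ZOTERO_API_KEY},
        params={"format": "json", "limit": 25, "sort": "dateAdded"},
    )
    resp.raise_for_status()

    # Print a rough date + title listing of what came back.
    for item in resp.json():
        data = item.get("data", {})
        print(data.get("date", "n.d."), "-", data.get("title", "(untitled)"))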

I hope some of these tools will be of use to you, or that they provide some ideas or starting points for ways of getting things done in your work. If you have further suggestions, feel free to leave them in the comments section below!

“A little brains, a little talent…”

Posted on July 24, 2014 by

Post to Twitter

The pet-peeve language issue I’m going to look at in this post is a particular way of using the word “talent,” which isn’t really a metaphor per se but more of a quality or attribute that is nominalized and reified in ways that detach it from actual people, and their lives and work. I’ve discussed this briefly before in a post about international mobility, where I described “the extraction and objectification of ‘talent’ as something apart from those who might have it and use it, and transformation into a product available for sale.” But lately it seems like these expressions are popping up more regularly in the higher ed news articles I’m reading.

Another term I hear often at the moment is “talent market”, and closely related is the expression “talent pool” (don’t go fishing in the shallow end!). We also see similar uses of “brains” (brain drain, brain exchange) and in some cases, “minds” (“free trade in bright minds”). Here are a few examples culled from higher ed news articles:

  • global competition for talent
  • countries compete for the world’s top talent
  • cultivating domestic talent
  • race for the ‘best and brightest’
  • attractive destination for foreign talent
  • international students are seen as a fabulous talent pool for Canada
  • tilting the talent balance
  • the global brain race
  • brain circulation
  • a free market in minds

The underlying synecdoche – the one valued attribute standing in for the whole person – is reflected in expressions of an objectified quality that can be traded in an international market. With the market framing come the metaphors of competition, including war and sports imagery (“battle for brainpower”). In other examples (such as the media coverage of the first CERC awards) there are obvious parallels, including the use of hockey metaphors to describe the recruitment of international scientific leaders.

A number of discursive threads are enabled and connected by this framing of “talent”:

Talent as “natural”: The obsession with talent masks the obvious privilege it takes to have one’s gifts identified, nurtured, and brought to their full potential. Talent must be seen in order to exist, and resources are required for it to be visible; it has to be recognisable to those who seek it and within the systems that attach value to it. Like academic “merit”, this kind of talent is not inherent in people but constructed in large part through context.

Talent as a scarce (natural) resource: Framing talent as an object of exchange also seems to presuppose that the quest for “talent” (as with many other “natural resources”) is a zero-sum game. A “war for talent” becomes legitimate when we assume that talent is in limited supply, and our priorities shift to recruiting the “top talent” from other places. Hence there’s also a kind of fetishization of the “most talented” as objects of intense competition, or more accurately as a resource that must be ferreted out from its most obscure locations (diamonds in the rough!) and channelled to the most competitive institutions and nations: “The more countries and companies compete for talent, the better the chances that geniuses will be raked up from obscurity” (Economist).

Talent as economizable: Here the assumption is that talent must be economized, or indeed that talent is, by definition, whatever can be economized, even though there are many things we could call “talent” that don’t fit this frame. If this sounds familiar, it’s probably because you’ve read something by Richard Florida or one of his acolytes, who have given the same treatment to “creativity” – for example in the 2005 book, The Flight of the Creative Class: The New Global Competition for Talent. In his widespread proselytizing, Florida has named “talent” as one of the factors in fostering a “creative class” environment (the proposed solution for boosting economic development). Like the theorising of “human capital”, this is a way of seeing people primarily in terms of what they can contribute to the economy. And like Florida’s idea of creativity, talent is only recognized in the forms that contribute to economic life in particular ways.

Talent as mobile: An example of this is the language of a “free trade in minds”. Governments seek mobile talent as one of the (human) resources required to build a productive and skilled workforce. International students are a primary source of this, which is why all this ties in with the effort to “brand” Canada as a top location. While there’s an assumption that talent itself, like an aether, can drift freely across borders to the “best” or most competitive nation, the reality is that not all bodies are as mobile as they need to be to compete in this way. Brains don’t move across borders, people do – people with hopes, with problems, with families, with bodies that need care. Minds also don’t move as freely when people want a level of stability and security in their working and personal lives, or when they lack the resources or privilege to follow important opportunities.

All this becomes the logic used to make arguments about which policy solutions are the right ones. Weapons in this “war” for talent are the policies that governments can use to fine-tune the intake of new potential citizens – general immigration strategies but also more targeted policies designed to help with recruitment of the right people (such as the UK’s “exceptional talent” category and China’s “one thousand talents” program). Again and again we see the vacuous imperative to national economic “competitiveness” being invoked, but for what exactly are we competing?

What was for HR managers an organizational phenomenon (see “talent management”) has now been positioned, at least in part thanks to the creative class gurus, as a national problem that governments need to deal with through policy change. The rhetoric of talent is applied to immigration, to governance of competitive research funding, and to international post-secondary recruitment. International students, many of whom now see themselves as “cash cows” for financially needy universities, are also being viewed as a “talent pool” into which the nation can dip for its required quota of desirable immigrants.

You could argue that it’s obvious why economic theorists would see people in terms of their economic value. But focusing too much on this single factor is both alienating and obfuscating; it takes us away from a holistic understanding of the issues. Students, early career researchers, and other potential migrants aren’t merely the plugs that will stop up the supposed skills gap, and there are ethical problems with any argument that treats them as such.

Not only that, but the “talent market” is, like all markets, an unequal and constructed one. So what are the costs of competing, and who can pay them? Would “talent” perhaps be less scarce if we tried developing it by dealing with inequities – looking at what holds people back – rather than trying actively to find the few, elite “best and brightest” elsewhere?

By the numbers

Posted on July 3, 2014 by

Post to Twitter

In a recent Chronicle of Higher Ed article, Dr. David M. Perry asked the question “but does it count?” with regard to public engagement in academe. Perry argues that while there’s a perception that academics don’t communicate with non-expert publics, in fact they’re doing this kind of work all the time. What we really need, therefore, is a means of formal recognition for public work within the tenure and promotion system.

Like Perry (and many others), I’ve written about the issue of public engagement and the lack of recognition for it in academic promotions; I discussed the reasons why it’s hypocritical to ask young scholars to “engage” with broader publics, when clearly this kind of work does not contribute towards a scholarly career in the way that peer-reviewed articles do. If early-career researcher (ECR) workloads increase, they may then reasonably de-prioritize this kind of work since it adds few or no points to their academic scorecards. The work is even more risky for members of traditionally marginalized groups who already have difficulty gaining access to academic capital.

Here I’m going to return to the points I wrote about a few weeks ago, regarding “productivity”; I want to draw some attention to the connection between what we “produce” and aspects of academic work that encourage us to see ourselves in a particular way. For me this is part of an ongoing exploration of what factors affect our understanding of “knowledge work,” in particular the way it happens in universities. In this case, the right kind of self-governance means understanding that if a certain kind of work doesn’t “count” then we are not being “productive” when we do that work.

That’s why I’m more interested in the answer to a second, unasked question that’s implicit in “does it count?”: count for what? In most cases, it’s an academic job, one with some security and stability; so whether something counts towards tenure is the point, with all the implications this brings. This question of “what counts” – whether it’s articulated explicitly or operating as an underlying theme in academic conversation – reveals something about the ways in which academics’ decision-making is influenced by perception of what will be rewarded with advancement in the existing system.

All this probably sounds obvious, but as usual there’s the bigger picture to consider. The current competition for long-term academic jobs means that “to innovate in form means to risk one’s career” (Perry) and that future academics may become more conservative about the work they do, if that is what’s required to remain in the running for scarce positions. Even those who have faculty jobs must compete for research funding, and/or face some form of professional evaluation based on measurable criteria. With teaching, there is a parallel situation wherein precarious employment means that job assignments become more dependent on student evaluations. Making decisions about how we work is not merely dependent upon personal preferences, but also on our need to remain within the bounds of recognizable merit in a “meritocratic” institution.

At the Governing Academic Life conference in London, UK, last week, this context was the focus of the discussion as participants took time for a more detailed critical examination of the form and experience of contemporary academic work; speakers included Dr. Stephen Ball, Dr. Chris Newfield, Dr. Wendy Brown, Dr. Mitchell Dean and Dr. Richard Hall. While I wasn’t able to attend in person, the conference topic was directly relevant to what I’m writing and thinking about, so I followed along with the discussion on Twitter as much as I could.

One example that came up in the discussion was the UK’s Research Excellence Framework (REF, formerly the RAE), a framework and process of research assessment that determines the direction of HEFCE funding through grading “research output items” (e.g. books or articles). The “impact model”, or the assessment of the effects of research beyond academe, was a particular focus of debate.

This performance-based funding model, built on an instrumental notion of prestige, has been critiqued from a number of angles. For example, Dr. Andrew Oswald makes the point about scholarly conservatism when he argues that “People routinely talk in terms of journal labels rather than discoveries…That is a palpable sign of intellectual deterioration…if you design a Soviet-style planning system, you will get tractors.” Oswald argues that the REF discourages risky research and leads ECRs to focus on instrumental publishing rather than the kind of innovative work that might not check the REF’s boxes. This effect is not specific to the UK; in Canada, with no such formal system in place, Dr. Li-Shih Huang writes that she has “been bluntly asked to change [her] priorities by focusing on publishing only in high-impact journals”.

This example brings us back to the question not only of how academic life is governed, but of how academics govern themselves. As I mentioned in my previous post on this issue, the conditions of academic work are also the conditions in which knowledge becomes authoritative and is communicated as such. Dr. David Perry argues in his article that “we…have a problem with how we define, count, and value many types of public engagement.” But what effect do we see from this process of having to define and count our work, and are we considering how this may change what we can “know”?

I’ve had to think about my own position in this context, since I know I haven’t assessed myself based on how my work “counts”, and worse – I don’t really want to. The price to be paid there is exclusion from the system of academic prestige and from the institutions that value it. But just as “productivity” isn’t the same as getting things done, something can “count” within a system without it having meaningful effects otherwise. And it can matter elsewhere without signifying anything in this system: I can care about whether people read what I write, whether it prompted them to think about something differently, or whether someone else is drawing on my ideas and doing something interesting with them; those are some of my goals. But that’s not the same as the “impact factor” of a journal, or the number of citations an author or paper receives.

Based on the Governing Academic Life conference tweets, it seems that there was also discussion about whether there’s a way to appropriate or change the tools and norms that feel as if they work “against” us (or against the kind of knowledge we want to create). How can scholars continue to work in academe but also challenge its norms on an ongoing basis? This is a question about cultural absorption but also one about the limits of professional validation and advancement. In other words, if challenging the system doesn’t allow you to enter into and progress in an academic career, then how will those who want change find a way to stay and make it happen?

We can’t limit our critiques to those that are acceptable within the existing frames. Yet at the same time, as anyone in a marginal position in academe knows, trying to make change takes a lot of time and (emotional) energy; it can drain you to the point where you can’t do the work that…“counts”. So then what? – you’re discounted.

I was thinking about this issue last year when I wrote a post on academic disciplines and what happens when critical work on the university becomes formalized into its own “field” within academe. Formalization can only happen in this way if it’s sanctioned by people who have already achieved success on traditional academic terms. It also leads to further entrenchment of the work within regular processes of professionalization. So how can we effect the change we speak about so often, when its form is being imagined within these constraints? Are there examples outside the institution that might help with the task?

Because my research is about institutional change and how it looks and happens at different levels, I’m interested in these questions of individuals’ self-governance and its relationship to academic structures and norms. The problem of whether work will “count” for advancement within an academic career is important because it tells us what kind of work will likely be prioritized by successful academics, which in turn has an effect on others’ (future) careers and on PhD education and mentoring – and on knowledge.

All these things will shape the academe of the future; change happens not just through grand external “disruptions” and/or engineered unbundling but also through small actions and decisions – and the resistance – made by people every day. Asking how those things occur, and how they’re affected by context, is another step towards figuring out what kind of academic life we’ll have in the future and what will “count” towards it.