Milking the crowd

Posted on September 30, 2014

You may have seen some of the articles on “crowdfunding” that have been bouncing around over the past couple of years. They’re generally positive accounts of researchers tapping into this newfound source of cash that enables them to work on projects that wouldn’t be eligible for support from the usual agencies (such as the Tri-Council here in Canada).

Crowdfunding (as opposed to crowdsourcing) involves the use of (usually) online tools to solicit funds for a project, from the “crowd”, i.e. from as many people as possible. It’s true enough that there have been some remarkable successes, which is part of what’s feeding into the growing popularity of the practice. But as usual, I’m going to be a party pooper (sorry/not sorry!) and argue that this new form of funding is a mixed blessing. And since the only criticisms I’ve seen thus far have tended to focus on the lack of prestige attached to crowdfunding (when compared to competitive grants), I think it’s time to play devil’s advocate and dig into this a bit.

To start, I think the issue isn’t (just) whether or not crowdfunding will become more popular, but rather why it might (or might not) happen, and what the consequences might be if it did. So what “problem” does crowdfunding solve, and how does this relate to the primarily positive attention that it receives?

I think it’s easy to see where the impetus comes from. Funding is distributed within and among research areas at universities and other research organizations in particular ways. The agencies responsible for this funding can set the priorities, and if one’s work does not match these, the opportunities are reduced. If the number of applicants increases over time without a corresponding rise in research dollars, this too affects each applicant’s chances. Crowdfunding offers an “alternative” to this cycle, bypassing the agencies completely and soliciting support directly from the (online) masses. If the bulk of research funding is already competitive and the chances of winning are low, then why not turn to the public for support, especially when new online tools can enable the process?

In this sense, crowdfunding devolves decision-making about funding allocations, removing the organizational filter and leaving the decision to the “crowd” – it’s another form of marketization. This is important because it seems to address a primary problem of governance: the distribution of scarce funds, which has been managed more and more through competitive mechanisms.

Crowdfunding removes the need for academic peer review, which can, in a sense, be freeing. But do non-expert publics have the same priorities as peers in one’s field? While the needs of these publics are important, their members aren’t generally equipped to make the same kind of decision as someone who has trained for years in a particular field. As much as I don’t believe in any kind of pure meritocracy, I still wonder under these circumstances – what kind of research gets funded? For example, while a disease like cancer “is a problem for everybody”, what about diseases that aren’t? We might also see the opposite problem – where issues that do affect many people are not “marketable” enough to attract the funding they require.

Directly related to this is the nature of the communication required. A change to the presumed audience also means changes to the nature of the “pitch” and to the means of its delivery. It could be a problem that the best PR campaign, and not necessarily the most innovative or important research, wins the day. While I agree that researchers should learn to communicate better with different publics, I don’t think this should be framed primarily through requests for donations. There’s also the question of whether broader audiences can be reached over time through proprietary platforms including widely used social media tools, such as Twitter and Facebook. Given the decay of “organic reach” on Facebook (as an example), unless your campaign catches on and spreads beyond your personal network there, you’ll likely be paying for ad space anyway.

I was reminded of all this recently when I saw an article from the Guardian UK about TED talks. The author describes a funding pitch where an astrophysicist was told he should be “more like Malcolm Gladwell”. However you want to interpret that, I think it raises a real problem with the idea of “pitching” specialized research in this way. The author argues that “astrophysics run on the model of American Idol is a recipe for civilizational disaster”; he could have been talking about crowdfunding. After all, you can just as well earn $55,000 for a potato salad party as for a crucial research initiative – that’s the Internet for you.

As Canadian scientist Jim Woodgett notes, even in the regular funding system “an investment in the future…can be a tough sell”, and governments are becoming less willing to back projects that have no solid outcomes attached. Will the “crowd” be willing to step in and assume the risks?

Grant-writing for competitive government funding is known to be time-consuming; crowdfunding, especially if it comes to be used more widely, would also take time, not only at the outset, when new skills must be learned, but also on an ongoing basis. If every project requires a publicity campaign to extract money from the public directly, this will take time away from research work. Will more specialist staff need to be hired to help with, or take over, the communications aspects of these campaigns as they become more institutionalised? This process of seeking funding, in itself, could easily be spun into yet another industry. Will smaller, cheaper campaigns be able to match the successes of those that hire professionals to fine-tune the process?

What might be some longer-term consequences of crowdfunding as a means of supporting research? Not only does further privatization of funding change the process and nature of research, it could easily be used by the government as a justification for cuts to research funding. After all, if the money can come directly from the public, why waste scarce government funds unnecessarily? While this might sound like paranoia, we’re already seeing crowdfunding being used not just to launch small, unusual or one-off projects, but to generate what I’d argue is the kind of ongoing support that governments should provide (for example, research on social and environmental issues).

In crowdfunding campaigns, “rewards” are offered in return for donations; for one University of Alberta team, “Giving $25 will get donors a shout out on Twitter or Facebook, giving $100 will get a shout out with the donor’s name on a one-centimetre square chip inside the satellite” with the possibility of adding more names as your donation increases; “$1,000 will get a company logo engraved inside the satellite”. This is really not much different from the named buildings already so common on campus, yet it isn’t being decried in the same way. Rather, it’s presented as an opportunity for entrepreneurialism, a means of filling in the gaps left by the government.

It’s also important to remember that the larger public is not an unlimited source of additional cash, and some may feel that the tax dollars they already contribute should be enough. Crowdfunding for research is a philanthropic venture; when research funding becomes another charity, as we already see with massive campaigns for breast cancer (“Pinktober”) and other causes, how long before the public cannot give anything more? How much money do people have available for such donations – how far can you get by asking friends and family to boost your work? Will people still have money to give, if and when crowdfunding campaigns become more widespread?

Funding has to be sought out somehow, and most available funding is competitive in some way; the competition excludes much of the work that could be done. We will never know what insights and innovations might have emerged from that excluded work, or what new ideas could have turned out to be crucial to the development of a field. But it’s still important to consider the possible long-term systemic effects of any form of funding.

Yes, there are positive aspects of crowdfunding, but since these already receive so much attention, I’m hoping I’ve been able to convince you that there’s a flip-side to the deal. A focus only on the campaigns that succeed is unhelpful because it creates a false impression of feasibility where there may be none. Expressing these critiques is not about siding with elitism or specialization, or about denying that we must learn to communicate better with non-specialist publics about the work that researchers do. It’s a serious consideration of the funding landscape and of the long-term shape of research support – because at the heart of this is the crucial issue of how we decide what we need to know, who is allowed to find out about it, and how that endeavour will be undertaken.

Student engagement and the PhD, part 2

Posted on September 3, 2014

This post is the second part of a longer piece that addresses the issue of student engagement in the context of doctoral education. You can read the first part here.

—————————————————————————————————

I think one of the biggest challenges of education policy is that we’re trying to get things to happen on purpose that often seem to happen by accident. Sometimes it’s as if the more we try to pin down and reproduce the “right” results, the further we get from allowing learning to happen. But there are a lot of different and useful ways of approaching this problem. After all, it isn’t really “by accident” that these things happened. There was (and is) an environment produced by choices that shaped who could participate, and how, in the university’s activities. Bearing that in mind, there are a few ideas I would like to throw into the mix.

Culture. The issue of student engagement raises the perennial problem of policy and culture. These elements need to complement one another, but what is it that steers organizational change? We face a chicken-and-egg situation, a question of what comes first, and therefore, how we should put decisions into practice. Culture is more difficult to change, but also more pervasive and effective. More than just a mission statement, it’s enacted by every person participating in the institution or in the learning experience itself. Difficult as it is, without a culture-based, long-term approach the university is a lot less likely to create lasting relationships with students/alumni, with employees, and with other groups and communities of which it is a part. That’s why for every change we want to make, we should be asking “how does the existing culture support this (or not)? Does it support the kind of culture we want?” This isn’t a call for homogeneity; goals can be interpreted in diverse ways in a cultural context where we acknowledge some common ground that is not imposed “from above”. I’d also argue that cultural change brings a focus on process and environment rather than solely on outcomes.

Holism. It’s important to think about how goals aren’t necessarily achieved one by one through specific programs that target isolated facets of achievement; change happens holistically, even as each step we take contributes to its form and practice. We’re trying to cover a lot of bases in university education, and some of them seem contradictory. Education itself is a complex process, and fragmenting the process can, ironically, make it less “efficient” at achieving the real goal. For example, specialization of services in the university makes things easier to manage from within, but it can make life difficult for students (and discourage them from using said services when they’re available). We’re shooting at moving targets, since the needs of students are constantly changing, and there are also many paths to the same goals. We need to achieve multiple goals simultaneously rather than focusing only on targeted policies. This requires thinking about how the entire learning environment contributes, not just the elements that are most directly and obviously relevant.

For doctoral education this is a salient point because we cannot simply keep expanding the number of bases we try to cover during the degree – not without affecting times to completion. But if we’re more thoughtful and deliberate about the process, and the different goals, needs and proclivities of PhD students, we can address this problem. There is no one solution, but we can push for a more iterative approach, where students think and talk about their intellectual development and the possibilities for their futures, making decisions along the way instead of sticking to a predetermined or assumed path. Being “engaged” in various (academic & non-academic) communities can help students learn not just how academe operates and what faculty work looks like, but also what’s “out there” in terms of other kinds of careers and mentors. Trusting relationships are key to this process, but we cannot assume the right conditions will simply develop in the absence of explicit support.

Openness. Can universities as organizations become more “open” or porous in ways that will facilitate better relationships (and further engagement) not only with their students, but also with other groups, both internal and external? Is this a part of the answer – and if so, what would facilitate it? If the university itself is to become more open, this will be a process facilitated by communication with students and with other publics and participants as well. I believe the current ways in which universities treat communication are revealing of assumptions about organizational control. There is a boundary between what happens in the academic organization and what happens elsewhere, but this is a negotiable boundary; in changing it, we see organizational relationships change in ways that are perceived as a loss of control. Because such control was an illusion at the outset, we need to face the issue and develop a different approach.

Returning to the example of the doctorate, I think the main reason that student engagement isn’t discussed with regard to PhDs is that they’re assumed to be “engaged” enough already, by the (academic) parameters of the description. We see terms like “integration” used to refer to belonging and participation, but such terms don’t capture the broader scope (and diversity) of learning and experience that takes place informally in both academic and non-academic settings, and feeds into doctoral students’ choices and outcomes.

To re-state: students can learn about future possibilities, and make contributions in the present, by being engaged with/in these different communities. For example, there’s the community of the university, through which students can come to understand academe in general as an institution – and because scholars play an active role in governance, this is even more important. Then there’s the community of the discipline, wherein students learn the norms and values of the research community; there’s the larger community of academe that’s also global (if variable across jurisdictions). Lastly, there are communities in non-academic contexts, which can comprise specialists and/or non-specialists. We’re increasingly seeing calls for PhDs in particular to make connections between their research and non-academic publics. There’s a wide array of possibilities here, and in too many cases they’re still not being explored enough (or coming up at all) during the PhD program.

While navigating all this, doctoral students are dealing with the fact that some kinds of work are valued in the academic environment while others are not. But after attaining this high level of expertise, do we not have a responsibility to engage with publics beyond those limited to academic contexts? And don’t we want to better know the scope of what is possible with a PhD?

One implication here is that the task of supervision cannot and must not be a “black box”, a process where the student is encouraged to rely heavily upon one person to provide them the guidance, information, and input required to make good academic and professional decisions. Formal institutional reward systems may need to change for this problem to be addressed. A more collective approach to responsibility is crucial because even in terms of preparation for academic careers, the doctorate is often lacking – and it’s unreasonable to expect supervisors to know about non-academic careers when they’ve likely never had one. Students need support not only from a strong peer culture but also from a culture of collaboration in professional development.

As I’ve argued before, the university isn’t apart from and/or better than “the real world”. Its boundaries are of our own making, and they can be re-made if we so choose. And let’s be honest: getting involved with our education as a part of that world can be painful, angering, difficult, troubling, just as much as it’s rewarding and productive. Just knowing about what’s going on every day and knowing how little control you have over so much of what happens – if you care about that, it hurts. Then, if we speak up about the things that matter to us, the response isn’t always positive (and in fact it can be punishing). When we see problems and try to make changes happen, we might find ourselves stuck without knowing what to do next, frustrated but in need of perseverance. We may see, not answers, but more questions.

All this reminds me of these words from Buddhist teacher Thích Nhất Hạnh: “Once the door of awareness has been opened, you cannot close it.” But I think, of course, this is a good thing, and I think that in spite of the difficulty we should show students how to find that door – then encourage them to open it and walk through.

Student engagement and the PhD, part 1

Posted on September 2, 2014

This post is an expanded version of a keynote talk that I presented on August 26, 2014 at the 11th Annual Workshop on Higher Education Reform, at Memorial University in St. John’s, Newfoundland. The post is in two parts, because it’s quite long and I’ve expanded on every point; but hopefully it’s worth reading to the end.

—————————————————————————————————

The theme I’m taking up here is that of student engagement within and beyond the university, and as such, I found it hard to approach the topic for a few reasons. One is that I have a lot of trouble separating “student engagement” from learning and from the university experience overall; I think it’s just the integral thing that has to happen in all aspects of university life, if students are to have an education at all. Another reason is that the research literature on student engagement is heavily focused on undergraduate students, but my research only includes PhD students in the analysis (for reasons that make sense in the context of the work I’m doing). Lastly, I’m skeptical about the systems of assessment that are applied once we standardize and reify “engagement” as a thing to be catalogued, described, measured, and compared (more on this below). For that reason I want to stress that I take a pretty broad view of the meaning of the term.

I’ve been interested in how knowledge connects to the world, for better or worse, for a long time. My interests didn’t always play out in ways that were viewed positively by others, but the need was there, and I remember a few moments that highlight how this directly shaped and drove my educational and professional choices.

In the first year of my communication studies degree, I learned that a favourite prof might not be back to teach more courses, because she was a contract professor and couldn’t be certain that she’d be re-hired. I couldn’t understand why someone who did a good job would not also have a secure position. This taught me to pay attention to (and care about) the context in which my education was happening – the university.

The year after that, I became a teaching assistant. As an undergraduate myself, I knew what made me bored in a tutorial, and I knew I wanted to make things interesting and accessible for the students in my group. But almost from the first moment in the classroom, I realized there was a divide I had to bridge: I didn’t know what it was, or how it worked, but no one was as keen as me – and I didn’t know where to start with that. I just knew it had to happen, because my enthusiasm wasn’t enough for them. I had to learn about the practice of education.

Lastly, because a lot of people ask me about this, I’ll mention how I started this blog. It’s because I realised through using Twitter that there was a whole other conversation going on about the things I’d been studying, and that it was a lot bigger than a single department or institution. I saw that there was a way I could join that conversation, so I created a blog where I could participate in more than 140 characters at a time. Through watching what happened when we critiqued education in these venues, I was able to pay more attention to the context of knowledge and who gets to “produce” it.

So is that what student engagement looks like in practice? Being more “engaged” in the narrower sense contributes to academic outcomes, which is a big part of why the issue has been recognized as a legitimate object of research. But this is only one version of what it means to be engaged with one’s own education, and my point is that we have to look beyond stereotypical markers of academic achievement.

I don’t think student engagement can be seen as a distinct element of the university’s mission. Why? Because it’s integral to everything happening at an educational institution. If we can say that the university has a mission, it’s more than an academic one – and more than the “production” and “transfer” of knowledge. It’s also more than an economic mission, unless we think the only point of knowledge is its contribution in that regard. We live in a world that needs people to care about it, and they need not only to care, but to be equipped to do something with that. I believe those goals are mutually reinforcing, as is their inculcation at every level of the education system.

There are of course contextual factors that make some approaches easier to pursue than others. Over the past 70 years or so, the institutional growth and fragmentation of many universities – their increased size and complexity – has made it more difficult to imagine common elements of an organizational culture, and harder for students to find a place and find their way within academic systems. At the same time, some elements of governance have become more centralized, and it can be harder for students to make real contributions to the process. They may be spending less time in the campus community, working more hours at jobs to provide funds for education expenses. For doctoral students there’s also the element of intense competition for the “right” jobs, which can affect the peer dynamic and fuel the pursuit of only the most strategic activities.

Related to increasing expenses are the external demands for accountability that come not only from governments but also from parents and students, and other groups that have interests in the outcomes of the work universities do. Students are implicitly encouraged to see education as a private good that leads to economic security, while governments want “human capital” and innovation, and businesses demand “job-ready” graduates with more directly applicable skills.

In this (political) context of increased competition, privatization, and accountability, the data that universities produce are not neutral. Particularly when they’re tied to resource allocation, these data become tools of comparison that shift our attention to specific, measurable objects, such that those objects – rather than the goals we have in mind when we imagine the purpose of the university – become the focus of (and reward for) our efforts. Transparency in itself is not the issue, but rather the technocratic systems we construct that too often come to take their points of reference as entirely internal. Just as “getting things done” isn’t the same as “productivity” that is positioned on a scale of assessment, learning is not the same as “learning outcomes”; and student engagement is not the same as an increased NSSE score. But if these scores are what we reward, they become the drivers of our decisions.

The epitome of the technocratic approach is when we see technology being proposed as the panacea to systemic problems through education (a recurring historical theme), with one of the best recent examples provided by the MOOC frenzy that peaked around 2012-2013. Yes, MOOCs were supposed to increase student engagement, too; and they were positioned against the boring lecture as a means of highlighting the inferiority of existing educational models.

A related theme is that of the use of learning analytics (and/or “big data”) as the answer to discovering how and why students actually learn, a problem that has surely been at the heart of a century of pedagogical theory and education policy. Tech development is of course heavily affected by the same political economic context as education: the demand for scale, efficiency, competitiveness, and demonstrated outcomes. Needless to say, I’m far from convinced that the magic tech bullet will be any more successful in the future; and technocratic policies, injected into institutional environments in order to quickly solve narrowly-defined problems, are really no better.

“Student engagement” is also an accessibility issue. It’s clear that some forms of involvement in education and related activities are tokens for the entitled, used as ways of indicating their eligibility for scarce resources. This is true from primary school to the PhD level, where we see how cultural and economic capital operate to aid the replication of privilege. The “new super people” referred to by the New York Times are examples of the process at work at the undergraduate level, but more generally, “merit” is constructed and recognized in particular ways that benefit those who had more to work with at the outset. While undergrads can invest in tutoring to beef up their test scores and pad their CVs with juicy extracurriculars, PhD students with funding and mentorship can access prestigious conferences, build more influential networks, pay for coaching services, and spend more time focussed on professionally rewarding activities instead of scrabbling for opportunities. We also have to question which voices are seen as worth hearing, what activity will “count” on one’s academic scorecard, and who gets to feel “safe” enough to speak up at all, since the risks are unevenly distributed.

Next up: In the second part of this post, I’ll be looking at student engagement in terms of some principles and approaches for reframing doctoral education.

An update on tools of the trade

Posted on August 11, 2014

Periodically I try to share some of the tools I’m using in the process of researching, writing, presenting, and muddling about online; I’ve noticed that frequently there are things I’m taking for granted that others don’t know anything about (and could really use!). This time around I’m focusing on tools I’ve worked with in various ways for academic and other research over the past year. I think this is about as close as I’ll get to doing a “back to school” post, especially since it’s not even mid-August. Enjoy – and I hope you find something new and helpful…

Topsy. If you’re doing any kind of research involving social media and especially Twitter, Topsy is invaluable for its capacity to pull up tweets all the way back to 2006. That’s right – while tools like Storify and indeed Twitter’s own search fall down at anything beyond a month back, Topsy excels, allowing you to find (very) old tweets and to go to the originals by clicking on the timestamp. You can search posts within a specific range (down to the hour). However, without a Pro account there are limits, including searches that end at the 10th page of results; I haven’t signed up for this one, but I’m hoping to do so eventually (it would be worth it).

Below: a Topsy search within a specific time-frame shows the earliest #PhDchat tweets, from 2010.

Diigo. Even though I’ve probably discussed it before, Diigo makes the list again because it’s a tool that’s constantly being tweaked – in a good way; it’s a social bookmarking tool with a lot of heft. For example, they’ve introduced a new feature called “research mode” where the bookmarking tool will automagically add the same set of selected tags to each link you save in a session. I upgraded to Diigo Pro quite a while ago and I have no regrets, since I do a lot of research online and being able to cache and tag (and add highlights and notes to) news articles and other posts is extremely helpful for this.

I should mention that many people use Evernote in a similar way, but I haven’t really dug into it because the other methods I use have been sufficient. At a certain point in one’s work it also takes so long to figure out and set up a new system that sticking with the current one is a better idea (this is why I never fully got into using Scrivener, though it’s a great tool).

Camscanner. For my research I’m looking at a lot of documents: everything from meeting minutes to articles in student newspapers to marketing materials and more, and they’re a lot more useful to me in digital form. Camscanner is an app I added to my Android phone about a year ago (it’s available for iThings too), which I used to digitize documents until I was able to buy a decent scanner earlier this year. It allows you to use your phone’s built-in camera as a scanner; you simply photograph the documents, and the app converts them into PDFs. Image quality tends to be excellent, and you can add multiple pages as one document.

Because I have a lot of pages to deal with, I’ve moved on to a scanner with an automatic document feeder, and Adobe Acrobat Pro, with which you can apply OCR to the scanned pages (making them searchable) and combine them into larger documents with sections. But there will always be times when taking a photo of something is the most convenient (or only possible) way, and Camscanner serves that purpose perfectly.

Timeglider. I use timelines quite a lot because they help me to put together a visual “story” of what’s happened in a particular period of policy change, or to show events unfolding in a case study. There are quite a few timeline tools floating around, but I’ve been using Timeglider because it allows for visual elements that differentiate the events, including changing the size of an event to match its importance. Timeglider has a free trial but to get the most out of it, you’ll need to upgrade; since I’m using it quite a bit, I decided to go with the $5 per month account. The other tool I would suggest is Timeline JS, which is free and has a nice clean layout and a simple process.

Below: a section of a timeline in progress, showing PSE policy changes in Canada with a focus on Ontario. Coloured threads show the length of provincial and federal government mandates and periods of policy implementation.

f5. If you’re like me and you can’t afford to pay someone to transcribe your interviews, you’ll need to make the task as easy on yourself as possible. I chose f5, one of many programs that allows users to import an audio or video file and create a text file as the transcription attached to it. What I’ve found very useful about f5 is that it inserts timecodes automatically as you transcribe – so instead of just showing turn-taking, the transcript also shows you how long each person spoke for and when they said what. If you want to go back to an earlier part of the interview, you can click on the timecode. You can also slow down the recording without much vocal distortion. f5 is free, and available for Mac and Windows; it works with or without a footswitch.

Dedoose. This is a qualitative research tool that’s been recommended to me by trusty colleagues, but I haven’t tried it out yet. Since the reviews seem overwhelmingly positive, I thought I’d add it here as an alternative to more well-known software like NVivo (which is now available for Mac as well as Windows). Each of these comes with a cost – Dedoose requires a subscription fee that adds up to $100 per year at the student rate; NVivo has more options, including a per-semester rate of $60. But if you have the budget for it and you want to try something more comprehensive, these two are worth looking into for a start.

Zotero. I finally bit the bullet and picked a reference management tool, though right now I really only use it to organize sources since I already have a system for placing citations (I complete them as I write). Zotero helps with keeping track of the various sources I’ve collected in my online travels, as it were. If I find a good paper but don’t have time to log in to the library page, search for it and download it, I save it using Zotero’s browser add-on (for Chrome). If I need to access my references from another computer, I can either sign in online or install Zotero and sync it with my account. There’s also an organization system that includes tagging and folders, and you can place the same item in multiple folders. I was pretty pleased to learn that labels can also be colour coded (hurrah for visual cues!).

I hope some of these tools will be of use to you, or that they provide some ideas or starting points for ways of getting things done in your work. If you have further suggestions, feel free to leave them in the comments section below!

“A little brains, a little talent…”

Posted on July 24, 2014

The pet-peeve language issue I’m going to look at in this post is a particular way of using the word “talent,” which isn’t really a metaphor per se but more of a quality or attribute that is nominalized and reified in ways that detach it from actual people, and their lives and work. I’ve discussed this briefly before in a post about international mobility, where I described “the extraction and objectification of ‘talent’ as something apart from those who might have it and use it, and transformation into a product available for sale.” But lately it seems like these expressions are popping up more regularly in the higher ed news articles I’m reading.

Another term that I hear often at the moment is “talent market”, and closely related to both is the expression “talent pool” (don’t go fishing in the shallow end!). We also see the related use of “brains” (brain drain, brain exchange) and in some cases, “minds” (“free trade in bright minds”). Here are a few examples culled from higher ed news articles:

  • global competition for talent
  • countries compete for the world’s top talent
  • cultivating domestic talent
  • race for the ‘best and brightest’
  • attractive destination for foreign talent
  • international students are seen as a fabulous talent pool for Canada
  • tilting the talent balance
  • the global brain race
  • brain circulation
  • a free market in minds

The underlying synecdoche – the one valued attribute standing in for the whole person – is reflected in expressions of an objectified quality that can be traded in an international market. With the market framing come the metaphors of competition, including war and sports imagery (“battle for brainpower”). In other examples (such as the media coverage of the first CERC awards) there are obvious parallels, including the use of hockey metaphors to describe the recruitment of international scientific leaders.

A number of discursive threads are enabled and connected by this framing of “talent”:

Talent as “natural”: The obsession with talent masks the obvious privilege it takes to have one’s gifts identified, nurtured, and brought to their full potential. Talent must be seen in order to exist, and resources are required for it to be visible; it has to be recognisable to those who seek it and within the systems that attach value to it. Like academic “merit”, this kind of talent is not inherent in people but constructed in large part through context.

Talent as a scarce (natural) resource: Framing talent as an object of exchange also seems to presuppose that the quest for “talent” (as with many other “natural resources”) is a zero-sum game. A “war for talent” becomes legitimate when we assume that talent is in limited supply, and our priorities shift to recruiting the “top talent” from other places. Hence there’s also a kind of fetishization of the “most talented” as objects of intense competition, or more accurately as a resource that must be ferreted out from its most obscure locations (diamonds in the rough!) and channelled to the most competitive institutions and nations: “The more countries and companies compete for talent, the better the chances that geniuses will be raked up from obscurity” (Economist).

Talent as economizable: Here the assumption is that talent must be economized, or indeed that talent is that which can be economized, whereas there are many things we could call “talent” that don’t fit this definition. If this sounds familiar, it’s probably because you’ve read something by Richard Florida or one of his acolytes, who have given the same treatment to “creativity” – for example in the 2005 book, The Flight of the Creative Class: The New Global Competition for Talent. In his widespread proselytizing, Florida has named “talent” as one of the factors in fostering a “creative class” environment (the proposed solution for boosting economic development). Similarly to the theorising of “human capital”, this is a way of seeing people primarily in terms of what they can contribute to the economy. And like Florida’s idea of creativity, talent is only recognized in the forms that contribute to economic life in particular ways.

Talent as mobile: An example of this is the language of a “free trade in minds”. Governments seek mobile talent as one of the (human) resources required to build a productive and skilled workforce. International students are a primary source of this, which is why all this ties in with the effort to “brand” Canada as a top location. While there’s an assumption that talent itself, like an aether, can drift freely across borders to the “best” or most competitive nation, the reality is that not all bodies are as mobile as they need to be to compete in this way. Brains don’t move across borders, people do – people with hopes, with problems, with families, with bodies that need care. Minds also don’t move as freely when people want a level of stability and security in their working and personal lives, or when they lack the resources or privilege to follow important opportunities.

All this becomes the logic used to make arguments about which policy solutions are the right ones. Weapons in this “war” for talent are the policies that governments can use to fine-tune the intake of new potential citizens – general immigration strategies but also more targeted policies designed to help with recruitment of the right people (such as the UK’s “exceptional talent” category and China’s “one thousand talents” program). Again and again we see the vacuous imperative to national economic “competitiveness” being invoked, but for what exactly are we competing?

What was for HR managers an organizational phenomenon (see “talent management”) has, at least in part thanks to the creative class gurus, been positioned as a national problem that governments need to deal with through policy change. The rhetoric of talent is applied to immigration, to governance of competitive research funding, and to international post-secondary recruitment. International students, many of whom now see themselves as “cash cows” for financially needy universities, are also being viewed as a “talent pool” into which the nation can dip for its required quota of desirable immigrants.

You could argue that it’s obvious why economic theorists would see people in terms of their economic value. But focusing too much on this single factor is both alienating and obfuscating; it takes us away from a holistic understanding of the issues. Students, early career researchers, and other potential migrants aren’t merely the plugs that will stop up the supposed skills gap, and there are ethical problems with any argument that treats them as such.

Not only that, but the “talent market” is, like all markets, an unequal and constructed one. So what are the costs of competing, and who can pay them? Would “talent” perhaps be less scarce if we tried developing it by dealing with inequities – looking at what holds people back – rather than trying actively to find the few, elite “best and brightest” elsewhere?

By the numbers

Posted on July 3, 2014

In a recent Chronicle of Higher Ed article, Dr. David M. Perry asked the question “but does it count?” with regard to public engagement in academe. Perry argues that while there’s a perception that academics don’t communicate with non-expert publics, in fact they’re doing this kind of work all the time. What we really need, therefore, is a means of formal recognition for public work within the tenure and promotion system.

Like Perry (and many others), I’ve written about the issue of public engagement and the lack of recognition for it in academic promotions; I discussed the reasons why it’s hypocritical to ask young scholars to “engage” with broader publics, when clearly this kind of work does not contribute towards a scholarly career in the way that peer-reviewed articles do. If early-career researcher (ECR) workloads increase, they may then reasonably de-prioritize this kind of work since it adds few or no points to their academic scorecards. The work is even more risky for members of traditionally marginalized groups who already have difficulty gaining access to academic capital.

Here I’m going to return to the points I wrote about a few weeks ago, regarding “productivity”; I want to draw some attention to the connection between what we “produce” and aspects of academic work that encourage us to see ourselves in a particular way. For me this is part of an ongoing exploration of what factors affect our understanding of “knowledge work,” in particular the way it happens in universities. In this case, the right kind of self-governance means understanding that if a certain kind of work doesn’t “count” then we are not being “productive” when we do that work.

That’s why I’m more interested in the answer to a second, unasked question that’s implicit in “does it count?”: count for what? In most cases, it’s an academic job, one with some security and stability; so whether something counts towards tenure is the point, with all the implications this brings. This question of “what counts” – whether it’s articulated explicitly or operating as an underlying theme in academic conversation – reveals something about the ways in which academics’ decision-making is influenced by perception of what will be rewarded with advancement in the existing system.

All this probably sounds obvious, but as usual there’s the bigger picture to consider. The current competition for long-term academic jobs means that “to innovate in form means to risk one’s career” (Perry) and that future academics may become more conservative about the work they do, if that is what’s required to remain in the running for scarce positions. Even those who have faculty jobs must compete for research funding, and/or face some form of professional evaluation based on measurable criteria. With teaching, there is a parallel situation wherein precarious employment means that job assignments become more dependent on student evaluations. Making decisions about how we work is not merely dependent upon personal preferences, but also on our need to remain within the bounds of recognizable merit in a “meritocratic” institution.

At the Governing Academic Life conference in London, UK, last week, this context was the focus of the discussion as participants took time for a more detailed critical examination of the form and experience of contemporary academic work; speakers included Dr. Stephen Ball, Dr. Chris Newfield, Dr. Wendy Brown, Dr. Mitchell Dean and Dr. Richard Hall. While I wasn’t able to attend in person, the conference topic was directly relevant to what I’m writing and thinking about, so I followed along with the discussion on Twitter as much as I could.

One example that came up in the discussion was the UK’s Research Excellence Framework (REF, formerly the RAE), a framework and process of research assessment that determines the direction of HEFCE funding through grading “research output items” (e.g. books or articles). The “impact model”, or the assessment of the effects of research beyond academe, was a particular focus of debate.

This performance-based funding model, built on an instrumental notion of prestige, has been critiqued from a number of angles. For example, Dr. Andrew Oswald makes the point about scholarly conservatism when he argues that “People routinely talk in terms of journal labels rather than discoveries…That is a palpable sign of intellectual deterioration…if you design a Soviet-style planning system, you will get tractors.” Oswald argues that the REF discourages risky research and leads ECRs to focus on instrumental publishing rather than the kind of innovative work that might not check the REF’s boxes. This effect is not specific to the UK; in Canada, with no such formal system in place, Dr. Li-Shih Huang writes that she has “been bluntly asked to change [her] priorities by focusing on publishing only in high-impact journals”.

This example brings us back to the question not only of how academic life is governed, but how academics govern themselves. As I mentioned in my previous post on this issue, the conditions of academic work are also the conditions in which knowledge becomes authoritative and is communicated as such. Dr. David Perry argues in his article that “we…have a problem with how we define, count, and value many types of public engagement.” But what effect do we see from this process of having to define and count our work, and are we considering how this may change what we can “know”?

I’ve had to think about my own position in this context, since I know I haven’t assessed myself based on how my work “counts”, and worse – I don’t really want to. The price to be paid there is exclusion from the system of academic prestige and from the institutions that value it. But just as “productivity” isn’t the same as getting things done, something can “count” within a system without it having meaningful effects otherwise. And it can matter elsewhere without signifying anything in this system: I can care about whether people read what I write, whether it prompted them to think about something differently, or whether someone else is drawing on my ideas and doing something interesting with them; those are some of my goals. But that’s not the same as the “impact factor” of a journal, or the number of citations an author or paper receives.

Based on the Governing Academic Life conference tweets, it seems that there was also discussion about whether there’s a way to appropriate or change the tools and norms that feel as if they work “against” us (or against the kind of knowledge we want to create). How can scholars continue to work in academe but also challenge its norms on an ongoing basis? This is a question about cultural absorption but also one about the limits of professional validation and advancement. In other words, if challenging the system doesn’t allow you to enter into and progress in an academic career, then how will those who want change find a way to stay and make it happen?

We can’t limit our critiques to those that are acceptable within the existing frames. Yet at the same time, as anyone in a marginal position in academe knows, trying to make change takes a lot of time and (emotional) energy; it can drain you to the point where you can’t do the work that…“counts”. So then what? – you’re discounted.

I was thinking about this issue last year when I wrote a post on academic disciplines and what happens when critical work on the university becomes formalized into its own “field” within academe. Formalization can only happen in this way if it’s sanctioned by people who have already achieved success on traditional academic terms. It also leads to further entrenchment of the work within regular professionalization. So how can we effect the change we speak about so often, when its form is being imagined within these restraints? Are there examples we can see outside the institution that might help with the task?

Because my research is about institutional change and how it looks and happens at different levels, I’m interested in these questions of individuals’ self-governance and its relationship to academic structures and norms. The problem of whether work will “count” for advancement within an academic career is important because it tells us what kind of work will likely be prioritized by successful academics, which in turn has an effect on others’ (future) careers and on PhD education and mentoring – and on knowledge.

All these things will shape the academe of the future; change happens not just through grand external “disruptions” and/or engineered unbundling but also through small actions and decisions – and the resistance – made by people every day. Asking how those things occur, and how they’re affected by context, is another step towards figuring out what kind of academic life we’ll have in the future and what will “count” towards it.

Dissecting the USask fiasco

Posted on June 11, 2014

It’s not all that often that we see a case study in Canadian university crisis communications and in particular, where a crisis happens because of a conflict involving fundamental ideas about what universities are for and how they should be governed. That’s one way to look at the recent events at the University of Saskatchewan, where actions by the administration have brought unwanted international attention to the university, sparking a nationwide debate about the nature of academic freedom, administrative and professorial rights and responsibilities, and university politics and funding.

What happened?
For those who missed out, the most visible event in the timeline is the firing of professor Robert Buckingham (on May 14), who was the dean of the university’s school of public health. Dr. Buckingham was dismissed and escorted off campus by security personnel, and told not to return. This happened less than 24 hours after Dr. Buckingham circulated a letter titled “The silence of the deans” in which he described the senior administration’s attempts to shut down criticism of the university’s strategic planning process, TransformUS.

In the letter, which Dr. Buckingham sent to the premier of Saskatchewan and to the NDP opposition, he describes being instructed by the president and the provost to “support university messaging.” This was a problem for Dr. Buckingham because the school of public health had recently earned APHEA accreditation for its Master of Public Health program; he argued that the changes planned by the administration would require re-assessment, and as such the school might lose its accreditation. Along with his open letter, Dr. Buckingham attached documents including an email from the provost requesting that he not discuss the accreditation issues in public.

Dr. Buckingham’s dismissal and the resultant public outrage led to a chain of events that included an apology from President Ilene Busch-Vishniac and the rapid reinstatement of Dr. Buckingham to a faculty position (though not to his role as dean) on May 15; the resignation of provost Brett Fairbairn and an emergency meeting of the board of governors on May 19; a major student rally on May 20, and ultimately the board’s decision to fire Dr. Busch-Vishniac on May 21.

Strategic planning in universities and the Dickeson model
While Dr. Buckingham’s specific complaint was about changes to the school of public health, the larger point (one echoed loudly on social media) was clearly a criticism of the strategic planning exercise overall. It’s important to point out that when strategic planning happens at universities, it almost always generates or provokes conflict and resistance. Not only that, but the problems are often expressed along particular lines that reveal a lot about the way that governance styles (especially academic vs. managerial) come into conflict in universities. Bearing this in mind, it’s clear that what the U of S experienced was a “smouldering crisis”, an internal one that gathered heat over time and eventually was sparked into a conflagration by Dr. Buckingham’s dismissal – the dismissal was a catalyst rather than a cause.

While strategic planning in general brings strife, the specific model being used by the U of S – Robert Dickeson’s program prioritization process (PPP) – is one that’s been a target of recent critiques as it’s been taken up at more universities in Canada (including Guelph, Brock, Nipissing, and Laurier).

The model is designed to deal with funding cuts by reducing university expenditures through “reallocation” of resources. As outlined in his book Prioritizing Academic Programs and Services, Dr. Dickeson’s argument is that if cuts must be made for institutional sustainability, there’s no sense in penalizing all organizational units equally. Thus the organization’s internal units (both academic and non-academic) are required to self-assess within a specified framework, such that the university can make informed sacrifices while maintaining quality in select areas of strength. Dr. Dickeson’s model is a kind of internal differentiation – and it’s no coincidence that there are parallels happening at different levels of governance.

Instead of supporting weaker areas so they can improve, these areas have resources reduced so that targeted programs can prosper; the internal assessment is what guides these decisions. Funding goes from an assumed rising tide that lifts all boats, to a zero-sum institutional game in which departments must prove their “excellence” and relevance to the university’s unique mission. Dr. Dickeson himself points out that the “egalitarian” nature of academe means this kind of ranking exercise goes against the grain of the institutional culture, wherein collective governance by faculty and equal treatment of the disciplines are ideals. Universities, in turn, aren’t designed to shrink when resources dry up, so any reduction in funding is bound to cause pain.

Communication and the university – crisis and strategy
Saskatchewan is a unique example of a university crisis wherein the role of communication in governance is clear for all to see. I’ve argued in the past that universities’ communication strategies are often based on the assumption of control and boundary policing. Rarely does a conflict rapidly traverse the communicative boundaries of the university in the way this incident has; within 48 hours of Dr. Buckingham distributing his letter, he had been fired and re-hired; within 10 days, both the president and the provost were out of the picture altogether. Such drama is even more likely to make the news because universities aren’t known publicly for their internal strife (ironically enough!).

Communication has played a direct part in all aspects of this crisis. News of the university’s swift and extreme reaction to Dr. Buckingham’s letter was fed directly into online media, where the outrage was amplified as it spread through networks. The letter was circulated quickly in part because it was discussed during the Saskatchewan legislature’s question period, tweeted about by the NDP, and then covered by the StarPhoenix the same day (May 13). It also served as a press release full of punchy, critical statements that could be (and were) easily quoted by the media. On Twitter, existing hashtags (such as #USask and #TransformUS) were appropriated and put to use, while the @USask handle received direct “feedback” as the tweeting public responded to the news.

Another document that circulated widely was the letter from provost Brett Fairbairn to Robert Buckingham, informing him of his dismissal. No amount of official equivocation could dispel the force of a letter in which Dr. Buckingham is told that both pay and benefits will be cut off (and which shows the provost’s signature at the bottom). On May 15, Dr. Fairbairn also defended the restructuring decision and dismissed the critiques expressed in Dr. Buckingham’s letter.

During this period the university’s president, Ilene Busch-Vishniac, took a good deal of direct criticism and hardly helped her own cause by committing several PR gaffes. For example when questioned in a CBC radio interview on the evening of May 15, she dismissed the decision-making process as “complicated” and attempted to shift the blame onto a group – including HR and university lawyers – rather than taking it on herself as leader. She also subsequently stated publicly that she had no intention of resigning. When asked when she had realised there was a problem, Dr. Busch-Vishniac responded: “at the point that…I was talking with everybody and said wait a minute, we did what just now?”

Based on what I’ve described, I think it’s fair to assume that the U of Saskatchewan administration was unprepared for a crisis – particularly one that was generated through weak internal decision-making (as opposed to one that was beyond the organization’s control), and facilitated by social media. The situation revealed a serious problem with decision-making, which was not ameliorated by Dr. Busch-Vishniac’s vaguely robotic assurances that “we will make sure that we fix whatever went wrong, and that it will never happen again.”

What next?
No matter what actions are taken next, the University of Saskatchewan has already projected an image of a tense, top-down and low-trust organizational climate, wherein there is a lack of consensus on – indeed, active resistance to – a program of internal change. Dr. Buckingham's firing was not merely one bad decision; it was the latest (and possibly the culminating) incident in a flawed process that reflects larger, ongoing problems with organizational culture and power dynamics. This implicates not just strategic planning but also the process of organizational becoming: the ongoing making and re-making of organizational culture that happens over a much longer period. What we've seen is a point where this becomes visible in a way that reflects back negatively on the institution.

As a result, the U of S has also incurred serious reputational damage. Public airing of these problems is a communications failure for the university, and it’s also an opportunity for critics to bring attention to events at the U of S and to connect these to broader themes of under-funding, corporatised governance, and academic freedom.

In the reaction to Dr. Busch-Vishniac’s firing, many critics have drawn on the close association between the president and the strategic plan, TransformUS; there’s been a lot of hope expressed that the plan would end with her departure. But while it’s often the case that the president becomes a synecdoche for the strategic plan (so close is the association between them), it’s very unlikely that the TransformUS plan will be abandoned after so many months of development, or that a new president would be chosen without some guarantee of making the plan work. However, Dr. Busch-Vishniac’s interim replacement, the well-liked Dr. Gordon Barnhart, has said that the plan will be reviewed (and its pace slowed). There will be no public inquiry into Dr. Buckingham’s firing and he will not be reinstated as dean.

The University of Saskatchewan will take a long time to shake off the events of the past month or so. Other institutions will look to the way the U of S mishandled its internal problems, and thus generated a much wider public debate about models of university governance and academic freedom, as a cautionary example rather than one to follow. Yet given the tensions and crises that continue to emerge both from strategic planning and from the overlapping roles and conflicting loyalties of faculty and administrators, I think this is a conversation we really needed to have, and it's one that should continue – as difficult as it may be.

Priorities and “Productivity”

Posted on May 26, 2014 by


There’s nothing like the perspective of distance to bring murky issues into focus. I was able to re-discover this recently when I (unexpectedly) ended up spending two weeks in New Zealand, where I travelled on short notice to attend my father’s funeral, then visited extended family afterwards.

The 36 hours preceding the trip were a stressful whirlwind of organization – finding a plane ticket, organizing a new passport, and scheduling – and ultimately I made it to my destination with only 3 ½ hours to spare. But once I arrived, I was fairly quick to “disconnect”. For me, New Zealand is a happily distracting environment full of familiar sights and sounds and smells (umbrella-like ponga ferns; chatter of tui; the distinctive tang of sulphur and seasalt). Since I don’t get much time either at home or with family, I wanted to enjoy it while I could.


In spite of the disconnection, I still didn’t feel cut off. It was more like I’d entered a parallel dimension through which events that would normally be of significance to me were made visible only via my periodic glances at social media networks. Even though I hadn’t lost access and could still participate on the same terms, Twitter (and Internet conversations in general) became more like a spectacle viewed from afar, and the separation was caused not only by time and space but also by immediate events.

One morning as I was watching the tweets roll down the page, I saw someone share a link to a blog post about how to write “productively”. In the moment it struck me that every time I see these kinds of posts, I feel a pang of guilt that I am not particularly “productive” in comparison to so many others I know. Not only do I write slowly, I also read slowly. It’s not an issue of comprehension, but one of constant cognitive overload, a sense of “too much” that can be hard to explain or tame. Of course I have strategies to deal with this, but I’m still well aware of how much it slows me down.

As for the trained guilt reaction: it’s no coincidence and it highlights a couple of things, one of which is the comparative aspect of (ongoing) evaluation. If I know I’ll always be compared to other people, that this will be the context of my work, then I begin to compare myself to them. For those trying to gain a foothold in academic work, self-regulation in a competitive environment means knowing we can and should always be doing more – “producing” more – in order to compete with others for scarce resources (such as long-term academic positions).

The point about the concept of “productivity” is that it isn’t merely about getting things done, or doing a lot. Most of us want to do things in life; we might want to create what doesn’t yet exist, or perhaps demonstrate how what is now could or should be changed for the better. Many of us have goals and aspirations, ambitions we want to fulfill, and we’re willing to work hard to make that happen.

But productivity, as a goal, is about governance of a particular type. As I've said repeatedly in previous posts, I pay a lot of attention to language and to the roots (etymological, historical, cultural) of the words we so often take for granted. Productivity isn't just about making things; it's a concept that's part of a larger system. It's an economic concept, one that denotes a relationship between inputs and outputs. This is how efficiency is measured, so to be put to use, "productivity" needs to be demonstrated in terms that work for the model of governance being used.

I think there is a problem with taking at face value a term that brings this kind of focus to outputs over process. And while a critique of the use of "productivity" as a concept applied to academic work (and "creative" work more broadly) is not a direct critique of the culture of over-work and "busyness" per se, it is directly related to it. It's not a call for everyone to do less. Similarly, rejecting "productivity" as a concept is not at all the same as being lazy or lacking ambition. It doesn't mean we should do nothing; it just means we should think about the context of doing and the rationales we're providing ourselves for the work in which we engage.

With regards to the demand for productivity, academe is no different from other sectors of the workforce, but the nature of the work involved means that the effects are different. This points to the question of whether we can govern creativity, or work that involves a creative (thought) process, with managerial logic wherein measurable outputs are the focus. Can we systematize what is so frequently unpredictable? Can ideas be counted? What are the consequences of applying a Taylorist breakdown of work into units of time that are mapped on to specific tasks in a way that enables work to be divided, fragmented, fine-tuned for a particular kind of efficiency?

The reason I keep circling back to the idea that we must “waste time to save time” is that the process can’t always be mapped easily from the outcome. What is it that fills in the gaps between those building blocks of tangible tasks and time? What about the time it took to learn to write, the time it takes to keep learning and honing and becoming as researchers, writers, scientists, teachers, skilled at what we do? What about the time that we need not just to read but to think about what we’ve read and seen and experienced, to understand and then make the deeper connections that are necessary for engagement with knowledge? The concern about time and “production” can be internalized to the point where we strive to find ways of making our progress visible. But for much of what we do, this may not be possible.

If academic work is about knowledge, and we come to apply the concept of “productivity” to this work without questioning the implications, then what are we saying about how knowledge happens – and the nature of knowledge itself? The epistemological question flows from the question of governance. If we govern universities on the same terms that we manage factories, we change our relationship to knowledge and also the nature of what we “know”.

As “knowledge workers” we also change ourselves, and our relationship to what we do. Of course it’s possible to divide our time into manageable units, and perform our work accordingly – to act as we’re encouraged to act, as the production line of knowledge. We may come to accept that accounting for ourselves means counting our selves, in an institutional culture where work is an identity and a way of life; and it will be in some ways easier to accept these things and work with them, especially in the current competitive academic job market. To be non-compliant at the early stages of one’s career is to take on a personal risk, the outcomes of which are validated (one way or the other; “maverick” or “failure”) by a meritocratic culture.

My concern is that what we’ll produce are merely the words required to feed the machine of academic professionalization and promotion; and my point is not really about what we “produce” but why we do it, what motivates us, by what logic we are operating when we work.


To circle back to perspective, for obvious reasons I was thinking about these things at the same time that I was thinking about my father, talking to people he knew, and hearing stories about things he did. Based on what I’d known of him and what others said, I’m pretty sure he was never striving to be “productive” – but he was always building or fixing something, or talking to someone, or figuring something out. He was happy with what he had, and part of that was what he did (for himself and for others).

And I saw how it was ridiculous that I should feel even a twinge of guilt about my lack of “productivity”: at a time when there could be no expectation, the inner voice was so well-trained that it simply kept speaking. But we all need the voice to be silent, in the same way that we need compassion for ourselves before we can tend to others. You can’t work to make a change if your own (metaphorical) back is broken, which can happen if you just keep piling on the load. I sometimes think of Bilbo Baggins feeling “like butter scraped over too much bread”, a phrase that reminds me of the need to defend some boundaries in order to cross others.

So I didn’t “produce” anything over the past few weeks. I spent over 55 hours either waiting in airports or on flights, feeling tired and claustrophobic. I also walked on beaches and alongside rivers and by rural roads; I heard and saw favourite birds and trees and mountains. I was reminded of why my father loved New Zealand and felt thankful, as always, for the gift of part of a lifetime there; what better inheritance could I ask for? And I suspect all this was productive, and I didn’t write a word.

With many thanks to Kate Bowles and Alison Seaman for insightful conversations on this topic.

Break the binders – Gender, media, & women’s “choices”

Posted on April 9, 2014 by


On March 16, Steve Paikin – the host of TVOntario's popular current affairs show "The Agenda" – shared a blog post titled "Where are all the female guests?". In it, he expressed concern about the ongoing lack of gender parity among the show's guests, which has led to male-dominated panel discussions. The main question Paikin poses is, "Why, oh why, do we have such a tough time getting female guests on our program?"

I’m always happy to see a discussion about women’s (lack of) representation among “experts” in the media. But as Kirstine Stewart of Twitter Canada commented: “[I] Was hopeful. Then I read the blog.” In it, Paikin takes an opportunity that could have been used to ask deeper questions about the gendered nature of expertise (for example) – and turns it into a throw-up-your-hands, stereotype-reinforcing missive on the “excuses” women make for not wanting to heed the call and appear on a TV show. Indeed, the post answers its own question so well that it would have worked better as satire, but no such luck.

In spite (or in fact because) of its defensively self-righteous tone, Paikin's piece points us to something important in the rhetoric about women's lack of visibility as experts in the media and elsewhere: the focus is usually on women's choices, and/or on their inherent merit as experts. We then see at least two "logical" explanations for women's relative absence from the highest positions in our institutional and social hierarchies: either they made choices that weren't conducive to the pursuit of high-impact careers, or they simply lacked the merit to pursue those careers. Then again, according to Paikin, it might just be a genetic flaw: "we've [...] discovered there also seems to be something in women's DNA that makes them harder to book."

Let’s take a look at the “choices” women make that cause Paikin and his TVO colleagues so much difficulty.

First, Paikin argues that “no man will ever say, “Sorry, can’t do your show tonight, I’m taking care of my kids.” The man will find someone to take care of his kids so he can appear on a TV show.  Women use that excuse on us all the time” (emphasis added). So apparently, we can comfortably ignore the entrenched, gendered inequalities in domestic work and especially in child care. While it’s true that men have been taking on more parenting responsibilities over time, the ongoing, underlying assumption – a systemic one, as described in this excellent post by Sarah Mann – is that women are responsible for child care. So given the logistics involved, as well as cultural and relational pressures and expectations, how can this be described as an “excuse”?

The post continues: "No man will say, "Sorry, can't do your show tonight, my roots are showing." I'm serious. We get that as an excuse […] But only from women." The glaring omission here? There's no mention of the way that women politicians, journalists, activists, professors, and other public figures are subjected to public judgement based on their looks (and sexuality), as opposed to the work they do. This is what Sarah Mann describes as a "steaming pile of double standards related to beauty", and it's pervasive. One recent example is that of British scholar Mary Beard, who has faced much commentary on her looks and was a target for misogynistic abuse after she commented on immigration issues. Beard said the experience "would be quite enough to put many women off appearing in public, contributing to political debate." If that's not serious, then what is?

Paikin also states: "No man will say, "Sorry can't do your show tonight, I'm not an expert in that particular aspect of the story." They'll get up to speed on the issue and come on. Women beg off." Research has indeed indicated that women are less keen to speak to an area beyond their immediate expertise, but that might have something to do with the way women's expertise is constantly questioned and challenged in overt and irrelevant ways (for example…by criticizing their looks). Of course, even if you are a female expert, you can still be dismissed as a "token" presence. That's what happened to scientists Hiranya Peiris and Maggie Aderin-Pocock after they appeared as guests on a TV show discussing a crucial new development in astrophysics.

Speaking of physics, there’s yet another issue here for “The Agenda”, which is that there aren’t enough women experts in the areas being discussed. For example, “if we’re doing a debate on economics, 90% of economists are men. So already you’re fishing in a lake where the odds are stacked against you.” So how about having a discussion about why women continue to pursue certain careers (and academic areas) rather than others – or why those careers so often involve lower-status, lower-wage, and/or precarious employment?

This is probably familiar territory for anyone who’s aware of the lack of women in the STEM disciplines. And the same issue is reflected within academe more broadly: women may be more visible at lower levels in the professional hierarchy, but there are too few rising to the highest positions of leadership. Women still face everyday sexism in the workplace, and the “outcomes” they achieve (or don’t achieve) are affected by a long process involving both overt and unconscious discrimination in practice. Gender also intersects and interacts with other factors such as race, age, social class, sexuality, and disability. So how much does this experience affect a person’s “merit”, and how might it dampen their enthusiasm, or reduce their opportunities for being a public expert?

Paikin's post frames women's "excuses" for not appearing on the show as individual choices that prevent "The Agenda" from providing a gender balance among the show's guests. What this approach ignores is the context in which those choices are made. As Jeet Heer noted on Twitter, this "is a classic example of taking real social/structural problems and personalizing them as character flaws." The reasons that women themselves provide are dismissed as trivial: "despite our commitment, despite our efforts, despite EVERYTHING…we get the same old excuses."

Paikin also responded to one critic, “[I] never told anyone to “man up.” the whole damned post is about finding solutions.” But what other message are we likely to receive when we see a litany of comparison in which “no man would ever” is the refrain? One where acting like a man (as if all men act the same way!) is clearly the preferred strategy, and where men are held up as the standard against which women’s actions and decisions are assessed? What solutions are we likely to come up with, based on these assumptions – other than “man up”?

It's not that Paikin is wrong to point out a gender gap – of course not. This isn't about whether he and his colleagues are "trying hard enough" or not; as I've tried to explain here, it's about the way the problem's being framed. Paikin's arguments just can't get past the descriptive notion of "choices" to the point of addressing the structural and cultural issues that inform them. We need to go beyond the argument that "women just don't like the attention", that they "just aren't confident enough". The question is not what do women say, but why are they still saying it? The Agenda would be a great platform for that discussion, and based on the Twitter reaction to Paikin's comments, there are plenty of women who'd be willing to step up and participate. Now let's see if they're invited.

Teaching “Productivity”

Posted on March 24, 2014 by


Recently the Higher Education Quality Council of Ontario (HEQCO) released a report (PDF) on a study “designed to measure the teaching loads of faculty members in the Ontario university system and the relationship of this variable to others, such as research output and salary.” The study, comprising 10 of Ontario’s 20 publicly funded universities, looked at faculty teaching in three disciplines (economics, chemistry and philosophy). Results were compared separately for each discipline and institution, as was teaching activity at different levels (assistant, associate and full professor). The study is framed in these terms: “the rationale for analyzing teaching loads was to inform the discussion about opportunities for greater differentiation and productivity in the Ontario university system.” It seems that a goal was to show whether an uneven spread in teaching exists, and to highlight where and how teaching loads could be increased to improve “productivity.”

As is clear from some of the responses it's received, this study prods at a sore spot (the quest to translate faculty work into measurable units) with what seems like a blunt instrument – but why is that the case? In this post, rather than succinctly analyze the methodology or the policy implications, I want to make a few comments about the context of this paper, and what this context tells us about its construction, its interpretation and its use. Firstly:

Measurement of what? Alex Usher pointed out on Twitter that the study is not designed to measure “how much” research professors do, but merely whether they are “research-active” at all, in comparison to how many courses they are teaching. Even this measure leaves out all activity other than tri-council grants and peer-reviewed journal articles. There’s also (as critics have pointed out) no attempt to show anything other than the number of courses taught, so the differing amount of work per course is left out entirely, as are other non-course related activities that provide support for student learning. Isn’t it important to make a distinction between number of classes taught, and actual time used in teaching work?

Massification and public opinion. The focus on dealing with teaching “load” highlights one of the ongoing problems of massification. One reason that expansion of enrolments is a political issue is because more people now have personal experience with the university; they are therefore more likely to have stakes in the discussion and in its material outcomes. We see an example of this when Margaret Wente in her column plays for populism by referencing what parents expect from universities, i.e. that their primary role is to teach students. If most people see universities as teaching-focussed institutions, this expectation must be addressed and it becomes more visible in rhetorical and political strategy. Hence we also see the Ontario Confederation of University Faculty Associations and the Council of Ontario Universities developing communications campaigns that seek to change public perceptions through highlighting the value of university research. Which also brings us conveniently to the next issue:

The professional prioritization of research in academe. Do we have a problem with an unequal spread of “teaching loads?” If we do, then surely part of the issue is that teaching is seen as a “load” to begin with. When we consider what is most valued in the academic career ladder, unfortunately this makes sense. While the public expects teaching to be valued, research is what still generates the most prestige, as is clear from the hierarchies and divisions of labour in academic work. This is why it’s so interesting that the Ontario government’s differentiation discussion often seems to operate within a kind of vacuum where teaching and research are seen as “different but equal” in the academic economy.

Missing data: In this Maclean’s article, we’re told that this is the “best [HEQCO] could do with the data available.” That highlights the old problem that many higher ed researchers in Canada could tell you about: we so often just don’t have any numbers to work with. But in this case, HEQCO’s paper states that university administrations do have much more detailed data, but that it’s not publicly available. The implication is that institutions aren’t willing to share the numbers, and that if they would provide the information in detail, the government could do a better job of policymaking. So are universities refusing to participate in this policy-making process, and if so, why is that the case?

Calculating knowledge. Admittedly my bias is that I do qualitative research, partly because I think it’s important to question where numbers come from and what they actually tell us. The HEQCO report provides an interesting example wherein what can’t be measured, is not “seen” even though it may be acknowledged as absent: “We recognize that faculty members have other teaching responsibilities besides credit courses, such as unassigned courses, preparing for lectures, office hours, student advisement, and undergraduate and graduate student supervision. Information on these activities is not publicly available and rarely even measured, and is therefore not included in our construction of faculty workload.”

So there’s a lot missing. But even if there weren’t, what does “productivity” mean in the context of university teaching, and can all of the contributing factors be captured in an ever-widening net of numbers? This is not the goal of HEQCO’s report, but I think it’s an important question to ask in the face of the possible assumption that “more (quantitative) data” would be the answer to the problems highlighted in this study. There are things that can’t be shown with “more data;” what are they and what will count as data? I think there’s a connection here to the current and growing obsession with “big data” as the answer to big problems, including of course the lack of efficiency of the learning process.

In a case like this where so much information is not available, we get a limited understanding of what’s going on. The resulting image of “productivity” also reminds me of this excellent post by Kate Bowles wherein she discusses the unseen, unappreciated work that is necessary for a university to be “productive” in the ways that can be measured and slotted into existing models. This is a qualitative gap I think we should address. HEQCO’s study is not disingenuous – its limitations are pretty thoroughly described throughout – but we’re still presented with specific recommendations, such as having research-inactive faculty teach 50 percent more courses.

I think an important question is how much influence this kind of research has on PSE policy, given the amount of data (both quantitative and qualitative) that isn't included. The self-affirming logic of the various differentiation reports has the air of a foregone conclusion; since Ontario's universities are already differentiated, it's "natural" for government policy to reflect that fact and to create greater efficiencies by doing so. Provision of data is framed as an opportunity for participation in governance, but for universities, handing over more detailed information may be perceived as a loss of autonomy in decision-making. This is the context of governance in which HEQCO's report was produced, and it will be interesting to see if the research ends up being revised and expanded (as I'd say it needs to be), or if there's any attempt to implement the recommendations.