Some of you may have noticed that I tend to pay a lot of attention to university communications, both internal and external. It’s partly because this is one of the things I’ve researched (including for my Master’s thesis and my dissertation project); I also have a degree in communication studies and I’ve done some communications work myself, so I always find it interesting to see what approaches are being used.
That being said, I’ve noticed recently that there’s an interesting trend in Ontario postsecondary education marketing and branding, not just with the universities but also with associated higher education organizations. The strategies in a number of recent communications campaigns tend to involve inviting/soliciting members of a target group or groups – usually students – to participate in a contest. There is a prize offered, which could be a monetary reward, or a break on tuition fees, or perhaps a gadget such as an iPod. The winners also receive attention through the media campaign associated with the contest, and in an economy where attention is expected to translate into opportunities, this is seen as a reward in itself.
To highlight some examples: McMaster (My Day At Mac) ran a campaign inviting students to create content with their phones (e.g. videos) and submit these as representative of something essential or typical about the institution. I mentioned in a previous post the COU contest involving student-generated mental health social media plans; last month the COU also used a “virtual scavenger hunt” as part of their public outreach designed to bring attention to research in Ontario’s universities. York University has been running an ongoing campaign called “My Time” in which students submit “visions” of their own futures, with the winners appearing on promotional posters/images; and a recent contest from OCUFA involves students submitting videos wherein they talk about how they’ve been influenced by professors’ research, then encourage friends to vote for their video (thus increasing “likes” on Facebook and views on YouTube).
These strategies are partly about using the “we let you speak for us” approach, and in the current context it makes a lot of sense. There’s a lack of trust in highly processed organizational communication, which is seen as just another official “line” from the PR office, carefully designed to promote and persuade. Since universities in particular are not necessarily high-trust organizations (especially when it comes to communication), this suspicion is even more pronounced. Participation can also become something that students can point to as an achievement; it becomes part of their individual profile or “brand.”
But the need to reach audiences through “authenticity” poses a challenge, because it pits communicative control against the sense of realness that’s supposed to draw in an audience and encourage them to engage. Another complication is the unfolding context of social media, in which a desired level or type of control is seriously hindered by the possibility of audience spontaneity (including satire, derailment and abuse).
It could be argued that, particularly at universities, this kind of strategy gives students a “voice.” But hopefully students’ voices don’t have to be channelled through institutionally-sanctioned communication in order to be heard: while a contest attracts participants and creates a positive connection between them and the organization, it’s also a publicity project and it allows the organization to screen communicative content through the application process.
York’s “My Time” provides a striking example. The focus is on a series of smoothly attractive black-and-white photographs overlaid with text that reveals students’ “visions” of what they will be doing in 15 or 20 years. The winner receives a year’s free tuition, with other prizes awarded to the runners-up. In this case, students not only provide the content: they are the content, and possibility is the product being sold, though it is framed as certainty (using phrasing like “I will” rather than “I want to”, for example). The message being sent is about the kinds of students York attracts and wants to attract, but at the same time of course, only selected students are allowed to do the representing. The campaign has since been expanded to include images of alumni and faculty.
Does a contest-based campaign work? I can only assume the metrics say “yes”, since these strategies are being employed by so many organizations, and since York’s efforts (for example) have now won an award. Such communication is produced within a larger context of increased marketization of (and consumerism in) university education; the Ontario government’s differentiation agenda; and the rising pressure to show the “value” of what universities do. All these factors contribute to the ways in which universities and other education-related organizations try to project and promote images of themselves, their members, and the work involved in research, teaching and learning.
When I was about 20 I made the decision to quit pursuing a career doing the thing I’d been best at all my life. Some of you might be surprised to know that I was “supposed” to be an artist. I was better at art – drawing, building and sculpting, photography – than I was at anything else, and I was usually better at it than anyone around me. Of course that changed when I started my BFA, but I was still very good at it and would have been much better if I’d felt the motivation to perform in the way I was supposed to. But I didn’t feel it, and I didn’t like the performance (which was far more than just “artistic”), and it meant that I had to start questioning “why not?”. I knew I had to make a decision before I went too far down that track, narrowing my options to the point where it would be difficult to start something else, and investing a lot of time that could be better used elsewhere.
In making that decision, I went with my “gut feeling”, as it were. I didn’t have a plan, and I didn’t know what would come next. But over time I realised the feeling was grounded in some very practical knowledge both of myself and of the kind of career I’d be pursuing if I continued. But it was still hard, because leaving all that behind meant abandoning not just a career path but also a version of the person I was supposed to be, based on everything I knew about myself up to that point.
I’ve been thinking about this lately because I’ve finally had time to read through some of the many posts written by academics who’ve decided to leave academe. The genre’s been given the label “Quit Lit”, and as it’s expanded, several of the posts have garnered significant attention and provoked much debate about who gets to have a faculty career and why.
So what is it about academe that demands these kinds of posts, as public declarations of intent?
They function as correctives, and often as confessions, too. The correction is usually that leaving an academic career is in fact a decision or a choice (and rarely an easy one, depending on context), not simply an outcome of one’s lack of capacity or tenacity or merit. Not only that, but the posts are statements about identity, choice, and control, even when they are full of the anger and grief and self-doubt that many people feel when they depart.
Lots of others have pointed this out, but it’s worth emphasizing that academic culture encourages us to see professional identity as personal identity – and thus to see “work” as “life”. What are you left with, then, when you walk away from the work? This has also been questioned through the recent criticisms of the idea of “do what you love”, which is in some ways the epitome of the rhetoric around higher education careers.
It’s not just those blog posts that get me thinking, nor even the context to which they so regularly refer – one where we know very few PhDs move into full-time permanent faculty positions. Those things are on my mind a lot anyway, because the research I do is about higher education, and I consider that an advantage when it comes to understanding the context in which I’d be seeking a career. Each time I sat down to read something research-related, I knew I was also looking at the potential conditions of my future work.
In spite of that knowledge, and in spite of what many people already know are the adverse conditions for creating an academic career, what really surprises me is the response that I still see when I tell people I made the decision not to apply for faculty positions (and that I decided quite a while ago). For every reason I provide, a “solution” is bounced back at me. For every hesitation I express, someone else is there to tell me not to think twice.
Every decision has its reasons, yet somehow my reasoning on this issue is always in question. This is extremely frustrating considering the time and effort involved in making a “realistic” assessment of my own strengths, weaknesses, likes and dislikes, with consideration of the resources available to me, and of course the positions that may or may not open up in the future. It wasn’t easy, but I wanted to face those facts. I’ve also tried to pay attention to what day-to-day faculty work actually looks like, and I’ve thought about whether I’d be happy and competent at doing that work in the context in which it currently happens.
If we think again about that conflation of academe with a specific identity that becomes our “lives”, then it makes more sense that the decision to leave elicits such a personal reaction from other academics: it’s because there’s the assumption that we’re making a judgement or moral pronouncement about other people’s choices. I think this is also why, in many of the blog posts that I’ve read on this topic, the authors emphasize that they haven’t changed as people – they’re still scientists, teachers, scholars, but they may no longer be doing those things within a context recognized as “academic”; therefore they become “unrecognizable” as scholars.
I’d love to be able to believe that academe has the right job for me, right now – because then at least I’d know what to do next, what options are on the table, and where I fit in the grand scheme of professional “value”. In my own case, this isn’t about giving up on some ideal life or role; it’s about bearing in mind, constantly, that the job I may have imagined or desired doesn’t really exist and that I can’t pretend it does. While the academic path may look clear enough, each of us has to consider what sacrifices we’re willing and able to make to travel along it, and those sacrifices are (like the privileges) unevenly distributed and becoming more so over time. That context is not a choice. I also feel a sense of urgency in that there are limits to my time and energy, and I want to do as much as possible within those limits.
At this point, long after having come to these conclusions, what disturbs me most is that it feels like there’s an underlying assumption of “wasted” potential if one doesn’t follow a particular path. But as art has already taught me, I shouldn’t worry about wasting anything because I’ll still have what I always did. I’m still doing photography (and even getting paid for it), and most importantly those photos bring pleasure to me and to others, which isn’t something I ever would have predicted when I was 20. I can still pursue the goal of writing, researching, and in general doing something that matters to me and pays the rent. There will be limitations there, too, but that’s the case with any career, including an academic one with the notoriously intense competition for jobs and the heavy workload for aspiring and junior scholars.
This post, then, isn’t about “quitting”. Doing an advanced degree doesn’t mean you have only one career path and anything else involves “quitting” or worse, “failure”. In my case, you can’t “quit” something so specific when you weren’t really aiming for it in the first place. My decision may have been easier because I’d done it before – I walked away from something that was supposed to be the focus of my life, and I got through it. Maybe this time it was that I knew all along I didn’t start a PhD specifically to become a professor, so I was always open to multiple options. I just didn’t know what they might be (and I’m still finding out).
Returning to the “Quit Lit”, it’s still often the case that there’s a silence in the space where PhDs feel compelled, or make the choice, to leave the well-marked path to professional success in academe. The “Quit Lit” seeks to fill that silence with something other than the assumption of failure, one that’s perpetuated in the meritocratic culture of academe (where incredibly smart, capable people must keep insisting, “I was good enough”). When others question the fact that I don’t see academe as the place for me, I think of that culture and how it shapes people’s perceptions of “other” work. If part of the answer here is to make those “other” paths and choices more visible, possible, and acceptable as “success”, then I’m happy to contribute to that.
As many of you will have heard by now (since I’m slow to comment on these things), recently the Canadian government released another strategy piece regarding ongoing efforts to recruit international students to Canada. There’s been some great commentary on this already, and I can’t add much to what others have said. Much of the criticism I would agree with, given the nature of the issues involved in branding, recruitment, and retention/graduation (though that last piece doesn’t seem to have made it onto the table for discussion). But I have a few points to add about the practical elements of this plan as a communication strategy.
While there is mention of the idea that Canadian students should also study abroad, the focus of this latest plan remains on the recruitment of international students to Canada with the dual goal of a) receiving economic benefits from their tuition and other expenditures, and b) increasing national human capital by having the “best and brightest” students stay in Canada and gain citizenship.
None of this is surprising if you’ve been following along with Canada’s policies on immigration and the past attempts to make forays into organized international student recruitment. A few years back, Canadian institutions and governments started clueing in to what other countries (primarily in the “developed”, Western world) have been aware of for a long time. Since at least 2008 the Government of Canada has been making attempts to organize a market offensive, and there’s been much flinging about of terms like “branding” and “strategy”. But Canada is late to the game as it were, and the playing field is already crowded with competition – which makes “standing out” even more of a challenge.
Setting aside for a moment the issue of treating Canada as a commodity (which I discussed in a past post), something that really stood out for me was a lack of foresight in terms of the communications component. The government plans to “‘brand’ Canada to maximum effect” using “customized marketing strategies”, yet it has allocated only $5 million for this purpose. That might sound generous enough until you read that the “target markets” are… Brazil, China, India (three of the “BRIC” nations), Mexico, North Africa and the Middle East, and Vietnam, to which we will market Canada while maintaining its existing appeal to “France, the UK, Germany, Japan, Korea and the US”. That’s a very diverse set of markets – almost, one could say, not particularly targeted – and each one will require a tailored approach if Canada is to “maximize” the possibility of recruiting students. Additionally, according to research from last year, Canada’s “brand” is mushy in at least three of the proposed markets. That means there will be extra effort required at the outset simply to make Canada “visible” as an option, and “[leveraging] Canada’s bilingual, multicultural identity” may not be enough to get the job done.
Maybe it’s my communications background talking, but to me this doesn’t sound like it’s going to be a low-cost effort. Successful branding and marketing takes a lot of research, especially when you’re expecting to develop “market plans focused on, and tailored to, each priority education market”, that are “customized to resonate with each key market and audience at home and abroad”.
The question I have is this: how will these things happen when Canada already has a clear problem with coordinating internal players to produce a unified, externally-directed effort? In other words, there’s no department or ministry of education to set out the line on this issue, so everything takes more work. Canada isn’t like Australia or New Zealand with their more centralized governance of education; and it also isn’t like the U.S. and the U.K., nations that have significant long-standing prestige to build on. So will Canadian governments (provincial and federal) and institutions be able to pull together for the sake of marketing Canada as a desirable place to study and live? How exactly will we achieve “improved coordination of marketing efforts and objectives among governments and stakeholders”?
There are also going to have to be decisions made about what kind of students Canada wants to recruit, and this will have an effect on marketing strategies. Sure, we all want the most promising, academically-able students; but isn’t there something of a conflict between trying to get the right number of “bums in seats”, and trying also to poach the best students from the international “talent market”? Can these goals be reconciled?
This latest document describes some lofty objectives for Canadian image-building for the purpose of student recruitment, but doesn’t carry through in terms of setting out how those are to be achieved in practical terms. As others have pointed out, this is not a “strategy” because having goals is not the same as having a thoughtful, well-researched and adequately-funded plan to actually achieve those goals. If a strategy of postsecondary internationalization is really “our blueprint to attract talent and prepare our country for the 21st century [and] ensure our future prosperity” then I think we have a lot more work to do.
Last October was Mental Health Awareness Month in Ontario (October 10 is World Mental Health Day), and as part of the province’s mental health and addictions strategy, there was much fanfare over the launch of new initiatives for postsecondary students bolstered by $27 million in funding. This is an important and positive step, because there’s been an increasing demand for the limited support services available on campuses, and the problem has been worsening for at least a decade.
Unfortunately, what students experience is part of a much bigger problem. Universities and colleges, as much as they may try, cannot plug the yawning gap in our system that is an issue far beyond the campus. There are many people in Ontario who need help with mental health issues and may be seeking it actively – but can’t get it. Why? Because the system is reactive. It’s designed to deal with short-term problems and with extremes and crises, rather than to help us prevent them, or help us to live with long-term conditions. This matters because ultimately, the services on campuses have to mesh with off-campus services in or connected to the healthcare system.
This is a system in which, without a formal diagnosis, you cannot gain access to accommodations in postsecondary institutions (or elsewhere). Yet to obtain this diagnosis, you have to find the right way in to the system and obtain the right help once you get there. The process can take anywhere from a few months to a year (or longer), depending on how much you know and whether you have an advocate.
For example, an assessment for learning disabilities costs $1,500 to $3,000. Some universities have assessment services, but these refer students to external testing (some of which may be covered, depending on circumstances). You still have to be a registered student to access these, or to have costs partially covered through student loan eligibility; otherwise, you or your parents will be paying. If you’ve had to de-register because of your problems, then you’re out of luck. The same goes for therapy: talk clearly isn’t cheap; in fact, it costs $80 or more per hour unless you can use university counselling services – where there is a limit on the number of sessions each student can access. All this is based on the assumption that problems will be short-term and can be “fixed”; wait times for long-term services are often very lengthy.
Of course if you have the resources available, you can simply buy what you need. You can see a therapist of your choice, without waiting months to be told whether you are eligible. You, or your family, can pay for expensive assessments so that problems can be uncovered and named, and help can be obtained. The more fortunate students don’t need most of the university’s services and also don’t have to rely on the government, because they have other forms of support.
Clearly it’s still the disadvantaged students – and less-privileged people in general – who are falling through the cracks in this system. We need to ask, who receives the necessary supports and who does not? Who can step forward and say “I think I have a problem”, without fear of repercussions? Who has the resilience and stamina to pursue a solution that can take so long, and can be so draining, both to discover and to put into practice?
The current system continues to privilege not just people with existing resources but also those who are secure enough to speak about the unmentionable, in spite of the lack of awareness that even those who suffer from such problems may experience themselves. For example, the Council of Ontario Universities held a competition to encourage students to come up with the best mental health “social media strategy”. But the best strategy would be a collective one, informed by (and actively soliciting) the input of those who cannot or will not speak in the public eye. The best process would actively seek out criticism from those most affected, not just the more easily marketable solutions.
University initiatives that gain the most positive media attention often conflate short-term, seasonal stress relating to events like exams, with long-term problems like clinical depression and anxiety disorders (as well as focusing on undergraduate students). Yet it’s the exam period “puppy rooms” that make the news, not the underlying issues that are so much harder to address and resolve, like wait times for “assessments” at university counselling clinics, the lack of privacy many students feel when they go there, the difficulty of having to describe one’s situation repeatedly in the process of trying to find help, and the exhaustion produced by having to negotiate (with) a bureaucracy while simultaneously dealing with the effects of one’s condition.
Giving attention to answers that work well in a PR pitch means depoliticizing our context, and this is a serious mistake. It makes it too easy to forget about all those gaps in the system, and also about factors like poverty, abuse, and discrimination based on race, gender, disability, sexuality, and nationality; it makes it easier to individualize both the problems and the solutions, reducing the answers to “lifestyle choices”. It means we downplay the context in which students are living their lives, and how they bring this to the university when they step onto the campus. That context is part of what enriches teaching and learning, but it also has to be addressed in terms of the problems students experience both on- and off-campus, and how we can help them. Universities alone can’t fix these systemic problems, but perhaps they can bring attention to them, and that would be a great start.
In the past few weeks some interesting and contentious threads of discussion have been unwinding on “Academic Twitter”, in particular one that’s focused on the current conditions of the academic job market in the United States. It seems the debate was kicked off by a post from Rebecca Schuman at her blog Pan Kisses Kafka, criticizing a UC Riverside department for the practice of sending out interview requests only five days before the interviews would take place at the annual MLA conference. This provoked a response from Claire Potter at her blog Tenured Radical, in which she insisted that there had to be reasonable explanations for the process. Potter also critiqued the tone of Schuman’s post, describing it as a “hissy fit”. Multiple follow-up posts ensued.
After the exchange between Schuman and Potter, the flames were further fanned by Karen Kelsky’s response at The Professor Is In, wherein she made a comparison between the denial of privilege by the tenured and the denial of racism by white people. The comparison is inappropriate, but Kelsky’s analysis of the advantages of the tenured hit home, and it set off another intense discussion about the responsibilities of tenured faculty in a context where non-tenured peers/colleagues are working in exploitative conditions.
I think there have been a couple of things happening in this debate. One of them is the underlying issue itself – the job market and hiring practices and, at root, the culture of academe and its professionalization process. This is tied closely to the nature of the academic workforce in the United States, where over two-thirds of faculty positions are now temporary and/or contingent (hence the “New Faculty Majority”); tenure is becoming exceptional. But also emerging from this heated exchange about academic working conditions is the question of how we talk with each other, and the issue of the “policing” of people’s participation in the name of civility or professionalism, as illustrated in Potter’s response to Schuman.
The MLA-related example provides a useful, if extreme, illustration of how the “market” operates for many aspiring academics in the U.S. right now. There have been plenty of horror stories about the process of job-seeking and the numbers of applicants for each position; based on those narratives we get a sense of what is required to land the right kind of job. For example, from the message Schuman shared it’s apparent that the “best” applicant will be “flexible” and will have prioritized their personal resources appropriately (i.e. already planned to attend the MLA). This person is able to overcome or set aside other life circumstances and plans in the name of being able to obtain the desired position. The underlying assumption seems to be that if the candidate doesn’t or can’t “choose” the right way, they weren’t worthy of the job to begin with.
Similar assumptions were already in place long before the most recent “crisis”. The market is an uneven terrain, and candidates have always experienced those inequalities in different ways. The current conditions have made things worse for most job-seekers, but more so for some than others. Tressie McMillan Cottom rightly points out that race and gender are missing from Kelsky’s structural analysis (in spite of her references to racism). We could also question how the “market” has been working for those with disabilities, or with children or other dependents, or for those who are queer, trans, working class, indigenous. When you increase the number of people seeking work and decrease the number of available jobs, life gets even more difficult for the already-marginalized.
If this is a crisis, then, a crisis for whom? Does the problem only become pressing when it reaches even those who previously would have been insulated from existing inequities? Who needs to be affected, and who needs to notice it and raise an alarm, before hiring practices and job market dynamics and the self-validating culture of academe itself become issues worth discussing?
Then there’s the matter of the discussion. The difficulty of the job market (and the stratification of its participants) is too often framed as a “new” thing. Rather than assuming this is something novel, I’d suggest that there was never an adequate level of discussion in the first place. The current quandary has been building for over 30 years, for my entire lifetime – to the point where we even have a whole field of inquiry dedicated to critiquing the changes happening in (predominantly U.S.) academe. Even in Canada, where the situation is less polarized, there was research available over 20 years ago; and there have been similar issues in Australia, New Zealand and the UK.
What’s perhaps “new” about what we see now is the increasing extremity of the situation – the intensification and expansion of effects – and also that there are more channels of communication through which those affected can share similar stories, raising the level of visibility. The US is also the world’s biggest academic “market”, so more people are paying attention. But framing this as a sudden crisis of the past 5 to 10 years is a mistake, and masks the slow sedimentation of decisions and events that brought us to this point. If we really want to know “how this happened” then we need to understand that this is a long-term inheritance, not just something that occurred because of a recession or recent PhD over-enrollment or lack of Baby Boomer retirements.
Aside from all this, I think relatively few people in precarious positions in the academic job market are willing and/or able to speak up about their experiences. Who’s being left out of this discussion? Who is OK with venting on Twitter or in a blog, and who’s excluded because they can’t risk participating? Which words are being echoed through the mediasphere and which are ignored? We need to find ways of including those most affected, to avoid the trap of allowing a few voices to define the issues and to claim control over the ways that others are allowed to discuss them. That means having respect for the experiences of others and recognizing when our beliefs are built on self-serving assumptions.
That’s hard when so many people are justifiably fed-up and angry and when the issues involved are central to the profession, thus central to the identities of many people who work in it (and who want to work in it). Not only that but everyone wants somewhere to place the blame. Predictable isn’t the same as inevitable – why didn’t someone see what would happen, why did no-one intervene, why was there no organized program of resistance? Worse, we can all see how much work would have to be done to change the course we’re on now, and no-one is sure exactly what that work should be or who needs to do it.
As for early-career academics, they’re not even sure if there’s a place for them in the university anymore or, if so, what it will look like. What this adds up to is a special kind of chaos that exists alongside, intertwined with, the still-stable roots and structures of academe; and it takes a lot of privilege to be able to close one’s eyes to that. This is the context that informs the discussion about the power and responsibility of tenured faculty in the United States right now, even as the traditional benefits of tenure are being eroded in various ways.
I think there are ways to talk about this without stalemate, or without hurting each other. This week I started seeing tweets about the MLA Subconference, which was organized by graduate students. Maybe this is an example of how early-career academics can come together and make sense of what they’re experiencing, taking first steps towards a new path in the profession. One thing’s for sure: in academe everyone’s in this together, tenured or not, and the past can’t be changed. But how we recognize it, respond to it, and think about the future – together or apart – that’s up to us.
Last week with the release of the OECD’s PISA results, there was a flurry of media coverage of Canada’s (and other nations’) performance in this assessment, which I think was best summed up by Audrey Watters with the headline, “PISA Scores Confirm that [Fill in the Blank with Education Narrative of Your Choice]”. I didn’t have time to read most of this coverage, but one article I did catch was on the Globe and Mail website and was emblazoned with the headline “If you’re on an education ‘journey’, you’ve lost the race”. The author brings attention to this metaphor, but sadly doesn’t seem to provide any useful analysis of its implications – other than suggesting that the “losing” countries employ metaphors such as “journey” in order to justify their poor performance.
It’s disappointing to see that not only is the “race” metaphor the one that dominates (think of Obama’s “Race to the Top”), but that other metaphors are being dismissed as unhelpful or even as a means of hiding poor performance – presumably because they don’t invoke this notion of competition and “getting ahead” of others. Yet metaphors aren’t simply ways of describing reality; they “highlight and hide” aspects of what they describe, bringing attention to some aspects of an object while downplaying or obscuring other angles. A race cannot be a journey; in the one case we’re in a hurry and we have specific, defined objectives. In the other case we allow for discovery and serendipity, changes in direction, and the idiosyncrasies of individual students.
To state the obvious, competitions have winners and losers. My opinion, which I’m sure makes me an enemy of “excellence”, is that education shouldn’t have winners and losers in this way. But of course it does. In that sense it merely reflects the way our larger society, and particularly its economic side, is structured as a kind of zero-sum game. This isn’t a new point at all; it’s informed critiques of education systems for decades. But it remains the case that we’ve structured education in ways that reflect both a hierarchical and a competitive economy of rewards, from the grading system that allows some students A’s and others C’s, to the fact that such rewards are used as tokens in the bigger game of educational success, “ranking” students by their ability to perform effectively on the terms set out by educational institutions.
This is why every time I hear talk of “gamification” in education, I have to roll my eyes – as if education isn’t already enough of a game, one where the rules clearly favour some students over others. Our schooling systems still allow the economically and culturally privileged to use their capital to purchase more “merit”, from tutoring to special activities and opportunities that help some students to build their profiles further for entrance into prestigious postsecondary institutions. In this sense, true competition is a myth in education just as in economics, since advantages of all kinds can be bought and sold if the resources are available. There is no “level playing field” here, whether between nations, between schools, or between students.
Not only that, but competition itself has effects on behaviours. It can breed dishonesty and encourage superficial and technocratic solutions to deeper problems, the rejigging of numbers and rules to produce the “right” results, and the fiddling of facts. It can fuel animosity between agents engaged in similar tasks, and discourage the collaborative and cooperative actions and attitudes that we need so badly and will need more in the future, if we’re to solve the pressing problems we face nationally and globally. These forms of assessment, ranking, comparison and competition also tend to draw the focus to quantitative factors, reducing education to numbers of graduates per capita, to scores on standardized tests, to measurable outcomes that can be used to show that a nation isn’t “losing” on some imaginary global playing field.
So now it should hardly be a surprise that alongside the already entrenched international rankings of universities – a merry-go-round that every year favours the same class of institutions – we have the OECD not merely providing information about the world’s education systems but essentially ranking them in a global “achievement” contest. Ironically, even “equity in education” becomes an element in this contest.
In a context where competition is assumed to be the means of producing better results and further “efficiencies”, information gets put to a particular use. It underpins the comparisons and rankings that are not neutral tools for the self-improvement of institutions and nations, but ways of showing who’s on top in the international hierarchy. With university rankings in particular, results may be used as fodder in policy debates. When Canada’s U15 institutions lobby for changes to funding that would channel more resources to them in the name of targeted excellence, rankings and comparisons are invoked as proof – proof that some universities are better than others, and also that they would be able to “compete” even more effectively, to be really “world class”, if only money were not being squandered on lesser institutions.
Another example of this is the constant complaint that Canada’s proportion of PhDs per capita is too low, once again when compared to other (OECD) nations. This is essentially a meaningless comparison unless we delve into the actual (complex) reasons why increased numbers of PhDs are expected to increase the amount of “innovation” and thereby, the economic performance of the nation. That correlation doesn’t tell us a thing about causation, but it’s the line that’s parroted again and again by those pushing for government support for further increases to PhD enrolment. Never mind that it’s apparently the PhDs who leave academic institutions (rather than trying to stay and become academics) who are assumed to drive this kind of development, and that many or most PhD students still don’t receive guidance as to how they can turn their education towards a non-academic career of this kind.
The measurement of educational gains is not the same as striving towards the goal of having an education system that lifts up the weakest and least privileged among us, giving them chances they might not have had otherwise. Yes, I’m an idealist; I believe there should be no “losers” in education, either locally or globally. It should be for the benefit of all, and the goal of furthering achievement should be about helping students, not playing into a global game of who can beat whom in the education rat race. This means rejecting the notion of competition as the most-favoured mechanism of governance in a field where ostensibly, we’re trying to increase equality for all (unless of course it’s just “equal chances to compete”…). Yes, this is a major contradiction and I think it’s one that needs more attention. But for the moment, I think we need to remember that striving to do better is not the same as striving to be better than others.
Lately I’ve been thinking a lot about the job market, poverty, and the assumptions we make when we talk about people’s choices, partly because recently I’ve seen two excellent and provocative posts about this. The first is from Tressie McMillan Cottom on “The logic of stupid poor people”, a post that discusses how expensive status symbols (like a $2,500 handbag) act as powerful signifiers, and how in general there is a complex performance that must be mastered in order for class “mobility” to happen. The second post, “Why I Make Terrible Decisions, or, poverty thoughts” is by Linda Walther Tirado who writes about the (non-) choices faced by poor people, and how they are criticized for what they choose.
I think a theme in these posts is how the scope of (perceived and actual) possibility diminishes radically as your finances worsen – and this affects not only the decisions you make but also your whole view of the world and what it has to offer.
For example, think about the search for adequate employment and how this is experienced by people with low and/or unstable incomes. One thing it highlights is how space changes when you’re poor: no money for a driver’s license, insurance, or a car? Then you can’t take jobs that are far from public transit or that demand the applicant has their own vehicle. Can’t afford to pay for the bus anymore? Then your options diminish further. Time also changes when you’re broke; you get less done because you spend a lot of time just waiting, or having to get things done the hard way because the easy way costs more. Waiting for the bus and hoping you can get to an interview on time. Waiting in line-ups. Waiting for people to get back to you about money or jobs. Waiting for a cheque to arrive so you can pay the bills. Waiting.
What about other resources you need in order to find work? No money to buy a nice suit, appropriate shoes, or makeup and a good haircut? Then you’ll have a hard time applying for office jobs or even much of the service work available, since appropriate self-presentation is crucial (see Tressie’s post for a great description of how this works). These days you need the Internet for a good job search, but what if you can’t get access? What if you can’t afford a phone, making it difficult for potential employers to contact you? What if you’re not exactly sure what will be the next thing you can eat for dinner, and that’s a more pressing concern than drafting the perfect resumé? Priorities change when money runs out, and choices change as well.
Most importantly, you need resources to gain resources. This is either a virtuous circle or a vicious one, depending on where you are on the income spectrum. As Linda Walther Tirado writes in her post: “We know that the very act of being poor guarantees that we will never not be poor.” That’s knowledge that comes from life experience, not from the classroom.
Add to this the weighty sediment of shame that builds up, because in this Western, capitalist society, we tend to assess people’s intelligence and their moral capacity by their financial circumstances. It’s assumed that people are “stupid”, “irresponsible”, and “frivolous” when they lack money. At best they are “unfortunate”. But the assumption is that there’s something wrong with them, not with their context, not with the entire system in which they’re forced to participate. There’s always something “they” could have done to correct their own course; there’s always a different decision they could have made, some cost they could have cut. Never mind that others get to make mistakes and fall back repeatedly, unscathed, on the soft cushion of privilege. To open up about being poor (or about trouble with finances in general) is to expose oneself to scrutiny and judgement of one’s decisions and one’s character.
And so “charity” comes to feel like a dirty word to those who may be on the receiving end of it. Charity means you couldn’t make it on your own – never mind that most people who “make it” do not do so without often-invisible forms of help and support. Charity means relying on the goodwill of strangers, in a society where independence and self-sufficiency are both over-valued and mostly illusory.
You may wonder why I’m writing about work and money and opportunities, instead of education. I think the debate about being poor is intimately linked to the arguments we make about higher education and who has access to it, and the differing “outcomes” of that education. All the things I just described are things that some students may be experiencing or may have experienced in the past. They’re all factors that affect people’s perceptions of the value of things, including education – and the risks we’re told we have to take to access that value. When we talk about student financial assistance, “debt aversion”, the job market, “entrepreneurialism”, and most of all “risk”, we are making assumptions not just about income and privilege but also about mindset.
Not only that, but of course there’s an intimate link between money and mental health issues, and it’s a link that goes both ways. Issues like clinical depression can lead to poverty, but poverty can cause these issues, too. Long-term financial instability wears you down; it reduces the sense that you can gain any control over your own life. Those students who’ve arrived at university from that kind of background are already dealing with a specific kind of long-term exhaustion. They are more vulnerable to being overwhelmed, and possibly less likely to feel safe asking for help. It’s not just the stress of education they’re dealing with – it’s the cumulative stress of living with worry about the lack of things, and the potential lack of things, including lack of possibilities. If they have low expectations about the “returns” on their education “investment”, then this is a big part of it; if they fear for the future, then who can blame them?
There’s so much hue and cry about the diminishing opportunities for those who were previously part of the middle class – as if a problem only matters when it happens to folks who had better things in mind. But for some people this has always been their mode of living, their understanding of the world. When we hold out the promise of a better life as the result of higher education, not everyone can believe in that promise. When pundits bemoan the “high expectations” of an entire generation, they’re forgetting that not everyone had the expectation of magical prosperity either from education or anything else. If we took loans, it wasn’t because we truly believed we could repay them; it was because we saw no other option, because we were told our chances of survival were even lower without the coveted Bachelor’s degree. It was because not having a degree was presented as a threat to our future employability, and the fear of debt was overshadowed by the fear of other forms of uncertainty. That doesn’t feel like a “choice” – it feels like coercion, and it’s something we need to start thinking about when we engage in debates about policy and accessibility.
A recent article on Slate’s website came to my attention only because so many academics in my personal Twittersphere were reacting negatively to it. The article caused outrage with its discussion of EdX founder Anant Agarwal’s suggestion that professors who create and present material for (video-based) MOOCs could be replaced by Hollywood stars, who would lure more students to enrol in and complete the courses. This is presented as the logical solution to the problem of needing more camera-savvy and student-friendly presenters, since not all profs are up to the task. Further commentary from Udacity founder Sebastian Thrun is supportive of Agarwal’s approach. I won’t link to the Slate piece here, but you can Google “The new rock-star professor” (if you must).
The article captures our attention by positioning Agarwal’s and Thrun’s comments within a particular frame of speculation, i.e. that professors could be “replaced” by stars who appeal to the “masses” in a system that un-bundles the work of teaching into user interface design, content production, “delivery”, and assessment. It’s not a neutral framing, because it puts forth a vision of education that subjugates the expertise of faculty (and of educators in general) to the logic of markets and to the “big data” that are assumed to generate more important pedagogical insights than experienced professionals can. It also conflates learning with “content delivery”, espousing interaction and personalisation while in practice apparently relying on what Freire called the “banking model” of education.
But to turn back to Slate, they’re certainly not the only publication to realise that anger generates interest, that there are ways of making academics angry, and that this anger leads to pageviews (maybe we should call them “rage-views”). Slate’s a bit late in catching on to a game that’s been played successfully before by the Chronicle of Higher Education, the Washington Post, The Economist, Forbes, the New York Times, and others. Popular articles include those that take on the work-life balance and/or salary of the “average” professor, or that reek of unexamined privilege and spark strong feminist or anti-racist critiques, or of course those that pronounce on the future of higher education, which is conveniently subject to apocalyptic speculation that can continue ad nauseam (because the future is always…the future, right? Fair game!).
This time around the article was written by Jeff R. Young, who’s also the tech editor for the Chronicle. In fact part of the piece is a modified excerpt from his e-book, the title of which contains the words “MOOC”, “high-tech”, and “disruption”. Based on its Amazon.com description, the book looks more like a cram guide for busy senior administrators, to whom I’d recommend instead Audrey Watters’ blog Hack Education. In the meantime, academic rage at Young’s article has probably brought a good deal of attention to his book (I couldn’t tell you if it’s boosted sales).
Taking apart articles like the one Young published in Slate is practically a bore at this point. It so clearly sets out to prod at academic sore spots, taking consumerist logic to extremes while playing on a major theme from recent higher ed reporting (MOOCs can hardly be called “news” by now). Even better is that authors who write on this topic don’t need to make up their own extreme speculations, since the quotes they’re using are taken directly from ed-tech celebs like Agarwal and Thrun, whose popularity in turn is strengthened by their edgy proclamations. The excerpt doesn’t address whether stars like Matt Damon would be willing to work for free in the name of a good (educational) cause. But the “logic” reflected both in Agarwal’s comments and in the way they’re framed by Young could be said to assume the insecurity of “Ivory Tower” academics facing impending obsolescence, while playing up the often-self-fulfilling predictions made by ed-tech upstarts – “Casting Damon in a MOOC is just an idea, for now” (emphasis added).
I was thinking of all this during the panel on higher education and the media that I attended last Friday afternoon. I thought the facilitator (Anne McNeilly) and the three journalists on the panel (Léo Charbonneau, Scott Jaschik, and Simone Chiose) did a great job of explaining the context in which coverage of education issues is produced, and how this connects to the kinds of critiques they hear about that coverage. Not only are universities (for example) complex institutions with many facets that aren’t entirely visible even to those who participate in them every day; journalists must also develop ways of “finding” the stories and making them relatable to a much larger audience than those assumed by most individuals, and they must do this with resources that are limited and not necessarily predictable.
The economic logic of the media, particularly those sources that operate primarily in online territory, tends to be one of attention. Linkbait is linkbait, and even the most offensive article can bring attention and start a “debate” that draws people back to the site repeatedly either through comments, or through a subsequent series of “response” articles. The idea is to gain readers, whereas for some academics, it seems the general goal is the opposite: to shave one’s audience down to the narrowest slice of an expert readership. While academics engaging in this kind of practice could be (and have been) accused of a form of professional solipsism, on the other hand media priorities in some cases encourage particular forms of gleefully narcissistic provocation, which we see in various mainstream publications (here’s an example; and its antidote). These are all dynamics that must be taken into consideration by those working in either higher education or journalism (or both) if coverage of postsecondary issues is going to work for “both sides”, i.e. for both educational and media institutions, as well as for their publics.
For some reason, in the last few months I’ve seen a number of articles and blog posts about the nature of “public intellectuals” – how to define the term, to whom it applies, and of course, the long-running series of “critiques” that discuss the failure of public intellectuals and what contributes to it. Maybe I’m just more attuned to the topic because I worked on the Public Intellectuals Project for a year. Or maybe it’s the fact that, uncomfortably, I started to hear the term being applied to me – and I had to ask myself why I wasn’t exactly happy about it.
There are plenty of people who have spent a lot of time thinking and writing about what it means to be an “intellectual”, and/or a “public” one. I’m not one of those people, so (ironically) I can’t claim that expertise; for me the issue has come up only through practice, not theory. But the use (and abuse) of the term “public intellectual” reveals much about the attitudes people hold towards it and whatever concept it represents to them. I’ve seen it placed in the same category, or conflated with, terms like “pundit”, “guru”, “talking head”, and “celebrity”. These refer not only to a type of public figure, but to one who may be viewed as a self-proclaimed expert. Often these references mock the assumed self-importance that must surely accompany not only the willingness to be “public”, but also to take on “intellectual” as a public persona.
According to their critics (for an example, try this article by Omer Aziz), public intellectuals have supposedly “failed” or “died out”, and there are two popular arguments about why this has happened. The first, most common argument – set out by Russell Jacoby and re-articulated repeatedly since then – blames the university and the professionalization of academe. Increased specialization is the culprit here, and the academic system that rewards those who tailor their work to others in the field for purposes of professional advancement, rather than to audiences beyond it. In this equation, “public” tends to mean “non-specialist audiences” or, more broadly, “outside the university”. In truth, academic culture in general implicitly encourages a low regard for those who work in the “public” eye, which is connected to the assumption that communicating with non-specialist audiences means “dumbing down” one’s message. This is part of why, in spite of the push for more “engagement” with publics beyond academe, these activities are not professionally recognised in the same way as more traditional activities like peer-reviewed publications.
The second reason for the “death” of public intellectuals is of course the Internet, which has polluted the pure pool of intellect with the corruption of superficial self-promotion, and…wait a second, I thought being “public” meant we were “impure” anyway? And since when was academe free of self-promotion? As you can see, there are plenty of contradictions here. Yet the Internet is feared and loathed as the catalyst that allows almost any person to speak out on any topic (though whether and how they are “heard” is a whole other issue). It offers us no traditional filtration system for determining who is a “real” intellectual and who isn’t, who has the “right” to speak and who doesn’t. Never mind that this also provides the opportunity – which some must see as a threat – for new voices to be heard, those of folks who’ve traditionally been shut out of “public intellectualism” and who may now have the opportunity of showing why they have every right to take on that role. Call me naive, but given the excellent articles I see appearing daily not just in mainstream publications but also on blogs, the picture looks far more complex than “the Internet corrupts intellectual life”. I think what we could and should be asking is not “who has the right” to be called a public intellectual in the age of the Internet, but which ideas (and individuals) seem to gain significant currency in this new context – and why.
I think underlying much of the criticism is a projection of desire and expectation, the hope that such a public figure will take on the tasks, and risks, that we ourselves cannot (or will not). At the same time there is skepticism and resentment, that the attention is directed at a particular individual – do they “deserve” the weight given to their words? How do we know? Why should they be the ones to whom we listen? What should they be allowed to say – should they stick to their areas of expertise, or provide commentary on other issues as well?
Generally, the articles that bemoan the “death” or “failure” of public intellectuals also include the author’s chosen examples of those who have succeeded. Those examples are telling – as is the fact that the “public intellectual” must be anointed by others, never self-described. Indeed, in some of the articles I have read, there is a sort of pining for a lost world of “real” intellectual selflessness, for the “generations of writers and thinkers for whom the demands of either the university or mass media were a minor concern, if even that”. How interesting that this era seems to coincide with the one wherein such activity tended to be institutionally limited to a homogenous intellectual elite of white males, in spite of so many others who have made significant contributions both then and since.
And yet I think all that I’ve said above offers an explanation of why many who would seem to belong in this category, and who have indeed been consigned to it, seem to reject the term or claim it doesn’t apply to them. It’s because once you’ve been labelled, there’s no winning: you can’t self-identify as a “public intellectual”, or you’re automatically either shot down, accused of “failure” to achieve unwieldy political goals, or simply assumed unworthy of the title. But if someone else describes you as such, can you accept the description without being seen as a “tool”? I wonder if this is why, as Andrew Potter discusses, there were some key figures (such as David Suzuki) who weren’t willing to include themselves in a recent book on Canadian public intellectuals. But in his review Potter makes a good point, which is that there’s no going back to the “golden age” so often invoked by critics – and that this isn’t a bad thing.
In recent months we have seen many controversial issues arising on university campuses and in other academic contexts in Canada and around the world, which have generated a good deal of media coverage. These are issues that in some cases connect the university, academics, and students to actions, behaviours, and attitudes that have been seen as shocking and/or surprising.
For example, take sexism. In September we saw incidents where, on multiple campuses (Memorial, Western, UBC, Saint Mary’s), frosh week activities were marred by expressions of misogyny and rape culture. There have also been sexual assaults on campuses, including York’s string of attacks and the most recent incidents at UBC. Meanwhile, two professors were charged recently with sex-related crimes – including creating child pornography, and luring young women into sexual situations.
Lately in the United States, we’ve also seen gender-based harassment in the science blogging community, where biologist Danielle N. Lee was called an “urban whore” for refusing to write a blog post for free. Even as the science community reacted in outrage, further revelations about a prominent science blog editor led to his resignation. The comments directed at Lee also revealed deeply ingrained racist attitudes and serve as a reminder of the intersectional experience of abuse and harassment. When we hear questions about why there are “still” so few women in prominent positions in science, and even fewer women of colour, we don’t have to look far for the answers.
For another, more historical example relating to race and racism, there’s also fascinating research that’s been coming out recently about the relationships of U.S. universities to the slave trade. It turns out – surprise! – that universities have long been tied to the economic context in which they operate, and in the era of slavery this was no different. From this, we should also be reminded of the historical role of academics in constructing and legitimizing scientific racism. The IQ test itself is part of the legacy of attempts to “prove” differences in intelligence between people of different races.
It’s not just gender and race that are factors in this equation. Other forms of discrimination are also rampant – against people with disabilities, and LGBTQ folks, and those dealing with mental health issues, and of course there’s fatphobia, which is so rarely discussed that you could almost believe it doesn’t exist (with this past summer providing an exceptional case). It’s just a shame we only notice microaggressions when they turn into macroaggressions.
Then there’s the related problem of workplace harassment and abuse. It turns out that academics can behave badly in the workplace too, and there’s a small and growing body of research showing how it happens. We have at least one recent case of this occurring in a Canadian university (McMaster), but this example is quite exceptional in the level of public attention it has gained.
Lastly, I refer you to the global report on corruption in universities, which shows the many ways that campuses are home to embezzling, corporate influence on research, sketchy student recruitment practices, and more. There are also forms of research fraud, such as the falsification of results – or the gaming of the citation system in order to boost rankings.
Of course, none of this should be a surprise. We have high expectations of our universities as institutions of learning and knowledge, but all these examples merely serve to remind us that there is nothing inherently moral or good about the university and its task, and in particular, that the context of knowledge creation is in no way separate from the social world in which such knowledge will come to circulate and be put to use. The assumption that the university is somehow outside of the problems we see in the rest of society is part of what underlies the shock people express when abuses are uncovered, when sexism is still rampant, and when corruption is still endemic.
This is why the description of academe as somehow not the “real world” is so misguided and potentially destructive. The “ivory tower” metaphor is inappropriate in that it invokes an idea of academe as not only apart from the world but also above it. This is, I believe, directly related to the need to prove the “objectivity” of knowledge and thus its authority. By this logic, it begins to make sense that Danielle Lee’s blog post describing the racism and sexism she experienced was removed by editors at Scientific American, who argued that it was not about “discovering science”. But what could be more crucial to “discovery” and to knowledge than the factors that shape who is allowed to discover things, and how?
In a context where universities are relying more on private funds, where they are encouraged to compete with each other for resources and students, and where being “the best” not just nationally but internationally is an imperative, these issues may be downplayed or treated as momentary “crises” to be managed rather than long-term problems based on entrenched systemic patterns. That latter angle doesn’t make for good public relations fodder.
But we do have an opportunity here, an opportunity to shine a light on these things in ways that may not happen so easily in other institutions. Does the university have a dirty past, and a grubby present? Yes. But that isn’t all it has, and it doesn’t mean we should lose hope in the university as an institution. It just means we need to realise there’s no real separation between what goes on in the ivied halls, and what’s happening “outside” in society at large. As much as we dislike having to admit it, the university is of the world – not above it; but it’s the task of a knowledge institution that is special, meaning that perhaps we do have the tools to address these problems in ways that other institutions can’t. In order to do this, though, the university must address itself and have knowledge of itself – as painful as that might be.