Posts by Melonie Fullick
Lately I’ve been thinking a lot about the job market, poverty, and the assumptions we make when we talk about people’s choices, prompted in part by two excellent and provocative posts I’ve seen recently. The first is from Tressie McMillan Cottom on “The logic of stupid poor people”, a post that discusses how expensive status symbols (like a $2,500 handbag) act as powerful signifiers, and how in general there is a complex performance that must be mastered in order for class “mobility” to happen. The second post, “Why I Make Terrible Decisions, or, poverty thoughts”, is by Linda Walther Tirado, who writes about the (non-)choices faced by poor people, and how they are criticized for what they choose.
I think a theme in these posts is how the scope of (perceived and actual) possibility diminishes radically as your finances worsen – and this affects not only the decisions you make but also your whole view of the world and what it has to offer.
For example, think about the search for adequate employment and how this is experienced by people with low and/or unstable incomes. One thing it highlights is how space changes when you’re poor: no money for a driver’s licence, insurance, or a car? Then you can’t take jobs that are far from public transit, or that demand that applicants have their own vehicle. Can’t afford to pay for the bus anymore? Then your options diminish further. Time also changes when you’re broke; you get less done, because you spend so much of it just waiting, or doing things the hard way because the easy way costs more. Waiting for the bus and hoping you can get to an interview on time. Waiting in line-ups. Waiting for people to get back to you about money or jobs. Waiting for a cheque to arrive so you can pay the bills. Waiting.
What about other resources you need in order to find work? No money to buy a nice suit, appropriate shoes, or makeup and a good haircut? Then you’ll have a hard time applying for office jobs, or even for much of the service work available, since appropriate self-presentation is crucial (see Tressie’s post for a great description of how this works). These days you need the Internet for a good job search, but what if you can’t get access? What if you can’t afford a phone, making it difficult for potential employers to contact you? What if you’re not sure where your next dinner is coming from, and that’s a more pressing concern than drafting the perfect résumé? Priorities change when money runs out, and choices change as well.
Most importantly, you need resources to gain resources. This is either a virtuous circle or a vicious one, depending on where you are on the income spectrum. As Linda Walther Tirado writes in her post: “We know that the very act of being poor guarantees that we will never not be poor.” That’s knowledge that comes from life experience, not from the classroom.
Add to this the weighty sediment of shame that builds up, because in this Western, capitalist society, we tend to assess people’s intelligence and their moral capacity by their financial circumstances. It’s assumed that people are “stupid”, “irresponsible”, and “frivolous” when they lack money. At best they are “unfortunate”. But the assumption is that there’s something wrong with them, not with their context, not with the entire system in which they’re forced to participate. There’s always something “they” could have done to correct their own course; there’s always a different decision they could have made, some cost they could have cut. Never mind that others get to make mistakes and fall back repeatedly, unscathed, on the soft cushion of privilege. To open up about being poor (or about trouble with finances in general) is to expose oneself to scrutiny and judgement of one’s decisions and one’s character.
And so “charity” comes to feel like a dirty word to those who may be on the receiving end of it. Charity means you couldn’t make it on your own – never mind that most people who “make it” do not do so without often-invisible forms of help and support. Charity means relying on the goodwill of strangers, in a society where independence and self-sufficiency are both over-valued and mostly illusory.
You may wonder why I’m writing about work and money and opportunities, instead of education. I think the debate about being poor is intimately linked to the arguments we make about higher education and who has access to it, and the differing “outcomes” of that education. All the things I just described are things that some students may be experiencing or may have experienced in the past. They’re all factors that affect people’s perceptions of the value of things, including education – and the risks we’re told we have to take to access that value. When we talk about student financial assistance, “debt aversion”, the job market, “entrepreneurialism”, and most of all “risk”, we are making assumptions not just about income and privilege but also about mindset.
Not only that, but of course there’s an intimate link between money and mental health issues, and it’s a link that goes both ways. Issues like clinical depression can lead to poverty, but poverty can cause these issues, too. Long-term financial instability wears you down; it reduces the sense that you can gain any control over your own life. Students who’ve arrived at university from that kind of background are already dealing with a specific kind of long-term exhaustion. They are more vulnerable to being overwhelmed, and possibly less likely to feel safe asking for help. It’s not just the stress of education they’re dealing with – it’s the cumulative stress of living with worry about the lack of things, and the potential lack of things, including the lack of possibilities. If they have low expectations about the “returns” on their education “investment”, then this is a big part of it; if they fear for the future, then who can blame them?
There’s so much hue and cry about the diminishing opportunities for those who were previously part of the middle class – as if a problem only matters when it happens to folks who had better things in mind. But for some people this has always been their mode of living, their understanding of the world. When we hold out the promise of a better life as the result of higher education, not everyone can believe in that promise. When pundits bemoan the “high expectations” of an entire generation, they’re forgetting that not everyone had the expectation of magical prosperity, from education or from anything else. If we took loans, it wasn’t because we truly believed we could repay them; it was because we saw no other option, because we were told our chances of survival were even lower without the coveted Bachelor’s degree. It was because not having a degree was presented as a threat to our future employability, and the fear of debt was overshadowed by the fear of other forms of uncertainty. That doesn’t feel like a “choice” – it feels like coercion, and it’s something we need to start thinking about when we engage in debates about policy and accessibility.
A recent article on Slate’s website came to my attention only because so many academics in my personal Twittersphere were reacting negatively to it. The article caused outrage with its discussion of edX founder Anant Agarwal’s suggestion that professors who create and present material for (video-based) MOOCs could be replaced by Hollywood stars, who would lure more students to enrol in and complete the courses. This is presented as the logical solution to the problem of needing more camera-savvy and student-friendly presenters, since not all profs are up to the task. Further commentary from Udacity founder Sebastian Thrun is supportive of Agarwal’s approach. I won’t link to the Slate piece here, but you can Google “The new rock-star professor” (if you must).
The article captures our attention by positioning Agarwal’s and Thrun’s comments within a particular frame of speculation, i.e. that professors could be “replaced” by stars who appeal to the “masses” in a system that un-bundles the work of teaching into user interface design, content production, “delivery”, and assessment. It’s not a neutral framing, because it puts forth a vision of education that subjugates the expertise of faculty (and of educators in general) to the logic of markets and to the “big data” that are assumed to generate more important pedagogical insights than experienced professionals can. It also conflates learning with “content delivery”, espousing interaction and personalisation while in practice apparently relying on what Freire called the “banking model” of education.
But to turn back to Slate: they’re certainly not the only publication to realise that anger generates interest, that there are ways of making academics angry, and that this anger leads to pageviews (maybe we should call them “rage-views”). Slate’s a bit late in catching on to a game that’s been played successfully before by the Chronicle of Higher Education, the Washington Post, The Economist, Forbes, the New York Times, and others. Popular articles include those that take on the work-life balance and/or salary of the “average” professor, or that reek of unexamined privilege and spark strong feminist or anti-racist critiques, or of course those that pronounce on the future of higher education, which is conveniently subject to apocalyptic speculation that can continue ad nauseam (because the future is always…the future, right? Fair game!).
This time around the article was written by Jeff R. Young, who’s also the tech editor for the Chronicle. In fact part of the piece is a modified excerpt from his e-book, the title of which contains the words “MOOC”, “high-tech”, and “disruption”. Based on its Amazon.com description, the book looks more like a cram guide for busy senior administrators, to whom I’d recommend instead Audrey Watters’ blog Hack Education. In the meantime, academic rage at Young’s article has probably brought a good deal of attention to his book (I couldn’t tell you if it’s boosted sales).
Taking apart articles like the one Young published in Slate is practically a bore at this point. It so clearly sets out to prod at academic sore spots, taking consumerist logic to extremes while playing on a major theme from recent higher ed reporting (MOOCs can hardly be called “news” by now). Even better, authors who write on this topic don’t need to make up their own extreme speculations, since the quotes they’re using are taken directly from ed-tech celebs like Agarwal and Thrun, whose popularity in turn is strengthened by their edgy proclamations. The excerpt doesn’t address whether stars like Matt Damon would be willing to work for free in the name of a good (educational) cause. But the “logic” reflected both in Agarwal’s comments and in the way they’re framed by Young could be said to assume the insecurity of “Ivory Tower” academics facing impending obsolescence, while playing up the often-self-fulfilling predictions made by ed-tech upstarts – “Casting Damon in a MOOC is just an idea, for now” (emphasis added).
I was thinking of all this during the panel on higher education and the media that I attended last Friday afternoon. I thought the facilitator (Anne McNeilly) and the three journalists on the panel (Léo Charbonneau, Scott Jaschik, and Simona Chiose) did a great job of explaining the context in which coverage of education issues is produced, and how this connects to the kinds of critiques they hear about that coverage. Not only are universities (for example) complex institutions with many facets that aren’t entirely visible even to those who participate in them every day; journalists must also develop ways of “finding” the stories and making them relatable to a much larger audience than those assumed by most individuals, and they must do this with resources that are limited and not necessarily predictable.
The economic logic of the media, particularly those sources that operate primarily in online territory, tends to be one of attention. Linkbait is linkbait, and even the most offensive article can bring attention and start a “debate” that draws people back to the site repeatedly either through comments, or through a subsequent series of “response” articles. The idea is to gain readers, whereas for some academics, it seems the general goal is the opposite: to shave one’s audience down to the narrowest slice of an expert readership. While academics engaging in this kind of practice could be (and have been) accused of a form of professional solipsism, on the other hand media priorities in some cases encourage particular forms of gleefully narcissistic provocation, which we see in various mainstream publications (here’s an example; and its antidote). These are all dynamics that must be taken into consideration by those working in either higher education or journalism (or both) if coverage of postsecondary issues is going to work for “both sides”, i.e. for both educational and media institutions, as well as for their publics.
For some reason, in the last few months I’ve seen a number of articles and blog posts about the nature of “public intellectuals” – how to define the term, to whom it applies, and of course, the long-running series of “critiques” that discuss the failure of public intellectuals and what contributes to it. Maybe I’m just more attuned to the topic because I worked on the Public Intellectuals Project for a year. Or maybe it’s the fact that, uncomfortably, I started to hear the term being applied to me – and I had to ask myself why I wasn’t exactly happy about it.
There are plenty of people who have spent a lot of time thinking and writing about what it means to be an “intellectual”, and/or a “public” one. I’m not one of those people, so (ironically) I can’t claim that expertise; for me the issue has come up only through practice, not theory. But the use (and abuse) of the term “public intellectual” reveals much about the attitudes people hold towards it and whatever concept it represents to them. I’ve seen it placed in the same category as, or conflated with, terms like “pundit”, “guru”, “talking head”, and “celebrity”. These refer not only to a type of public figure, but to one who may be viewed as a self-proclaimed expert. Often these references mock the assumed self-importance that must surely accompany not only the willingness to be “public”, but also the willingness to take on “intellectual” as a public persona.
According to their critics (for an example, try this article by Omer Aziz), public intellectuals have supposedly “failed” or “died out”, and there are two popular arguments about why this has happened. The first, most common argument – set out by Russell Jacoby and re-articulated repeatedly since then – blames the university and the professionalization of academe. Increased specialization is the culprit here, along with an academic system that rewards those who tailor their work to others in the field for purposes of professional advancement, rather than to audiences beyond it. “Public” in this equation tends to mean “non-specialist audiences” or, more broadly, “outside the university”. In truth, academic culture in general implicitly encourages a low regard for those who work in the “public” eye, which is connected to the assumption that communicating with non-specialist audiences means “dumbing down” one’s message. This is part of why, in spite of the push for more “engagement” with publics beyond academe, these activities are not professionally recognised in the same way as more traditional activities like peer-reviewed publications.
The second reason for the “death” of public intellectuals is of course the Internet, which has polluted the pure pool of intellect with the corruption of superficial self-promotion, and…wait a second, I thought being “public” meant we were “impure” anyway? And since when was academe free of self-promotion? As you can see, there are plenty of contradictions here. Yet the Internet is feared and loathed as the catalyst that allows almost any person to speak out on any topic (though whether and how they are “heard” is a whole other issue). It offers no traditional filtration system for determining who is a “real” intellectual and who isn’t, who has the “right” to speak and who doesn’t. Never mind that this also provides the opportunity – which some must see as a threat – for new voices to be heard, those of folks who’ve traditionally been shut out of “public intellectualism” and who may now have the chance to show why they have every right to take on that role. Call me naive, but given the excellent articles I see appearing daily not just in mainstream publications but also on blogs, the picture looks far more complex than “the Internet corrupts intellectual life”. I think what we could and should be asking is not “who has the right” to be called a public intellectual in the age of the Internet, but which ideas (and individuals) seem to gain significant currency in this new context – and why.
I think underlying much of the criticism is a projection of desire and expectation: the hope that such a public figure will take on the tasks, and risks, that we ourselves cannot (or will not). At the same time there is skepticism and resentment that the attention is directed at a particular individual – do they “deserve” the weight given to their words? How do we know? Why should they be the ones to whom we listen? What should they be allowed to say – should they stick to their areas of expertise, or provide commentary on other issues as well?
Generally, the articles that bemoan the “death” or “failure” of public intellectuals also include the author’s chosen examples of those who have succeeded. Those examples are telling – as is the fact that the “public intellectual” must be anointed by others, never self-described. Indeed, in some of the articles I have read, there is a sort of pining for a lost world of “real” intellectual selflessness, for the “generations of writers and thinkers for whom the demands of either the university or mass media were a minor concern, if even that”. How interesting that this era seems to coincide with the one wherein such activity tended to be institutionally limited to a homogeneous intellectual elite of white males, in spite of so many others who have made significant contributions both then and since.
And yet I think all that I’ve said above provides an explanation as to why many who would seem to belong in this category, and who have indeed been consigned to it, seem to reject the term or claim it doesn’t apply to them. It’s because once you’ve been labelled, there’s no winning: you can’t self-identify as a “public intellectual”, or you’re automatically shot down, accused of “failure” to achieve unwieldy political goals, or simply assumed unworthy of the title. But if someone else describes you as such, can you accept the description without being seen as a “tool”? I wonder if this is why, as Andrew Potter discusses, there were some key figures (such as David Suzuki) who weren’t willing to include themselves in a recent book on Canadian public intellectuals. But in his review Potter makes a good point, which is that there’s no going back to the “golden age” so often invoked by critics – and that this isn’t a bad thing.
In recent months we have seen many controversial issues arising on university campuses and in other academic contexts in Canada and around the world, which have generated a good deal of media coverage. These are issues that in some cases connect the university, academics, and students to actions, behaviours, and attitudes that have been seen as shocking and/or surprising.
For example, take sexism. In September we saw incidents where, on multiple campuses (Memorial, Western, UBC, Saint Mary’s), frosh week activities were marred by expressions of misogyny and rape culture. There have also been sexual assaults on campuses, including York’s string of attacks and the most recent incidents at UBC. Meanwhile, two professors were charged recently with sex-related crimes, including creating child pornography and luring young women into sexual situations.
Lately in the United States, we’ve also seen gender-based harassment in the science blogging community, where biologist Danielle N. Lee was called an “urban whore” for refusing to write a blog post for free. Even as the science community reacted in outrage, further revelations about a prominent science blog editor led to his resignation. The comments directed at Lee also revealed deeply ingrained racist attitudes, and serve as a reminder of the intersectional experience of abuse and harassment. When we hear questions about why there are “still” so few women in prominent positions in science, and even fewer women of colour, we don’t have to look far for the answers.
For another, more historical example relating to race and racism, there’s also fascinating research that’s been coming out recently about the relationships of U.S. universities to the slave trade. It turns out – surprise! – that universities have long been tied to the economic context in which they operate, and in the era of slavery this was no different. From this, we should also be reminded of the historical role of academics in constructing and legitimizing scientific racism. The IQ test itself is part of the legacy of attempts to “prove” differences in intelligence between people of different races.
It’s not just gender and race that are factors in this equation. Other forms of discrimination are also rampant – against people with disabilities, and LGBTQ folks, and those dealing with mental health issues, and of course there’s fatphobia, which is so rarely discussed that you could almost believe it doesn’t exist (with this past summer providing an exceptional case). It’s just a shame we only notice microaggressions when they turn into macroaggressions.
Then there’s the related problem of workplace harassment and abuse. It turns out that academics can behave badly in the workplace too, and there’s a small and growing body of research showing how it happens. We have at least one recent case of this occurring in a Canadian university (McMaster), but this example is quite exceptional in the level of public attention it has gained.
Lastly, I refer you to the global report on corruption in universities, which shows the many ways that campuses are home to embezzling, corporate influence on research, sketchy student recruitment practices, and more. There have also been cases of research fraud, such as the falsification of results, and of gaming the citation system in order to boost rankings.
Of course, none of this should be a surprise. We have high expectations of our universities as institutions of learning and knowledge, but all these examples merely serve to remind us that there is nothing inherently moral or good about the university and its task, and in particular, that the context of knowledge creation is in no way separate from the social world in which such knowledge will come to circulate and be put to use. The assumption that the university is somehow outside of the problems we see in the rest of society is part of what underlies the shock people express when abuses are uncovered, when sexism is still rampant, and when corruption is still endemic.
This is why the description of academe as somehow not the “real world” is so misguided and potentially destructive. The “ivory tower” metaphor is inappropriate in that it invokes an idea of academe as not only apart from the world but also above it. This is, I believe, directly related to the need to prove the “objectivity” of knowledge and thus its authority. By this logic, it begins to make sense that Danielle Lee’s blog post describing the racism and sexism she experienced was removed by editors at Scientific American, who argued that it was not about “discovering science”. But what could be more crucial to “discovery” and to knowledge than the factors that shape who is allowed to discover things, and how?
In a context where universities are relying more on private funds, where they are encouraged to compete with each other for resources and students, and where being “the best” not just nationally but internationally is an imperative, these issues may be downplayed or treated as momentary “crises” to be managed rather than long-term problems based on entrenched systemic patterns. That latter angle doesn’t make for good public relations fodder.
But we do have an opportunity here, an opportunity to shine a light on these things in ways that may not happen so easily in other institutions. Does the university have a dirty past, and a grubby present? Yes. But that isn’t all it has, and it doesn’t mean we should lose hope in the university as an institution. It just means we need to realise there’s no real separation between what goes on in the ivied halls and what’s happening “outside” in society at large. As much as we dislike having to admit it, the university is of the world, not above it; but its task as a knowledge institution is special, meaning that perhaps we do have the tools to address these problems in ways that other institutions can’t. In order to do this, though, the university must address itself and have knowledge of itself – as painful as that might be.
The current Ontario government has been formulating ideas for systemic change in higher education since at least 2005, when the Rae Review was released. Some of the issues raised in that review are still with us now – and one of those issues is university differentiation, which has come up yet again via a data set (PDF) from the Higher Education Quality Council of Ontario (HEQCO) and most recently in the provincial government’s draft (PDF) of a framework for differentiation (here’s a good summary by Gavin Moodie).
Differentiation refers to the idea that universities should each take on a distinct “mission”, one that sets them apart from other institutions, and that their activities and priorities should flow from the mission so chosen. The point of differentiation (PDF) in this way is to curtail or reduce costs through the elimination of activity that does not contribute to the university’s “mission”, and to increase quality by having institutions focus their various resources on a reduced range of programs and/or functions. Past discussions of teaching-focused universities (which already exist in some other provinces) were born of the same logic.
Earlier in this process, the Ontario government required universities to produce Strategic Mandate Agreements outlining how they would take on specific roles in a larger provincial system. However, an “expert panel” that reviewed the results of this exercise concluded that universities had failed to generate mandates showing significant diversity. Since universities haven’t been able to implement differentiation “from the bottom up”, they are now haunted by the spectre of increased government intervention. Based on the reports we’ve seen so far, attaching external funding to internal change is the government’s primary tactic for making the desired change happen.
There are a number of reasons why Ontario’s universities haven’t spontaneously differentiated themselves in the “right” way, and thus why the government may begin to impose more conditions on funds. Asking (or telling) universities to define specific “mandates” isn’t just about saying what they will be doing; it also requires them to make decisions about what they won’t be doing, and the implication is that as some areas are strengthened, other areas will be pruned. Such changes are difficult to make at the best of times, and they cannot be made quickly if there is to be adequate consultation. At the moment, program reviews at the University of Guelph provide an example of this kind of process in action.
Not only are the required decisions difficult to make (and to implement), but most universities are striving to be a similar kind of institution: the “world-class research university”. Universities are not like regular products in a market, as much as their sophisticated branding efforts might suggest they are. One of the great contradictions of increasingly marketized university systems is that universities run on prestige, and thus they’re unlikely to voluntarily accept a lower status in the hierarchy as a means of accessing a different, less prestigious slice of the student market. This explains – at least in part – why so many universities seem to have the same “vision” and “mission” in mind (an example is provided by Alex Usher in this blog post on Western’s new strategic plan).
It also helps explain why HEQCO’s categorization of universities is focused on two criteria, research and “comprehensiveness” (reduced from five criteria in their 2010 report, PDF). The strong association of research with prestige means that universities that focus on either teaching or research (for example) are not “different but equal”; differentiation becomes a form of hierarchization, because some activities are valued more than others globally. This is possibly why some Ontario universities would applaud formalized stratification, since it would freeze the existing order in place and prevent lesser institutions from trying to climb the prestige ladder. Such a process would have the more tangible benefit of bringing research funding and higher-quality students to universities closer to the top of said ladder, without pesky competition from those lower down. It takes resources to be “world class”, after all.
Being internationally competitive means looking at the big picture and where Canadian universities appear in it. Often, comparisons between Canada and other countries such as the United States emphasize that Canadian universities are much more homogeneous than their U.S. counterparts; in Canada, there is said to be a much more “level playing field”. This is at least in part because almost all Canadian universities are publicly funded, and they have a fairly low level of marketization compared to (for example) the United States.
Yet this situation, which some might see as beneficial, is bemoaned as an obstacle to achieving true “world class” quality. What we see is the conflation of egalitarianism with mediocrity, at least in the rhetoric of justification that is employed. As the argument goes, if we don’t deliberately narrow and target our funding to the “best and brightest”, then how can we have any universities that “compete” internationally? By this logic, spreading the money around is like spreading fine manure too thinly over a large garden. Better to choose a few spots where we want significant growth to happen, rather than reap a weedy crop from a more dispersed fertilization.
The assumption that we must compete internationally, that we must be “world class” (according to a fairly narrow definition), is taken as a given in almost every case where this is discussed. Who would dare to suggest we not strive to be closer to Harvard, MIT, Oxford, and the like? Surely it’s obvious that these are the standards to which we all must aspire. Who could be against “quality”? Such considerations are more important than ever at a time when Canadian universities are ramping up their efforts to recruit top international students. Every university must be “world-class” if it wants to attract the best and brightest from “emerging [student] markets” in countries like India, China, and Brazil. Being world-class is part of the brand; and of course, Canadian universities aren’t the only ones relying on these arguments.
Trying to curtail costs by limiting the expansion of universities’ missions isn’t a new practice, but it’s one that (in Ontario) would be seen as impinging on the significant degree of autonomy that universities have enjoyed in the past. Not only will funding change, but it will be steered by a logic of competitiveness that invokes global trends, even as it grants only a few universities permission to pursue them. Bolstering those universities that already dominate in the (international) rankings and in research funding is viewed as strategic allocation in the service of “excellence”.
Universities can’t be all things to all students, and there are no easy answers here. The government must also consider factors like geography and its effect on access; meaningful partnerships between universities and their local communities; and of course the quality of teaching, which is a stated imperative for every institution (though how each follows through on it may be another story). Meanwhile, the relentless obsession with recruiting the “best” students leaves us with the question of how “other” students will fare, and who will focus on their needs, if that mission doesn’t bring the global accolades that universities so cherish. While the Ontario government may insist that all mandates will be valued equally, it remains to be seen how this “value” will manifest in the current environment.
“Everyone loves to identify things that have not been identified.
The rabbit hole, where ever I find it, symbolizes solitude.”
–Terrance Hayes, For Crying Out Loud
Recently I’ve been finding it much harder to blog, because I’ve been homing in on certain aspects of my dissertation, which has taken up an ever-larger chunk of my focus and thinking time. This is a good thing, of course, but it means I’ve also been more impressed than ever at how others are able to write excellent and timely articles and blog posts on the latest issues, while I can barely keep up with the higher ed news.
Part of the problem right now is that I’m immersed in a couple of specific data-gathering tasks, and it’s almost impossible to avoid thinking about them when I’m “swimming” in so much information. For example, one of the tasks involves news articles – thousands of them. I’ve mastered the art of combing through the lists of articles generated by my searches, then skimming the articles themselves for search terms, to make judgement calls about their relevance based on how many times a word is mentioned and in what particular contexts, the topic of the article in relation to my study, or what the article tells me about the organization that is the focus of my project. At the same time, I’m trying to piece together a bigger picture by combining these articles with what I’ve already collected from other sources. Initially I wasn’t sure I wanted to do this kind of news search for my project, because of the time it would take; but once I realized it was generating useful information, I knew it was the right move.
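(A side note for the methodologically curious: the crudest first pass of that triage – counting keyword mentions to flag which articles deserve a real read – could in principle be scripted. Here’s a minimal sketch in Python; the file name, keywords, and threshold are all invented for illustration, and the contextual judgement calls I describe above still belong to the human reader.)

```python
import csv
from collections import Counter

# Hypothetical keywords and threshold -- invented for illustration,
# not the actual terms or criteria used in my study.
KEYWORDS = {"restructuring", "merger", "governance", "mandate"}
MIN_MENTIONS = 3

def keyword_counts(text):
    """Count how often each keyword appears in one article's text."""
    tokens = (word.strip('.,;:"()') for word in text.lower().split())
    return Counter(token for token in tokens if token in KEYWORDS)

def first_pass(rows):
    """Yield the title and keyword counts of articles that mention
    the keywords often enough to merit a full read."""
    for row in rows:
        counts = keyword_counts(row["text"])
        if sum(counts.values()) >= MIN_MENTIONS:
            yield row["title"], dict(counts)

# Assumes a CSV export of search results with "title" and "text" columns.
with open("news_articles.csv", newline="", encoding="utf-8") as f:
    for title, counts in first_pass(csv.DictReader(f)):
        print(title, counts)
```

A count like this can only tell you where to look; it can’t tell you what a mention means in its context, which is exactly the kind of judgement call that matters.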
Each of these decisions about relevance is important, because it contributes to a research project that in turn makes claims about what is or can be “known” about my topic. If the goal of research is to expand knowledge, then we have to think carefully about what we use to build the scaffolding on which our claims rest.
In my own experience there hasn’t been nearly enough emphasis on the fact that this process is not (just) about “producing a dissertation” or “doing a PhD”, but also about creating knowledge. Each decision we make about research is rooted in this. Epistemology isn’t just a branch of philosophy; it’s something we should know about in relation to the most mundane aspects of research tasks at every level: why choose a particular source of information? Why use more than one source? What assumptions inform your analysis of that source? How does it contribute to “knowing” something about the subject of your study?
My research materials have been somewhat overwhelming from the start; there was no clear path to a structure, even though I had built in boundaries. Since my project is partly about pulling together multiple versions of events to create a kind of multifaceted account of organizational change, it was hard to figure out where to begin. If there isn’t one “true” and essential story, what’s the first story or piece of information that you’re going to build on? I sometimes think about how this is like making a rope by twisting together fine shreds of fibre – you start with something thin and impractical, and gradually, you add more and more, using the initial thread as a guide, until there’s a recognizable form.
That’s one of the things that drives me to keep investigating – realizing I’m stringing together a lot of information that’s never been collected and organized in this way, and that there are probably implications that I can’t yet “see” (but eventually they’ll be visible). Of course, all I “see” right now is each individual tiny piece that must be brought into coherence with other pieces in a complex image that’s not yet clear to me. It often becomes more evident through immersion in the material, as I’ve discovered in the process of transcribing interviews.
I think part of what researchers, academics, and others doing intellectual work learn through working is to have a kind of split focus. Each task requires a lot of focus, but there are many tasks to accomplish and you can’t allow them all to crowd into your mind at the same time. The distractions, even within the scope of a particular task, are plentiful, and it’s pleasurable to indulge in a ramble down the curious little backwoods paths that keep emerging from the data. I often think of tribbles, from the original Star Trek series: you have one, and then it just keeps multiplying. An article comes up with content that relates to half a dozen other articles, or to other issues that might be relevant, so you think you should probably check them out – just in case. Sometimes I end up with five browser windows and who-knows-how-many tabs open, along with Word files full of notes, PDFs of journal articles, news items, institutional documents, and whatever else seems helpful for the tangent being followed in the moment. If I don’t at least set things aside when they spring to mind (as opposed to ignoring them completely), then I can’t focus enough to get any single task finished. This is why I have about 40 draft blog post “ideas” sitting in a folder, waiting for later. I’ve had to learn how to keep the scraps organized.
I’m not alone in this experience of going “down the research rabbit hole”, as fellow dissertators on Twitter remind me on a regular basis, thankfully. I expect (and hope) that over time this need for focus becomes easier to manage, as does the uncertainty about each decision along the research path. In the meantime, I have a lot of news articles to sift through.
This post is about one of my favourite issues in education and various areas of knowledge policy: the attempt to use policy to reliably generate the unpredictable.
As an example, one of the themes that recurs in certain kinds of policy design is the idea of creating a geographic hub of innovation, a golden patch of tech turf that connects universities with businesses and governments, and generates technological change that pays off for all involved: in other words, a new “Silicon Valley”, which is the touchstone for these kinds of discussions. Re-creating Silicon Valley has become a policy goal, with the desired result that economic competitiveness will follow; hence the appearance of “science/research parks” and “innovation districts”.
There’s plenty of research about innovation, organizations, and technological discovery and development that strives to explain why we see these much-lauded historical irruptions of creativity, innovation, and of course commercial success. The idea is to be able to make this happen in a deliberate way, by adding the right ingredients to the socio-political-economic mix.
For example, I think there’s an implicit understanding that space and physical proximity have effects on eventual outcomes in “discovery”, when creativity and innovation are the goals. At a time when we’re hearing so much praise for online spaces and their possibilities, I think this is another sign that certain configurations of place and space are scarce commodities in research and higher education. The value of these commodities is such that they’re assumed to be part and parcel of the construction of conditions where “innovation” will flourish, and in elite contexts they’re accommodated as such.
Serendipity in teaching and learning, too, depends partly on the unpredictable outcomes of social contact. Creative sparks can fly when we’re bumping up against other people and their diverse ideas and perspectives, the eclectic combinations of knowledge they’ve built up over time. Each person who takes in knowledge also changes it through the process of knowing. In this way, it could be said that all education is built on a series of chance encounters.
Of course we can’t really plan for chance, which is possibly why it’s the one ingredient left out of most of the formulae we see being applied. Chance is inefficient per se, problematic in terms of the actual goals of planning, which require the assumption of at least some form of certainty. Better to make sure that other things happen, the things we can guarantee, speaking in those terms instead of the nebulous shades produced by the idea of serendipity, accident, and so on. The power of the serendipitous lies in its very unpredictability, but try explaining that in a grant proposal.
Here in Canada, the federal government isn’t much into chance, unless you count the gamble we take by developing policy without sufficient evidence available to inform it. This government is unlikely to solve Canada’s perennial “innovation” problems by targeting large amounts of funding to those projects it deems most meritorious. Nor will students find answers for themselves – or for the much-lamented “skills gap” – by trying to engage in the kind of advanced futurology that is now expected of them as they prepare to enter the job market.
To return to Silicon Valley and its clone zones: where knowledge is tied to governance, where policy must make predictions, we place bets on future success by attempting to emulate success seen elsewhere in the past and present. The historical analysis is in some cases a selective one; rarely if ever do we see calls for military involvement in new innovation hubs, yet Silicon Valley’s prosperity was built more on US military funding than on venture capital, as is pointed out by Steve Blank.
For good education and research to happen, and even for those eventual economic benefits to materialize, we need places/spaces where we can allow for possibilities, work through failure, and permit experimentation – where we can learn how to take chances and follow our noses, while encountering others, rather than just building on an assumed formula for success. After all, it doesn’t matter how high the stakes are; we can’t know the future, and if we can’t imagine a new model of success, we won’t be able to deal with whatever changes the future brings our way.
In this week’s post I’m going to stay with the subject of media and higher education, since there’s so much to work with at the moment – ’tis the season, as they say. Since I last wrote, there’s a new, strategically-timed CIBC World Markets report that has garnered a good deal of media coverage, because it essentially claims that the value of university degrees has declined and that there are radically different “earnings premiums” on different fields of study. The humanities and social sciences of course end up lower in this hierarchy of profit than engineering, commerce, and health-related fields.
A lot of points have already been made in other columns and blogs, so I won’t repeat them (Léo Charbonneau has a selection linked in his own helpful post, here). Instead I’ll just take some time to focus on one of the issues I had with this report, or at least with the coverage of its contents.
Whenever political, economic, and social problems are being discussed in the news media (or pretty much anywhere else), people will tend to look for a place to lay the blame – because that’s how we find (or at least propose) various kinds of solutions: by determining where things must be “going wrong”, and proposing an intervention. This is why there’s a need to be skeptical about the assumptions put forth in any argument about crisis in the present and the kind of (often “urgent”) action required to remedy it. The diagnosis tends to be a platform for the promotion of a particular cure.
To return to the CIBC report and the media coverage of it, here are some quotes about the source of the problem being debated:
“…degree holders fall behind in the earnings scale”, which is “largely the result of the programs Canadians have chosen to study” (CIBC, Newswire).
“Despite the fact that it is well known that certain degrees pay more in Canada, there hasn’t been any sort of gravitation towards those degrees among students to match the job market” (Financial Post).
“Plus, more women are choosing to pursue post-secondary education – and females are “disproportionately represented” in arts and social sciences” (HR Reporter).
“…experts are warning that young people aren’t making the educational choices that will allow them to step in [when Baby Boomers retire]”; “Many have arguably been victims of poor advice, encouraged by their parents and high school teachers to follow their whims and passions instead of making realistic career plans for a difficult job market” (National Post).
“….it is crucial to Canada’s economy that we start producing more graduates in growth areas of the economy” (CIBC, Newswire).
That’s right: the blame is being placed primarily on students (perhaps especially women) for making poor choices about their education. If students continue to choose the humanities over the sciences, for example, they can expect poor “returns” on their investment in education, because humanities degrees don’t “pay”. This in turn exacerbates the “skills gap” and affects the success of the Canadian economy, hence the complaint voiced by Rick Miner that “We’re letting a bunch of 17- and 18-year-olds dictate our labour market composition, and they’re not given a lot of advice to make decisions about what might be in their best interests.”
But what else should we expect from those teenagers, when they’ve often been encouraged to see education both as a costly commodity and as a route to a job, without being given any guidance as to how this translation/transition from education to employment actually happens? Even with advice, do we expect young students, or even their parents, to be perfectly informed consumers when it comes to selecting a degree program? Do we expect students’ decisions over a four-year period to reflect this level of information – and who is providing it? Are job market prospects the only factor affecting students’ choices, or are grades, advice and steering, geographic location, and expense factors as well? Does the job market remain static for four years, and if not, can governments and universities successfully predict its fluctuations? Apparently “[i]n the absence of reliable data, labour market experts encourage students to do their own research in fields they’re interested in” (National Post). But even if students all believed that education should be solely about getting “the biggest bang for [your] buck”, they’d have a hard time finding the information necessary to predict the future of the job market.
We’ve seen all this before in past coverage, but now the argument has returned, full-force; the CIBC piece merely feeds an existing myth, one that also meshes with (and takes momentum from) the ongoing debate about Canada’s “skills gap” and the question of the “value” of humanities and liberal arts education.
As Kate T. Lawson argues, “one thing universities can’t do is perform magic tricks”: they can’t “fix” the economy, or eliminate inequality, or somehow solve problems that are rooted in multiple facets of society, simply by producing the right kinds of graduates or research. The bizarre situation in which we find ourselves is one where it apparently makes sense to increasingly privatize the cost of education, then expect students to make “choices” that are for the larger (public?) good in terms of the economy and the job market. When students resist or fail to follow the supposed path to economic success, perhaps we can just fall back on the narrative about “Millennials” being more interested in “saving the world” than in saving to buy a new home – it’s their choice, after all.
If student choice is the problem, then the “solution” becomes an issue of steering students in the right direction and expecting universities to turn out graduates who match the jobs available. But education is only partly about choice for each person, and those choices are only part of the ultimate “outcome”. As with many other things in life, we make decisions within parameters, and the art of prediction is not now, and has never been, as finely honed as we’re encouraged to believe. Expecting students to master it, and to become fully responsible for their own “outcomes”, is unreasonable, and also conveniently obscures the myriad other factors at work.
Howard Rheingold, the longtime Internet commentator and UC Berkeley lecturer, uses the term “crap detection” to describe the process of determining whether online information is credible or not. What Rheingold calls “crap detection” is also known as information literacy, and in my case it was acquired partly through a degree in communication studies with an emphasis on analysing mainstream media coverage.
I thought of Rheingold’s ideas, and my own mass comms background, the other day when I came across an article by Douglas Todd from the Vancouver Sun titled “The pros and cons of foreign students.” The article takes on what is currently a hot topic in Canadian higher education, one that is only likely to heat up further in the coming years, given that Canadian universities have finally begun to vie for a bigger slice of the international student “market” in which countries like the UK, Australia, and New Zealand have already established themselves as desirable destinations.
The first thing I noticed, which for me is always something worth pointing out, is the use of the term “foreign students” as opposed to “international students.” While the terms are used interchangeably, they have different implications: “international” is descriptive in terms of students’ national origins and/or citizenship, whereas “foreign” suggests strangeness and unfamiliarity, or “other-ness.” There is other language alongside this, which depicts international students as a horde that will overrun Canadian universities – including “flood of foreign students”; “the river of foreign students”; and “this growing educational army.” The language often used to argue against allowing immigrants into a country is here used alongside the argument that “foreign” students are “crowding out” worthy Canadians.
Another, related thing that stands out about this article, but which isn’t entirely obvious unless you do a little bit of digging (i.e. spend five minutes with Google), is the use of particular voices for commentary. For example, it’s not clear why a political science professor without apparent specialization in higher education, Dr. Philip Resnick of UBC, was chosen for extensive commentary – rather than a professor who is an expert on the subject. Such experts do exist in Canada, and indeed within British Columbia where some of the “locals” in the University of British Columbia faculty of education include higher education scholars Donald Fisher and Amy Scott Metcalfe, both of whom have expert knowledge of higher education policy in the Canadian context.
Mr. Todd then discusses, in his Vancouver Sun piece, the use of international student tuition to provide revenue for Canadian universities. The professor who is quoted “acknowledges he’s never researched [the] financial claim” that international student tuition covers all the costs of the students’ education – which it would have to do, if it were to be a source of revenue. But it’s simply not credible to then turn to research from the United States as a means of implying that international students could be costing Canadian taxpayers additional funds, rather than bringing in money for universities. If we don’t have the Canadian numbers on this, then extrapolating from research done in the United States is like comparing the proverbial apples to oranges.
If we check up on the scholar who produced this research, Harvard economist George Borjas, we find that he generally takes an anti-immigration (and anti-international-student) stance, which fits well enough with the fact that he “discovered foreign students have displaced local students, particularly white males, especially in graduate schools.” His research (PDF) is quoted alongside commentary from Canada’s Centre for Immigration Policy Reform, a conservative anti-immigration think tank whose “official spokespersons” are members of the conservative and libertarian Fraser Institute.
The author also raises the issue of language fluency, which “some local students say is harming the quality of classroom interactions”; he quotes studies that “find many international students are showing up in classes with poor skills in English,” though Dr. Resnick admits that “some [students] are surprisingly good.” One wonders why this would be a surprise, given the number of “foreign” countries where English is spoken and/or taught in schools. A colleague who is a Mexican national and permanent resident of the UK commented that it might be a challenge even for those who speak English as a first language to pass the TOEFL or GRE, given the high level of fluency required to do well on those tests. A recent University Affairs opinion piece (not cited by Mr. Todd) addressed the same issue and provoked a heated but thoughtful debate about the linguistic readiness of EAL students – which shows that, while there is substantial disagreement about how prepared these students are for academic success in Canada, we can do better than anecdotes and stereotypes in our coverage of this topic.
Lastly, the subtitle of the Vancouver Sun article mentions “pillaging” the best students from “poorer” countries, which would have been an interesting point of discussion, and it’s certainly been addressed by other authors in the recent past. However, even this was addressed in ways that invoked racial and class stereotypes, e.g. by calling the (Asian) students “richies” and quoting the author of a “popular novel” titled Crazy Rich Asians, while not addressing at all the critique of “brain drain” from other nations that have scarce human capital and may lack adequate educational infrastructure to train skilled professionals. There is plenty to discuss here but some of the most salient problems seem to have been avoided or ignored.
It’s a real shame to see this kind of superficial reporting on such an important topic, especially when stereotypes of race and class are being invoked – stereotypes that many students already face when they come to Canada from overseas. I believe that the recruitment of international students raises complex ethical issues that will need to be addressed in the coming years, as Canadian universities try harder to fill enrolment gaps created by demographic changes. But these points will require debate that is equal to the nuances of the subject – something that certainly wasn’t provided by the Vancouver Sun this week.
Recently, the American Historical Association (AHA) posted a policy statement that caused some controversy among academics, because of its recommendation that universities should allow junior scholars the option of a 6-year embargo on electronic publication of their dissertations.
The argument goes that younger or early career researchers (ECRs) need the option of an embargo because widely-available dissertations might not be acceptable to publishers in book form. Some universities make it mandatory for students to submit their dissertations to an open online database, so the embargo would ensure that ECRs have the option of keeping their research private until it’s ready for publication.
This policy is only about ensuring that grad students can have their dissertations embargoed if they want, rather than telling them they must. What’s revealing is not only the argument that’s been provided, but also the strong reaction and intense debate that the issues involved have generated.
Some commentators have viewed the AHA’s strategy as “empowering”, but others question the idea that only or primarily a book can gain you entry into the profession, and that publishers won’t deal with a book that has previously been available online in dissertation form. Critics argue that there doesn’t seem to be much evidence that pre-published dissertations are off-putting for academic publishers. The debate also raises the question of what exactly the difference is between a book and a dissertation; they’re very different forms, with radically different audiences in mind. Not only that, but the main customers for academic books have been neither academics nor the broader public, but libraries, as pointed out here. Barbara Fister notes that “[l]ibraries have been buying fewer books no matter whether they are based on dissertations or not; they won’t buy more books because dissertations go offline.”
The key issue here isn’t the difference between dissertations and books. It’s the academic career, and what “counts” towards building it – and additionally, what “proves” quality in a candidate for tenure. The AHA post states that “although there is so close a relationship between the dissertation and the book that presses often consider them competitors, the book is the measure of scholarly competence used by tenure committees” (emphasis added). This is not just about who gets to publish what, and when. It’s about entry into the academic profession, and what kind of scholarship is seen as valid when assessing who should be on the tenure-track or tenured, and who should not.
This is why the AHA’s stance is being framed by some as “protective” of ECRs who, if forced to make their dissertations widely accessible, will put themselves (or their careers) at risk by jeopardising their chances of having books published. As the statement reads, “History has been and remains a book-based discipline, and the requirement that dissertations be published online poses a tangible threat to the interests and careers of junior scholars in particular” (emphasis added). At a time when competition for academic jobs is fiercer than ever, their argument has a lot of weight.
All this is wrapped up in the shift away from physical copies of dissertations filed on library shelves, and toward digital databases that allow access to anyone who can get online and run the right search – a scenario that involves a different set of assumptions about who can access the work, from where, and with what results for the scholarship (and scholars) involved.
I think an important issue here is the purpose of scholarship, which shapes the work that happens and what “knowledge” is created through it. As others have discussed, historical research is a public good as well as an academic one; what use is there in assuming a few narrow forms of dissemination are the most appropriate, and will continue to be so? A particular understanding of purpose is driving the assessment of scholarly worth. If the purpose of scholarship is to get you a book contract and ultimately, a tenure-track job, then the exclusivity that is so cherished in this process needs to be maintained. If the purpose of publishing is to “launch” one’s career in an appropriate fashion, then there isn’t much room for serendipity or risk in that process.
In terms of purpose, a related example is the UK’s Research Excellence Framework (REF), which is designed to assess the “quality” of research produced by UK universities. Many critics have already pointed out the systemic effects of this process, which is also tied to funding. There is a connection between academic systems of assessment and professional advancement, and the way scholars go about their work (and make career-related decisions). Because tenure is hard to obtain, the fear factor is an important one for many junior scholars, and the AHA’s initial announcement reflects this.
It’s the system in which we already operate that engenders risks from “openness”, risks that are more obvious and present for some than for others. We already have a system wherein keeping one’s work sequestered is still the more fail-safe way of building an academic career. This is why it’s so difficult to rely on moral arguments about the openness of knowledge: openness hasn’t yet been institutionally validated to the extent that ECRs are willing to stake their careers on it. That the AHA can argue it has also “supported” alternate forms of publishing and recognition tells us that there may be an ongoing disconnect between what scholarly societies and funding agencies advocate, and what actually happens in a departmental hiring process.
The academic profession is in a period of flux in terms of what kind of work is valued, what forms it should take, and how it should be shared with others. In many ways academe is still in reactive mode regarding changes that have been happening for some time; technology has already helped to move scholarly activity beyond the limits of what academic career tracks have “traditionally” encompassed. Those hoping to become academics are now receiving very mixed messages about the limits of what is acceptable or desirable for professional credibility. This makes things difficult not only for prospective scholars but also for graduate supervisors and mentors who hope to support their students in developing academic careers.
If it’s less risky to fall back on an established understanding of what counts, if our decisions are governed by fear and anecdotal evidence, then the issue of giving junior scholars a “choice” seems almost beside the point. Choices tend to be governed by context. Maybe it’s time to work on changing that context, instead of trying to work within what are being treated as permanent boundaries.