Lately I’ve been thinking a lot about the job market, poverty, and the assumptions we make when we talk about people’s choices, partly because recently I’ve seen two excellent and provocative posts about this. The first is from Tressie McMillan Cottom on “The logic of stupid poor people”, a post that discusses how expensive status symbols (like a $2,500 handbag) act as powerful signifiers, and how in general there is a complex performance that must be mastered in order for class “mobility” to happen. The second post, “Why I Make Terrible Decisions, or, poverty thoughts” is by Linda Walther Tirado who writes about the (non-) choices faced by poor people, and how they are criticized for what they choose.
I think a theme in these posts is how the scope of (perceived and actual) possibility diminishes radically as your finances worsen – and this affects not only the decisions you make but also your whole view of the world and what it has to offer.
For example, think about the search for adequate employment and how this is experienced by people with low and/or unstable incomes. One thing it highlights is how space changes when you’re poor: no money for a driver’s license, insurance, or a car? Then you can’t take jobs that are far from public transit or that demand the applicant has their own vehicle. Can’t afford to pay for the bus anymore? Then your options diminish further. Time also changes when you’re broke; you get less done because you spend a lot of time just waiting, or having to get things done the hard way because the easy way costs more. Waiting for the bus and hoping you can get to an interview on time. Waiting in line-ups. Waiting for people to get back to you about money or jobs. Waiting for a cheque to arrive so you can pay the bills. Waiting.
What about other resources you need in order to find work? No money to buy a nice suit, appropriate shoes, or makeup and a good haircut? Then you’ll have a hard time applying for office jobs or even much of the service work available, since appropriate self-presentation is crucial (see Tressie’s post for a great description of how this works). These days you need the Internet for a good job search, but what if you can’t get access? What if you can’t afford a phone, making it difficult for potential employers to contact you? What if you’re not exactly sure what will be the next thing you can eat for dinner, and that’s a more pressing concern than drafting the perfect resumé? Priorities change when money runs out, and choices change as well.
Most importantly, you need resources to gain resources. This is either a virtuous circle or a vicious one, depending on where you are on the income spectrum. As Linda Walther Tirado writes in her post: “We know that the very act of being poor guarantees that we will never not be poor.” That’s knowledge that comes from life experience, not from the classroom.
Add to this the weighty sediment of shame that builds up, because in this Western, capitalist society, we tend to assess people’s intelligence and their moral capacity by their financial circumstances. It’s assumed that people are “stupid”, “irresponsible”, and “frivolous” when they lack money. At best they are “unfortunate”. But the assumption is that there’s something wrong with them, not with their context, not with the entire system in which they’re forced to participate. There’s always something “they” could have done to correct their own course; there’s always a different decision they could have made, some cost they could have cut. Never mind that others get to make mistakes and fall back repeatedly, unscathed, on the soft cushion of privilege. To open up about being poor (or about trouble with finances in general) is to expose oneself to scrutiny and judgement of one’s decisions and one’s character.
And so “charity” comes to feel like a dirty word to those who may be on the receiving end of it. Charity means you couldn’t make it on your own – never mind that most people who “make it” do not do so without often-invisible forms of help and support. Charity means relying on the goodwill of strangers, in a society where independence and self-sufficiency are both over-valued and mostly illusory.
You may wonder why I’m writing about work and money and opportunities, instead of education. I think the debate about being poor is intimately linked to the arguments we make about higher education and who has access to it, and the differing “outcomes” of that education. All the things I just described are things that some students may be experiencing or may have experienced in the past. They’re all factors that affect people’s perceptions of the value of things, including education – and the risks we’re told we have to take to access that value. When we talk about student financial assistance, “debt aversion”, the job market, “entrepreneurialism”, and most of all “risk”, we are making assumptions not just about income and privilege but also about mindset.
Not only that, but of course there’s an intimate link between money and mental health issues, and it’s a link that goes both ways. Issues like clinical depression can lead to poverty, but poverty can cause these issues, too. Long-term financial instability wears you down; it reduces the sense that you can gain any control over your own life. Those students who’ve arrived at university from that kind of background are already dealing with a specific kind of long-term exhaustion. They are more vulnerable to being overwhelmed, and possibly less likely to feel safe asking for help. It’s not just the stress of education they’re dealing with – it’s the cumulative stress of living with worry about the lack of things, and the potential lack of things, including lack of possibilities. If they have low expectations about the “returns” on their education “investment”, then this is a big part of it; if they fear for the future, then who can blame them?
There’s so much hue and cry about the diminishing opportunities for those who were previously part of the middle class – as if a problem only matters when it happens to folks who had better things in mind. But for some people this has always been their mode of living, their understanding of the world. When we hold out the promise of a better life as the result of higher education, not everyone can believe in that promise. When pundits bemoan the “high expectations” of an entire generation, they’re forgetting that not everyone had the expectation of magical prosperity either from education or anything else. If we took loans, it wasn’t because we truly believed we could repay them; it was because we saw no other option, because we were told our chances of survival were even lower without the coveted Bachelor’s degree. It was because not having a degree was presented as a threat to our future employability, and the fear of debt was overshadowed by the fear of other forms of uncertainty. That doesn’t feel like a “choice” – it feels like coercion, and it’s something we need to start thinking about when we engage in debates about policy and accessibility.
A recent article on Slate’s website came to my attention only because so many academics in my personal Twittersphere were reacting negatively to it. The article caused outrage with its discussion of EdX founder Anant Agarwal’s suggestion that professors who create and present material for (video-based) MOOCs could be replaced by Hollywood stars, who would lure more students to enrol in and complete the courses. This is presented as the logical solution to the problem of needing more camera-savvy and student-friendly presenters, since not all profs are up to the task. Further commentary from Udacity founder Sebastian Thrun is supportive of Agarwal’s approach. I won’t link to the Slate piece here, but you can Google “The new rock-star professor” (if you must).
The article captures our attention by positioning Agarwal’s and Thrun’s comments within a particular frame of speculation, i.e. that professors could be “replaced” by stars who appeal to the “masses” in a system that un-bundles the work of teaching into user interface design, content production, “delivery”, and assessment. It’s not a neutral framing, because it puts forth a vision of education that subjugates the expertise of faculty (and of educators in general) to the logic of markets and to the “big data” that are assumed to generate more important pedagogical insights than experienced professionals can. It also conflates learning with “content delivery”, espousing interaction and personalisation while in practice apparently relying on what Freire called the “banking model” of education.
But to turn back to Slate, they’re certainly not the only publication to realise that anger generates interest, that there are ways of making academics angry, and that this anger leads to pageviews (maybe we should call them “rage-views”). Slate’s a bit late in catching on to a game that’s been played successfully before by the Chronicle of Higher Education, the Washington Post, The Economist, Forbes, the New York Times, and others. Popular articles include those that take on the work-life balance and/or salary of the “average” professor, or that reek of unexamined privilege and spark strong feminist or anti-racist critiques, or of course those that pronounce on the future of higher education, which is conveniently subject to apocalyptic speculation that can continue ad nauseam (because the future is always…the future, right? Fair game!).
This time around the article was written by Jeff R. Young, who’s also the tech editor for the Chronicle. In fact part of the piece is a modified excerpt from his e-book, the title of which contains the words “MOOC”, “high-tech”, and “disruption”. Based on its Amazon.com description, the book looks more like a cram guide for busy senior administrators, to whom I’d recommend instead Audrey Watters’ blog Hack Education. In the meantime, academic rage at Young’s article has probably brought a good deal of attention to his book (I couldn’t tell you if it’s boosted sales).
Taking apart articles like the one Young published in Slate is practically a bore at this point. It so clearly sets out to prod at academic sore spots, taking consumerist logic to extremes while playing on a major theme from recent higher ed reporting (MOOCs can hardly be called “news” by now). Even better is that authors who write on this topic don’t need to make up their own extreme speculations, since the quotes they’re using are taken directly from ed-tech celebs like Agarwal and Thrun whose popularity in turn is strengthened by their edgy proclamations. The excerpt doesn’t address whether stars like Matt Damon would be willing to work for free in the name of a good (educational) cause. But the “logic” reflected both in his comments and in the way they’re framed by Young could be said to assume the insecurity of “Ivory Tower” academics facing impending obsolescence, while playing up the often-self-fulfilling predictions made by ed-tech upstarts – “Casting Damon in a MOOC is just an idea, for now” (emphasis added).
I was thinking of all this during the panel on higher education and the media that I attended last Friday afternoon. I thought the facilitator (Anne McNeilly) and the three journalists on the panel (Léo Charbonneau, Scott Jaschik, and Simone Chiose) did a great job of explaining the context in which coverage of education issues is produced, and how this connects to the kinds of critiques they hear about that coverage. Not only are universities (for example) complex institutions with many facets that aren’t entirely visible even to those who participate in them every day; journalists must also develop ways of “finding” the stories and making them relatable to a much larger audience than those assumed by most individuals, and they must do this with resources that are limited and not necessarily predictable.
The economic logic of the media, particularly those sources that operate primarily in online territory, tends to be one of attention. Linkbait is linkbait, and even the most offensive article can bring attention and start a “debate” that draws people back to the site repeatedly either through comments, or through a subsequent series of “response” articles. The idea is to gain readers, whereas for some academics, it seems the general goal is the opposite: to shave one’s audience down to the narrowest slice of an expert readership. While academics engaging in this kind of practice could be (and have been) accused of a form of professional solipsism, on the other hand media priorities in some cases encourage particular forms of gleefully narcissistic provocation, which we see in various mainstream publications (here’s an example; and its antidote). These are all dynamics that must be taken into consideration by those working in either higher education or journalism (or both) if coverage of postsecondary issues is going to work for “both sides”, i.e. for both educational and media institutions, as well as for their publics.
In recent months we have seen many controversial issues arising on university campuses and in other academic contexts in Canada and around the world, which have generated a good deal of media coverage. These are issues that in some cases connect the university, academics, and students to actions, behaviours, and attitudes that have been seen as shocking and/or surprising.
For example, take sexism. In September we saw incidents where, on multiple campuses (Memorial, Western, UBC, Saint Mary’s), frosh week activities were marred by expressions of misogyny and rape culture. There have also been sexual assaults on campuses, including York’s string of attacks and the most recent incidents at UBC. Meanwhile, two professors were charged recently with sex-related crimes – including creating child pornography, and luring young women into sexual situations.
Lately in the United States, we’ve also seen gender-based harassment in the science blogging community, where biologist Danielle N. Lee was called an “urban whore” for refusing to write a blog post for free. Even as the science community reacted in outrage, further revelations about a prominent science blog editor led to his resignation. The comments directed at Lee also revealed deeply-ingrained racist attitudes and serve as a reminder of the intersectional experience of abuse and harassment. When we hear questions about why there are “still” so few women in prominent positions in science, and even fewer women of colour, we don’t have to look far for the answers.
For another, more historical example relating to race and racism, there’s also fascinating research that’s been coming out recently about the relationships of U.S. universities to the slave trade. It turns out – surprise! – that universities have long been tied to the economic context in which they operate, and in the era of slavery this was no different. From this, we should also be reminded of the historical role of academics in constructing and legitimizing scientific racism. The IQ test itself is part of the legacy of attempts to “prove” differences in intelligence between people of different races.
It’s not just gender and race that are factors in this equation. Other forms of discrimination are also rampant – against people with disabilities, and LGBTQ folks, and those dealing with mental health issues, and of course there’s fatphobia, which is so rarely discussed that you could almost believe it doesn’t exist (with this past summer providing an exceptional case). It’s just a shame we only notice microaggressions when they turn into macroaggressions.
Then there’s the related problem of workplace harassment and abuse. It turns out that academics can behave badly in the workplace too, and there’s a small and growing body of research showing how it happens. We have at least one recent case of this occurring in a Canadian university (McMaster), but this example is quite exceptional in the level of public attention it has gained.
Lastly, I refer you to the global report on corruption in universities, which shows the many ways that campuses are home to embezzling, corporate influence on research, sketchy student recruitment practices, and more. There are also forms of research fraud, such as the falsification of results or the gaming of the citation system in order to increase rankings.
Of course, none of this should be a surprise. We have high expectations of our universities as institutions of learning and knowledge, but all these examples merely serve to remind us that there is nothing inherently moral or good about the university and its task, and in particular, that the context of knowledge creation is in no way separate from the social world in which such knowledge will come to circulate and be put to use. The assumption that the university is somehow outside of the problems we see in the rest of society is part of what underlies the shock people express when abuses are uncovered, when sexism is still rampant, and when corruption is still endemic.
This is why the description of academe as somehow not the “real world” is so erroneous and potentially destructive. The “ivory tower” metaphor is inappropriate in that it invokes an idea of academe as not only apart from the world but also above it. This is, I believe, directly related to the need to prove the “objectivity” of knowledge and thus its authority. By this logic, it begins to make sense that Danielle Lee’s blog post describing the racism and sexism she experienced was removed by editors at Scientific American, who argued that it was not about “discovering science”. But what could be more crucial to “discovery” and to knowledge than the factors that shape who is allowed to discover things, and how?
In a context where universities are relying more on private funds, where they are encouraged to compete with each other for resources and students, and where being “the best” not just nationally but internationally is an imperative, these issues may be downplayed or treated as momentary “crises” to be managed rather than long-term problems based on entrenched systemic patterns. That latter angle doesn’t make for good public relations fodder.
But we do have an opportunity here, an opportunity to shine a light on these things in ways that may not happen so easily in other institutions. Does the university have a dirty past, and a grubby present? Yes. But that isn’t all it has, and it doesn’t mean we should lose hope in the university as an institution. It just means we need to realise there’s no real separation between what goes on in the ivied halls, and what’s happening “outside” in society at large. As much as we dislike having to admit it, the university is of the world – not above it; but it’s the task of a knowledge institution that is special, meaning that perhaps we do have the tools to address these problems in ways that other institutions can’t. In order to do this, though, the university must address itself and have knowledge of itself – as painful as that might be.
In this week’s post I’m going to stay with the subject of media and higher education, since there’s so much to work with at the moment – ‘tis the season, as they say. Since I last wrote, there’s a new, strategically-timed CIBC World Markets report that has garnered a good deal of media coverage, because it essentially claims that the value of university degrees has declined and that there are radically different “earnings premiums” on different fields of study. The humanities and social sciences of course end up lower in this hierarchy of profit than engineering, commerce, and health-related fields.
There are a lot of points that have already been made in other columns and blogs, so I won’t repeat them (Léo Charbonneau has a selection linked in his own helpful post, here). Instead I’ll just take some time to focus on one of the issues that I had with this report, or at least with the coverage of its contents.
Whenever political, economic, and social problems are being discussed in the news media (or pretty much anywhere else), people will tend to look for a place to lay the blame – because that’s how we find (or at least propose) various kinds of solutions: by determining where things must be “going wrong”, and proposing an intervention. This is why there’s a need to be skeptical about the assumptions put forth in any argument about crisis in the present and the kind of (often “urgent”) action required to remedy it. The diagnosis tends to be a platform for the promotion of a particular cure.
To return to the CIBC report and the media coverage of it, here are some quotes about the source of the problem being debated:
“…degree holders fall behind in the earnings scale”, which is “largely the result of the programs Canadians have chosen to study” (CIBC, Newswire).
“Despite the fact that it is well known that certain degrees pay more in Canada, there hasn’t been any sort of gravitation towards those degrees among students to match the job market” (Financial Post).
“Plus, more women are choosing to pursue post-secondary education – and females are “disproportionately represented” in arts and social sciences” (HR Reporter).
“…experts are warning that young people aren’t making the educational choices that will allow them to step in [when Baby Boomers retire]”; “Many have arguably been victims of poor advice, encouraged by their parents and high school teachers to follow their whims and passions instead of making realistic career plans for a difficult job market” (National Post).
“….it is crucial to Canada’s economy that we start producing more graduates in growth areas of the economy” (CIBC, Newswire).
That’s right: the blame is being placed primarily on students (perhaps especially women) for making poor choices about their education. If students continue to choose the humanities over the sciences, for example, they can expect poor “returns” on their investment in education, because humanities degrees don’t “pay”. This in turn exacerbates the “skills gap” and affects the success of the Canadian economy, hence the complaint voiced by Rick Miner that “We’re letting a bunch of 17- and 18-year-olds dictate our labour market composition, and they’re not given a lot of advice to make decisions about what might be in their best interests.”
But what else should we expect from those teenagers, when they’ve often been encouraged to see education both as a costly commodity and as a route to a job, without being given any guidance as to how this translation/transition from education to employment actually happens? Even with advice, do we expect young students, or even their parents, to be perfectly informed consumers when it comes to selecting their degree program? Do we expect students’ decisions over a four-year period to reflect this level of information – and who is providing it? Are job market prospects the only factor affecting students’ choices, or are grades, advice and steering, geographic location, and expense factors as well? Does the job market remain static for four years, and if not, can governments and universities successfully predict its fluctuations? Apparently “[i]n the absence of reliable data, labour market experts encourage students to do their own research in fields they’re interested in” (National Post). But even if they all believed that education should be solely about getting “the biggest bang for [your] buck”, they’d have a hard time finding the necessary information to predict the future of the job market.
We’ve seen all this before in past coverage, but now the argument has returned, full-force; the CIBC piece merely feeds an existing myth, one that also meshes with (and takes momentum from) the ongoing debate about Canada’s “skills gap” and the question of the “value” of humanities and liberal arts education.
As Kate T. Lawson argues, “one thing universities can’t do is perform magic tricks”: they can’t “fix” the economy, or eliminate inequality, or somehow solve problems that are rooted in multiple facets of society, simply by producing the right kinds of graduates or research. The bizarre situation in which we find ourselves is one where it apparently makes sense to increasingly privatize the cost of education, then expect students to make “choices” that are for the larger (public?) good in terms of the economy and the job market. When students resist or fail to follow the supposed path to economic success, perhaps we can just fall back on the narrative about “Millennials” being more interested in “saving the world” than in saving to buy a new home – it’s their choice, after all.
If student choice is the problem, then the “solution” becomes an issue of steering students in the right direction and expecting universities to produce them as candidates who match the jobs available. But education is only partly about choice for each person, and those choices are only part of the ultimate “outcome”. As with many other things in life, we make decisions within parameters, and the art of prediction is not yet and has never been as finely honed as we’re encouraged to believe. Expecting students to master it and to become fully responsible for their own “outcomes” is unreasonable, and also conveniently obscures the myriad other factors at work.
I first came across the term Critical University Studies (CUS) when it was mentioned in a 2012 Chronicle of Higher Ed article by Jeffrey J. Williams. The likely reason I hadn’t heard of this “emerging field” was that it seems the name hadn’t been used very much before, other than by Williams and Heather Steffen as discussed in the article – though Christopher Newfield has been described as one of the scholars who “helped to found” the field. Because the term wasn’t coming up in the framing of daily discussions I’d been seeing and papers I’d been reading about the critique of academe, I didn’t think about CUS again until recently, when a colleague in the UK, Dr. Martin Paul Eve, wrote to me about a project he’s working on that addresses some similar issues (he also wrote this blog post). Since my name was being connected to the aforementioned emerging field, I figured I’d better look more closely at what’s been said about it.
It was odd to realise that my work probably does fall within CUS as described by Williams. Something I’ve always said about what I do is that “I don’t have any discipline”. It used to be more of a joke, but over time I’ve come to realise that compared to friends in sociology, history, and various areas in the sciences, there’s little definition in the area where I work. This doesn’t seem to matter much – I feel the underlying themes are pretty clear, even though my degrees are in three different “areas”. But I’ve always known that it does matter when the academic system is one where professional socialisation and advancement are still aligned heavily with one’s specified field or area of research.
Some examples include the system of divisions between and within academic conferences; graduate and undergraduate programs, built on claims about legitimate areas of knowledge and study; the themes of academic journals; and of course, hiring practices for tenure-track faculty, wherein candidates are sought by discipline or field. This is why in practice, to be interdisciplinary means one must really be multidisciplinary, able to hold one’s own in a number of academic contexts, and adapt according to the opportunities available.
What then is the purpose of naming a field? What are the consequences of creating and using terms in this way? Once something has been defined, it takes on a new “realness” and becomes something people think and talk about in new and specific ways; Williams states that naming “recognizes that [the work] has attained significant mass and signals a gathering place for those considering similar work”. I’d add that when the boundaries are set out, we’re encouraged to define ourselves and the scope of our work in relation to them. More academic legitimacy can be gained when a research area can be discussed in terms of a body of work and/or a group of researchers, and when we have means of making a distinction between what belongs and what does not. In asserting that a field is “emerging” Williams positions himself at the forefront of a now-recognizable movement.
I’ve already noticed a number of these boundary issues arising in the debates about the digital humanities, particularly questions about who is a part of the field and who is not; whose work fits, and whose doesn’t; and who is “co-opting” the terms. When we consider that related claims can underpin requests for research funding, new publications, student recruitment, and other forms of (material) systemic recognition and reward, then it makes sense that there would be debate over the boundaries that separate those “in” the group from those outside it. If hiring and funding still happens based on field and discipline, then PhD students and early-career researchers are likely to align themselves accordingly.
For me the important point about CUS is that this work, which has the goal of critiquing the existing university system, is of course being enfolded and constructed by the same processes it criticises: the need to stake out academic territory and build upon it the infrastructure that will mesh with existing systems of assessment and professional advancement. A new field has been designated, but it’s one that should logically begin with a critique of the conditions of its own creation; can such a field transcend or violate those conditions and still “survive” within the institution? What are the consequences of survival?
There are politically positive uses for institutional recognition: I’d ask, will having Critical University Studies as a recognised field protect early career scholars who feel they can’t criticise the academic system “from the inside” without sacrificing their careers? Will it bring people together and help organise and inform so that the issues (which affect all academics) are being discussed more openly and in a more complex way than in sensationalist media accounts? And will it be open enough to include all those with stakes in the future of university education, not just those who hold particular political and theoretical views?
Considering the long-running debate about changing the university to meet multiple and divergent “needs”, I think these discussions are more important than ever, but they’ll need to engage diverse participants. If constructing a “field” is something that facilitates these things – rather than merely delimiting another narrow professional grouping – then I’m all for it.
Later this week I’m going to be on a panel about the inescapable subject of MOOCs, so for this post I’m thinking through an issue I’ve been noticing since I last wrote a big post on this topic, which was during the peak of the media mayhem in July 2012. For many of those researching higher education, even those who’ve been doing it for just a few years as I have, the ongoing hyperbolic MOOC debate that has hijacked the higher ed news has been quite frustrating. Of course, there is plenty of bluster on both sides of this debate. But it’s really troubling to see many perfectly legitimate criticisms reduced to straw-person arguments about “faculty fear” (“those teachers just don’t want to lose their jobs!”), or about how those who are skeptical must be “against accessibility”.
So I would like to address this issue of “accessibility” that has come up repeatedly in MOOC debates. In articles that evangelise about the benefits of MOOCs, it’s often pointed out that there is a huge (global) demand for higher education and that many eligible students are losing out due to lack of resources or to their location in “third world” countries. Even in richer nations, student loan debt has become a more significant concern over time, alongside rising tuition; and postsecondary education is becoming more of a financial burden for those who can least afford it. All this has happened in a context where the economy has changed significantly over a period of about 30 years. Socioeconomic mobility has been stymied (including for those with education), middle-class jobs are being fragmented and technologised, and young people are finding it more and more difficult to get a foot in the door. This is the “perfect storm” often referenced in arguments for the “urgency” of turning to MOOCs as a solution.
Lest you should think I am blowing proponents’ claims out of proportion, I’ll provide a few examples. Take a look at this recent article in the Guardian UK, by Anant Agarwal of MIT, President of edX. Agarwal claims that MOOCs “make education borderless, gender-blind, race-blind, class-blind and bank account-blind” (note the ableist language – and the fact that he left disability off the list). Moving on, in this article from the Chronicle of Higher Ed, Mary Manjikian argues that MOOCs (and other forms of online learning) “threaten to set [the existing] social hierarchy on its head” and that we should “embrace the blurring of boundaries taking place, to make room for a more-equitable society”, which can be achieved through the dis-placement of elitist place-based education. And lastly, I point you to an article written by a MOOC user who epitomises the claims to worldwide accessibility that Agarwal so keenly puts forward: Muhammad Shaheer Niazi of Pakistan, who, with his sister, has taken numerous MOOCs and writes enthusiastically about the benefits of online learning.
I think these arguments raise the question – if MOOCs provide “access”, who, then, has access to MOOCs? What is required of the user, to get the most out of these online resources? To start, you’ll need a regular, reliable Internet connection and decent computer equipment, which are of course not free. Assuming you have the right tech, you’ll also have to be comfortable with being tracked and monitored, given that surveillance is required to “prove” that a particular student did the work (there is much potential for cheating and plagiarism). There are also “analytics” being applied to your online activities, so you need to be on board with participating in a grand experiment where the assumption is that online behaviour shows how learning happens. In these “enclosed” MOOCs, there will be no private, “safe” spaces for learning.
And learning itself must fit the parameters of what is on offer – so the kind of “personalization” often touted is a rather limited one. You’ll be fine if you learn well or best at a computer, and if you don’t have any learning (or other) disabilities that require supports. The limited demographic data available also suggest that thus far, MOOC users are more likely to be male and white, to have previous postsecondary education, and (judging by course offerings) to be speakers of English, even while the actual pass rates for the courses are still proportionally very low. In terms of the actual needs of the majority of students, we should consider whether all this is really about privileged autodidacts projecting their ideal of education onto everyone else.
Questioning the quantification of assessment, the level of access, the cost of tuition, the endless search for “economies of scale”, and the funding troubles faced by public higher education must happen if we are to find solutions to those problems. Yet plenty of people have been questioning these trends for a long time, and somehow the research they’ve produced doesn’t have the same appeal. Pro-MOOC critiques of the current system never seem to reference the existing literature about (for example) neoliberalism and the economization of education policy, increased privatization (from tuition fees to corporate influence on research), marketization and commercialization, and the unbundling and outsourcing of faculty work. Perhaps that’s because MOOCs would mostly serve to exacerbate those trends.
What then is the function of MOOCs in terms of “access”? It isn’t about extending real opportunities, because we live in a society and economy where opportunities are unequally distributed and even (online) education cannot “fix” this structural problem, which is deepening by the day; finding a solution will be a complex and difficult task. It isn’t about ensuring the students get higher “quality” of teaching, unless you truly do believe that only professors at elite universities have something to offer, and that all other faculty are somehow a sub-par version modelled on that template. Some have argued that MOOCs can reduce tuition costs for students, but surely there’s only so long a business can exist without making a profit, and the “product” clearly isn’t the same. The ongoing efforts to link MOOCs to the prestige of existing universities through accreditation deals are unlikely to leave these courses “cost-free”, and the hundreds of hours of work it takes to create one MOOC can’t go uncompensated.
Perhaps MOOCs in their revisionist, start-up incarnation are partly about projecting the possibility that even the most downtrodden can still do something to get ahead, at a time when the old path to mobility through hard work and (expensive) education seems less effective than ever. What could be better than more education, for “free”? In this sense, MOOCs really do help to “train” workers for the new economy, since they’re teaching us to govern ourselves, to be autonomous and flexible learners in an economy where businesses can simply refuse to provide on-the-job training, instead holding out for the perfect custom candidate (while keeping wages low). This is framed not as a problem with business – or even with the long-term changes to the economy in general – but as a failure of education. Meanwhile, we’re encouraged to believe that we can mitigate personal risk by investing in ourselves, and if we don’t “get ahead” that way then it’s about personal responsibility (not systemic problems). If MOOCs “level the playing field” then no-one can complain when they’re left out of the game.
Who is most desperate for these possibilities? Maybe those folks will be the ones using MOOCs. But will the possibilities materialise into something real, for those who need it most, and not just for the few example “learners” who are invoked in MOOC-boosting articles and speeches? Are most current users there because they want to be, or because they have no other option? Would massification through MOOCs be more effective than any of the other forms of educational massification that we have seen over the past 200 years – and if so, why? In what way will the new tokens of achievement be any better than a university degree at present, and will they translate concretely into opportunities for the least privileged? After all, isn’t that what “access” is about?
A deeper understanding of context is relevant to every argument being deployed. To return to Muhammad Shaheer Niazi, it’s clear that he actually exemplifies why we cannot make sweeping generalizations about students based on their location. Niazi describes how he had “access” to a supportive and education-oriented family; to “a very good school in Pakistan”; and to computers and books in his home. As Kate Bowles and Tressie McMillan Cottom have both pointed out, there are many families in the United States who wouldn’t be able to provide this kind of environment, and yet “Pakistan” is used frequently as a signifier of poverty, inaccessibility, and general disadvantage. Niazi’s piece shows us he is far from desperate – he is in fact part of the small international group of gifted and well-resourced students that universities most desire to recruit.
Because of the claims being made about disrupting hierarchies and helping the underprivileged, the MOOC trend calls on us to ask ethical questions. Questions about control, resources, and agendas; questions about who is excluded and who is included in this “new” landscape. Questions about how the story of this “phenomenon” is being re-written and re-shaped to reflect particular priorities. We’re seeing perverse exploitation of arguments about access, when the “solution” proposed involves breaking down the university into commodifiable, out-sourced units and reinforcing (or even exacerbating) existing institutional and social hierarchies. In the current political/economic landscape, where there are so many problems that seem intractable, the apparent concreteness of the MOOC “solution” is part of its appeal and also part of why uptake at traditional universities has been so rapid and widespread. But MOOCs are an answer that can only be posited if we construct the question in the right way.
Though it isn’t the topic of my current research, I’ve been interested in the Internet (as an object of study) for some time, in particular its possibilities for connecting people and helping them generate new relationships and forms of social support that might not otherwise have been available. I think this is because I’ve been engaging in forms of distance-networking for over ten years now, starting with snail mail and leading all the way to Twitter. I’m not particularly sociable by nature, because unfamiliar social situations tend to tire me out; all social interaction is a form of performance, but some people find it more taxing than others. Over time I’ve discovered that for me personally, it’s easier to cultivate an initial level of familiarity through mediated interactions, rather than through increased in-person socializing, because the latter tires me out too quickly.
While I was working on my undergraduate degree in Communication Studies, I did a project about how people conceptualise the Internet, as signified by the way they talked about it. I became interested in this because I noticed that people talked about “online” experience precisely as if it happened in a “world”, or a place, where they could “go” – even though clearly it wasn’t the same as the space they inhabited physically. Why was the sense of place so strong that it dominated our conceptual framing of the Internet? How have we come to experience a communication tool itself as being (or providing) a “space”, and what is that space like, compared to others?
The reason I started thinking about this again recently is that the debate about online education has become more intense, and along with it comes what is usually a subtext about physical vs. virtual. This division is emphasised, either positively or negatively (depending on the argument), when we see references to the Internet as set apart from “real life”. On the utopian side, some argue that you can “be whoever you want to be” online, or less dramatically, that the Internet provides flexibility and accessibility to education (for example) – that it is a “better” place. From the more dystopian perspective, technology cuts us off from “real” connections/relationships and experiences with other people, causing us to become too focussed on our tools and oblivious to their effects on our cognitive, emotional and psychological well-being; and it can exacerbate the “divides” we seek to bridge.
Like most binaries, this one is overly simplistic. Real/unreal, physical/virtual, utopian/dystopian, all these illustrate extremes when in reality much of the discussion is about grey areas. The critique of (and ongoing debate about) “digital dualism” is a good reminder of this.
I think online/offline is a useful distinction, but that relationships we develop online are not segregated from or less significant than others that start with in-person contact, nor are they part of a different “reality”. More and more, we see there are gradations, and newer technologies and tools further blur these lines. The friendship I have with someone because we saw each other every day during undergrad might not continue after the degree ends, whereas I might stay in touch for longer with people I’ve “met” only through Twitter interactions that have morphed into coffee dates. The practice of developing academic networks through Twitter is an example of how this fluidity works; for those without established contacts, chatting with strangers online makes it a lot easier to meet them in-person later at a conference.
All that being said, the emphasis on online education as an industry (or set of marketable services) has grown in the context of higher education’s increased stratification, loss of funding, and massification, so we have plenty of reason to ask critical questions about the nature of various spaces of learning and what can and does happen in them. What is the difference between sitting in a room with others, vs. being with peers or colleagues who are “there” in some other way? What about affect/emotion – how is this expressed and experienced by students working and communicating through the Internet, as opposed to in a traditional classroom (and how might they work together)? Shouldn’t we consider elements of privacy, when everything is being “shared” and/or documented online in one form or another? Who will feel free to talk? Will everyone be able to gain access?
We also need to consider how some of the assumed properties of the online environment are extrapolated and projected to form a new image of the student, the self-motivated and autonomous learner (autodidact) who is so frequently championed in techno-futurist rhetoric (in spite of the collaborative nature of so much of what happens on the Internet). But this type of person is still also a relatively rare learner, an ideal type to be plugged into the process of creating future policy. The “structures” created for Internet spaces may also be designed with such assumptions in mind. If we are going to acknowledge and accept the reality of online experience (and that education happens there), we need to think about how that reflects our other experiences in the world. As with all places, the Internet is more hospitable to some people than to others.
This past weekend I attended HASTAC 2013, held at York University in Toronto. This was the first HASTAC conference held in Canada, and about half the participants were Canadian. In fact, it was the first time the conference had (physically) happened outside the United States. The HASTAC (“haystack”) acronym stands for Humanities, Arts, Science, and Technology Advanced Collaboratory; it’s a “virtual organization” co-founded by David Theo Goldberg and Cathy Davidson in 2002, which functions as a kind of user-driven platform, a support system and a place of meeting and collaboration for scholars interested in technology, creativity, pedagogy and educational change. I became interested in learning more about the organization because I seemed to know a lot of people who were involved in one way or another. When I discovered that the 2013 conference would happen at York University, I realized I had a perfect opportunity to find out first-hand what kind of work was being created by affiliated scholars.
HASTAC isn’t the usual academic conference featuring a menu of panels packed with academic talks. It’s a bit of a smörgåsbord of goodies: alongside regular keynote talks, panels and posters, there were “lightning talks”, demos, performances, multimedia art and even a Maker Space. I decided to attend fewer panels and spend more of my time looking at exhibits, taking photos, and interacting with participants – I managed to see some fascinating things and meet many new friends and colleagues, some of whom I’d chatted with online but hadn’t yet met in person.
Friday’s schedule included one event I’d determined to check out, the Global Women Wikipedia Write-In, sponsored by the Rewriting Wikipedia Project. The idea for this event was sparked partly by research on the gender imbalance in Wikipedia editors and in the content on the website itself. One participant at the conference (Ruby Sinreich) was editing the HASTAC entry itself, and another (Michael Widner) worked on an entry for Caribbean writer Karen Lord – who then turned out to be on Twitter and started chatting with him. Though I hadn’t prepared myself adequately to write or edit a Wikipedia article, I did a search for noted higher ed scholar Sheila Slaughter and discovered that she didn’t yet have a page. I felt the urge to remedy this immediately, but didn’t have the time to dig into the task (of course, others did – here is a report of what they achieved).
Near the Wikipedia room, like buried treasure, there was a distractingly entertaining Kinect demo happening. I’m not at all familiar with the technical terms and I couldn’t find the names of the creator/s (they were from OCAD, and the group included prof Paula Gardner), but I still wanted to mention this piece because I loved the idea: it involved generating different kinds of sounds through movement, for example if you walked forwards or backwards within a specific area, the music became louder or softer; if you moved left or right, the notes moved from low and “bassy” to high, tinkly sounds. I made sure to capture a video so the effect could be conveyed more directly.
On Saturday, in spite of missing the early bus to York I managed to catch most of the morning panel “Building an Academic Community for the Digital Age” with Fiona Barnett, Amanda Phillips, and Viola Lasmana. Each of the panel members made strong points about the need for mutual scholarly and personal support, the importance of the emotional/affective side of building connections and doing work as a community (not just as individuals), and the role of HASTAC in facilitating and working on/with these things. I won’t paraphrase too much because the presenters’ own words are far more articulate than mine on these issues (their posts are linked, above).
By Saturday afternoon it was our panel’s turn to present, and in a sense our theme was “community” as well. My co-panelist Bonnie Stewart introduced us as “the most ironic panel” at the conference: our session was called “Cohorts without Borders” (my slides are here), and indeed two of our panel members were unable to attend in person because of borders and barriers of various kinds. Our colleague sava saheli singh, an Indian citizen living in the U.S., couldn’t get a visa in time from Citizenship and Immigration Canada (she did her talk through Skype); and Trent M. Kays, who contributed a video of his talk and then tuned in via Skype, was unable to get funding for his conference trip (a special shout-out goes to Daniel Lynds, who provided crucial technical support for our presentations). This highlights a “missing piece” from the rhetoric about the international “talent market” and mobility of students, scholars and “knowledge workers” around the globe, i.e. that some can be mobile while plenty of others have their movements (and contributions) restricted by a lack of resources and/or by policies that treat people differently according to their citizenship status. This is also a crucial issue in any discussion about internationalization and access to the professoriate.
Later on Saturday evening, York’s Scott Library was the venue for an after-hours reception that featured a performance piece called Digitize and/or Destroy, by York librarians William Denton, Adam Lauder, and Lisa Sloniowski. The piece was designed to highlight the process of digitization (and the work of librarians) and the kinds of decisions that have to be made during it. Each participant was invited to select a book from a trolley, and given the choice of either destroying it (several pages would be cut out and shredded), or digitizing it (the book’s cover would be scanned, meta-data recorded and posted to a Tumblr), or both – in whatever order we preferred. Some of the books participants chose to have shredded included “Wife in Training”, various Weight Watchers books, and (my pick) “The Tipping Point”.
This post is just a small taste of this year’s HASTAC conference menu. If you’re interested in reading more about the conference panelists and talks, HASTAC Scholars Director Fiona Barnett has created a roundup of blog posts about the conference, available here.
A recent post by David Naylor, the President of the University of Toronto, has been quite popular with academics and has generated a lot of commentary. Naylor makes the argument that Canadian higher education is dogged by “zombie ideas”, and he describes two of them: the first is that universities “ought to produce more job-ready, skills-focused graduates [and] focus on preparing people for careers”. The second is the idea that research driven by short-term application or commercialization should be prioritized by universities because it provides a better return on governments’ funding investments.
I focus here on the first point, since in the past few weeks, in the run-up to the federal budget on March 21st, there has been a great deal of coverage of the alleged “skills gap” in the Canadian workforce. Others have already done the work of summarising this issue, but as a quick recap, the argument goes something like this: business leaders and employers in Canada complain (to the government) that they cannot fill positions because candidates lack the skills. Yet Canada produces more post-secondary graduates than ever, and those grads are having trouble finding employment that matches their qualifications. So why is there an apparent “mismatch” between the education students receive, and the skills employers are demanding?
I don’t have anything to add to the debate about what is needed more–“narrow” skills such as those available from colleges or apprenticeships, or the “broader” education that universities argue they provide–because I don’t have the expertise to make an assessment within those parameters. However, I find the discussion interesting in terms of its context, including who is doing the arguing, and why.
For example, while the “skills gap” is assumed as a dramatic fact by Federal Human Resources Minister Diane Finley, who “recently called the labour and skills shortage ‘the most significant socio-economic challenge ahead of us in Canada’” (CBC)–other experts, including Naylor, disagree that a skills gap exists at all. University graduates, they argue, are still making better money than those without degrees; and most of them (eventually) find jobs that draw on their skills–so why reduce the number of enrolments? Alex Usher of HESA has been generating a lot of commentary for this side of the argument as well; in the comments of one of his posts, his points are disputed by James Knight of the Association of Canadian Community Colleges.
Clearly the debate is more complex than “BAs vs. welders”, but this is the rhetoric being reproduced in numerous mainstream media articles. The average reader could be forgiven for finding this issue hard to untangle, based on the radically different accounts provided by media and policy pundits. Yet all this is discussed with much urgency, because post-secondary education is now being understood as a stopgap for everything the economy seems to lack–and economic competitiveness is imperative.
The politics of urgent “responsive” decision-making lie behind many of the arguments being brought forth. The skills gap, should it exist, has its political uses; agreeing that a thing exists means having to find ways of dealing with it somehow. In this case, a restructuring of university education is one solution on offer, including steering students away from the corruption of the arts and humanities and towards more suitable areas where demonstrable “skills” are in demand. Those doing the arguing have the means and “voice” to define the problem in a particular way; they can intervene in that debate and someone will listen. Each player has stakes in this game, too–the colleges plump for skills and job training over research investments, while the universities, and their advocates, claim a “broad” education is more appropriate; employers want graduates they don’t have to train, so the concern is with graduates being job-ready (for jobs that may not even exist yet).
Is this a kind of moral panic for Canadian higher education? That’s an important question, because such tactics are used to create a climate in which particular policy changes are favoured over others, both by politicians and policy-makers and by voters.
I think at the heart of the debate there are the problems of risk, certainty, and value (for money). Canadians have more of a “stake” in what universities do–often through directly paying ever-increasing amounts of money for it–and so they care more about what universities are for. Governments have more of a claim now too, because of the idea that universities are magic factories where students enter undeveloped and emerge brimming with human capital (but it must be capital of the right kind).
The more we experience instability, the more we desire certainty–or at least some form of guarantee that if things go off the rails, we have other options. Yet there is no certainty about economic (or other) outcomes either from education or from non-commercial, “basic” research. Education and research give us no way to “go back”, either. For those trying to get a good start in life, there’s no tuition refund if we fail our classes or find the job market unfriendly at the end of the degree. We can’t wind back time and have another try. So the question becomes: what will guarantee our ability to cope with the future? A long-term focus on broad learning, which can (it is argued) help us to adapt to the changing structure of careers? Or a short-term focus, on skills designed to prepare students for specific, immediate positions?
This is why Naylor makes the argument that “the best antidote to unemployment–and the best insurance against recession-triggered unemployment–is still a university degree” (emphasis added). The word “insurance” speaks to the risk each person internalises in the current economy. Such risk has many effects, and one of them is heightened fear of the unknown: with so few resources to go around, will we get a “return” on what we invested, will our sacrifices “pay off”? What will happen if they don’t? As Paul Wells has pointed out, university advocacy organizations such as AUCC have pushed for universities to be recognised as providing economic benefits–since this is a logic that validates requests for further government funding. Yet it means universities are held captive by their own argument, since funding comes with the expectation of economic returns for the government. What if they cannot deliver on this promise?
The skills/employment “gap” is being blamed for a lack of national economic competitiveness; and it is a parallel to the ongoing “innovation problem” that Canada has in the research sector. But it’s the outcome, not the process, that’s really driving this debate. Never before have we been compelled to pay so much attention to the purpose and results of university education, and now that it seems to matter so much, we’re finding that “what universities should be doing”–or even what they already do–can’t be pinned down so easily; it can’t be mapped so cleanly onto a specific, measurable result. This is partly because what we now demand of universities is certainty, where serendipity used to be enough.
I’m a big fan of British comedies, particularly the fine tradition of political humour so well exemplified by Yes, Minister and The New Statesman. More recently, The Thick of It has become a favourite, and in one of the most squirm-inducing episodes, staff in the Department of Social Affairs and Citizenship realise that 7-and-a-half months’ worth of immigration records have been wiped from a computer. Havoc ensues, especially after the gaffe is accidentally revealed to a journalist.
Of course the humour comes from the extremity of the scenario–“that would never happen in real life”, we tell ourselves–it’s just too far down the path of incompetence. But last Friday afternoon we were proven wrong, not by the UK government, but by Canada’s own–and this time it wasn’t 7.5 months’ worth of data, it was 6 years’ worth. The data were from Canada Student Loan program clients and HRSDC employees, and they were on a portable hard drive that was “lost” from an office in Gatineau, Quebec. Is it any surprise that for some of us the first reaction was “is this a joke?”
The news, real enough, is that over half a million students (and 250 civil servants) have had their privacy compromised by the loss of personal and financial information–“student names, dates of birth, Social Insurance Numbers, addresses and student loan balances”–that could be used for identity theft or other forms of fraud. Whether or not the information had been obtained by someone who might have malicious intent is unknown–because we don’t know where the data went. The external drive just disappeared.
As it turns out, this loss was discovered only during the process of investigating an earlier mishap involving a USB key containing information from another 5,000+ Canadians. The Office of the Privacy Commissioner has begun an investigation of the breach since “there is a serious possibility that an investigation would disclose a contravention of the Privacy Act”; the issue was also referred to the RCMP on January 7.
From the press release there are a couple of things that stand out, other than the obvious. Looking at the timeline of events, it seems like it took over two months from the time the hard drive was missed (on November 5, 2012) to a public announcement alerting CSLP clients to the loss (on January 11, 2013). During this period the HRSDC developed a new “policy for storing secure information” designed to prevent similar incidents in the future, which is described in detail in their press release. I’d be interested to know more about why it took so long to inform the affected parties.
It’s also interesting to look at how this information was communicated to the public. For example, the announcement was made as part of what journalists and political communicators often call the “Friday news dump” (a tactic that doesn’t always work). The press release itself, including a statement from Human Resources Minister Diane Finley, was inappropriately (but optimistically) titled “Protecting Canadians’ personal information at HRSDC”. While I understand the organization’s desire to provide the least negative slant, this kind of re-framing is vaguely embarrassing given the nature of the problem.
The issue has gained more media attention this week, especially after Newfoundland lawyer Bob Buckingham filed a class-action lawsuit against HRSDC; thousands of students are already coming forward to join it. Since I have student loans from the period in question, I knew this incident could have personal consequences. I called the number provided by HRSDC and after being greeted with “thank you for being proactive about your privacy”, a search was run on my SIN and I was told that my information hadn’t been “compromised”. But even knowing that my name isn’t on the infamous 583,000-person list hasn’t been enough to dull my curiosity about how this happened in the first place, and the person I spoke with on the phone didn’t have anything else to tell me. Others who’ve found their information was on the drive haven’t had better luck; they’re being told to wait until they receive a letter via snail mail, and to start taking precautions themselves. Unfortunately, we can’t protect our information pre-emptively on behalf of a government agency–otherwise this might not have happened in the first place.