In this week’s post I’m going to stay with the subject of media and higher education, since there’s so much to work with at the moment – ‘tis the season, as they say. Since I last wrote, there’s a new, strategically-timed CIBC World Markets report that has garnered a good deal of media coverage, because it essentially claims that the value of university degrees has declined and that there are radically different “earnings premiums” on different fields of study. The humanities and social sciences of course end up lower in this hierarchy of profit than engineering, commerce, and health-related fields.
There are a lot of points that have already been made in other columns and blogs, so I won’t repeat them (Léo Charbonneau has a selection linked in his own helpful post, here). Instead I’ll just take some time to focus on one of the issues that I had with this report, or at least with the coverage of its contents.
Whenever political, economic, and social problems are being discussed in the news media (or pretty much anywhere else), people will tend to look for a place to lay the blame – because that’s how we find (or at least propose) various kinds of solutions: by determining where things must be “going wrong”, and proposing an intervention. This is why there’s a need to be skeptical about the assumptions put forth in any argument about crisis in the present and the kind of (often “urgent”) action required to remedy it. The diagnosis tends to be a platform for the promotion of a particular cure.
To return to the CIBC report and the media coverage of it, here are some quotes about the source of the problem being debated:
“…degree holders fall behind in the earnings scale”, which is “largely the result of the programs Canadians have chosen to study” (CIBC, Newswire).
“Despite the fact that it is well known that certain degrees pay more in Canada, there hasn’t been any sort of gravitation towards those degrees among students to match the job market” (Financial Post).
“Plus, more women are choosing to pursue post-secondary education – and females are “disproportionately represented” in arts and social sciences” (HR Reporter).
“…experts are warning that young people aren’t making the educational choices that will allow them to step in [when Baby Boomers retire]”; “Many have arguably been victims of poor advice, encouraged by their parents and high school teachers to follow their whims and passions instead of making realistic career plans for a difficult job market” (National Post).
“…it is crucial to Canada’s economy that we start producing more graduates in growth areas of the economy” (CIBC, Newswire).
That’s right: the blame is being placed primarily on students (perhaps especially women) for making poor choices about their education. If students continue to choose the humanities over the sciences, for example, they can expect poor “returns” on their investment in education, because humanities degrees don’t “pay”. This in turn exacerbates the “skills gap” and affects the success of the Canadian economy, hence the complaint voiced by Rick Miner that “We’re letting a bunch of 17- and 18-year-olds dictate our labour market composition, and they’re not given a lot of advice to make decisions about what might be in their best interests.”
But what else should we expect from those teenagers, when they’ve often been encouraged to see education both as a costly commodity and as a route to a job, without being given any guidance as to how this translation/transition from education to employment actually happens? Even with advice, do we expect young students, or even their parents, to be perfectly informed consumers when it comes to selecting their degree program? Do we expect students’ decisions over a four-year period to reflect this level of information – and who is providing it? Are job market prospects the only factor affecting students’ choices, or are grades, advice and steering, geographic location, and expense factors as well? Does the job market remain static for four years, and if not, can governments and universities successfully predict its fluctuations? Apparently “[i]n the absence of reliable data, labour market experts encourage students to do their own research in fields they’re interested in” (National Post). But even if they all believed that education should be solely about getting “the biggest bang for [your] buck”, they’d have a hard time finding the necessary information to predict the future of the job market.
We’ve seen all this before in past coverage, but now the argument has returned, full-force; the CIBC piece merely feeds an existing myth, one that also meshes with (and takes momentum from) the ongoing debate about Canada’s “skills gap” and the question of the “value” of humanities and liberal arts education.
As Kate T. Lawson argues, “one thing universities can’t do is perform magic tricks”: they can’t “fix” the economy, or eliminate inequality, or somehow solve problems that are rooted in multiple facets of society, simply by producing the right kinds of graduates or research. The bizarre situation in which we find ourselves is one where it apparently makes sense to increasingly privatize the cost of education, then expect students to make “choices” that are for the larger (public?) good in terms of the economy and the job market. When students resist or fail to follow the supposed path to economic success, perhaps we can just fall back on the narrative about “Millennials” being more interested in “saving the world” than in saving to buy a new home – it’s their choice, after all.
If student choice is the problem, then the “solution” becomes an issue of steering students in the right direction and expecting universities to produce them as candidates who match the jobs available. But education is only partly about choice for each person, and those choices are only part of the ultimate “outcome”. As with many other things in life, we make decisions within parameters, and the art of prediction is not yet and has never been as finely honed as we’re encouraged to believe. Expecting students to master it and to become fully responsible for their own “outcomes” is unreasonable, and also conveniently obscures the myriad other factors at work.
Recently, the American Historical Association (AHA) posted a policy statement that caused some controversy among academics, because of its recommendation that universities should allow junior scholars the option of a 6-year embargo on electronic publication of their dissertations.
The argument goes that younger or early career researchers (ECRs) need the option of an embargo because widely-available dissertations might not be acceptable to publishers in book form. Some universities make it mandatory for students to submit their dissertations to an open online database, so the embargo would ensure that ECRs have the option of keeping their research private until it’s ready for publication.
While this policy only ensures that grad students can have their dissertations embargoed if they want to, rather than requiring that they do, what’s revealing is not just the argument that’s been provided but also the strength of the reaction and the intensity of the debate the issues involved have generated.
Some commentators have viewed the AHA’s strategy as “empowering”, but others question the idea that only or primarily a book can gain you entry into the profession, and that publishers won’t deal with a book that has previously been available online in dissertation form. Critics argue that there doesn’t seem to be much evidence that pre-published dissertations are off-putting for academic publishers. The debate also raises the question of what exactly the difference is between a book and a dissertation; they’re very different forms, with radically different audiences in mind. Not only that, but the main customers for academic books have been neither academics nor the broader public, but libraries, as pointed out here. Barbara Fister notes that “[l]ibraries have been buying fewer books no matter whether they are based on dissertations or not; they won’t buy more books because dissertations go offline.”
The key issue here isn’t the difference between dissertations and books. It’s the academic career, and what “counts” towards building it – and additionally, what “proves” quality in a candidate for tenure. The AHA post states that “although there is so close a relationship between the dissertation and the book that presses often consider them competitors, the book is the measure of scholarly competence used by tenure committees” (emphasis added). This is not just about who gets to publish what, and when. It’s about entry into the academic profession, and what kind of scholarship is seen as valid when assessing who should be on the tenure-track or tenured, and who should not.
This is why the AHA’s stance is being framed by some as “protective” of ECRs who, if forced to make their dissertations widely accessible, will put themselves (or their careers) at risk by jeopardising their chances of having books published. As the statement reads, “History has been and remains a book-based discipline, and the requirement that dissertations be published online poses a tangible threat to the interests and careers of junior scholars in particular” (emphasis added). At a time when competition for academic jobs is fiercer than ever, their argument has a lot of weight.
All this is wrapped up in the shift away from physical copies of dissertations filed on library shelves, and toward the digital databases that allow access to anyone who can get online and run the right search – a scenario that involves a different set of assumptions about who can access the work, from where, and with what results for the scholarship (and scholars) involved.
I think an important issue here is the purpose of scholarship, which shapes the work that happens and what “knowledge” is created through it. As others have discussed, historical research is a public good as well as an academic one; what use is there in assuming a few narrow forms of dissemination are the most appropriate, and will continue to be so? A particular understanding of purpose is driving the assessment of scholarly worth. If the purpose of scholarship is to get you a book contract and ultimately, a tenure-track job, then the exclusivity that is so cherished in this process needs to be maintained. If the purpose of publishing is to “launch” one’s career in an appropriate fashion, then there isn’t much room for serendipity or risk in that process.
In terms of purpose, a related example is that of the REF in the UK, which is designed to assess the “quality” of research produced by UK universities. Many critics have already pointed out the systemic effects of this process, which is also tied to funding. There is a connection between academic systems of assessment and professional advancement, and the way scholars go about their work (and make career-related decisions). Because tenure is hard to obtain, the fear factor is an important one for many junior scholars and the AHA’s initial announcement reflects this.
It’s the system in which we already operate that engenders risks from “openness”, risks that are more obvious and present for some than for others. We already have a system wherein keeping one’s work sequestered is still more of a failsafe way of building an academic career. This is why it’s so difficult to use moral arguments about the openness of knowledge: this openness hasn’t yet been institutionally validated to enough of an extent that ECRs are willing to stake their careers on it. That the AHA can argue it has also “supported” alternate forms of publishing and recognition, tells us that there may be an ongoing disconnect between what scholarly societies and funding agencies advocate, and what actually happens in a departmental hiring process.
The academic profession is in a period of flux in terms of what kind of work is valued, what forms it should take, and how it should be shared with others. In many ways academe is still in reactive mode regarding changes that have been happening for some time; technology has already helped to move scholarly activity beyond the limits of what academic career tracks have “traditionally” encompassed. Those hoping to become academics are now receiving very mixed messages about the limits of what is acceptable or desirable for professional credibility. This makes things difficult not only for prospective scholars but also for graduate supervisors and mentors who hope to support their students in developing academic careers.
If it’s less risky to fall back on an established understanding of what counts, if our decisions are governed by fear and anecdotal evidence, then the issue of giving junior scholars a “choice” seems almost beside the point. Choices tend to be governed by context. Maybe it’s time to work on changing that context, instead of trying to work within what are being treated as permanent boundaries.
The term “talent market” has always seemed vaguely obnoxious to me. Maybe it’s the extraction and objectification of “talent” as something apart from those who might have it and use it, and its transformation into a product available for sale. Maybe it’s the fact that “talent” used in this way reminds me of a circus or sideshow (not without reason). Or perhaps it’s just that it’s another term like “creatives”, which is being mobilised in an increasingly pervasive rhetoric about who, or what, is most desirable in the “new” economy (what fate awaits the non-talented?).
In any case, the “talent market” certainly isn’t a “free market”, if such a thing is possible in any context. I’ve had multiple recent reminders of this fact. One example that stands out is something I mentioned in my last post, regarding the HASTAC panel I helped organise. Two of our panel members were unable to attend in person, one because of a lack of funding and the other because of problems obtaining a visa in time.
Clearly, not all scholars can be “mobile”, at least not mobile enough to participate in the “talent market”. Laws and restrictions apply differently according to one’s nationality, life history, immigration status, place of residence, access to funds, and so on. It’s not something I’ve experienced first-hand, because citizens of countries like Canada and New Zealand (where I was born and grew up) tend to have an easy ride. In fact New Zealand seems to be viewed as one of the most innocuous nations on the planet. I haven’t travelled a whole lot, but I’ve never needed a visa (only a visa waiver). Customs and immigration officers have greeted me with jokes about Marmite and sheep. They tend to look curiously at the Māori words in my passport (or uruwhenua Aotearoa) before stamping it and allowing me to go on my way; and in addition, because I have white skin and an Anglosphere accent, I have no problem being accepted at face value.
As for some of my colleagues, they have to plan to get permission for travel months in advance; they’re missing conference events (including their own scheduled presentations) in neighbouring countries because the required forms cannot be processed in time, or because of unpredictable glitches, or new and/or esoteric requirements for visas. As the gears of government bureaucracy grind away, precious professional opportunities are lost. That’s what happened to one member of our HASTAC panel, whose visa arrived after the conference had already begun – preventing her from presenting, and also from seeing friends and family in Canada.
Another side of this issue is that there are two kinds of “mobility” in higher education. The preferred version involves having the resources and status to travel where you please, to take up opportunities in other places if you so desire. This means not only money but also prestige and other kinds of support (from institutions, mentors, and loved ones). It means either being single/unattached with little to worry about in terms of family, or alternately, having a family who are willing and able to be “mobile” as well. It means (potentially) being fluent in more than one language, preferably English plus another language. It also means having been able to demonstrate “merit” in the right ways. The candidates in this group are a part of the “élite” that every university wants to woo. They have won awards for their work; they represent the cream of the global scholarly crop. But obviously, they don’t make up the majority of those working in academe.
What about the other kind of mobility? The flip side of this deal is that some folks move because they have no choice: they have to take that tenure-track job no matter where it comes up, if they want tenure (or an academic career) at all. Mobility isn’t élite for everyone – not if it means moving far from home (possibly more than once) and working in precarious jobs because that’s what happens to be available, and not if it’s about feeling forced to leave your home country because there are no opportunities there at all. There are also more local versions of this phenomenon: in Ontario, for example, some academics have contract faculty positions at more than one institution, spending a disproportionate amount of time commuting from campus to campus. Yet even this requires resources of a kind that some academics won’t be able to access (such as a car, or convenient transit).
For all these reasons and undoubtedly many more, the “talent market” is clearly a deeply unequal one, and is not genuinely meritocratic; it’s a reflection, indeed an amplification, of existing inequalities. Specifically for academics, when combined with the tendency towards élite, targeted funding and emphasis on overseas recruitment of “the best”, and an increasingly stratified and fragmented academic workforce, we have to ask what the globalised (and polarised) professoriate is going to look like. If it’s going to follow the same lines as other aspects of globalisation such as so-called free trade, then I think we can do a whole lot better.
Much attention has been paid to student mental health issues over the past year, and recently the level of coverage peaked with a new report from Queen’s University at the end of November (PDF here). The report came from an initiative prompted by a number of student deaths by suicide at Queen’s in 2010 and 2011. On a related note, some of you may recall a post I wrote a year ago (and a follow-up), regarding Ph.D. attrition and mental health issues such as stress and depression.
Last week, a blog post from HESA’s Alex Usher invoked both the more recent media attention to undergraduate stress, and my own (aforementioned) post, expressing skepticism about the reality of an “epidemic” of mental health troubles. The post is written as a kind of “Mythbusters 101” about student mental health, and the topic is unfortunately treated as if it is merely the fad of the month (or year) in Canadian postsecondary education (PSE). While it’s always a compliment when someone engages with something I’ve written, I believe the compliment is a backhanded one in this case, since the arguments I made about graduate education are quickly dismissed as a smokescreen for Ph.D. student “angst” and fear of (real or imagined) failure.
This choice of term is revealing; “angst” is a significant word because through connotation, it both individualizes and trivializes the problem. Here, it is part of an argument about “the tendency to over-medicalize daily life”, a comment that assumes there’s nothing going on in daily life that should be considered “stressful”. The very point I was trying to make in my post, a year ago, was that the problems of stress and anxiety are not just attributable to individual weaknesses or quirks of circumstance–not when we can identify certain patterns unfolding across systems over long periods. Suggesting that the problem lies with individuals’ interpretations of their circumstances, rather than being (also) a structural and cultural one, is dismissive of the elements in an equation that may be beyond the individual’s control. We need more understanding about what those might be, rather than an assumption that they aren’t part of the equation.
As others have already pointed out, there are a number of causes for increased visibility of mental health issues among students (not just in Canada), but that doesn’t mean the issues weren’t there before or that they haven’t been building over time; they’re long-term and influenced by systemic factors. For example, the massification of PSE, and related increases to costs, have changed the kinds of students who attend university and the circumstances from which they have to work on their education. This means more students who have fewer (academic, financial, and cultural) resources to draw on, and are more likely to be struggling to keep up for various reasons. So perhaps students are “a lot more fragile, and less prepared” than in the past — but not necessarily for the reasons provided by Usher.
It’s also a problem to assess students’ financial straits primarily by looking at tuition numbers. This is a very superficial way to examine finances, no matter what other argument is being made (in this case, Usher also argues tuition has not really increased–and neither has student debt). It’s particularly problematic when we know there has been a serious economic recession that has affected finances in many ways that go far beyond tuition and other fees. Stress from financial difficulty is a serious problem to which some people have much more immunity than others. To be financially vulnerable is to be exposed, perpetually, to the possibility of loss and disruption. It often signals, or in fact creates, a parallel social isolation; this is why it’s impossible to deny the reciprocal link between mental health and poverty.
There’s also a recurring gripe, raised again by Usher and of course taken up with much enthusiasm by Margaret Wente in her latest column (the argument has been “answered” eloquently by Gary Mason here), that accuses young students of having (ironically?) “too much self esteem”. I’m not sure where this assumption could be coming from — perhaps the focus on “high achieving” students? — but it certainly doesn’t match the experiences I’ve had working with undergraduate students in universities over the past eight years or so. Perhaps this is all part of the new “young people don’t have it as bad as they think” discourse that seems to be emerging, though much of that commentary is coming from those who benefited most from what young people are now losing, i.e. the welfare state systems of education, health, and pensions.
Sure, the “kids” have expectations–which were happily passed along to them from their parents’ generation. They were told that if they worked hard and went to university, there would be a job at the end of it. They were told that standards of living could keep rising, and that they could do what their parents did, but somehow do better. Yet the real bubble–that 30-year blimp of post-war prosperity–has long since gone down in flames, and we’re finally seeing the long-term effects. This is about more than changes to the job market or periodic recessions; it’s about risk, speculation on long-term “outcomes” of larger “investments” being made by people when they’re at a young age, when they cannot expect the kind of socioeconomic mobility that their parents could. Yet commentators continue to assume that all this must be the responsibility of the individual, the family, perhaps even the school system (since education is supposed to prepare us for life — and it perpetually “fails” at this).
Wente’s comment that “stress is a fact of life at university” disturbingly echoes the “everyone has a breakdown!” mentality that I described in my initial post about PhDs and depression. While she describes herself as “extremely sympathetic to the issue of students’ mental health”, the actual argument is, “if I could take it–they should be able to take it too.” But if we take a step back, the larger context might start to look like a recipe for stress; and if you think undergrad students are worried about jobs, you should see Ph.D. students who want university faculty careers. So I must disagree that structural issues in the university, and in the larger society and economy, can be written off so easily as “angst”.
I would argue that comparisons to the 1990s are not really useful, because the problems of disappointed expectations and increasing stress (over outcomes), both at the graduate and undergraduate levels, are not just blips on the historical-economic radar. They signal the end of a way of life, or rather, a life trajectory, and at a deeper level, a kind of betrayal of trust that further dents our faith in social progress. Whatever we may think about “kids these days”, one thing’s for sure: unless you start out in a nice solid position on the socioeconomic class ladder, sustainable ascension is more and more of a challenge. That means it’s harder to have the things in life we’ve been told we should want — a home, a family, some security for the times when we can no longer work to sustain ourselves.
From what I can tell, the majority of young people entering university want to be able to do something reasonably meaningful, and sustainable, with their lives–without having to be perpetually concerned about whether finances and lack of social capital will trump opportunity at every turn. If those expectations are too high, then I would ask, what exactly is “reasonable”?
Sometimes I find there are threads of conversation that keep coming up with friends, colleagues, and students, both in person and online. Recently one of those threads, which has also recurred in my own blog posts, is the focus on skills and outcomes in university education and the apparently perpetual critique of universities’ capacity to help students gain what they need to be “successful” (in the workplace and in life more generally).
Over Canadian Thanksgiving weekend, the Globe & Mail began a two-week-long series on postsecondary education in Canada (full disclosure: I also participated in this series). One of the themes explored in print has been that of the “core curriculum” vs. “specialization”, and which one works best when it comes to preparing students for developing their careers. In particular, articles by James Bradshaw and Cathy Davidson explored the benefits and difficulties of advocating a curriculum shift towards less specialized, more “liberal arts”-style approaches.
The question of curriculum in university education is posed (in the articles) primarily as a choice between a broad-based approach involving common “core” elements determined by the university and, on the other hand, more student choice and more specialized options. The appeal of a liberal arts education is that it’s “broad” and supposedly flexible; flexibility, we’re told, is what’s required for a successful career these days.
But nothing is flexible if you aren’t aware of the options it opens up. One perennial irony is that it’s almost impossible to gain the benefits of this kind of education without abandoning a certain kind of instrumentalism–the exact kind that students are encouraged to have when they select university programs. Traditionally, this freedom from anxiety about specific outcomes has been the privilege of the elite–as has the cultural capital required to make the most of liberal arts education. Now that universities have expanded beyond catering to existing elites, and costs have increased, the question of instrumentalism has become more urgent. This is also part of why we now see more explanations of the liberal arts as having a “utility” that is still translatable within economic logic.
It’s no surprise then, that James Bradshaw notes “many prospective students–as well as their parents–still consider liberal-style learning impractical.” This attitude is related to the emphasis on skills and outcomes. The focus on and demand for “marketable” skills and job outcomes places pressure on any debate about the components of education, along with the argument that employers require or demand certain skills (and that universities are not providing these–or students are not choosing to acquire these “useful” skills). Yet there’s no point in saying students need a particular skill, without placing that skill in context. Skills tend to be acquired in the pursuit of some larger interest or goal that motivates us. And that goal, that interest, that passion–that’s what students need, not just whatever is deemed most marketable in the moment.
Perhaps this is my preferred line of reasoning for a kind of common curriculum designed to provide “grounding” for students not only with some breadth of knowledge, but with a sense of the way university education works. Undergraduate students frequently don’t have a coherent path mapped out for themselves, unless they have access to cultural capital that allows it. For many, university education is the way in which discoveries and decisions about careers are made–as well as the means of carrying out those decisions. All this can generate a lot of anxiety about what the “outcomes” might be. Are we acknowledging the situation students face, or are we constructing systems that are based on the assumption that students engage in fully-informed decision-making behaviour at an early stage, with no “information asymmetry” involved?
I also agree that what Mark Kingwell describes in his article, “a sense of intellectual connection, of how things fit together and influence each other”, is a large part of the answer. Students need to see those connections between different areas of knowledge, because through those connections (new) meanings emerge. The specialization of knowledge has helped us to gain deeper understanding, but it can also hinder the learning process because specialized knowledge can be taught without reference to a holistic context. Yet we’ve spent a very long time encouraging the fragmentation of the university into different areas that may or may not be engaged in (or be willing to engage in) interdisciplinary exchanges. This fragmentation has affected not only organizational forms and policies such as funding structures, but also the culture of academe.
It’s really metacognition–“thinking about thinking”, or as Lawrence Summers described it, learning “about how to learn”–that is at the core of what students need, no matter what their area of study. It’s something that underpins critical thinking, aids our adaptation to new environments and experiences, and helps us understand our strengths and how to use them. Students tend to do best when they know their own interests and talents, and are themselves determined to work to take things further. Without that desire, how can learning happen at all? This kind of self-awareness is vital, aided by advice, mentoring, and a pedagogy that must overcome the theory/practice, academic/“real-world”, and content/process divides that permeate so much of our thinking about education. An old adage applies: this is the difference between giving someone a fish, and teaching them to catch their own.
So instead of questioning (for example) “are students getting the ‘right’ skills to get a job?”, we could ask: can we foster (self-) knowledge and skills at the same time, and how will that look for different students with various needs and resources? I think it’s questions like those, rather than the ones about market demand, that are central to the kinds of problems we’re trying to address now in university education.
Recently, Colorado State University and Harvard University each posted job advertisements that included specific time limits on when the candidate’s PhD had been earned. Colorado State’s ad requested PhDs only from 2010 or later; and Harvard’s ad read: “Applicants must have received the PhD or equivalent degree in the past three years (2009 or later), or show clear evidence of planned receipt of the degree by the beginning of employment”.
Both Colorado State and Harvard ended up re-writing these descriptions, but not before the ads set off a good deal of discussion online. UK scholar Dr. Ernesto Priego has already pointed out the incredibly time-consuming and expensive process of applying for academic positions, and I’ve written in the past about the large amount of unpaid work that all candidates must put in to stand a chance at a long-term academic position. Jo VanEvery cast a new light on this by raising the issue of Harvard’s recruitment of younger tenure-track academics:
In her blog, Lee Bessette articulately described the frustrations of an academic now “out of the pool” as an “old PhD”. At Escape the Ivory Tower blog, Julie Clarenbach argued that perhaps this kind of restriction isn’t actually a bad thing, because it provides an explicit signal about when we should get out of the academic job market, move on, and do something else.
I’m not convinced by this latter argument, because while it’s practical in the moment–for those “caught in the middle” between being a “recent” PhD and being an established superstar–I don’t think it helps us to address the real issue: that the job market operates in implicitly discriminatory ways, since right now it’s most definitely a buyer’s market.
I think this relates to something I wrote recently for the University of Venus blog, wherein I took a brief look at the potential effects of increased competition on young and early-career academics, and on the culture of academe in general. The dynamics change when a larger group of people are “competing” for a smaller pool of resources. In the meantime, academic socialization often demands that we continue the search (even if in vain) lest we be considered traitorous to the profession. As we know, for many PhDs this has meant taking a low-paying postdoctoral position and/or teaching part-time and/or on contract for long periods while continuing the search for a tenure-track position. Those people will of course be written out of the equation officially if job advertisements can explicitly call for recent PhDs only.
As a bit of a side note, it’s interesting that one of the follow-up articles quotes my tweet about feeling like an over-ripe piece of fruit being cast into the cider bucket (of course in the tweet that followed it, I augmented my comment). I think the metaphor is an apt one: “over-ripe” for one (narrow) purpose may mean “just right” for something else, i.e. for the making of some other, potentially very different “outcome”.
I think the really egregious thing here is the idea that after anywhere from 7 to 12 years of higher education in preparation for such a job, there is only a 3-year window in which you’re considered eligible to apply. The idea that this is “just what’s happening anyway”–even if not so much in Canadian universities–should make us all furious, considering the waste of time, talent and funding involved. So in a way, I’m disappointed that the job ads have been removed. Perhaps a solid, visible barrier, rather than nebulous claims about “meritocracy”, would finally provoke a real fight about what’s happening in the academic profession, given that the trends have already been so bad, for so long.
Last week Léo Charbonneau over at the Margin Notes blog wrote about an article in the most recent issue of Canadian magazine The Walrus, “The uses and abuses of university”. The article isn’t online yet, but my print copy arrived last week, so I was able to take a look at it. The authors, Drs. Ken Coates and Bill Morrison, argue at length that there is a “mismatch” between the kinds of degree programs that students are choosing, and the real needs of the economy. The solutions they discuss include increasing the number of STEM graduates (not a new idea), and “directing students into fields with the greatest need” through various mechanisms, including subsidizing some fields (those in economic demand) but not others.
I think it’s interesting to compare the piece to another recent article, “How the invisible hand points students to a job”, in which Miles Corak took a similar approach yet came to a very different conclusion. Corak discusses a study by Morley Gunderson and Harry Krashinsky (here is a similar one by the same authors, from 2009, in PDF format). Contrary to Coates and Morrison, the researchers found that “the higher the expected earnings in a field of study, the greater the enrollment”, with “one big exception: students choosing the social sciences”, who chose “irrationally” (i.e., they selected fields with low economic returns). Though the authors couldn’t explain the latter “anomaly”, overall they concluded that the market is functioning effectively, since students are paying attention to “price signals”.
Clearly “crisis”, invoked in the Walrus article, is a popular trope in education discourses of all kinds. So rather than trying to pin down (again) the latest definition of the term, I think it’s more interesting to ask what we can learn about society and education by looking at the use of the term “crisis”: what does its use signify, or alert us to, in our current context?
It seems that universities have been in “crisis” for not just decades, but centuries. More recently we have a “crisis literature” in higher education, a whole series of texts that take up this trope from various viewpoints. It seems as if the power to define what the crisis is enables one party or another to offer the solution, to offer a line of reasoning that supports particular actions. The word has an urgency to it, and it calls out for an immediate resolution, one that must not be delayed lest the situation deteriorate to some theoretical point of no return.
In the examples I’ve shown here, the (potential) crisis is an economic one, a crisis of the market, a matter of the right levels of “human capital” production in the right areas. What is lacking is a match between graduates “produced”, and skills in demand. This is why, in both articles, the burden of appropriate choice is placed on young students: as when Corak states, “make a wrong turn in these hallways and you will pay for years. But so will the rest of us”.
The argument about a skills/jobs “mismatch”, as shown by Coates and Morrison, is part of what may be characteristic of more recent critiques: universities are being responsibilized for economic failures of the nation, on one hand, and for entrenched socio-economic inequality, on the other. I think this is why we have the bizarre spectacle of Harvard professor Niall Ferguson claiming that American universities support a “caste system”.
In a way this makes sense, given that this most recent “crisis” involves the massification of the university and a concomitant shift of expectations. Such demands are being projected onto the university at this moment not because universities are necessarily failing in some inherent way, but because in the larger context, these are the terms of assessment (i.e. those of the market, of national economic competitiveness). For example the entire concept of “disruption” — so popular now — is tied to the idea of an education market and the creation of new technology-enabled “business models” for it.
Instead of questioning the latest version of “crisis”, perhaps we should ask whether a central characteristic of universities over time has been the tension between social context and the desire/demand to create “new” knowledge–the somehow-understood expectation that the university should be both of this context and also outside of it. The focus on universities’ slowness to adapt to external change is a part of this tension as well, especially if we consider the simultaneous obsession with universities as centres of “innovation”.
As always, there are no simple answers here–at least none that rely on the university as the means of resolving deeper societal contradictions. But I think it might help if we start asking different questions when we hear those words, “crisis”, “failure”, “disruption”.
Today’s post is about a thread of discussion happening over in the UK, where there has been an increasing amount of debate about the role of post-secondary education in “social mobility” (hint: the argument is that universities should do more to make it happen). In a bizarrely perverse twist, one commentator even argued that universities should have to demonstrate how much social mobility they provide, and that part of their funding should be dependent on it.
This discourse of responsibilization is nothing really new. Since at least the 1960s education has been handed the role of the “great equalizer” in unequal societies. Even once we nominally acknowledged structural rather than just individual constraints, education was (ironically) expected to alleviate these–whilst simultaneously being critiqued for reinforcing economic and social hierarchies.
If we accept the premise that we live in a capitalist democracy, we also need to remind ourselves that the term is practically oxymoronic. On the one hand we have an emphasis on economic competition and consumerism, and on the other, the notion of participation and equality for citizens.
In this system we also have a conflict between the idea of “goods” (not just “consumer” goods) being available to every person if we work to raise the overall level of well-being in society, and the idea of a kind of zero-sum game of prosperity. We can see the rising tide has not lifted all boats; when someone “moves up” others are probably either staying where they are, or moving down. “Mobility” does not occur in a vacuum, and now the shrinking and destabilizing of the professional classes is occurring alongside universities’ expansion.
Increasingly it is universities that are viewed as the cure-all for the chronic affliction of economic regression. On the one hand they are expected to increase “innovation”, the magic bullet of the moment. On the other hand, they should be training appropriate human capital for the workforce, and the individual benefits of this are expected to come in the form of increased income, which is often used to justify tuition expenses.
But the university itself has been changed. The massification of university education means that even as accessibility has increased, the university field itself has been stratified and hierarchized, no longer automatically providing “elite” credentials and social capital that formerly allowed a small group to continue its entrenched dominance (or in fewer cases, for less prosperous individuals to join dominant groups).
All this is to say that what’s expected is for education to transcend its context, which is one of neo-liberal marketization, increased competition (for individuals, organizations, and nations), and unstable government funding. But how can education be required to transcend the strictures of capitalism (and indeed, help individuals do the same) even as it is being increasingly subject to them? If education is to be the “answer” then it must undo what we’ve already done, somehow–even while it’s a product of, and affected by, those conditions.
When we talk about “mobility” I think what we need is a re-framing of the issue. Can education continue to drive this very particular model of “progress” with which we’ve been living for so long? Can education perpetuate the illusion of endless potential prosperity, while itself subject to austerity? There is no cure-all, and the notion of education as providing access to higher levels of socioeconomic status is one that must be pulled apart.
The danger of the assumptions I’m describing lies in the consequences of education’s “failure” for the educational professions and those working in them. Education has been subject to attacks at the primary and secondary levels as those forms of education have been seen to fail at their appointed task. Now that a PSE credential is seen as almost a necessity (for a “good life”, or at least to stave off unemployment), universities are beginning to be subject to the same attacks. But the attacks will be more intense given that the price paid by individuals for their university degrees is continually spiraling upwards. It’s no coincidence that the mobility debate is occurring in the wake of new tuition policies implemented in England, where students have now seen a tuition ceiling rise from £0 before 1998, to £9000 in 2012. Unsurprisingly, one new argument in the UK is that “if the sector cannot prove its social worth then it can only expect further cuts” (Atherton).
This requirement of “proof” is only one more way in which education is being scapegoated for problems that require changes to other areas of society and governance as well. To abuse a hackneyed metaphor, how do we untie (or slice through) the Gordian Knot of inequality, when it seems engendered by the very system we have created? This deeply complex problem can’t be solved by universities–which are a part of the unequal system as well as a part of the answer to it–even if we try to make them “accountable”. If this is the answer being provided for economic regression, I think it’s time to re-focus on the big picture and ditch the reductionist, responsibilizing rhetoric.*
*But not the alliteration.
Yesterday, as I was taking a short break between grading assignments and exams and working on my dissertation, I found myself amazed to be reading an article from the Guardian UK wherein the author argued that in spite of what others might say, academe is not a stressful place — in fact it’s the best possible place to work.
This article, which is obnoxiously entitled “Academia, stressful? Not for me!”, is by graduate student (postgraduate, in the UK) Katie Beswick. Ms. Beswick writes, after a cursory nod to the legitimacy of other people’s stress, “I’m familiar with the problem. But, personally speaking, I still don’t get it.” She then proceeds to list the reasons why academe — or rather, a very idealized version of it — is the ideal work environment.
I want to make it clear that I do not see the university in a wholly negative light — of course not. There’s a reason I’m there. Indeed, I want to understand the way the university itself functions, and why, and how we can make it better. But the research and reading I’ve done about higher education suggest that this post’s author has been shielded from some harsh realities. This is why, when I read about her “instinctive inner eyeroll” at the “complaints” of others, I’m afraid my own physical reaction was something more akin to gagging.
Yes, everyone experiences something different in graduate school and in the academic job market and workplace. But what’s deeply offensive here is the imperious tone expressed, the personalization of the problem and the suggestion (assumption?) that those who criticize are merely whiners. All these are familiar means of dismissing the legitimacy of (well-documented) experiences of others. It’s impossible to take seriously an argument that describes “an onslaught of moans” from fellow students and professors and wishes they would “stop bloody whinging!”, given the context of the comments and the vast body of research literature that contradicts these superficial statements.
So if you’re a graduate student and you’re enjoying life, then let’s talk about some of the conditions of that enjoyment. Firstly, you made it in. That means you’re less likely to be from a low-income background, or to have suffered discrimination as part of a racialized group. You’re less likely to have been persecuted for being gay, lesbian, trans, or otherwise queer-identified. You probably don’t come from a “second-class” nation in the global hierarchy, one without the research infrastructure to support your endeavours, or lacking the kind of education system required to propel you into university in the first place.
It’s less likely that you’ve had family troubles that distract you from getting work done. In fact, your family probably provides you support — moral and emotional, financial, and perhaps even academic (you might also have a partner who now supports you in similar ways — particularly if you’re male). Partly because of this, you don’t work more hours at your outside job than you do on your studies — and your job is more likely to be related to your career goals.
You’re likely to be free from health problems that could prevent you from getting academic work done and from earning a living. You’re free of significant debts, or perhaps you don’t have to worry about tuition payments, rent, or costs of upkeep for any dependents. You’re not a single parent. You don’t suffer from anxiety or from any mental health issues that might impede your academic performance or social integration in the academic environment. You probably don’t have a disability; you’ve probably never lived on food stamps or other forms of social assistance.
In a Master’s or PhD program, to do well you need a good relationship with your supervisor, as well as appropriate mentorship and an academic environment that’s supportive and integrative, and some degree of financial stability. These supports help students finish their studies within appropriate time limits.
And if you’re not at all worried about finding an academic job, is there something you know that the rest of us don’t? It seems more appropriate to consider what information one would have to lack, in these times, to pose the question: “what’s everyone so stressed about [in academe]?” As one commenter responded, “I think once you finish your PhD and start looking for an actual job, you’ll be able to answer your own question quite easily.” Or perhaps a quick read-through of the comments on my article about PhDs and mental health.
Do the contextual factors described above necessarily prevent us from achieving our goals in academic careers — or from being happy? No, definitely not. But we must acknowledge that these factors contribute to people’s experiences, and that they make academe harder for some than for others. While universities are indeed admitting more students who don’t fit the “ideal” model, that underlying model persists. The university is a changing environment, and the demands of an academic career are changing too. This has increased the pressure on early-career academics, not least in the UK, and it must be taken seriously as a cause of re-stratification and increased gatekeeping.
Is there a productive way to make the point Ms. Beswick is getting at? Of course there is. How about “I’ve had a great experience in academe, and I’m thankful for that because I know it’s not that way for everyone. These are the things that made it good.” That would be a better way of “framing” the truth, and it might even lead to consideration of what makes life “better” for some of us and less enjoyable for others.
This week I’m taking a bit of a break from the news and paper-writing to recover from the past six weeks of work. I’ve been pondering the writing “process” and why/how it works (or doesn’t) for me and others.
A friend wrote a few days ago and asked if I had any advice about getting over the fear of writing that tends to alienate us from the process. At first I thought I didn’t have any advice to give, since I still struggle with writing so much myself. My ideas seem to require long periods of percolation and then writing often happens in short, intense bursts, which is inconvenient in the academic context. Like many others I also suffer from the fear that everything I write is somehow inept and ill-formed, unworthy of being paraded in front of an audience other than myself — and I constantly question every point I’m trying to make. But I’ve had to make writing “work” somehow, so what does that process involve?
I actually have little “rituals” that I use to get myself into gear for writing when I know I have a lot to do. I think those mundane, taken-for-granted habits are interesting to share, because they reflect our relationship with writing as a process and they tell us something about approaches to learning and thinking.
For example, the physical space in which I write has an effect on how much I can focus. I’ve been attributing this, perhaps erroneously, to the fact that I lean heavily towards “visual-spatial” modes of thinking and understanding — and I always feel as if physical disarray only exacerbates a kind of mental clutter from which I’m already suffering. I think this is why I often do housework before sitting down to write, and on writing breaks. For me, cleaning is a great way to take a break because it’s a bit mindless, it provides some physical activity, and there’s some immediate gratification from the results.
When I’m actually piecing together what I’m writing, I struggle the most with structuring my ideas. I’ve always had a problem translating what I’m thinking about into the relatively linear approach that seems to be demanded by words on a page. I end up using a lot of charts and mind-maps, and the process of (literally) drawing out ideas helps me to understand them more and to make connections and solidify points. In the past I’ve had charts all over the walls around my desk, reminding me of the “big picture” I’m trying to look at even as I work to refine some small element.
Most of us have experienced “writer’s block” at some point, so how do we generate the momentum to return to our writing over and over, re-articulating the same ideas in better ways, or trying to develop new ones? One way I like to do this is by going through mental exercises — like returning to the “big questions” that triggered my interest in what I’m working on. What question was it that grabbed my attention? What connection provoked a response? Another approach involves allowing myself to jot a lot of notes without having to connect them; then I can cut and paste them into groups that make sense later on. Often when I’ve written a draft, even if I’m not happy with it, I’m afraid to chop it up for re-editing in case I “lose” something; so I start another document instead, and allow myself to cut and paste as much as I want.
A lot of “blocks” — and procrastination — are caused by underlying fear that nothing we can write (or think) will be good enough. Perfectionism, which can be fuelled by that fear, is an oft-cited problem for graduate students and this manifests regularly in the act of writing.
I try to deal with my own perfectionist tendencies by finding ways to take the pressure off myself. I pretend that no one will read what I’m writing, other than me — not that this necessarily helps, since I’m my own worst critic. During a conversation on Twitter Andrea Zellner raised this issue, saying she “actively [ignores her] inner critic in a big way.” Since “no writing will ever be perfect” we need to know when it’s “good enough” so that we can move on, share it with others, then accept criticism if and when it comes (deadlines often help with this!).
Fear also begins to wear off when we receive supportive and constructive feedback over time. That’s what helps us to build up enough fortitude that we begin to overcome our fears about public exposure (for me, blogging has really helped). For graduate students, writing is often an anxious process because of the awareness of a new level of competence required, and new audiences to which one’s writing (and one’s self?) will be exposed — peers and colleagues, journal editors and readers, conference attendees. The art and craft of writing is highly personal and is approached in different ways according to temperament, experience, convenience, compulsion and emotion. What methods do you use — and do they work for others?