“Everyone loves to identify things that have not been identified.
The rabbit hole, wherever I find it, symbolizes solitude.”
–Terrance Hayes, For Crying Out Loud
Recently I’ve been finding it much harder to blog because I’ve been homing in on certain aspects of my dissertation, which has taken up an ever-larger chunk of my focus and thinking time. This is a good thing of course, but it means I’ve also been more impressed than ever at how others are able to write excellent and timely articles and blog posts on the latest issues, while I can barely keep up with the higher ed news.
Part of the problem right now is that I’m immersed in a couple of specific data-gathering tasks and it’s almost impossible to avoid thinking about them when I’m “swimming” in so much information. For example, one of the tasks involves news articles, thousands of them. I’ve mastered the art of combing through the lists of articles generated by my searches, then skimming the articles themselves for search terms, to make judgement calls about their relevance based on how many times a word is mentioned and in what particular contexts, the topic of the article in relation to my study, or what the article tells me about the organization that is the focus of my project. At the same time I’m trying to piece together a bigger picture by combining these articles with what I’ve already collected from other sources. Initially I wasn’t sure I wanted to do this kind of news search for my project, because of the time it would take; but once I realized it was generating useful information, I knew it was the right move.
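As a rough sketch, the term-frequency-plus-context heuristic described above could be expressed in a few lines of code. This is a toy illustration only, not a tool I actually use: the function name, the scoring rule, and the context-window size are all invented for the example.

```python
import re

def relevance_score(text, terms, context_window=50):
    """Toy relevance heuristic: count occurrences of each search term
    and capture a snippet of surrounding text for manual review."""
    text_lower = text.lower()
    score = 0
    contexts = []
    for term in terms:
        for match in re.finditer(re.escape(term.lower()), text_lower):
            score += 1  # one point per occurrence of the term
            start = max(0, match.start() - context_window)
            end = min(len(text), match.end() + context_window)
            contexts.append(text[start:end])  # context for human judgement
    return score, contexts

article = ("The university announced a restructuring plan; "
           "the restructuring affects three faculties.")
score, snippets = relevance_score(article, ["restructuring"])
```

The point of keeping the context snippets is that a raw count alone can’t make the judgement call; the human reader still decides whether each mention is actually relevant.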
Each of these decisions about relevance is important, because it contributes to a research project that in turn makes claims about what is or can be “known” about my topic. If the goal of research is to expand knowledge, then we have to think carefully about what we use to build the scaffolding on which our claims rest.
In my own experience there hasn’t been nearly enough emphasis on the fact that this process is not (just) about “producing a dissertation” or “doing a PhD” but also about creating knowledge. Each decision we make about research is rooted in this. Epistemology isn’t just a branch of philosophy; it’s something we should know about in relation to the most mundane aspects of research tasks at every level: why choose a particular source for information? Why use more than one source? What assumptions inform your analysis of that source? How does it contribute to “knowing” something about the subject of your study?
My research materials have been somewhat overwhelming from the start; there was no clear path to a structure, even though I had built in boundaries. Since my project is partly about pulling together multiple versions of events to create a kind of multifaceted account of organizational change, it was hard to figure out where to begin. If there isn’t one “true” and essential story, what’s the first story or piece of information that you’re going to build on? I sometimes think about how this is like making a rope by twisting together fine shreds of fibre – you start with something thin and impractical, and gradually, you add more and more, using the initial thread as a guide, until there’s a recognizable form.
That’s one of the things that drives me to keep investigating – realizing I’m stringing together a lot of information that’s never been collected and organized in this way, and that there are probably implications that I can’t yet “see” (but eventually they’ll be visible). Of course, all I “see” right now is each individual tiny piece that must be brought into coherence with other pieces in a complex image that’s not yet clear to me. It often becomes more evident through immersion in the material, as I’ve discovered in the process of transcribing interviews.
I think part of what researchers, academics, and others doing intellectual work learn through working is to have a kind of split focus. Each task requires a lot of concentration, but there are many tasks to accomplish and you can’t allow them all to crowd into your mind at the same time. The distractions, even within the scope of a particular task, are plentiful and it’s pleasurable to indulge in a ramble through the curious little backwoods paths that keep emerging in the data. I often think of tribbles, from the original Star Trek series: you have one, and then it just keeps multiplying. An article comes up with content that relates to half a dozen other articles, or to other issues that might be relevant, so you think you should probably check them out – just in case. Sometimes I end up with 5 browser windows and who-knows-how-many tabs open, along with Word files full of notes, PDFs of journal articles, news items, institutional documents, and whatever else seems helpful for the tangent being followed in the moment. If I don’t at least set things aside when they spring to mind (as opposed to ignoring them completely), then I can’t focus enough to get any single task finished. This is why I have about 40 draft blog post “ideas” sitting in a folder, waiting for later. I’ve had to learn how to keep the scraps organized.
I’m not alone in this experience of going “down the research rabbit hole”, as fellow dissertators on Twitter remind me on a regular basis, thankfully. I expect (and hope) that over time this need for focus becomes easier to manage, as does the uncertainty about each decision along the research path. In the meantime, I have a lot of news articles to sift through.
This post is about one of my favourite issues in education and various areas of knowledge policy: the attempt to use policy to reliably generate the unpredictable.
As an example, one of the themes that recurs in certain kinds of policy design is the idea of creating a geographic hub of innovation, a golden patch of tech turf that connects universities with businesses and governments, and generates technological change that pays off for all involved: in other words, a new “Silicon Valley”, which is the touchstone for these kinds of discussions. Re-creating Silicon Valley has become a policy goal, with the desired result that economic competitiveness will follow; hence the appearance of “science/research parks” and “innovation districts”.
There’s plenty of research about innovation, organizations, and technological discovery and development that strives to explain why we see these much-lauded historical irruptions of creativity, innovation, and of course commercial success. The idea is to be able to make this happen in a deliberate way by adding the right ingredients into the socio-political-economic mix.
For example, I think there’s an implicit understanding that space and physical proximity have effects on eventual outcomes in “discovery” when creativity and innovation are the goals. At a time when we’re hearing so much praise about online spaces and their possibilities, I think this is another sign that certain configurations of place and space are scarce commodities in research and higher education. The value of these commodities is such that they’re assumed to be part and parcel of the construction of conditions where “innovation” will flourish, and in elite contexts they’re accommodated as such.
Serendipity in teaching and learning, too, depends partly on the unpredictable outcomes of social contact. Creative sparks can fly when we’re bumping up against other people and their diverse ideas and perspectives, the eclectic combinations of knowledge they’ve built up over time. Each person who takes in knowledge also changes it through the process of knowing. In this way, it could be said that all education is built on a series of chance encounters.
Of course we can’t really plan for chance, which is possibly why it’s the one ingredient left out of most of the formulae we see being applied. Chance is inefficient per se, problematic in terms of the actual goals of planning, which require the assumption of at least some form of certainty. Better to make sure that other things happen, the things we can guarantee, speaking in those terms instead of the nebulous shades produced by the idea of serendipity, accident, and so on. The power of the serendipitous lies in its very unpredictability, but try explaining that in a grant proposal.
Here in Canada, the federal government isn’t much into chance, unless you count the gamble we take by developing policy without sufficient evidence available to inform it. This government is unlikely to solve Canada’s perennial “innovation” problems by targeting large amounts of funding to those projects it deems most meritorious. Nor will students find answers for themselves – or for the much-lamented “skills gap” – by trying to engage in the kind of advanced futurology that is now expected of them as they prepare to enter the job market.
To return to Silicon Valley and its clone zones: where knowledge is tied to governance, where policy must make predictions, we place bets on future success by attempting to emulate success seen elsewhere in the past and present. The historical analysis is in some cases a selective one; rarely if ever do we see calls for military involvement in new innovation hubs, yet Silicon Valley’s prosperity was built more on US military funding than on venture capital, as is pointed out by Steve Blank.
For good education and research to happen, even for those eventual economic benefits to materialize, we need place/spaces where we can allow for possibilities and work through failure and permit experimentation, where we can learn how to take chances and follow our noses – while encountering others – rather than just building on an assumed formula for success. After all, it doesn’t matter how high the stakes are; we can’t know the future, and if we can’t imagine a new model of success, we won’t be able to deal with whatever changes the future brings our way.
I first came across the term Critical University Studies (CUS) when it was mentioned in a 2012 Chronicle of Higher Ed article by Jeffrey J. Williams. The likely reason I hadn’t heard of this “emerging field” was that it seems the name hadn’t been used very much before, other than by Williams and Heather Steffen as discussed in the article – though Christopher Newfield has been described as one of the scholars who “helped to found” the field. Because the term wasn’t coming up in the framing of daily discussions I’d been seeing and papers I’d been reading about the critique of academe, I didn’t think about CUS again until recently, when a colleague in the UK, Dr. Martin Paul Eve, wrote to me about a project he’s working on that addresses some similar issues (he also wrote this blog post). Since my name was being connected to the aforementioned emerging field, I figured I’d better look more closely at what’s been said about it.
It was odd to realise that my work probably does fall within CUS as described by Williams. Something I’ve always said about what I do is that “I don’t have any discipline”. It used to be more of a joke, but over time I’ve come to realise that compared to friends in sociology, history, and various areas in the sciences, there’s little definition in the area where I work. This doesn’t seem to matter much – I feel the underlying themes are pretty clear, even though my degrees are in 3 different “areas”. But I’ve always known that it does matter when the academic system is one where professional socialisation and advancement are still aligned heavily with one’s specified field or area of research.
Some examples include the system of divisions between and within academic conferences; graduate and undergraduate programs, built on claims about legitimate areas of knowledge and study; the themes of academic journals; and of course, hiring practices for tenure-track faculty, wherein candidates are sought by discipline or field. This is why in practice, to be interdisciplinary means one must really be multidisciplinary, able to hold one’s own in a number of academic contexts, and adapt according to the opportunities available.
What then is the purpose of naming a field? What are the consequences of creating and using terms in this way? Once something has been defined, it takes on a new “realness” and becomes something people think and talk about in new and specific ways; Williams states that naming “recognizes that [the work] has attained significant mass and signals a gathering place for those considering similar work”. I’d add that when the boundaries are set out, we’re encouraged to define ourselves and the scope of our work in relation to them. More academic legitimacy can be gained when a research area can be discussed in terms of a body of work and/or a group of researchers, and when we have means of making a distinction between what belongs and what does not. In asserting that a field is “emerging” Williams positions himself at the forefront of a now-recognizable movement.
I’ve already noticed a number of these boundary issues arising in the debates about the digital humanities, particularly questions about who is a part of the field and who is not; whose work fits, and whose doesn’t; and who is “co-opting” the terms. When we consider that related claims can underpin requests for research funding, new publications, student recruitment, and other forms of (material) systemic recognition and reward, then it makes sense that there would be debate over the boundaries that separate those “in” the group from those outside it. If hiring and funding still happens based on field and discipline, then PhD students and early-career researchers are likely to align themselves accordingly.
For me the important point about CUS is that this work, which has the goal of critiquing the existing university system, is of course being enfolded and constructed by the same processes it criticises: the need to stake out academic territory and build upon it the infrastructure that will mesh with existing systems of assessment and professional advancement. A new field has been designated, but it’s one that should logically begin with a critique of the conditions of its own creation; can such a field transcend or violate those conditions and still “survive” within the institution? What are the consequences of survival?
There are politically positive uses for institutional recognition: I’d ask, will having Critical University Studies as a recognised field protect early career scholars who feel they can’t criticise the academic system “from the inside” without sacrificing their careers? Will it bring people together and help organise and inform so that the issues (which affect all academics) are being discussed more openly and in a more complex way than in sensationalist media accounts? And will it be open enough to include all those with stakes in the future of university education, not just those who hold particular political and theoretical views?
Considering the long-running debate about changing the university to meet multiple and divergent “needs”, I think these discussions are more important than ever, but they’ll need to engage diverse participants. If constructing a “field” is something that facilitates these things – rather than merely delimiting another narrow professional grouping – then I’m all for it.
After the post I wrote recently about innovation, I noticed that yet more articles have been popping up in the wake of the report I was discussing, including this one by Tom Jenkins who was part of the team that produced the report.
Then, as I was listening to Radio NZ recently, I heard a BBC history segment on the relationship between Edison and Tesla, and some of Tesla’s attitudes made me think of the way Edison’s example is invoked by Jenkins both in the report and in his article.
There is a long and complex story behind the relationship between Tesla and Edison, but suffice to say that after one of their conflicts Tesla ended up digging ditches for a while. In any case, Tesla apparently regarded Edison as a mere tinkerer, someone who purchased, and marginally improved on, the things others had created. Tesla’s assessment may have been drawing a line between discovery and invention, rather than between invention and innovation. Edison pioneered a kind of production-line process using many assistants; he also raised capital before embarking on a venture (sound familiar?).
From this description it seems to me that Edison was an entrepreneur, while Tesla was more of a scientist. And in the end, Tesla’s work is still with us, in spite of Edison’s PR campaign against him during the “War of Currents.” Politics was indeed at play.
Roger Martin, dean of the University of Toronto’s Rotman School of Management, has also chimed in on the innovation issue, with an article about Steve Jobs as consummate “innovator.” Martin also argued earlier this year that SSHRC had shortchanged MBA students by not offering them designated funding (even though this is not how it works for other disciplines; and MBAs don’t usually do academic research). Clearly the arguments about “innovation” are also arguments about resource distribution. Policies have been critiqued as failing, but this isn’t enough for advocates to leave off asking for government funding and planning of R&D.
James Colliander, a math professor at U of T, has responded with a blog post in which he argues that the point about Steve Jobs is off the mark, since rather than producing original scientific or technological advances, Jobs produced original synthesis and design.
So it seems we’re still stuck on the notion of innovation as a means of producing marketable objects and processes, or so I gather from quotes like this one (from Martin): “commercial success and impact is more about innovation than about invention.” While this “is typically the product of the curiosity of a scientist”, “it can be pretty irrelevant when it is a technology in search of a user.”
It’s interesting to see that, over in the UK, inventor and entrepreneur James Dyson is funding a new professorship at Cambridge. Dyson argues that even when we don’t know where research is going, it’s important to invest in it, especially when the government is cutting back. It may seem a shame that we live in an era when the noblesse oblige of large corporations and private foundations is what we must rely on. But at the same time, government intervention as a primary mobiliser of science and technology is relatively new in history, and discovery and innovation have not usually occurred in just one isolated environment.
Arnold Pacey argues in his book The Maze of Ingenuity that the great scientific discoveries of the past took place in environments where there was more than a merely monetary motivation; such environments were also sheltered from full exposure to market forces. So somehow science must always be “protected” both from the market and also (in Dyson’s view) from undue political intervention, which itself is now linked directly to economic development.
I would argue that it’s not a question of whether scientists, business, or the government are good at “picking winners.” The issue of needing to pick a “winner” at all is the problem. I’ve quoted James Burke more than once in some of my previous blog posts (including the last one!), but here he is again, discussing the unpredictable and non-linear nature of “discovery.” I still question the use of these definitions of terms and the narratives they’re being used to construct, which are split along the old lines — applied, theoretical; practical, “speculative;” and so on. What would be truly “innovative” to me would be a change to the discussion so that these definitions were no longer the only categories available.
A new panel on research and development (R&D) and innovation led by Tom Jenkins, executive chairman of OpenText Corp., has produced a report entitled “Innovation Canada: A call to action.” The 6-member panel has recommended “a radical overhaul that includes the creation of a new funding council and transforms the country’s largest research entity, the billion dollar National Research Council.”
I think the report is interesting not only because of its potential influence on changes to the Canadian research landscape, but also because what’s being reiterated is in many ways the same story that has been told about Canadian R&D for over 50 years.
The evident goal of the panel’s proposals is to facilitate the “triple helix” of university-industry-government relations (Benner & Sandström, 2000; Etzkowitz & Leydesdorff, 2000). One example is the recommended creation of an Industrial Research and Innovation Council, which “would be an arm’s-length funding agency to help entrepreneurs bring ideas to the marketplace.” So the report fits well with the federal government’s Science and Technology (S&T) policy document from 2007, which takes as its aim the construction of a Canadian knowledge infrastructure that integrates creation of human capital (i.e. a well-trained workforce) with the production of “innovation” through links between academic and business stakeholders.
Of course, the concept of “innovation” is imagined and translated in very specific ways. Peter MacKinnon invokes this logic when he states, “innovation drives productivity growth, which in turn enables Canada to compete globally and maintain our standard of living.” For this reason the “innovation problem” becomes a thorn in Canada’s side, and is by now taken as a given. What’s also a given is that the conversation about innovation continues, even in a self-conscious manner, without policy ever “solving” Canada’s problem. Canada does not “innovate”; its businesses do not invest in R&D, and its research institutes and universities fail to collaborate with industry. Therefore Canada will always fall behind its competitors.
Similar threads of critique have continued through decades of panels, commissions, and reports, some of them government-sponsored and others externally produced by universities, companies, and think-tanks. Past examples include the Glassco Commission of 1962, which criticized the NRC for lack of R&D collaboration with industry (Dufour & de la Mothe, 1993, pp. 12-14), and the Lamontagne Commission of 1967-1977, which advocated for “permanent steps [to] be taken to bridge the gap between the academic and industrial sectors” (1968-77, vol. 2, p. 521, cited in Atkinson-Grosjean, House & Fisher, 2001, p. 13).
To me it seems there’s something of a paradox in the fact that the government is expected to provide a solution to the problem of lack of private-sector innovation. One of the perennial questions from critics is: why do companies not invest in the R&D side in Canada? Could it be the “branch-plant economy,” the historical emphasis on manufacturing and natural resources, or some flaw in the national psyche? Whatever it is, the assumed role for governments is to provide the most hospitable environment possible for private R&D activities. Which leads to another major critique — that government investments in R&D never live up to their imagined potential in Canada. The argument is epitomized in Carol Goar’s comment that “for roughly 30 years, Ottawa has been pouring taxpayers’ dollars into Canada’s ‘innovation gap’ — and achieving precious little.” Perpetual disappointment tends to be blamed on the private sector problem, or on the government for producing poor policy or trying to alter the market.
Every “innovation” is built on incremental discovery, but the notion of “discovery” itself is one we should consider carefully. What does it mean to “discover” something? What does it mean for something to be “innovative”? Innovation policy deals with the economics of knowledge, where knowledge is assumed to be something that can and should be “economized” in this way. When the word “knowledge” appears in this context it’s clear that certain kinds of knowledge are (assumed to be) more closely related to “innovation” than others. What then does it mean to “discover” something that cannot be (immediately, obviously) economized? The parallels with critiques of education and its “outcomes” are not coincidental.
These are questions I’m not equipped to answer — but I do believe that “innovation” and “knowledge,” like “creativity,” are slippery words subject to narrow interpretations when convenient. When it comes to implementation, “innovation” will no longer be a rhetorical abstraction; it will be instrumentalized in some particular way. For this reason the language of policy is important; it tells us something about the way these difficult concepts are being implicitly defined, and how they will be realized in practice.