Who knew? Today’s researchers agree more than ever – and too much.
At least, that’s what the published literature shows, step by step, decade by decade, over the 60 years leading up to 2010. In institutional and peer-review meetings – in which being critical, let alone offering nuanced critique, too often dominates – the contrast between our research talk and our research walk is striking.
This drift toward consensus was the surprising conclusion of a recent high-profile study by Michael Park and colleagues, reported in the journal Nature. The team found that papers and patents published over those 60 years have become progressively less likely to challenge previously published work.
So what, and who cares? Well, we all should. The confirmatory trend itself held across multiple fields and disciplines, and across different citation measures and metrics. While researchers are producing more publications than ever before, there has been less and less progress in new directions. We have not a disciplinary problem but a systems problem.
The mainstream media cared too, running rampant with speculation about what researchers do and value with their time and effort. Forbes and the New York Times questioned how such a confirmatory trend was even possible amid decades of undeniable, unsurpassed scientific breakthroughs. Australia’s Financial Review reasoned that, blinded by breakthrough headlines, it is our assumptions that are wrong: research innovation is simply much slower than the world assumed. Business News latched onto the impending end of science itself, while the more austere Economist weighed explanations for the trend – and The Atlantic offered an answer in its castigation of a scientific tsunami of crummy papers.
Do we trust the study? Few are qualified or confident enough to offer a nuanced critique of it or its bibliometric methods – the science of studying science remains an under-researched and under-populated field. And however massive the study, how much weight should be placed on a single one? Debate will continue, but for now: what should academics do in response?
When we don’t like a message, our brains are hard-wired to kill it, avoid it, or undermine it. Academics’ biggest weakness is that we wrongly believe we are less prone to such cognitive biases than others. But in relation to this study, giving in to that instinct would be a mistake.
All studies, especially those of this size, involve inevitable trade-offs. This analysis was comprehensive: 30 million papers and patents (titles, citations, and abstracts) – some 45 million data points spanning about 60 years – with preliminary modelling then applied to 20 million papers across four of the largest publishing databases. The same trends appeared across multiple measures.
That said, some critics will allege that its definitions of confirmation and disruption were too crude. Others will reach for the low-hanging fruit: over-reliance on any single study is problematic. But let’s be open to more self-reflection rather than simply calling for more research. The notion that cultures and systems incrementally shape what they produce is both theoretically and empirically sound. How, then, can we encourage and support each other to disrupt more?
Recognize the stakes, costs, and benefits
First, we need to recognize that academics have personal stakes in change. The value of ever more research is not proportionate to the growing volume being produced, and there are also harmful costs. This systems issue has a grave structural accompaniment: researchers across the world are prone to poor mental health – stress, anxiety, depression, and burnout. Bullying, harassment, and incivility in research environments are also commonplace.
The way we currently approach research publishing is harming ourselves, each other, and the knowledge we produce. It risks harming public trust in academics, our values, and our efforts. In an era of misinformation, this harms not only academics and academia but also the benefits of the knowledge we do produce. Change will require us to see and do research a little differently – but doing so carries personal benefits for researchers’ work and lives. Doing nothing is not an option.
Research more, publish less
We’ve argued here before that researchers in Canada, even more than their peers elsewhere, simply publish too much research. This is not to say that individuals or institutions do too much research; rather, the working cultures of disciplines and institutions overly esteem output counts. There is an urgent need for researchers to prioritize the quality of their work – and to focus on disrupting more – even when this comes at the cost of publishing less.
We need working cultures and processes that better promote psychological and career safety for courageous, risk-taking research, and that stop esteeming above all the sheer volume of research individuals and teams produce. Universities also need more holistic conceptions of research impact in hiring and merit processes. This will require processes and procedures that incentivize and reward the quality of research over its quantity – and even over its results.
It is an inconvenient truth that academic publishing is one of the largest and most commercially profitable industries on earth. Consequently, commercial publishers are not going to readily encourage researchers to publish more disruptively, or less. It’s encouraging that Canada’s tri-agencies, the Canada Foundation for Innovation, and Genome Canada are now acting to align their evaluation of applicants’ productivity with the San Francisco Declaration on Research Assessment (DORA). This international declaration, established in 2012, seeks to improve how the outputs of scientific research are evaluated: to value kinds of output beyond publications, and to break the misplaced link between merit or scientific quality and journal status or impact factor. This should not be seen, as some in Canada see it, as compromising expected or actual standards, but as a vital step toward strengthening the knowledge that research produces and increasing researchers’ influence by reducing interference from commercial interests.
Untangle your identity and ego
Change towards a different kind of research will be slow. There’s something enduringly wonderful about getting the next thing into publication. Publishing is a buzz that never seems to get old – yet it also never feels like enough.
Indeed, the dopamine hit of the next scholarly fix brings satisfaction to otherwise challenging work and lives. Publishing less and disrupting more requires us to alter our working motivations and patterns when many of us are cognitively, culturally, and socially primed to do the very opposite. Changing our framing, emotions, and behaviours around publishing is the hardest of tasks because it touches long-established elements of our scholarly identity linked to reputation and prestige. It also involves our ego – our self-esteem, our sense of pride, and even our fears.
Yet we make trade-offs between quantity and quality quite comfortably every day: would you rather have 50 pairs of “economy” supermarket shoes or 10 pairs of swanky, high-quality designer shoes? The choice, of course, is ultimately each of ours. But as researchers we need to own it: we always have a choice in the values and practices we bring to our work. Self-awareness and agency will be necessary for change.
Review as you want to be reviewed
Overly adversarial review cultures are a petri dish for the lack of disruption, breeding conservative manuscripts and safe grant applications. Change is, of course, ultimately not only individual. Research systems, work cultures, and institutions matter too – especially for those whose work or fields have traditionally been marginalized, oppressed, excluded, or disrespected. Much action is needed from many.
Yet at the individual level, there is no better maxim for each of us than to review others as you would want to be reviewed. Too often, researchers’ peer review lacks empathy and true collegiality: it is not only too critical but also too quick to withhold the benefit of the doubt amid tight word limits and multiple considerations and trade-offs.
Reviewing others as you would want to be reviewed helps our systems more than any top-down laundry-list of prescribed requirements or principled considerations. To stay committed to disruptive work in our own research, we must encourage and support others to move away from consensus. Shifting both our work and the working cultures that got us here only ever starts with you.
The reason for the lack of disruptive science is obvious to scientists: it is a function of the funding process. Here are some reasons that come to mind immediately; no doubt others can provide more:
1) Multidisciplinary projects are encouraged and funded. This means research directions are decided by the consensus of committees, not by mavericks with unique ideas.
2) Projects are generally large and mapped out in a proposal that does not encourage flexibility or changes of direction. Having to report on the project’s progress steers the results toward the original, preconceived direction.
3) Industry participation and funding is encouraged or required. This pushes projects towards the low-risk end of the trade-off curve, driving cost up and innovation down.
4) The requirement for funding and involvement of HQP pushes research towards young scientists, who are often still learning or being trained in the current state of the science. Only once they are familiar with the current state of the science can they disrupt it properly.
A disruptive article on scientific disruption! The beauty and power of disruptive science is that it makes you think and is memorable. Much appreciated to read an article that is both thoughtful and will stick with me.
Excellent, ALEXANDER & BAILEY
People in positions of power, even power as modest as refereeing, hate disruption. As an example, a widely cited Nature paper published in 1993 reported an allegedly failed replication of the results in another, rather famous and genuinely disruptive paper. Its statement that “no aspect of the data is consistent with the previously published claims” is simply false, as anyone who reads it carefully may verify: if the results are admittedly incompatible with the null hypothesis, you can’t pretend the null hypothesis stands unchallenged. That’s scary, and people are duly scared into conformity.
The issue is money! Funds are diminishing, and what remains goes mostly to research in areas with high industrial appeal. Without basic research, there will never be any landmark breakthroughs. To get jobs and funding, you must publish a lot. We need to change what we are rewarded for – and have more money available.