The Black Hole

Meta-research: improving the way we communicate (and perform!) scientific research?

As with scientific research in the time of “big data”, the critical thing for a researcher to identify is what sorts of questions the data might answer.

BY DAVID KENT | MAR 01 2018

Often on our blog and others like it, a topic is inspired by an event or series of events that elicits a visceral reaction from the author. Something is unfair, the system is corrupt, the solution to this problem is so simple, why can’t we all just do the right thing, and so on. As a result, articles often feel anecdotal – the equivalent of a medical case study: interesting to know, but how prevalent is it really? Enter meta-research. We have all heard about how much information we have at our disposal, and we’ve also witnessed the emergence of major analytics companies deciphering behavioural patterns, voting preferences, and even zombie apocalypse scenarios which suggest the Rockies are the safest place to hide out. The question that this blog entry will try to address is how we can best use this bevy of information to solve real-world problems in how scientific research is conducted and communicated. John Ioannidis, one of the pioneers of biomedical meta-research, imagines a better world of research in his excellent summary article on the subject:

“Many ideas and solutions have been proposed about how to strengthen the research record, including, but not limited to, registration of studies, improved standards for reporting of research, wider (even public) availability of raw data and protocols, sharing, pre-specification of hypotheses, improved statistical tools and choice of rules of inference, reproducibility checks and adoption of a replication culture, team work and consortia-building, minimization of conflicts of interest…”

As with scientific research in the time of “big data”, the critical thing for the researcher to identify is what sorts of questions the data might answer (PLoS Biology has made a serious commitment to promoting meta-research). How can the behaviours of scientists in their roles as researcher, reviewer and communicator be analyzed and put to good use? Over the past few months, I’ve run a few quick “flier” experiments to try to benchmark the richness of the data and the types of revelations one might achieve by digging deeper.

One of the first questions I asked was sparked by a Boston friend of mine (not Jonathan!): did the cities/universities where authors were based change appreciably from the 1970s to the 1990s to the 2010s in articles published in the journal Cell? (Basically, has any bias for research centre location existed in the journal Cell and, if so, has it changed over time?) It turns out that I could find some fun anecdotal stuff, but on average things are more or less the same (or at least “not statistically different”) as they were in the ’70s. So while bias may exist (it depends on how you ask the question), it has been around a long time and hasn’t changed very much.
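For anyone tempted to run a similar tally, here is a minimal sketch of the counting step, assuming you have already exported the article metadata to a CSV file (the file name and column names are hypothetical, not from any real dataset):

```python
# Tally the share of articles per affiliation city, by decade.
# Assumes a hypothetical CSV export with columns: year, affiliation_city
import pandas as pd

articles = pd.read_csv("cell_articles.csv")  # hypothetical metadata export
articles["decade"] = (articles["year"] // 10) * 10

# Count articles per (decade, city), then normalize within each decade
counts = articles.groupby(["decade", "affiliation_city"]).size()
shares = counts / counts.groupby(level="decade").transform("sum")
print(shares.sort_values(ascending=False).head(20))
```

Comparing those per-decade shares is essentially all my quick check did; a proper analysis would also need a statistical test for whether the distributions differ across decades.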

The second question I investigated came from an annoying bioRxiv article that I had lined up for a PreReview journal club this week. Last time we did a preprint review, we got the feedback that we shouldn’t wait so long before getting our comments to the authors, so I purposely picked an article that came out within the last two weeks (February 16, to be exact) – and then discovered on February 21 that Nature had posted it online as an accepted “in press” article. Bogus, right? Why bother putting it on bioRxiv at all? So off I went to see how commonplace this was… After trawling through 100 or so stem cell articles posted in February, nothing else fell foul of this, so it turns out that this article was a sad “one-off” (unlucky for us!). Probably not too much more to investigate there.
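For the curious, this kind of check can be done programmatically. Below is a minimal sketch using bioRxiv’s public details API; the exact endpoint format and the “published” response field are assumptions on my part and worth verifying against the current API documentation before relying on them:

```python
# Check whether a bioRxiv preprint has already been published in a journal.
# The endpoint format and the "published" field are assumptions based on
# bioRxiv's public API (https://api.biorxiv.org) and may change.
import requests

def published_status(doi: str) -> str:
    url = f"https://api.biorxiv.org/details/biorxiv/{doi}"
    record = requests.get(url, timeout=10).json()["collection"][-1]
    return record.get("published", "NA")  # journal DOI if accepted, else "NA"

print(published_status("10.1101/XXXXXX"))  # hypothetical DOI placeholder
```

Looping this over a month’s worth of preprint DOIs and comparing posting dates with journal acceptance dates would turn my hand-trawl through 100 articles into a repeatable query.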

My article a couple of weeks ago on the revelation of reviewer identity by Nature magazine touched on an area where I think there might be some very interesting patterns and some early traction. My old friend and colleague Aleeza has already done a quick spin through papers in her field of evolution (see the comment thread from that article), but it would be quite interesting to see whether there are field-wide differences, journal differences (where reviewer identities are revealed, obviously), and/or patterns that are troubling to the average scientist. If you would be interested in helping to collect such data, please get in touch with me, as it will be one of the projects I try to complete under the newly launched eLife Ambassador programme.

There is real utility in these sorts of studies, and I’ll point readers to a few examples which should (hopefully!) lead to changes in practice:

  1. A very cool meta-research study on NIH grant review (drawing on more than 100,000 articles) suggested that peer review is good at identifying the top three percent of grants, but fails to distinguish quality among grants between the 3rd and 20th percentiles.
  2. Tracey Weissgerber and colleagues published a meta-research article on the inappropriate use of bar graphs in physiology studies. Over 75 percent of figures had sample sizes between two and six, and the vast majority of these were presented as bar graphs with standard error (a big “no-no” for those used to looking at such data). It seems that having a graph “look good” matters more than sharing the data accurately (see the sketch after this list for the recommended alternative). I’d be confident that the same is true – or even worse – elsewhere in the biomedical sciences, and I hope someone looks.
  3. Reproducibility studies such as those completed by the Open Science Collaboration or Chalmers et al. have been very instructive for identifying the frequency of non-reproducible research findings.
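To make Weissgerber’s recommendation (item 2 above) concrete, here is a minimal matplotlib sketch, with made-up numbers, of the alternative her paper advocates: showing every observation alongside the group mean rather than hiding a handful of data points behind a bar and standard error:

```python
# With n = 4 per group, show every observation instead of a bar +/- SEM.
# The data values here are made up for illustration.
import matplotlib.pyplot as plt

groups = {"control": [2.1, 2.4, 1.9, 2.6], "treated": [3.0, 2.2, 3.5, 2.8]}

fig, ax = plt.subplots()
for i, (label, values) in enumerate(groups.items()):
    ax.scatter([i] * len(values), values, color="black")    # individual points
    ax.hlines(sum(values) / len(values), i - 0.2, i + 0.2)  # group mean
ax.set_xticks(range(len(groups)))
ax.set_xticklabels(groups.keys())
ax.set_ylabel("measurement (arbitrary units)")
plt.show()
```

With samples this small, the scatter of individual points conveys the spread (and any outliers) far more honestly than a bar ever could.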

So – there’s lots to learn; we just need to start asking challenging questions and let the data speak for themselves. Surely the scientific community should be the first to jump on board with data-driven suggestions for how to improve scientific research and communication. Let’s hope that such efforts drive a real culture change.

ABOUT DAVID KENT
Dr. David Kent is a principal investigator at the York Biomedical Research Institute at the University of York, York, UK. He trained at Western University and the University of British Columbia before spending 10 years at the University of Cambridge, UK, where he ran his research group until 2019. His laboratory’s research focuses on the fundamental biology of blood stem cells and how changes in their regulation lead to cancers. David has a long history of public engagement and outreach, including the creation of The Black Hole in 2009.