The Black Hole

Identifying good scientists and keeping them honest

BY DAVID KENT | MAY 22 2011

In a February 2011 interview with Lab Times, Cambridge scientist Peter Lawrence (thanks to Dean G for forwarding these articles and the follow-up discussions) reflects on his own career and complains that “the heart of research is sick” as he charts changes in the way science is pursued. Briefly, he cites impact factors and the growing need to assign metrics to scientists (number of publications, H-index, etc.) as main drivers of low-quality research, unfairly squeezing out good scientists who refuse to publish simply for the sake of publishing. Impact factor fever runs deep throughout laboratories but, most damagingly, it also exists at the funding agency and university administrative levels.

A telling anecdote he shares concerns the content of our favourite journal, Nature, which he claims has shifted from publishing broadly interesting, broadly understandable findings to a mishmash of data that is incomprehensible to a general audience. This is mostly due to the pressure on high-profile scientists to put out papers in high-impact journals and the concomitant pressure on those journals to publish big-name scientists. The result is that even if a scientific story isn’t of general interest, it gets forced into the high-impact journal because grants and careers rely on it. (Interestingly, Lawrence also notes that “before,” a scientist would submit a short letter to Nature and follow up that interesting tidbit with a full article in a more specialised journal, allowing for full explication of the data. This rarely, if ever, happens these days.)

The whole interview is long but an excellent read; for now, I want to pick up on and develop two points:

  1. How do we identify “good scientists”?
  2. A scientific ombudsman to police unethical behaviour

1. How do we identify “good scientists”?
Lawrence argues that instead of awarding grants on the basis of what scientists plan to do, we should evaluate them on what they have already accomplished, because the former rewards good grantsmanship rather than good science. Interestingly, in agreement with Lawrence, a recent study in PLoS ONE identifies the most effective way to keep a lab funded in times of low total funding:

Once available funding falls below 10–15% in our model, however, submitting many proposals, despite the tax that this represents on both individuals and their scientific communities, appears to be the only recourse if the goal is to maintain research funding.
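
To get a feel for the arithmetic behind that claim, consider a back-of-the-envelope sketch (my own simplification, not the model from the PLoS ONE paper): if each proposal is funded independently with probability p, then the chance that at least one of n submissions succeeds is 1 − (1 − p)^n. In Python:

    # Toy illustration (an assumed simplification, not the PLoS ONE model):
    # each proposal is funded independently with probability p, so the
    # chance of at least one award among n submissions is 1 - (1 - p)**n.
    def chance_of_funding(p, n):
        return 1 - (1 - p) ** n

    for p in (0.30, 0.10):  # a healthy vs. a squeezed success rate
        for n in (1, 3, 7):
            print(f"success rate {p:.0%}, {n} proposals -> "
                  f"{chance_of_funding(p, n):.0%} chance of an award")

At a 30% success rate, a single strong proposal gives decent odds; at 10%, roughly seven submissions are needed just to reach a coin flip – precisely the “tax” on individuals and their communities that the study describes.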

Together, these arguments suggest that the system pushes scientists to focus less on their science and more on their salesmanship, both in writing grants and in selling ideas and papers. To combat this, Peter Lawrence suggests that we:

…have to gamble with research. You have to give somebody enough money and enough peace of mind to get on with it. If at the end of five years they haven’t done much, then you end the grant.  That’s the way to do it. To look backwards, to see what they’ve achieved and not worry about what they say they’re going to achieve because it is all fiction anyway.

Sounds perfect, right? The big issue, though, is how we identify the people who should “get the chance.” Lawrence was lucky enough to avoid writing grants because he worked as a staff scientist with the UK Medical Research Council – but how does that selection process work in today’s world of science, where thousands more trainees are being pushed out every year? For established scientists, one can perhaps argue that this type of funding would work, because the prior work needed to judge them exists. For those at the beginning of their careers, the uncertainty is much greater. So Lawrence’s system might have some merit for senior scientists, but it will not solve the problem of PhD students and postdoctoral fellows in labs across the world feeling the pressure to land their big paper in order to establish themselves.

In fact, a big reason the impact factor is so revered these days is that it offers an easy way to bin applicants (for grants or jobs) into categories without having to assess each candidate intensively. This is especially useful in a world where applicants vary enormously in background and research topic. (If a university in the USA receives an application from a scientist in Europe who has published in many high-impact journals, that application probably gets assessed – and in theory, this openness is good for science.) However, as Lawrence correctly identifies, impact factors absolutely should not be the only metric used to judge a scientist’s worth. We’ve written before on alternative metrics here and here, but these are not the only solutions, and it will be up to us to address the problem of the “big paper” meaning more than the science inside it.

2. A scientific ombudsman to police unethical behaviour
Beth has recently posted on the Canadian Medical Association Journal’s call for academic integrity and on Retraction Watch, a website that highlights instances of poor scientific conduct. An excellent suggestion buried in this interview is to create some sort of scientific ombudsman to enforce regulations designed to crack down on scientists with few scruples. One piece of advice Lawrence received when preparing grants was to avoid giving away his best ideas, because the grant reviewers might simply steal them. This, combined with many scientists’ habit of not sharing data until it is nearly published, is exactly what we should NOT be doing with public research dollars (I’ve discussed a shining example of this before). Surely individual careers are not worth sacrificing scientific integrity… right? Yet we have seen multiple studies of the same novel finding released within months of each other, we have witnessed full-fledged scientific fraud, and we have all seen people taking pictures at poster sessions – who can we complain to?

Currently, the answer is nobody, unless you stumble upon a very honourable and just journal editor, university administrator, or granting agency. As Lawrence correctly points out, there are few teeth in the conduct guidelines that funding bodies put forward. A powerful deterrent, he suggests, is shame, and online is the place to apply it. I would wholeheartedly support his idea of an office where such instances could be reported (potentially anonymously) and investigated, with the evidence of misconduct then posted online. For these practices to stop, there must be some repercussion for those who steal ideas and purposefully scoop others. As of now, they are simply rewarded with high-impact papers and a career boost.

Finally, I’ll leave you with two fundamental questions that run as a subtext to the issues above and that we will try to address in future blog entries:

  • Is grant writing good for science? Does it stimulate better ideas than would otherwise have been generated?
  • How can individual researchers help maximise the ethical behaviour of those around them?

Another great read from the opining Peter Lawrence is Real Lives and White Lies, in which he describes the plight of a junior group leader with three years’ worth of funding and the harsh realities that set in when new money needs to be found.

ABOUT DAVID KENT
David Kent
Dr. David Kent is a principal investigator at the York Biomedical Research Institute at the University of York, York, UK. He trained at Western University and the University of British Columbia before spending 10 years at the University of Cambridge, UK where he ran his research group until 2019. His laboratory's research focuses on the fundamental biology of blood stem cells and how changes in their regulation lead to cancers. David has a long history of public engagement and outreach including the creation of The Black Hole in 2009.
COMMENTS

  1. SB / May 29, 2011 at 08:43

    I wonder if the increasing use of very elaborate, computationally intensive approaches (anything ending with “omics” for example) could be part of the reason why many articles published in CNS journals now are generally incomprehensible to the general reader. I personally find it difficult to get excited about a paper if I don’t have a full grasp of the techniques that were used, and particularly if I don’t understand how the data was analyzed.
    I’m sure I’m not the only person averse to the idea of using bioinformatics-heavy approaches to solve every biological problem, yet it seems that in many fields, using at least one such approach is now necessary to get a Glamour Mag pub. I’m not convinced that more data = more understanding; certainly having more data for its own sake, with fewer explanations of how said data was obtained and analyzed, and with less elaboration on its meaning, does nothing to make the story interesting to the reader.
    I wasn’t around “before”, but it seems weird to submit a short version of your research findings to one journal (Nature) and a longer version to another. I don’t really understand how the longer paper would be considered original research, if a version of it had already been published in a different journal! What would make more sense to me is if all journals gave authors sufficient space to describe their research findings in a way that is understandable to the readership of the journal.

  2. […] for what factors may be having an impact on scientific practice, I’d suggest reading Identifying good scientists and keeping them honest on The Black Hole blog by David Kent. Here’s an excerpt, In a February 2011 interview with […]

  3. KMBS / July 12, 2013 at 14:14

    There is one problem with Prof. Lawrence’s comments: he is actually part of the problem. He rails against the so-called “big” journals for not publishing his work, but as an editor of lesser journals, he makes sure subpar papers by his friends are published and good papers by scientists unknown to him are rejected.
    The guy is a total mess.
