Two of the most common complaints heard over coffee in medical science labs, and the source of much mental anguish, are 1) “Is there any hope of getting a Cell, Science or Nature paper?” followed by 2) “Does my career hang in the balance?”
Currently, it seems that the sole determinant of one’s first faculty appointment in medical sciences is publication record. This whets the appetite of the junior science trainee for the coveted article in one of the big three, but such pressure can (more often than we care to admit) result in some rather unwanted consequences. This week’s retraction of a Nature article by a supervisor, a retraction that the post-doctoral fellow who was lead author refuses to sign, is one such possibility – though I suspect we’ll never get the full story of what happened in this case. In any event, when the metric for hiring is so singularly focused, those who want the job will do almost anything to achieve it. It is akin to the vast swathes of medical school hopefuls with MCAT fever who, once there were too many students with “good enough scores”, scrambled almost overnight to volunteer for everything under the sun.
So, what can be done to help spread out the enormous weight that is put on a publication record without sacrificing good judgment when it comes to deciding which fraction of young scientists will run a good lab in the future? Over the course of the next several months we will pitch some ideas for new metrics, present some pros and cons for each method, and try to draw out our readers’ thoughts on the merits of such proposals. It is our contention that such diversity in candidate assessment tools will generally help faculties and employers make choices that best fit their desired hiring criteria (be it “good undergraduate teacher”, “world class researcher”, “good team player”, or “quiet worker bee”, amongst many others).
This blog entry will touch on something that was discussed by Daniel Cressey in his article entitled “Counting Collaboration”, published online earlier this week in Nature News.
I do wish to preface any talk of collaboration metrics with a brief reminder that while collaboration (and more specifically “open” science) is highly desirable, it is not always possible or required for the success of a project. Too much of a push to collaborate may result in a lack of focus for a research group – a sort of social butterfly syndrome where everybody becomes a generalist and nobody really gets into the nitty gritty of a scientific problem. Collaboration, however, is certainly a powerful driver of new advances, especially in fields that have become dependent on rapidly developing technologies. In my own field of stem cell biology, the number of engineers, chemists, mathematicians, and others who have brought their expertise to the table and made enormous contributions (e.g. next generation sequencing) is quite telling.
The collaboration network analysis profiled in Cressey’s piece, pioneered by the University of Pennsylvania’s Institute for Translational Medicine and Therapeutics, looks at co-authorship on grants and publications, as well as the position held by each co-author (departmental colleague, university colleague, external collaborator, etc.), in an attempt to assess the productivity of research institutions. The assumption is that medical science is growing so much in complexity, and in its requirement for multiple areas of expertise, that success in translational research is critically reliant upon teams of researchers from different fields working together.
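The article does not spell out the underlying algorithm, but the basic bookkeeping behind such a metric is easy to picture. A minimal sketch, assuming each paper is simply a list of (author, department) pairs (the names, departments, and internal/external split below are purely illustrative):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical toy data: each paper is a list of (author, department) pairs.
papers = [
    [("alice", "medicine"), ("bob", "medicine")],
    [("alice", "medicine"), ("carol", "engineering")],
    [("alice", "medicine"), ("dana", "chemistry"), ("bob", "medicine")],
]

def collaboration_counts(papers):
    """Tally within-department vs cross-department co-authorships per author."""
    counts = defaultdict(lambda: {"internal": 0, "external": 0})
    for authors in papers:
        # Every unordered pair of co-authors on a paper is one collaboration.
        for (a, dept_a), (b, dept_b) in combinations(authors, 2):
            kind = "internal" if dept_a == dept_b else "external"
            counts[a][kind] += 1
            counts[b][kind] += 1
    return dict(counts)
```

A real implementation would of course distinguish more positions than a simple internal/external split (departmental colleague, university colleague, external collaborator) and would aggregate over grants as well as publications, but the co-authorship tally is the core of it.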
Working with experts from other fields can certainly catalyze developments in technology and theory that were close to impossible for those rigidly grounded within a field and all of its necessary assumptions. New insights, new techniques, and new possibilities occur when the leading edges of multiple fields intersect. In my own limited experience, the incisive questions that advance my research seem to come as often from people outside my field as from those within, and for this reason collaborative networks and inter-disciplinary research teams are a great boon to the research community.
With this in mind, Cressey suggests making non-departmental collaborations a requirement for translational research awards, as a possible method to improve research programs. Such hard-line requirements, however, are unnecessarily restrictive. My own suggestion would be to consider such collaborations when assessing the proposal as a whole, but not to discard proposals simply because they sit within a single department – the clearest failure mode being the rejection of an application from a department that is already interdisciplinary.
Alternatively, while there does need to be some measure of a scientist’s ability to work with others and the level of respect they hold amongst their peers, I fear that an intense push to increase collaborations simply for the sake of collaboration will over-extend some labs’ resources and distract from the focus of the research. A slippery-slope argument could be put forward that such required interactions will bring us to a falsely constructed consensus, where the requirement for team research dilutes out independent thought and experimental approaches.
A final concern surrounding collaboration metrics involves the “rich getting richer” problem: creating and/or expanding a network is substantially easier at the big research universities. So, if utilized, this particular collaboration metric must be used in tandem with additional consideration of the applicant’s location and current resource availability.