While reading Dave’s last blog posting, I was struck by the discussion in the “How do we identify ‘good scientists’?” section, because it resonated with something that I’ve been reading for work. From Dave’s posting:
Lawrence argues that instead of giving scientists grants on the basis of what they plan to do, they should instead be evaluated on what they have already accomplished, because the former is too heavily invested in good grantsmanship as opposed to good science. […] To combat this, Peter Lawrence suggests that we “…have to gamble with research. You have to give somebody enough money and enough peace of mind to get on with it. If at the end of five years they haven’t done much, then you end the grant. That’s the way to do it. To look backwards, to see what they’ve achieved and not worry about what they say they’re going to achieve, because it is all fiction anyway.”
At work, I’ve been spending a lot of time in the area of Developmental Evaluation. In a nutshell, Developmental Evaluation is an approach to evaluation in complex situations: situations where true innovation is happening.
When a situation is stable, we tend to evaluate it in either a formative (i.e., how are things going? how can we improve them?) or a summative (i.e., did this do what it was supposed to do? was it a success?) way. In those situations, we know what we are setting out to do and then we test if we’ve achieved that. For example, we set out to build X and at the end we ask, “Did we build X?” If the answer is “yes,” then we say the project was a success.
But complex situations are different. In complex situations, we don’t know exactly where we are intending to go. We might set out in a general direction (e.g., we want to combat poverty, or we want to learn more about stem cells), we learn as we go, and opportunities and challenges emerge along the way. Evaluating complex situations in a summative way doesn’t make sense, since summative evaluations ask, “Did you end up where you intended to?” We may have ended up somewhere great, but in a place that no one could have anticipated at the outset!
Conducting research falls squarely into the camp of complexity. When you set out on your research, you don’t know exactly where you are going to end up. Of course, you have hypotheses and you design experiments to test them, but along the way you learn things that take your research in a new direction. The findings from one experiment lead to questions for future research, to lines of investigation you may not have considered when you wrote your grant proposal. Expecting someone to plan out the details of their research program for the next three to five years doesn’t really fit with how science actually works. So perhaps there is some merit to Lawrence’s view that we should judge scientists on what they have accomplished – looking at how they were able to learn and innovate through their research – rather than on what they propose to do (which they probably won’t actually do, once they get into it). The major problem with this, as Dave mentioned in his posting, is how does one get started in research? How do you get research under your belt so that you can then be judged on what you’ve done?
Since I brought up the topic of Developmental Evaluation, you are probably wondering, “Is Developmental Evaluation the answer? Is that how we should evaluate scientists?” The short answer is: I don’t know. I’m only just learning about Developmental Evaluation, but from what I’ve learned so far, it’s quite labour-intensive: the evaluator is part of the program/project team and does a lot of work documenting the complex situation, how decisions about what to do were made, and what happened as a result. I’m not sure that this approach would be feasible or appropriate for evaluating scientists, but, as I said, I’m new to this field, so it will certainly be something I’ll keep in mind as I continue to learn about it!