I think I’ve mentioned on here before that I now work in the world of evaluation. One thing I’ve noticed since entering this world is the tendency of some people to view “evaluation” and “research” as different things. I get a lot of confused “how did you get into evaluation after doing a PhD?” reactions when I tell people I left the academic world for health care. Which has always puzzled me, as I’ve always seen evaluation as just one form of research.
When you read about (or talk to) those who conceptualize research and evaluation as very different things, you basically hear something along these lines: “research” is something you do to generate knowledge for knowledge’s sake (or “to learn more about how the world works”), while “evaluation” is done to assess the process and/or outcomes of a program (e.g., “do people learn what they are supposed to learn from this educational program?”, “does implementing this health promotion program result in a lower prevalence of childhood obesity?”, or “how can we implement this service more efficiently?”).

To my mind, the difference isn’t as clear-cut as some make it out to be. Certainly we can garner things from conducting evaluations that might be useful as “knowledge for knowledge’s sake,” and we use many of the same concepts and techniques in evaluation as we would in research. As I was thinking about writing this blog posting, a posting from the American Evaluation Association blog popped into my Google Reader: John LeVelle on Describing Evaluation. I thought the diagram there was an interesting way of looking at this (follow the link to check it out ((as there doesn’t seem to be a Creative Commons license on it, I couldn’t put the diagram into this posting))) – though I would quibble with the notions that research is always “researcher-focused” (community-based research and participatory action research, for example, aren’t) and that all research starts with a “hypothesis” (which reflects a quantitative research bias – qualitative research doesn’t start with a “hypothesis”). The key thing I take from the diagram is that both research and evaluation use the same methods and data analysis techniques.
One of the implications of the perceived difference between research and evaluation lies in the realm of institutional ethics review. Generally, if something is “research,” you have to go through an ethics review, whereas if something is “evaluation,” you don’t ((unless you want to publish the findings in an academic journal that requires an ethics review to have been conducted, or you have an external funder that requires one)). Now, I understand that if you want to review your files to check how efficient your program is or to make some process improvements, it would be unnecessary to go through a research ethics review. But in some cases an evaluation uses the exact same techniques (e.g., a survey, interview, or focus group) as a research project does (and thus one would expect the level of “risk” to the participants to be the same), so it’s not really clear to me why whether or not you publish the results should be the deciding factor in whether an ethics review is needed. At UBC the ethics review board is very clear that if you think there is even a possibility you might want to publish the results of your evaluation work, you need to apply for ethics approval before you start, as they won’t accept ethics applications after the work has been conducted (e.g., you can’t say “we were doing an evaluation but it turned out that we have publishable findings, so now we want retroactive ethics approval”) – so at the very least, evaluation can’t be used as a back-door route around the ethics process for research.
As per usual, I don’t have any solutions here – really just putting my preliminary thoughts on this out there. I welcome your thoughts/ideas/feedback!
And, on a related note, anyone interested in an “alternative career” ((though we don’t like calling any job a PhD holder has other than a professorship an “alternative career,” as it suggests that the professorship is the “right” job, we haven’t actually thought of a better name for this. Any suggestions?)) in evaluation may want to check out the upcoming Canadian Evaluation Society conference:
Canadian Evaluation Society 2010 Annual Conference
May 2 – 5; Victoria, British Columbia
Going Green, Gold, and Global: New Horizons For Evaluation
Come to beautiful Victoria on the west coast of British Columbia for the 31st CES Annual Conference. Canada is a world leader in evaluation, and we invite you to come and hear what is happening in evaluation – and with our credential. Meet professionals who can provide insight for your evaluations and discuss the challenges we all face. Listen to our dynamic speakers and panels, which include:
- Keynote: The Power of One: Simon Jackson (one of Time Magazine’s 60 Heroes of the Planet)
- Keynote: Using Olympic Principles for Success: Jennifer Walinga (Double Olympic Gold Medalist)
- Panel: Los Tres Amigos – Global Excellence in Monitoring and Evaluation – Robert Lahey (representing Canada), John Pfeiffer (USA), and Agustin Caso (Mexico)
- Keynote Panel: Summarizing and Celebrating the Conference – Four Evaluation Association Presidents – Nancy Porteous, Hallie Preskill, Florence Etta, Linda Lee.
Our themes are:
- Green – Evaluation in the context of managing positive environmental change.
- Gold – Evaluation of major events (e.g., Olympic Games) and striving for excellence in evaluation.
- Global – current trends and developments in international evaluation.
We are expecting an inspiring conference. Online registration is now open, and the early-bird deadline is March 17! Special prices are available for students, seniors, and members of CES, AEA, and AES. Check us out at http://c2010.evaluationcanada.ca. See you here.