With the goal of assessing the state of science and technology in Canada, it seems quite reasonable that the STIC report identifies the development of new knowledge as a key measure of Canada’s relative and absolute performance. The difficulty with such a task, though, is understanding what is meant by knowledge development – should we focus on quantity, quality, impact, novelty or some other measure? The STIC report attempts to break knowledge metrics into manageable chunks, but sadly stays far away from passing any sort of judgment on their relative importance. The attributes it identifies are:
- number of publications
- relative impact
- international co-publication
Number of publications
Perhaps the easiest of all the metrics to understand is the number of publications a country produces, and as the STIC report notes:
in 2008 Canada, with a share of only 0.5 percent of global population, accounted for 3.3 percent of scientific publications in the world. In absolute terms, this places (Canada) in 8th position after the United States, China, Japan, Germany, the United Kingdom, France and Italy.
Further notes, mostly derived from a far more comprehensive report from the Université du Québec à Montréal (UQAM), include the dominance of the higher education sector (82% of output) and the fact that nearly 70% of publications come from Ontario and Quebec. Unfortunately, all this tells us is that Canadian scientists, particularly in areas of the country with the most people ((Alberta, BC and Nova Scotia also punch well above their weight in terms of per capita publication output – they simply have fewer people)), write about their research more than others do.
This lack of quality control is where citations come in, though it’s a little unfair to equate the quality of a paper with its number of citations, for reasons we’ve alluded to before. Deficiencies aside, the good news is that Canada ranks 4th in the world in citations despite being 8th in publications, though I imagine much of this is because countries like Japan and China rely on citations within their own languages, whereas Canada benefits from the English-language dominance of scientific research ((If anybody knows where to find statistics on how many publications from Japan/China are in Japanese or Mandarin/Cantonese, it would be great to hear from you)).
Relative impact
Here it seems that the STIC report makers were stretching for a bullet point, as the relative impact index (again pinched from the UQAM report) is simply the ratio of a country’s citations to its number of publications – making the same point as above and subject to the same possible deficiencies. What would have been really nice under this bullet point is some way of measuring the actual impact of research – how much of our research really advances the field? Perhaps tracing how many patents do (or do not) originate in Canadian research publications, or documenting the number of healthcare decisions ((Questions like: Were particular drugs prescribed or not prescribed based on lab research? Did it save the public lots of money?)) taken on the basis of studies performed by Canadian researchers?
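To make the arithmetic behind this index concrete, here is a minimal sketch. The figures are hypothetical placeholders rather than values from the STIC or UQAM reports, and normalizing against the world average is a common bibliometric convention, not something the report spells out.

```python
# A minimal sketch of the relative impact index described above:
# the ratio of a country's citations to its publication count.
# All figures below are hypothetical placeholders, not report data.

def relative_impact(citations: int, publications: int) -> float:
    """Citations received per paper published."""
    return citations / publications

# Hypothetical example: 13 citations per paper against a world
# average of 10 gives a normalized impact of 1.3, i.e. papers
# cited 30 percent more often than the typical paper.
country = relative_impact(citations=650_000, publications=50_000)       # 13.0
world = relative_impact(citations=10_000_000, publications=1_000_000)   # 10.0
print(country / world)  # -> 1.3
```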
One of the big reasons for differing citation counts is the popularity of a field. Whereas biomedical science offers good “value” in citations per publication, fields such as astrophysics or comparative animal physiology are cited relatively little. A country’s citation rate is therefore affected by the type of research it embarks on, and one of the interesting things the STIC report notes is Canada’s areas of specialization:
Canadian researchers account for 4.3 percent of world publications in applied biology and ecology, and 4.2 percent in astronomy, astrophysics and cosmology, as well as 3.9 percent in engineering science. Canadian researchers account for 4 percent of world publications in basic biology, but only for 2 percent and for 2.1 percent respectively in physics and chemistry.
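One way to make these figures comparable (my own framing, not a calculation the STIC report presents) is a specialization index: a country’s share of world publications in a field divided by its overall share, which for Canada is the 3.3 percent quoted earlier. Values above 1.0 indicate relative specialization.

```python
# Sketch of a specialization index: a country's share of world
# publications in a field, divided by its overall share. The field
# shares are the STIC figures quoted above; the index itself is my
# framing, not a calculation from the report.

OVERALL_SHARE = 3.3  # Canada's share of world publications, percent (2008)

field_shares = {
    "applied biology and ecology": 4.3,
    "astronomy, astrophysics and cosmology": 4.2,
    "basic biology": 4.0,
    "engineering science": 3.9,
    "chemistry": 2.1,
    "physics": 2.0,
}

for field, share in field_shares.items():
    print(f"{field}: {share / OVERALL_SHARE:.2f}")
# Applied biology and ecology scores ~1.30 (specialized), while
# physics comes in at ~0.61 (under-represented relative to Canada's
# overall publication share).
```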
With a small population, Canada almost certainly benefits from specializing in areas where it has established expertise. This allows universities and institutes to build a critical mass of researchers in particular areas, which in turn attracts people who want to do their research in Canada alongside those world leaders. My own field of stem cell biology is a great example, where Canada has nurtured an enviable cadre of researchers from all across the globe by building on key discoveries ((Canadian researchers Jim Till and Ernest McCulloch were the first to formally prove the existence of a blood stem cell pool in mammals)) and carving out this area of biology as an important one for investment. Through programs such as the Networks of Centres of Excellence and investments in physical infrastructure like the MaRS Discovery District, Canada appears to have done well at identifying and supporting particular areas of research. The last good summary of which areas Canada benefits from now, and might benefit from in future, was the Council of Canadian Academies’ first state of S&T report – an updated version of which should be coming out in 2012.
International co-publication
Working with international collaborators certainly benefits a country, and being recognized as leaders in a field by having others work with Canadian research groups is another good indicator of success. However, I really fail to see the value in the “how many publications have a non-Canadian scientist on them” metric, as this could just as easily suggest that one’s national infrastructure is insufficient to get the job done and you are forced to look elsewhere to complete your research. The STIC report implicitly presumes that it would be better for a Canadian cell biologist to get a genome sequenced at the Sanger Centre in the UK rather than at a similar facility in Toronto or Vancouver. I worry even further when I see the countries that top the list of all-star international collaborators: Switzerland, South Africa, Mexico and Israel – it seems that scientists in the USA, Germany and the UK just go next door to get things done.
Overall, the STIC is quite correct to identify the need to assess “knowledge production” in both a qualitative and a quantitative fashion, but therein lies the challenge that plagues the system: how do we best evaluate scientists (and their science)? The shopping list of possibilities provided in this report merely scratches the surface, and much more in-depth, long-term evaluation will be required to assess the impact of Canadian research. We’ll continue to keep our readers abreast of developments with articles like the Metrics for Evaluating Scientists ones from this year and last year. If there are other ideas out there, please do drop us a line.