
The culture of numbers: from eugenics to innovation

The short, fascinating history of how we measure science reflects the preoccupations of the time.

BY BENOÎT GODIN | NOV 08 2010

In 1994, the OECD directorate for Science, Technology and Industry stated that “if the OECD were to close its doors tomorrow, the drying up of its statistics would probably make a quicker and bigger impact on the outside world than would the abandonment of any of its other activities.” This is certainly a self-congratulatory statement. However, it also reflects the fact that measuring science has become an “industry.” Dozens of surveys and thousands of statistics are produced yearly by academics and governments, and then used for varying purposes.

When, how and why did science come to be measured in the first place? How did a “cultural” activity – science – long reputed not to be amenable to statistics, come to be measured?

Statistics on science began to be collected in the mid-19th century in a context of eugenics. In the view of many at that time, the stock of the population and the quality of the race were deteriorating, and those groups that contributed more to civilization, namely eminent men, were not reproducing enough. The “unfits” were far more fertile – and some suggested policies of sterilization. This gave rise to the idea of measuring the number of available scientists.

In 1869, British statistician Francis Galton (1822-1911) conducted the first measurements of science from this perspective and concluded that the numbers were in decline. Questionnaires sent to fellows of the Royal Society revealed that men of science were not “productive” enough, having fewer children than their parents had. To Galton, the numbers revealed a clear “tendency to an extinction of the families of men who work hard with the brain,” which he saw as “a danger to the continuance of the race.”

In 1895, the American psychologist James McKeen Cattell (1860-1944), a student of Galton and an early sympathizer of eugenic doctrines, acquired the weekly journal Science, established in 1880 by Alexander Graham Bell and Gardiner G. Hubbard. Besides editing Science and other journals for 40 years, Cattell turned to the “scientific” study of science. To Cattell, applying statistics to the study of men of science was highly desirable: “the accounts of great men in biographies and histories belong to literature rather than to science. … It is now time that great men should be studied … by the methods of exact and statistical science.”

Between 1902 and 1906, Cattell compiled a directory, called American Men of Science, under a contract granted by the newly created Carnegie Institution of Washington “to discover the exceptional man in every department of study whenever and wherever found.” The first edition contained about 4,000 biographical sketches of men of science. By 1944, the last year Cattell edited the directory, it contained biographical information on more than 34,000 men (and a few women) of science. From the directory, Cattell constructed statistics, which he published regularly until his death.

Two concepts were fundamental to his work and still guide current measurements. The first was productivity, defined as the number of men of science a nation produces. Cattell compared American states and institutions in terms of both absolute and relative (per million population) numbers of men of science. Massachusetts and Boston were identified as the intellectual centre of the country, while the U.S. South “remains in its lamentable condition of scientific stagnation.”

To Cattell, “methods should be devised by which scientific work will be rewarded in some direct proportion to its value to society.” To this end, he adopted a second concept, that of performance. Cattell asked leading representatives of each discipline to arrange the men of science listed in the directory in order of merit – that is, to rank them. A star was assigned to each of the thousand best.

Cattell then ranked the various institutions, constructing the first league table, or scoreboard, of the universities. “I give this table with some hesitation, but it appears in the end it will be for the advantage of scientific research if it is known which institutions obtain and retain the best men. … A table such as this might have some practical influence if the data were made public.” The table showed Harvard, Columbia and Chicago as the leading universities in terms of their share of the top thousand scientific men. All in all, Cattell calculated that about half of the best scientific men were connected with just 18 institutions.

At about the same time as Cattell’s work, psychologists started collecting statistics on such measures as the number of psychologists, their geographical distribution, doctorates conferred, and their journals and publications, in a bid to contribute to the advancement of the field. Psychologists were, in fact, the first to use the counting of scientific papers systematically in order to demonstrate that the discipline was really a science, namely a discipline producing scientific knowledge.

These early measurements were only the precursors of a long series of statistics produced by governments and their agencies. From the 1940s onward, it was public organizations that produced most of the statistics, and these soon had a “monopoly” on the measurement of science, partly because they had the financial resources to conduct systematic surveys.

We owe a large part of the development of official measurement of science in Western countries to the United States, starting in the 1920s. The very first official measurement was by the U.S. National Research Council, a body of the National Academy of Sciences, which compiled many directories, above all on industrial laboratories involved in research and development. However, from the 1940s onward, it was primarily governments and their statistical bureaus that collected statistics, and these focused on a new type of statistic: money spent on research.

Two factors explain this new focus. The first was the need to control government spending on R&D, which, according to the U.S. Bureau of the Budget, was growing too fast. The second, more positive, factor was that statistics on money spent were developed as policy targets for scientific development, and were thus used to convince institutions to devote more money to R&D.

These efforts coalesced into the OECD’s Frascati manual. In 1963, OECD member countries adopted standards for the measurement of R&D expenditures. GERD (gross domestic expenditure on R&D) is the main statistic originating from the Frascati standards. GERD measures the total money spent on R&D by four economic sectors: industry, university, government and non-profit. However, GERD is not in fact a statistic compiled on a national basis, but rather “a total constructed from the results of several surveys, each with its own questionnaire and [significantly] different specifications.” For this reason, as Statistics Canada admitted recently, “The GERD, like any other social or economic statistic, can only be approximately true. … It is an estimate and as such can show trends.”
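The arithmetic behind the indicator is simple, even if the underlying surveys are not. The sketch below uses made-up figures – not any country’s actual survey returns – to show how the four sector totals are summed into GERD and then divided by GDP to give the ratio discussed below.

# A minimal sketch of how GERD and GERD/GDP are assembled,
# using hypothetical figures rather than real survey results.

# R&D expenditure reported by each performing sector (millions of dollars, hypothetical)
sector_rd = {
    "industry": 18_500,
    "higher_education": 6_200,
    "government": 3_100,
    "non_profit": 700,
}

gdp = 1_650_000  # gross domestic product, millions of dollars (hypothetical)

gerd = sum(sector_rd.values())     # GERD: the sum of the four sector surveys
gerd_intensity = gerd / gdp * 100  # the GERD/GDP ratio, in percent

print(f"GERD: ${gerd:,} million")          # GERD: $28,500 million
print(f"GERD/GDP: {gerd_intensity:.2f}%")  # GERD/GDP: 1.73%

In practice, each sector total comes from a separate survey with its own questionnaire and definitions, which is precisely why Statistics Canada treats the result as an estimate that can, at best, show trends.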

Nonetheless, GERD (and its derivative, GERD/GDP) is now the most cherished indicator. In every statistical publication the indicator is calculated and discussed, and countries are ranked according to it because, as the OECD once said, “it is memorable” and is “the most popular one at the science policy and political levels, where simplification can be a virtue.”

The consequences of this orientation were twofold. First, statistics came to be packaged in an accounting framework: statistics on science concentrated on costs and were collected within an input, or efficiency, approach that assumed increased investment would produce increased results. Second, measurement came to focus on economic growth and productivity.

In Galton’s hands, productivity meant biological reproduction: the number of children a scientist produced. To Cattell, it corresponded to the number of scientists a nation produced. Then, in the mid-20th century, scientific productivity came to mean efficiency with regard to the quantity of scientific or technological output and, later, outcomes such as economic growth and productivity. The statistics are still eugenic in style, but the nature of what is “fit” and “unfit” has changed. It is no longer a matter of increasing reproduction among the best minds, which takes generations, but rather increasing immediate output: counting the volume of activity and its fluctuations, and those responsible for this production. Today, it is the organizations, and the economic sectors to which they belong, that are measured.

The people in our society who are supposed to benefit from science and public funding are not measured – and never have been. In spite of decades, even centuries, of discourse on the social benefits of science, you would look in vain for systematic indicators on outcomes other than productivity. Most current statistics and indicators are economic in type: expenditures on research, technological balance of payments, patents, trade in high-technology products, marketed innovation, etc. The social aspect is relegated to the periphery. The culture of numbers is in fact the cult of economic efficiency.

In their current form, statistics play no meaningful role in identifying and encouraging great, leading and emerging science. For that, one has to look for science that is innovative or transformative, rather than science that is safe, usual or expected – science that confirms the status quo and calms public and government nerves.

But statisticians think otherwise, and have turned, en masse and all at once, toward a very restrictive idea of innovation. Until recently, innovation was excluded from the measurement of science, although it has been the central objective of science policy from the very beginning. Innovation has been a contested category since antiquity (as science was in its early days). Until the 19th century, innovation was understood as “introducing change” into religion and politics. The term had nothing to do with originality or creativity. Innovation was subversive and forbidden.

Innovation as a category of measurement was “de-contested” over the 20th century. This was due fundamentally to the contribution of inventions to the progress of societies: innovation is invention turned useful. And, to governments – supported by economists as experts and consultants – usefulness means competitiveness or competition through the commercialization of technological invention. From the 1960s onward, innovation came to be seen as restricted to technological and marketed innovation, and efforts were devoted to its measurement: lags between discoveries and commercialization, gaps between countries in their capacity to innovate, contribution of technological innovation to economic growth, and the like.

Innovation remains difficult to measure. There are two traditions here. One is American and concentrates on innovation as the introduction of technology in industrial processes, i.e., technological change. This is the tradition of the mainstream economists, and it relies on econometrics. Few people believe in these statistics except economists. The other tradition is European. Here, innovation is seen, not as the use of technological inventions in industry, but as the generation of new technological inventions or products for the market. The statistics produced in this tradition are descriptive statistics. They are called indicators, and they cover, to varying degrees, the steps from research to commercialization. However, most of these indicators are proxies.

Each of these two traditions ignores the other, with few exceptions. However, both share similar assumptions: innovation is the affair of businesses, and the benefits derived from technological innovation are growth in productivity and in market share. The measurement of innovation continues on the same path, or bias – the cult of efficiency – as the measurement of science. We are still waiting for innovative statisticians.

Dr. Godin is a professor at the Institut national de la recherche scientifique (INRS) in Montreal. This paper was adapted from an invited talk delivered in May 2010 to council members of the Government-University-Industry Research Roundtable, a joint body of the U.S. National Academy of Sciences, National Academy of Engineering and Institute of Medicine.
