Last October, in somewhat subdued fashion, the federal departments of Innovation, Science and Industry Canada and Health Canada struck the Advisory Panel on the Federal Research Support System. It is the latest (and almost certainly not the last) committee formed to ruminate over the Canadian science support system. The panel responds to a shared commitment made in the two ministers’ 2021 mandate letters to “modernize the federal research funding ecosystem to maximize the impact of investments in both research excellence and downstream innovation, with a particular focus on the relationships among the federal research granting agencies and the Canada Foundation for Innovation.”
Interestingly, the panel’s mandate omits a further commitment stated in those same letters: to move “forward with a uniquely Canadian approach modeled on the Defense Advanced Research Projects Agency (DARPA), (…)”. Neither the advisory panel’s mandate nor recent reporting on the panel makes any reference to that uniquely Canadian approach modelled on DARPA (the agency responsible for researching and developing emerging technologies for the U.S. military), which probably means the two federal departments have indeed moved past that idea.
If creating a DARPA wannabe is no longer the panel’s ultimate goal (the uniquely Canadian propensity to emulate others as a means to “innovate” was illustrated in serious calls for the new agency to be called “CARPA”), what then was the end goal of this panel’s charge to propose measures for modernizing the funding system?
Its mandate requested that the panel provide advice to ensure that federal research support is “coordinated and cohesive”, “responsive to the multi- and inter-disciplinary, collaborative, and international approaches”, and “agile to seize new opportunities and address emerging (…) needs”. The panel was also expected to advise on “structures and governance” and provide feedback “on the government’s proposed framework for federal decision-making on investments in major research facilities (MRFs)”.
Any science policy observer could weigh the merits of all of this and discuss the potential contributions to be made. Many such contributions are already readily available for review in a series of reports produced in recent years that have similarly assessed the governance and funding mechanisms supporting the academic research enterprise. There is no shortage of ideas on the issues involved in supporting international collaboration, interdisciplinary research, and emerging research problems.
Indeed, the New Frontiers in Research Fund was created in 2018, under the Canada Research Coordinating Committee (CRCC), precisely out of the recognition that interdisciplinary and international collaborations lacked a specific source of support among the federal research councils. The CRCC itself was established to “address several of the recommendations made in the Fundamental Science Review, including improving support for international, multidisciplinary, risky and rapid-response research (…) and making the research system more nimble so that researchers can seize opportunities with minimal administrative burden.”
While not a panacea, mechanisms to ensure policy learning and expert consultation are, in principle, desirable to inform government decisions. At the same time, these mechanisms can easily turn into formalistic exercises, driven by incentives that work against actual policy learning. Consulting experts merely to satisfy political expectations, regardless of the substance of the input sought, or to justify pre-ordained outcomes, is all-too-familiar behaviour among government actors. The constant reiteration of these exercises is also generally tolerated in the academic community, at least until they fail to deliver positive outcomes for the science support system.
There is a lost opportunity as this ritualistic cycle of reviews and reports on Canada’s scientific support infrastructure continues. The federal government could instead take steps to make policy learning actually count, by building policy-learning capacity into the key institutions that support the research enterprise. Stakeholders would then be able to learn from experience actively and continuously. A system that routinely scrutinizes itself, drawing on internal and external input as a matter of course, has the means to keep evolving to better respond to old and new needs. How could we do that? No expert panel has yet taken that up as a question worth asking.