For years, the inadequacy of the Canadian Common CV (or CCV for short) has been the subject of many Twitter vents, some quite amusing, among the scientists who use it. There have been consultations and revisions of the CCV since it was launched in 2002, but it has never fully shaken its reputation for being buggy, confusing and a time suck for researchers rather than, as intended, a time saver.
The CCV is run by the tri-council of the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council and the Social Sciences and Humanities Research Council, and is used by 28 other bodies, including provincial governments and not-for-profit research organizations.
Now, more than 2,000 CCV users and counting have signed an open letter calling for it to be killed. “It is time to abandon the Canadian Common CV and move to a simpler solution,” reads the letter, which was written by eight academics from seven universities. “The CCV is beyond saving, revising or reinventing.”
Signatories are primarily professors, but also include “postdocs, patient partners, industry partners, students, admin assistants, research staff and even a few Canada 150 Research Chairs,” according to Holly Witteman, one of the letter’s authors and an associate professor in the department of family and emergency medicine at Université Laval.
Their gripes are many, including that the CCV discourages international partners from collaborating on Canadian-funded research projects (starting a CCV from scratch is especially cumbersome), that it wastes institutional funds, and that it disadvantages younger researchers, as well as female scientists and people of colour, who are less likely to have staff members to do the data entry for them.
Dr. Witteman says she helped write the letter after hearing that the CCV board was developing “a third iteration” of the CCV. The process is bound to fail, says Dr. Witteman, because “it’s not possible for a system to do everything that the CCV is supposed to do.”
The CCV was meant to let researchers enter information once and reuse it in multiple versions of their CV, and to collect usable, structured data. Neither goal has panned out, in part because many different kinds of organizations use the CCV and build their own templates within it, Dr. Witteman explains. “The data is very low quality and extremely incomplete,” she says. For example, funding competitions might ask for all publications, or only the last two or five years of publications, making it impossible to compare publication records.
Adrian Mota, acting associate vice-president of research, knowledge translation and ethics at CIHR, says, however, that the tri-council isn’t revamping the CCV. “It’s really some minor stuff to try and improve the user experience,” he says. He also points out that abandoning the CCV isn’t so simple. “It’s quite complicated from a technical perspective because we’ve integrated the application within our other platforms,” he says, explaining that the CCV collects and pushes data across a larger grants management system.
And while Mr. Mota agrees that the data have “limitations,” he says “the value of the data is not zero.” The tri-council uses CCV data to calculate how to distribute indirect costs of research, such as the costs associated with administration and regulatory reporting requirements. The data have also been used to study how leaves of absence impact scientists’ career trajectories, he says.
Pierre-Gerlier Forest, director of the school of public policy at the University of Calgary, says the CCV was designed for those with “traditional, linear careers.” He says he signed on to the letter because it frequently took “two weeks, with the help of an assistant” to complete the CCV for a funding application. He found that the standardized template didn’t allow him to provide adequate detail of his work within government or his research abroad, while demanding information “like how much was my SSHRC grant two decades ago” that he couldn’t easily access.
Colleen Derkatch, an associate professor in the department of English at Ryerson University, agrees that the CCV was designed for a subset of its current users. “It’s way too much infrastructure for people in the humanities,” she says, explaining that when she uses the CCV, she has to click through pages of categories that don’t apply to her research on language and discourse, such as those related to litigations and patents.
Her colleagues, she says, have told her the CCV was a factor in their decisions not to apply for certain grants, and one of her fellow humanities researchers said he recently applied for a SSHRC grant without bothering to update his CCV, which could put him at a disadvantage.
The signatories are calling for a “page-limited CV that people can edit in whatever word processing software they like,” Dr. Witteman says, since additional tools like bibliometric analyses and separate disclosure statements from equity-seeking groups already exist for data collection.
But not all academics are in agreement. Pradeep Reddy Raamana, a neuroscientist and postdoctoral research fellow at the University of Toronto, says collecting structured data ensures accountability. “You invest, let’s say, $500 million last year. Where did that money go? Did funding decisions make sense? To quantitatively understand how the reviewers’ recommendations were made, we need data, and high-quality data can only be obtained from a platform like CCV.”
Rather than going to a free-form biosketch – a short summary of one’s work, such as that used by the U.S. National Institutes of Health – Dr. Raamana wants the tri-council to “re-implement the current interface and have a better architecture for the database, and additional support for researchers.” He’d also like to see far more transparency. “There is no publicly accessible information about the CCV consultations that have taken place,” he says. “There is absolutely no information on the CCV website or anywhere else, what was invested into it so far, and what are they planning to do going forward.”
For the tri-council, the future of the CCV may be part of a larger shift. Mr. Mota explains that other data systems used by the tri-council, like ResearchNet, are aging as well and the three agencies are currently exploring the possibility of a new solution for grants management as a whole. The tri-council is beginning to “journey map” and is consulting this fall with the Canadian Association of Research Administrators and the U15 group of research-intensive universities, he says. In this process, “a lot of the suggestions that people are putting forward, like the NIH biosketch and ORCID (a digital identifier), are all things we’re considering.”
Thanks very much to University Affairs and to Ms. Glauser for covering this issue.
In response to Dr. Raamana, I will add that I agree entirely with his call for more transparency. I also agree strongly with the need for accountability, but I think he may have misunderstood how CCV data are used or can be used. Since he is a postdoctoral fellow, he may not yet have had the experience of reporting the results of a grant, but the accountability for grant funds is handled through different reporting systems, not through the CCV. Doing away with the CCV would not reduce accountability in any way.
Additionally, as reported by Ms. Glauser, CCV data are not usable for bibliometric analyses, because the data are inherently incomplete due to differences in requirements across the various versions of the CCV. Better, existing systems are available; using them for such analyses would be a much better use of limited resources.
Our goal with this letter is to encourage a more efficient research enterprise in Canada. We believe this would be better for everyone, including, most importantly, Canadian taxpayers.
I stopped applying for SSHRC grants when the CCV became obligatory. I still got all kinds of grant money and fellowships from German and French sources, so I did not need to spend the weeks required to somehow enter 30 years’ worth of work into the CCV framework. The CCV is a travesty, a colossal waste of faculty members’ time — complete CVs in a pre-set standard format in Word should be just fine. Administrators’ mania for counting makes us produce endless amounts of (incommensurable) data so that it can be counted and compared. That is not accountability, it’s just countability. Why does no other country ask for such a thing? Because in other countries, funders do not assume that faculty members’ time is worth nothing and can be consumed at will for minor bureaucratic reasons.
Headline update: now thousands of people have signed.
It appears to me that the lack of any leadership on the part of the Tri-Council is one of the main reasons this terrible interface has caused so much frustration for so many Canadian researchers. I take a comprehensive look at this multi-faceted issue, discuss the need for and long-term value of the CCV, and suggest we must not throw the baby out with the bathwater:
I signed the letter even though I am not certain that the CCV should be scrapped, because I saw no alternative given the apparent tone-deafness of the Tri-Council agencies. I have attempted multiple times, in multiple ways, to communicate with the Tri-Council about problems with the CCV, to no avail. There are fundamental and deep-rooted problems in the CCV, but there are also many “no brainer” problems that even the most cursory effort could fix, and fixing them would improve the experience right away for all Canadian researchers.
Example 1: The CCV wants to know whether a grant was competitive or not. When entering the information about an NSERC Discovery Grant (DG), one has to select NSERC and then the DG program and then make a number of other navigation and selection choices, including selecting “competitive” from a sub-menu. Why? Choosing NSERC/DG should be more than sufficient for the CCV to auto-fill the competitive field because there is no DG that is not awarded competitively. Better yet, why does anyone have to enter this information at all when NSERC has it already and the CCV includes everyone’s unique NSERC ID?
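The auto-fill idea above amounts to a simple lookup. A minimal sketch, assuming a hypothetical table of funder/program pairs known to be competitive (this is illustrative only, not the CCV's actual schema):

```python
from typing import Optional

# Hypothetical lookup table: programs that are always awarded
# competitively. Illustrative entries, not the CCV's real data.
ALWAYS_COMPETITIVE = {
    ("NSERC", "Discovery Grant"),
    ("CIHR", "Project Grant"),
}

def competitive_flag(funder: str, program: str) -> Optional[bool]:
    """Return True when the program is known to be competitive,
    or None when the system genuinely has to ask the user."""
    if (funder, program) in ALWAYS_COMPETITIVE:
        return True
    return None  # fall back to asking the user
```

With a table like this maintained centrally, the “competitive” sub-menu would only ever appear for funders the system does not already know about.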
Example 2: The various use cases for the CCV, such as applying for an NSERC DG, have different requirements about what can / should be included. When applying for new grants, one has to remove publications and grant funding that is outside the window for the particular presentation version. This is strictly enforced, with a warning that this has to be done for each item that is outside the window. Failure to do so blocks submission of the CCV until the changes are made. Why is this? Why not have a message pop up that says “These N publications are outside the allowed window for inclusion. They will be automatically removed from the list of publications that are included in this presentation version of the CCV.” The pop-up would have the usual “OK” or “Cancel” options to proceed or not with the automatic de-listing.
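The proposed behaviour is a one-line partition of entries by date window. A sketch under an assumed, simplified data model ((title, date) pairs rather than the CCV's real records):

```python
from datetime import date

def partition_by_window(entries, start, end):
    """Split (title, date) pairs into (kept, removed) by date window,
    so the system could offer automatic de-listing of out-of-window
    items instead of blocking submission until each is removed by hand."""
    kept = [e for e in entries if start <= e[1] <= end]
    removed = [e for e in entries if not (start <= e[1] <= end)]
    return kept, removed

pubs = [("Paper A", date(2016, 3, 1)), ("Paper B", date(2021, 7, 15))]
kept, removed = partition_by_window(pubs, date(2017, 1, 1), date(2022, 1, 1))
# "Paper B" stays in the presentation version; "Paper A" is the item
# the proposed pop-up would offer to de-list automatically
```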
Example 3: Obsessive-compulsive attributes such as whether publications are peer-reviewed or grants are competitive (which are relevant, but that does not mean they have to be entered in such awkward ways) could be much better handled in a spreadsheet-like format within the CCV system. This would allow one to view the list of publications in tabular form and quickly check off those that are peer-reviewed; “check all” and “uncheck all” options for selected publications, along with the ability to sort by fields (especially the publication venue), would make this requirement much easier to deal with.
There are many, many more of these low-level problems that could be easily fixed with straightforward user interface changes or augmentations that would not require changes to the underlying database.
At a higher level, but still well within the capability of current technology, would be shortcuts for entering publications: providing simply a DOI, from which the CCV could retrieve full citation information (title, author list, publication venue, date, pages, etc.), or a Tri-Council grant number for Canadian funding. My guess is that 90% of peer-reviewed publications and a non-trivial percentage of grants could be handled this way. That would leave individuals to deal on their own (as they do now) with the other 10% of publications, with publications in venues that are not indexed with DOIs or similar mechanisms, and with grant funding from non-Tri-Council sources. There might be a need to develop interfaces between the CCV and the various DOI registries, and perhaps other customization to handle grant numbers for non-Tri-Council funding, but this seems a small price to pay for relieving all of the researchers in Canada of the tedious and error-prone tasks now required by the CCV. These changes, again, would not affect the underlying database.
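DOI-based autofill of this kind is already well supported by public resolvers: Crossref's REST API, for instance, returns structured metadata for a DOI via `GET https://api.crossref.org/works/<DOI>`. A sketch of the formatting step, using a pre-fetched, Crossref-style response so the example is self-contained (the sample record is invented for illustration):

```python
# Sketch of DOI-based citation autofill. In practice the metadata dict
# would come from a resolver such as Crossref's public REST API
# (GET https://api.crossref.org/works/<DOI>); here we format a
# pre-fetched, Crossref-style response to keep the example offline.
def format_citation(work: dict) -> str:
    """Build a plain citation string from Crossref-style metadata."""
    authors = ", ".join(
        f"{a['family']}, {a['given'][0]}." for a in work["author"]
    )
    year = work["issued"]["date-parts"][0][0]
    return f"{authors} ({year}). {work['title'][0]}. {work['container-title'][0]}."

# Invented sample record in Crossref's field layout
sample = {
    "author": [{"given": "Ada", "family": "Lovelace"}],
    "issued": {"date-parts": [[1843]]},
    "title": ["Notes on the Analytical Engine"],
    "container-title": ["Scientific Memoirs"],
}
```

A researcher would type one DOI; everything else in the citation would be filled in and remain editable for the minority of cases where the registry metadata is wrong.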
The documents produced by the CCV are just plain ugly. Compare them to (for example) the NSERC Form 100, which manages to convey a lot of pertinent information in a small number of pages (these are the sections generated by the system from data entered into it, not the five-page documents that researchers write themselves and upload). By comparison, the documents produced by the CCV look like the aftermath of a tornado or tsunami, with entries sprawled all over the page, apparently with no involvement of a graphic designer and with little regard for those tasked with reading these monstrosities. This too could be fixed without any changes to the underlying database.
There are other issues that may not be as easy to address. Whether the solution is to reboot entirely and develop something new or try to salvage what we have now and make it better, attention to issues such as those described above needs to be a priority. I do not see how this can happen without widespread consultation with the research community. The Tri-Council needs to take this issue seriously and respond in a manner that respects the needs of researchers while also meeting the legitimate goals of the CCV (or its replacement).