This year at the Congress of the Humanities and Social Sciences, I was invited to a University of Calgary faculty of graduate studies event to respond to a talk by Paul Yachnin about the TRaCE project. The project acronym stands for “track, report, connect and exchange”, and its major goals are to find and make contact with PhD graduates in humanities disciplines to discover what careers they are pursuing, and to provide space for those grads to develop a supportive community of humanities PhDs and doctoral students. If you’re looking for a summary of the project and the rationale for it, this University Affairs article by Dr. Yachnin provides an outline.
In my talk I explained the things that I expected to see when looking to understand the project, its goals, its methodology and its outcomes, but which I found to be missing or unclear. In this piece, I’m elaborating on what I said in Calgary, and I’ve added a few ideas that didn’t occur to me until after I’d spoken. I should emphasize that I fully agree with most of what the project is trying to do. We certainly need to know more about Canadian doctoral outcomes; I’ve been writing about that for a while now too. I think connecting doctoral students and alumni for mentoring and support is a great idea. But I do have quite a few criticisms of the way TRaCE has been undertaken. They fall into the following general categories: communicative clarity; methodological consistency; theoretical framing; research ethics; and how the results are (or will be) put to use.
Before I continue I want to thank the organizers and, in particular, the University of Calgary’s associate dean Lisa Hughes and vice provost and dean of graduate studies Lisa Young, for inviting me to speak about TRaCE — it’s probably unusual for a grad student to be asked to comment on something of this scope, and for me it was a great opportunity to participate in the discussion about Canadian doctoral education. Many thanks also to those who showed up at the event and engaged in lively discussion during and after it.
Firstly, I think the way the project’s goals, methods and results are being communicated is at present not coherent, and this was one reason it took me a while to form a reasonable understanding of the nature and scope of the research. What I wanted to know, as someone who has done research myself, was what framework the study was employing, what other work it was drawing on or responding to, how this translated into the use of particular methods, and how these related to the results.
One major example of the communication trouble: many of the issues I raise in this post could have been addressed in a “methodology” section on the website. But there is no such section, nor any other coherent overview of the approach taken (beyond basic explanations). Partly as a result, the research questions weren’t really clear. If there are multiple research questions, do they work well together? What about the project’s purpose? If the goal is to find out what happens in PhD programs, and what happens after graduation, then will this research ultimately be used to inform changes to doctoral programs? If it’s also about outreach and community-building, how is this combined appropriately with the other goals — is the network element a part of the research or an outcome of the research?
For example, it’s not clear how the sample for this project was selected. Was every university with a humanities PhD program included? What counts as a humanities program? The project has on board at least two programs per university, more than 60 programs in total, looking at doctoral graduates from a 10-year period (2004–2014). So not all humanities programs in the participating universities are involved. If the project allows programs to choose whether they participate, that means inclusion across disciplines is variable; we don’t know the participation rate per discipline. How does all this affect the results? When I asked about the “hit rate” for using convocation lists and then “tracking” doctoral grads online using only publicly-available information, I was told the response rate was around the same as HEQCO’s (~85 percent). What assumptions are being made about the 15-plus percent of grads who can’t be located? Do the researchers then contact every person they locate for an interview — and if not, why not? If people refuse, how is that absence accounted for?
Data is meaningless without a coherent frame
Another example is the use of survey tools on the project website. As far as I can tell, the data collected in this way could not be seen as comparable to those generated from the other sample (i.e. the one created by gathering information from online searches). In fact we don’t even see the numbers, or such was the case when I was checking (throughout May 2016). The “data” page didn’t help because there was nothing on it, other than a general explanation of why there are no data. The lack of a systematic description of a methodology anywhere on the site means there’s no way to know the presumed role of those surveys and the information they produce, or to get at the idea(s) behind their construction.
One survey, now gone from the site, also showed validity problems: it allowed only yes/no responses for questions like “did your program adequately prepare you for non-academic work?”, where Likert scales would have been a more appropriate format. This is a sign that there’s trouble with whatever assumptions are guiding the selection of questions and the available responses. Sure, surveys provide “data” but these data are meaningless without a coherent frame that informs collection and interpretation.
One last question about methodology: if the approach is experimental and this is considered a pilot project, I’m wondering how the current results being posted will be considered comparable later on if there are methodological changes. Since those changes are clearly necessary, how will the project evolve in ways that don’t obviate what’s being collected and shared right now? This is a Catch-22, because while those changes need to happen, the project doesn’t really seem like a “pilot” because it’s nationwide and involves 25 universities already. It looks more like a full-scale study being done with methods that may or may not end up providing reliable results. At the same time, there isn’t very much acknowledgement of the limitations of those methods.
One cause underlying the methodological confusion is that the project lacks theoretical framing. Because the approach described doesn’t appear to be grounded in existing theory or research about graduate education and its outcomes, it’s very difficult to tell why research decisions have been made, and therefore why methodological problems are present throughout. If we can’t see the logic behind those decisions, how can we evaluate the success of one approach or another? How can we make decisions about how to improve?
Lack of connection to prior work
There’s a field of research on postsecondary education, and within that field we have a lot of research about the PhD. Was it used in the design of this project? There are scholars in Canada who are doing research in these areas. Were any of them consulted? Without a literature review, I couldn’t tell. Because of this apparent lack of connection to prior work, ideas that may be used regularly in other areas of research aren’t being articulated clearly. For example, the project uses terms and ideas that have established meanings in other fields without specifying what’s meant by them; a couple of examples I noticed were “community of practice” and “narrative research.”
This is particularly important given that the methods being used in the project are borrowed from the social sciences. Using these methods and approaches effectively would require training from people who have experience working with them. Unless such researchers are involved, or others who have undertaken research using these frameworks and methods, there’s significant potential for the work to be derailed by avoidable problems — and for it to become an exercise in reinventing the wheel.
Given that these methods are used in the social sciences and elsewhere, I don’t think it makes sense to describe this as a uniquely “humanities methodology”; work along these lines has been done, and is still being done, elsewhere in academia. In fact there are already areas in the humanities that are using quantitative and qualitative methods (such as communication studies, and digital humanities research). The TRaCE project might benefit from less of a focus on difference, and more emphasis on building connections to existing research that would inform its approach and the interpretation of its results.
I think at a more fundamental level — in fact as a root cause of many of the issues I’ve described — it’s difficult to tell if this is a scholarly research project or a public outreach exercise. You might be asking “why can’t it be both?” and indeed it could; the “community building” element addresses a need. But not in its current form, because the project seems to be confusing or conflating these two different approaches that are geared towards different goals, and in the process, it’s achieving neither of them effectively. One important outcome of this conflation is that the goals are also in conflict in ways that compromise the results.
Problems with the research ethics
I can illustrate what I’m talking about here by describing some of the complications that come up with the interview component of TRaCE. A core part of the project is that interviews with PhDs are used as a means of generating “narratives,” some of which are shared on the website. When interviewing PhDs, the negative stories as well as positive ones will come out; there is some acknowledgement of this on the site’s FAQ page. But will participants who had difficulty during (and/or after) their PhD, or who had outright unpleasant or toxic experiences, be willing to share those first with an interviewer — and later on the website? It seems unlikely those details would end up as public narratives, so what effect does that have on the message about “results”?
My suspicions about this conflict were confirmed when I heard speakers on the last panel of the afternoon. This was where Heather Zwicker contributed really insightful, critical comments about TRaCE. Dr. Zwicker demonstrated exactly what the project would look like if it were being framed with humanities methods and approaches. Her concerns with ethics, in particular regarding the negative stories of graduates and the non-participation (in interviewing) of so many of those who didn’t have a positive experience, raised vitally important questions. She explained that given the content of some of the interview responses, the project is caught between “airing dirty laundry” in public or “whitewashing” the situation. This kind of self-reflection is what’s needed, not another web survey or Google search.
Dr. Zwicker’s comments highlighted the deeper conflict here: this work is not just about “telling stories”; it’s about telling stories to current and prospective PhD students, and potentially to policy-makers, governments, university administrators, career services professionals and non-academic employers. These stories help paint a picture that informs decision-making from the individual to the institutional to (potentially) the federal level. Therefore we need them to reflect as much of that picture as possible, not just the parts that highlight positive accounts over negative ones.
Results are being compromised
One way to produce scholarly work or policy research from this material would be to anonymize the interviews and then use content analysis and discourse analysis, pulling out themes in ways that don’t reveal the identities of participants. Maybe this is already part of the project, but I didn’t see it described among the methods employed. This is one reason why the assumed need to attach individual stories to actual people seems like a promotional need, not one that actually supports the validity of the results.
This problem with honesty and its consequences is one that undermines the validity of the project. It creates ethical quandaries for those who are doing the interviewing and deciding what to do with the material that comes from it. It compromises the results by clearly placing some interviewees in a position where their responses could bring risk, for example if they mention people they might still be working with or who still have influence over their careers — but where if they don’t participate, important perspectives are excluded. I think much of this could have been avoided if the project had been designed taking these kinds of issues into account, building on the existing research on doctoral education.
As a related point regarding the “connect and engage” part of TRaCE, I’m wondering whether PhDs who have had negative experiences will be keen to come back to the university and contribute to doctoral mentoring and professionalization.
People’s time is worthy of compensation
I also asked if these grads would be compensated for their time if they were coming back to the university to provide mentoring, to teach courses, to offer their skills and special knowledge and time. The answer was that it’s hoped the project’s participants will volunteer their time. While I understand the concept of “service,” I’m not sure how many grads will be on board with this. People’s time is worthy of compensation, and when they have a lot going on, providing (further) unpaid labour to the university might not be the top priority.
In closing, I want to repeat my point about the importance of the intended audiences for these stories. There is a bigger story being pieced together here, one that will be told to governments, universities, non-academic employers, and prospective and current doctoral students. I’m concerned that this research, which has many flaws, will be pushed as a foundation for decision-making, and that a project in which university participants have a stake will provide “answers” that reflect an incoherent approach as well as a tendency to over-represent positive results.
How can we trust the conclusions drawn from the project?
There are some key questions not being addressed at all. An important issue acknowledged on the TRaCE website is that ABDs are not included in the project. It follows that the results aren’t inclusive with regard to enrolled PhDs, because doctoral programs tend to have significant attrition rates. All results represent a proportion of graduates only. If the project were to be expanded, seeking out ABDs might be a helpful direction to take.
If one of TRaCE’s goals is to inform changes to doctoral programs, we also need to see more discussion of the role of supervision in this process. Even in the report produced in 2013 (PDF), there is a lot of focus on structure and content but hardly any comments on supervision. Lastly, is this study taking a critical look at who ends up with a faculty career? What types of people find “success”, where do they come from, and what kind of supervision and mentorship do they receive along the way?
The framing of a study has implications for the assumptions you can make, the theory you draw on, the methods you use, and the people you involve in producing the research. But since so little of this is made clear, and so many questions come up when you look closely at the methods being used, how can we trust the conclusions drawn from this project? I’m as frustrated as anyone about the lack of quality research on PhDs’ educational experiences and career outcomes, but I don’t think TRaCE in its current form can fill that gap.