Speculative Diction

The TRaCE Project isn’t the answer to tracking PhD outcomes

BY MELONIE FULLICK | JUN 15 2016

This year at the Congress of the Humanities and Social Sciences, I was invited to a University of Calgary faculty of graduate studies event to respond to a talk by Paul Yachnin about the TRaCE project. The project acronym stands for “track, report, connect and exchange”, and its major goals are to find and make contact with PhD graduates in humanities disciplines to discover what careers they are pursuing, as well as providing space for those grads to develop a supportive community of humanities PhDs and doctoral students. If you’re looking for a summary of the project and the rationale for it, this University Affairs article by Dr. Yachnin provides an outline.

In my talk I explained the things that I expected to see when looking to understand the project, its goals, its methodology and its outcomes, but which I found to be missing or unclear. In this piece, I’m elaborating on what I said in Calgary, and I’ve added a few ideas that didn’t occur to me until after I’d spoken. I should emphasize that I fully agree with most of what the project is trying to do. We certainly need to know more about Canadian doctoral outcomes; I’ve been writing about that for a while now too. I think connecting doctoral students and alumni for mentoring and support is a great idea. But I do have quite a few criticisms of the way TRaCE has been undertaken. They fall into the following general categories: communicative clarity; methodological consistency; theoretical framing; research ethics; and how the results are (or will be) put to use.

Before I continue I want to thank the organizers and, in particular, the University of Calgary’s associate dean Lisa Hughes and vice provost and dean of graduate studies Lisa Young, for inviting me to speak about TRaCE — it’s probably unusual for a grad student to be asked to comment on something of this scope, and for me it was a great opportunity to participate in the discussion about Canadian doctoral education. Many thanks also to those who showed up at the event and engaged in lively discussion during and after it.

Missing methodology

Firstly, I think the way the project’s goals, methods and results are being communicated is at present not coherent, and this was one reason it took me a while to form a reasonable understanding of the nature and scope of the research. What I wanted to know, as someone who has done research myself, was what framework the study was employing, what other work it was drawing on or responding to, how this translated into the use of particular methods, and how these related to the results.

One major example of the communication trouble: many of the issues I raise in this post could have been addressed in a “methodology” section on the website. But there is no such section, nor any other coherent overview of the approach taken (beyond basic explanations). Partly as a result, the research questions weren’t really clear. If there are multiple research questions, do they work well together? What about the project’s purpose? If the goal is to find out what happens in PhD programs, and what happens after graduation, then will this research ultimately be used to inform changes to doctoral programs? If it’s also about outreach and community-building, how is this combined appropriately with the other goals — is the network element a part of the research or an outcome of the research?

It’s not clear, for example, how the sample for this project was selected. Was every university with a humanities PhD program included? What counts as a humanities program? The project has on board at least two programs per university, 60+ programs in total, looking at doctoral graduates from a 10-year period (2004–2014). So not all humanities programs in the participating universities are involved. If the project allows programs to choose whether they participate, that means inclusion across disciplines is variable; we don’t know the participation rate per discipline. How does all this affect the results? When I asked about the “hit rate” for using convocation lists and then “tracking” doctoral grads online using only publicly available information, I was told the response rate was around the same as HEQCO’s (~85 percent). What assumptions are being made about the 15 percent or more of grads who can’t be located? Do the researchers then contact every person they locate for an interview, and if not, why not? If people refuse, how is that absence accounted for?

Data is meaningless without a coherent frame

Another example is the use of survey tools on the project website. As far as I can tell, the data collected in this way could not be seen as comparable to those generated from the other sample (i.e. the one created by gathering information from online searches). In fact we don’t even see the numbers; at least, that was the case when I checked (throughout May 2016). The “data” page didn’t help because there was nothing on it, other than a general explanation of why there are no data. The lack of a systematic description of a methodology anywhere on the site means there’s no way to know the presumed role of those surveys and the information they produce, or to get at the idea(s) behind their construction.

One survey, now gone from the site, also showed validity problems: it allowed only yes/no responses for questions like “did your program adequately prepare you for non-academic work?”, where Likert scales would have been a more appropriate format. This is a sign that there’s trouble with whatever assumptions are guiding the selection of questions and the available responses. Sure, surveys provide “data” but these data are meaningless without a coherent frame that informs collection and interpretation.

One last question about methodology: if the approach is experimental and this is considered a pilot project, I’m wondering how the current results being posted will be considered comparable later on if there are methodological changes. Since those changes are clearly necessary, how will the project evolve in ways that don’t obviate what’s being collected and shared right now? This is a Catch-22, because while those changes need to happen, the project doesn’t really seem like a “pilot” because it’s nationwide and involves 25 universities already. It looks more like a full-scale study being done with methods that may or may not end up providing reliable results. At the same time, there isn’t very much acknowledgement of the limitations of those methods.

One cause underlying the methodological confusion is that the project lacks theoretical framing. Because the approach described doesn’t appear to be grounded in existing theory or research about graduate education and its outcomes, it’s very difficult to tell why research decisions have been made, and therefore why methodological problems are present throughout. If we can’t see the logic behind those decisions, how can we evaluate the success of one approach or another? How can we make decisions about how to improve?

Lack of connection to prior work

There’s a field of research on postsecondary education, and within that field we have a lot of research about the PhD. Was it used in the design of this project? There are scholars in Canada who are doing research in these areas. Were any of them consulted? Without a literature review, I couldn’t tell. Because of this apparent lack of connection to prior work, ideas that are used regularly in other areas of research aren’t being articulated clearly: terms and concepts that carry specific meanings in other fields are employed without specifying what’s meant by them. A couple of examples I noticed were “community of practice” and “narrative research.”

This is particularly important given that the methods being used in the project are borrowed from the social sciences. Using these methods and approaches effectively would require training from people who have experience working with them. Unless such researchers are involved, or others who have undertaken research using these frameworks and methods, there’s significant potential for the work to be derailed by avoidable problems — and for it to become an exercise in reinventing the wheel.

Given that these methods are used in the social sciences and elsewhere, I don’t think it makes sense to describe this as a uniquely “humanities methodology”; work along these lines has been done, and is still being done, elsewhere in academia. In fact there are already areas in the humanities that are using quantitative and qualitative methods (such as communication studies, and digital humanities research). The TRaCE project might benefit from less of a focus on difference, and more emphasis on building connections to existing research that would inform its approach and the interpretation of its results.

I think at a more fundamental level — in fact as a root cause of many of the issues I’ve described — it’s difficult to tell if this is a scholarly research project or a public outreach exercise. You might be asking “why can’t it be both?” and indeed it could; the “community building” element addresses a need. But not in its current form, because the project seems to be confusing or conflating these two different approaches that are geared towards different goals, and in the process, it’s achieving neither of them effectively. One important outcome of this conflation is that the goals are also in conflict in ways that compromise the results.

Problems with the research ethics

I can illustrate what I’m talking about here by describing some of the complications that come up with the interview component of TRaCE. A core part of the project is that interviews with PhDs are used as a means of generating “narratives,” some of which are shared on the website. When interviewing PhDs, the negative stories as well as positive ones will come out; there is some acknowledgement of this on the site’s FAQ page. But will participants who had difficulty during (and/or after) their PhD, or who had outright unpleasant or toxic experiences, be willing to share those first with an interviewer — and later on the website? It seems unlikely those details would end up as public narratives, so what effect does that have on the message about “results”?

My suspicions about this conflict were confirmed when I heard speakers on the last panel of the afternoon. This was where Heather Zwicker contributed really insightful, critical comments about TRaCE. Dr. Zwicker demonstrated exactly what the project would look like if it were being framed with humanities methods and approaches. Her concerns with ethics, in particular regarding the negative stories of graduates and the non-participation (in interviewing) of so many of those who didn’t have a positive experience, raised vitally important questions. She explained that given the content of some of the interview responses, the project is caught between “airing dirty laundry” in public or “whitewashing” the situation. This kind of self-reflection is what’s needed, not another web survey or Google search.

Dr. Zwicker’s comments highlighted the deeper conflict here: this work is not just about “telling stories”; it’s about telling stories to current and prospective PhD students, and potentially to policy-makers, governments, university administrators, career services professionals and non-academic employers. These stories help paint a picture that informs decision-making from the individual to the institutional to (potentially) the federal level. Therefore we need them to reflect as much of that picture as possible, not just the parts that highlight positive accounts over negative ones.

Results are being compromised

One way to produce scholarly work or policy research from this material would be to anonymize the interviews and then use content analysis and discourse analysis, pulling out themes in ways that don’t reveal the identities of participants. Maybe this is already part of the project, but I didn’t see it described among the methods employed. This is one reason why the assumed need to attach individual stories to actual people seems like a promotional need, not one that actually supports the validity of the results.

This problem with honesty and its consequences is one that undermines the validity of the project. It creates ethical quandaries for those who are doing the interviewing and deciding what to do with the material that comes from it. It compromises the results by clearly placing some interviewees in a position where their responses could bring risk, for example if they mention people they might still be working with or who still have influence over their careers — but where if they don’t participate, important perspectives are excluded. I think much of this could have been avoided if the project had been designed taking these kinds of issues into account, building on the existing research on doctoral education.

As a related point regarding the “connect and engage” part of TRaCE, I’m wondering whether PhDs who have had negative experiences will be keen to come back to the university and contribute to doctoral mentoring and professionalization.

People’s time is worthy of compensation

I also asked if these grads would be compensated for their time if they were coming back to the university, to provide mentoring, to teach courses, to offer their skills and special knowledge and time. The answer was that it’s hoped the project’s participants will volunteer their time. While I understand the concept of “service,” I’m not sure how many grads will be on board with this. People’s time is worthy of compensation and when they have a lot going on, providing (further) unpaid labour to the university might not be the top priority.

In closing, I want to repeat my point about the importance of the intended audiences for these stories. There is a bigger story being pieced together here, one that will be told to governments, universities, non-academic employers, and prospective and current doctoral students. I’m concerned that this research, with its many flaws, will be promoted as a foundation for decision-making, and that a project in which university participants have a stake will provide “answers” that reflect an incoherent approach as well as a tendency to over-represent positive results.

How can we trust the conclusions drawn from the project?

There are some key questions not being addressed at all. An important issue acknowledged on the TRaCE website is that ABDs are not included in the project. It follows that the results aren’t inclusive with regard to enrolled PhDs, because doctoral programs tend to have significant attrition rates. All results represent a proportion of graduates only. If the project were to be expanded, seeking out ABDs might be a helpful direction to take.

If one of TRaCE’s goals is to inform changes to doctoral programs, we also need to see more discussion of the role of supervision in this process. Even in the report produced in 2013 (PDF), there is a lot of focus on structure and content but hardly any comments on supervision. Lastly, is this study taking a critical look at who ends up with a faculty career? What types of people find “success”, where do they come from, and what kind of supervision and mentorship do they receive along the way?

The framing of a study has implications for the assumptions you can make, the theory you draw on, the methods you use, and the people you involve in producing the research. But since so little of this is made clear, and so many questions come up when you look closely at the methods being used, how can we trust the conclusions drawn from this project? I’m as frustrated as anyone about the lack of quality research on PhDs’ educational experiences and career outcomes, but I don’t think TRaCE in its current form can fill that gap.

ABOUT MELONIE FULLICK
Melonie Fullick
Melonie Fullick is a PhD candidate at York University. The topic of her dissertation is Canadian post-secondary education policy and its effects on the institutional environment in universities.
COMMENTS
  1. Stewart Rood / June 17, 2016 at 08:45

    All research is flawed, but most research has some merit. This critique seems harsh and surely there’s something positive that could be recognized.

  2. Sheri O / June 17, 2016 at 12:21

    Thank you for this thoughtful critique of TRaCE.

    I agree with your analysis. To me, the TRaCE project would not pass a research defense, or at least should not. The promotional aspect of TRaCE misappropriates scholarly practices, risking a false confidence.

    Likewise I worry about the glibness in the White Paper on the Future of the PhD in the Humanities, which also comes out of the IPLAI at McGill. Where is the process for examining the proposed changes, studying them, learning, faculty development, assessment of research work done by students to any sort of standard? But I digress…

    The openness of TRaCE and the White Paper to acknowledging deep-rooted problems in research training seems just too easily brushed aside with a fix-’em-up process that needs much more and much better grounding in the literature, in research design, and in purpose.

  3. Paul Yachnin / June 18, 2016 at 05:54

    Melonie Fullick’s response to my recent presentation on the TRaCE project (at the Mobilizing Graduate Students for Diverse Careers symposium at the University of Calgary) and to the TRaCE project itself has some valid and valuable things to say about how to make the project better. But most of what she says is misconceived and unfounded. In what follows, I will acknowledge what is valid in the critique, address her numerous local points, and contest her general complaints against the project, which have to do with methodology, theory, and ethics. At the outset, I should say that, although I disagree strongly with her views, I value the exchange as a good in itself and as a benefit to the TRaCE project.

    She is right that we should explain in more detail how we are doing the work of the project. There is already a basic account of the how and the why on the website. I undertook a fuller explanation at the Calgary meeting, at the Future Humanities conference at Carleton University in May, and in a report in University Affairs, May 17. We will expand the account of how we do the work on the website.

    It is true also that we should provide background, especially information about the number of respondents, to our online surveys. This doesn’t apply, however, to the foundational data collection and reporting on the 2,400+ humanities PhD graduates (cohorts from 2004 to 2014) from 60+ humanities departments across the country. That work was not done by survey.

    True also that we will have to reflect on what we have done to date and introduce necessary changes. We knew that we’d want to reflect and revise when we started out. That means that it is true that we have to think about the relationship between the data collected and presented in the pilot project and the data that will issue from a revised version of TRaCE. We will have to be clear about the differences between the two data sets, where differences exist.

    Her local criticisms include (1) a question about how the sample for the foundational data-gathering work was selected, (2) that TRaCE doesn’t look like a pilot project (because it includes 2,400 graduates from 60+ departments), (3) that it lacks connection with prior work on the humanities PhD, (4) that the project doesn’t include social scientists, (5) that grads won’t volunteer to be part of how we reform doctoral programs, (6) that the project doesn’t include an account of ABDs, and (7) that the project needs a greater focus on the role of doctoral supervision.

    As we say on the website, (1) each participating university agreed that a minimum of two departments would take part in the project. Departments chose to take part. We didn’t choose them. We said from the start that we weren’t aiming to do a full or systematic study of humanities PhD outcomes but rather a pilot project that would enable us to work toward an effective way of gathering data and creating an archive of narrative knowledge (the stories of individual grads). (2) So even though we have gathered information on a significant number of graduates in the first year of the work, it is nevertheless still a pilot project. We want to build on what we have achieved so far, learn from our shortcomings, and provide a fuller, more systematic account of PhD outcomes as the work moves forward. It is important to say that at no point do we claim to be providing a full account of PhD outcomes.

    (3) Does the project lack connection with prior work on the humanities PhD? The answer is no. TRaCE is grounded in three years of work across the country that started with the White Paper on the Future of the PhD in the Humanities (December 2013). The group that wrote the White Paper included a number of leading figures in the fields of the humanities PhD and the humanities—people like Sidonie Smith (author of A Manifesto for the Humanities: Transforming Doctoral Education in “Good Enough” Times) and Geoffrey Harpham (author of The Humanities and the Dream of America). Over the past three years, I have also presented and published on the situation facing the humanities PhD and humanities PhDs themselves. A second project, Future Humanities, gathered 26 Canadian universities at a conference in May 2015 that was based on advance work at all the universities and that featured sessions at the conference itself on a range of key questions, including tracking and reporting, time to completion, supervision, etc. (4) As a matter of fact, the core advisory group of the project includes a number of social scientists who have helped develop the data side of the project. We have also been advised by statistical methods experts at the Higher Education Quality Council of Ontario.

    It might indeed be true that (5) some PhD grads won’t be willing to mentor PhD students or undertake other kinds of work toward the reform of the PhD on a volunteer basis. That question figured largely in two recent cross-Canada phone conferences with students, grads, faculty, and administrators. This matter will best be dealt with on a case-by-case basis. (6) We have been talking for over a year about how to count those who start PhDs but don’t complete them. We address the question also on the website. We decided that we could not include the ABDs in a one-year pilot project. The challenges of doing what we had set out to do from the start were already stretching our resources. The ABDs are indeed an important population. We seek to include them on the narrative-knowledge side of the project, and we also seek ways of including them on the data-gathering side as the work moves forward. The universities clearly have a lot to learn from them. Finally (7), the project does pay attention to the important role of graduate supervision. The students who have been interviewing the grads from their home departments ask them about supervision. Their answers form part of the stories the grads tell on the website (though not in all the stories).

    Melonie Fullick’s more general criticisms have to do with methodology, theory, and ethics.

    The methods of the project are straightforward. First, with the support of HEQCO and of Maren Wood (Lilli Research), we developed a data-gathering template and a set of ways of searching publicly accessible websites in order to find out what PhD grads were doing in their careers. Second, we contacted as many of the grads as we could contact and sent them a brief explanation of the project and a consent form. We are still working to improve the response rate of grads to the invitation to be interviewed. The student researchers across the country have been interviewing as many of the grads as have been willing to be interviewed. The interviews are based on a question template that we developed. The results of the interviews provide additional data (about grant support, number of courses taught, years to completion, etc). They also form the basis of the stories that the grads tell about their education and subsequent careers. Their stories on the website provide accounts of many pathways to, through, and from the PhD, especially so that Canadians in general and PhD students in particular will be able to see that the PhD leads to a multiplicity of career pathways rather than to only one and that humanities PhDs contribute to the economic and social life of Canada and beyond in a number of ways and in a range of sectors of work and action. Third, participating universities will invite a number of grads back to contribute in various ways to the rethinking of the PhD and the advancement of the programs and the students. We (students, grads, faculty, administrators) are just now thinking about the best ways of doing this work.

    Why should this threefold methodology be characterized as incoherent? Apparently, it lacks “theoretical framing.” Here we get to the central misconception of the critique. As a matter of fact, TRaCE is not a project about the theory of doctoral education. It is not in any strong sense a theoretical project. It is a practical, operational way of addressing a number of urgent questions about the humanities PhD and the place of the humanities and the PhD in society. I think and write about the humanities and the PhD with a theoretical and historical bent, but the TRaCE project takes an on-the-ground approach and seeks to address both academic and non-academic audiences. That is also why we use ordinary language throughout the site and avoid terms of art from academic disciplines, whether in the humanities or social sciences.

    The critique claims that there are problems with research ethics. The question is, will the project post negative stories as well as positive ones? The answer is yes. Will people who have had negative experiences and outcomes come forward and tell their stories? We hope so. And indeed—and this is an odd contradiction in the critique—apparently people with negative stories to tell are coming forward. Where exactly is the problem with research ethics here? We are seeking all kinds of stories, and we seem to be getting them. By the way, we won’t post anything defamatory. And also the great majority of stories are not exactly either positive or negative, not either completely happy or completely sad. They are, as we might expect, a mix of good and bad elements. And it is far better to have the stories than not to have them.

    Finally, the critique suggests that we do away with the stories and instead comb through the interviews for anonymized kinds of data. We are said to be attaching “individual stories to actual people” in order to satisfy “a promotional need, not one that actually supports the validity of the results.” In fact, we take seriously the value of individual stories and actual people as providing another way of understanding the situation of humanities PhDs in modern Canada and as indispensable, yes, for promoting the visibility, to the public and to itself, of a national community of humanities teachers and researchers, inside and outside the academy.

  4. Donia Zhang / June 22, 2016 at 15:18

    I really appreciate the TRaCE project’s goals and think it is a humane approach because it is the first project as such that starts to address individual PhD graduate’s problems and concerns rather than just creating some abstract theory. I feel compensated this way even without offering money for participation.
