Speculative Diction

High-profile academic fraud a symptom of underlying dysfunction

BY MELONIE FULLICK | JUN 12 2015

If you follow the higher education news, stories of scandal and corruption relating to academic research pop up on a fairly regular basis. One such case, which exploded last month in the U.S., is that of Michael LaCour. Described in a New York Magazine article as “one of the biggest scientific frauds in recent memory,” it was hard to miss. LaCour, a PhD candidate in political science at UCLA, co-authored a paper titled “When contact changes minds: An experiment on transmission of support for gay equality.” The paper made the claim that “a brief conversation about marriage equality with a canvasser who revealed that he or she was gay had a big, lasting effect on … voters’ views.” It was published in Science in December 2014, and the research received a great deal of (mostly positive) national media attention, discussed everywhere from the New York Times to This American Life.

David Broockman and Joshua Kalla, graduate students at UC Berkeley, were impressed by LaCour’s findings and sought to extend them in their own study. When they ran into problems trying to match his response rate, they contacted the survey firm that LaCour had supposedly used; the firm had no knowledge of his project, nor even the capacity to carry out the study as described. Broockman and Kalla discovered further problems with the data, and reported these to Donald P. Green (of Columbia University), senior co-author of the paper—who reported it to LaCour’s supervisor and confronted LaCour.

With Yale professor Peter Aronow, Broockman and Kalla created a report detailing their findings (“Irregularities in LaCour”). The report was tweeted, and the tweet was noticed by Lila Guterman, who passed it along to Ivan Oransky of Retraction Watch. Thanks to the immediacy of social media and digital documents, the story had broken widely within a day and a half. On May 28th, the paper was retracted by Science at Green’s request. LaCour objected to the retraction and promised a rebuttal.

It turns out that when asked to produce the raw survey data, LaCour could not; nor would he provide contact information for survey respondents so their participation could be verified. Aside from serious problems with the data, LaCour had also listed as funders agencies that had not in fact funded the research. In short order, other research projects he’d worked on (and other items on his CV) were placed under the microscope and found to be spurious. While LaCour did post a rebuttal on May 29th, it did little or nothing to answer the questions raised by Broockman, Kalla and others.

This is a serious, and very public, scandal involving gross violations of professional ethics. But the first questions that come to mind are, how could this happen—and why?

Firstly, LaCour’s research was attention-grabbing because it directly addressed a core problem in political communication: can you change someone’s mind on a divisive issue? And if so, how? Imagine the implications of having the answer to that question, for political campaigns of all kinds. It’s why the study was focussed on a current and controversial political issue (marriage equality), and it’s how the results came to be so influential; the “findings had huge implications for people who were trying to advance the cause of equality and have changed how advocates do their work” (Broockman, in Retraction Watch). The research not only offered a solution to a deep-rooted problem; it also seemed to provide “an affirmation of the power of human contact to overcome disagreement” (Datalab). Because LaCour’s study had this political goal, when the paper was retracted some conservative commentators were happy to point to the “liberal bias” in academe as the reason for falsified results.

What about the grad student at the centre of this scandal? Until very recently, LaCour looked by all accounts as if he stood at the pinnacle of the sorting system that feeds the U.S. academic hierarchy. He was the prototype of academic success: the young scholar who publishes ground-breaking research while still completing his PhD. LaCour was to become an assistant professor at Princeton University in July; thus he’d scored not only a tenure-track job, but one at an Ivy League institution to boot. He was the epitome of the well-groomed young research star. And yet we’re now forced to ask not only how much of this was a mere facade, but also how he could possibly have made it this far if that is the case. If the supposed cream of the academic crop is doing this, where else is it happening?

One thing that stands out about much of the reaction to this incident—which brought pillorying on social media—is the focus on LaCour as being individually responsible. While the incident was certainly described by some as shameful for the whole field of political science (and for the social sciences in general), LaCour has borne the brunt of the criticism. According to these accounts, it seems no one was to blame for LaCour’s behaviour other than himself. But how can that possibly be the case? While his actions are those of an incorrigible fraudster, how could he have gotten to his current position without systemic support and the endorsement of established researchers?

Take for example LaCour’s senior co-author Donald P. Green, a distinguished scholar of political science. Although he had not seen the data, “simply having Green’s name in the byline ensured that the paper was published in a top-tier journal” (New York Times editorial). It was ultimately Green who wrote the retraction letter to Science; his letter is appended to the report by Broockman, Kalla and Aronow (the Washington Post’s headline re-frames the issue thus: “Co-author disavows highly publicized study on public opinion and same-sex marriage”). Was Green following standard practice here? What about LaCour’s doctoral advisor, Lynn Vavreck? Or the others with whom he must have worked? This paper clearly isn’t a one-off for LaCour; it’s apparently more like the tip of the iceberg. But he had to be allowed into the academic elite—because, much as we’d like to believe otherwise, outcomes don’t depend solely on one’s personal effort.

Not everyone is buying the individualization angle. In the New York Times, Adam Marcus and Ivan Oransky (co-founders of Retraction Watch) wrote that “we’re surprised only that retractions aren’t even more frequent … The incentives to publish today are corrupting the scientific literature and the media that covers it.” There are plenty of possibilities for fiddling academic numbers, but this is really a question of motivation. Academic careers are hard to build and there’s tough competition for scarce rewards; there are also professional penalties to be paid for rocking the boat. Broockman was repeatedly told to keep quiet about his misgivings regarding the data, lest his own academic career should be jeopardized.

I’m going to digress for a moment, to connect all this to a recent article from The Atlantic: “Why it pays to be a jerk.” This one isn’t targeted at academia, but the connections are clear: the crux of the (research-based!) argument is that “semi-obnoxious behavior not only can make a person seem more powerful, but can make them more powerful […] The same goes for overconfidence. Act like you’re the smartest person in the room” and you’re likely to win more influence in the workplace. The question I’d ask is this: whether or not this kind of behaviour can pay off—should it? And why would we want it to? In what kind of professional culture does that happen?

To return to LaCour, I’d say there’s a link between the idea of acting overly confident to gain influence and, for example, listing “$793,000 worth of grants received from various foundations” on your CV—grants that were not, in fact, awarded to you. Why was no-one checking in, when LaCour’s CV contained such egregious falsehoods, and when his research results “flew in the face of just about every established tenet of political persuasion”? Why did no-one catch him out before now, in spite of this pattern of deception? My guess is that part of it relates to how successfully he performed the role of an elite early-career academic, how well he seemed to “fit.” So what does that tell us about the context (and causes) of success?

As I’ve argued before, if you build a system that rewards this kind of performance then you should expect unpalatable results to emerge. If you want a gamified system, people will find ways of gaming it. The really interesting question, I think, is about how much of this kind of thing is happening in a more low-level way every day in the profession; and who is getting ahead by engaging in it. If we ask ourselves what kind of people we want to see as scholars and teachers in our future universities, this is a key point.

We can’t complain about high-profile cases like this without also engaging in some critical reflection on the system in which such incidents can happen. We can’t say we want to hire the best people when only one particular, narrow version of “best” is noted and rewarded. Cases like LaCour’s are in this sense red herrings, because they make it easy for us not to notice what’s happening around us every day to varying degrees, depending on our contexts; they make it easy for us to assume the problem is concentrated in particular, deviant individuals. Drew Foster writes that “if academics are satisfied to view this incident as an anomaly that was effectively handled by a functional system of checks, it would be a regrettable missed opportunity.”

With the job market having become so competitive, all this is even less surprising (but perhaps more angering). Even though most of us realize that academe is not a meritocracy, reading about LaCour’s case feels like having salt rubbed into a wound. Not only that, but this was research that had clear “real-world” implications, and was already being applied—for example, in Ireland for the same-sex marriage referendum. Faking these kinds of results is a betrayal of trust, and regardless of who is directly or indirectly responsible, it causes another dent in the relationship between university researchers and those from outside the institution who work with them.

This is yet another case that shows why we should care about universities as places of knowledge creation and education. It’s why we need to think hard about how academic culture works, who is allowed to participate, and the effects this has on what we get to “know” about the world.

ABOUT MELONIE FULLICK
Melonie Fullick
Melonie Fullick is a PhD candidate at York University. The topic of her dissertation is Canadian post-secondary education policy and its effects on the institutional environment in universities.
COMMENTS

  1. Marco Boniface / January 6, 2016 at 21:45

Degree qualifications are not checked by many universities and institutions. I was aware of many academics claiming doctoral degrees in American universities in which I worked, including the Ivy League. Despite warnings to administrations about this, I was never aware of any of the frauds being challenged or dismissed – as though anybody who had navigated the hiring process could not possibly be hoodwinking the hiring committee. If today the claimed degrees of all senior figures in the academic world were checked diligently with their supposed degree-awarding universities, there might be some very embarrassing revelations – “Catch Me If You Can” scenarios. But it will never happen, as figures in positions of power, such as museum directors and full professors at very famous institutions, would block such checks to protect their own careers. All frauds know that once they overcome the first hurdle and get a foot in the door, they can solidify their fraud by laying an ever thicker smokescreen, and the higher up they climb the less chance that they will ever be detected. There are none so blind as those who will not see…or don’t want to.
