Debate in the Age of Deepfakes: Revisiting the Presumption of Credibility for Published Evidence

Adam Rawnsley broke a story today at The Daily Beast that should prompt serious reflection by debaters and debate coaches. It highlights a disturbing trend in the information ecosystems we depend on for the evidence that shapes our debates. In light of these revelations, it’s important that we as debate teachers and coaches revisit, individually and collectively, our norms about what constitutes “credible evidence.”

So, what’s going on? Titled “Right-Wing Media Outlets Duped by a Middle East Propaganda Campaign,” Rawnsley’s article reveals a sophisticated conspiracy by an unknown actor that included “a network of at least 19 fake personas that has spent the past year placing more than 90 opinion pieces in 46 different publications. The articles heaped praise on the United Arab Emirates and advocated for a tougher approach to Qatar, Turkey, Iran and its proxy groups in Iraq and Lebanon.”

The articles this network publishes are exactly what debaters are often looking for: relatively short, direct, strongly worded works of political advocacy from apparently qualified expert commentators. The authors seem real: they have photos, Twitter accounts, and bios, and most have been published in legitimate (sometimes even well-respected) news sources. If you were to come across one of these articles while doing debate research, you would have no qualms about cutting it. I know, because I unwittingly cut at least one article from this network. I’m sure others have, too.

It is upsetting to know how easily I was duped — especially because I thought I was quite familiar with the “disinformation” techniques I was supposed to be watching out for.

Let’s backtrack a bit — to last May and June, when I first started researching the arms sales topic. One of the most interesting things I learned early in that process was the significant influence that Saudi Arabia, the United Arab Emirates, and other Gulf States have on public research and advocacy about U.S. foreign policy in the Middle East. This wasn’t often the central focus of debates about the Saudi/UAE case during the season, but arguments about a “disinformation campaign” often made an appearance in those debates.

If you’re not familiar with these issues, the Foreign Influence Transparency Initiative at the Center for International Policy has published a lot of great work about the role the Gulf States play in funding think tanks, college and university programs, and online and print publications — in addition to the significant sums they spend lobbying Members of Congress and the Administration.

As part of my research into these “disinformation” arguments, I learned a lot about the many (many) Middle East-focused think tanks and their sources of funding: the Middle East Institute, the Washington Institute for Near East Policy, the Foundation for the Defense of Democracies, the National Council on US-Arab Relations, the Arabia Foundation, and countless others. Each of these institutions has its defenders, and I’m not suggesting that they’re all part of a coordinated disinformation campaign. But there are certainly reasonable questions to ask about the independence and credibility of many “experts” on U.S. policy toward the Middle East.

At the time, the starkest example I encountered of the potential for intentional disinformation to poison the public debate involved Fahad Nazer. Ben Freeman from CPI’s Foreign Influence Transparency Initiative explains:

Despite MEI head Salem’s claim that “independence is sacred,” there’s reason to question how independent scholars can be when their work is, at least in part, dependent on foreign funding. In at least one case, for instance, Salem’s institute published the work of Fahad Nazer, who was directly on the Saudi payroll. While earning $7,000 a month as a foreign agent for Saudi Arabia, Nazer wrote several pro-Saudi articles for both think tanks and mainstream media outlets, including one for MEI that made no mention of his financial ties to the Saudis. That March 2018 article did, however, encourage yet more U.S. support for the country’s ruling crown prince, Mohammed bin Salman, who, Nazer wrote, would “be good for Saudi Arabia, the United States, and the world.”
Just seven months later, bin Salman would reportedly authorize the brutal murder of Jamal Khashoggi and, in January 2019, Nazer himself would become the official spokesperson for the Saudi Embassy in Washington.

Ben Freeman, Director of the Foreign Influence Transparency Initiative at the Center for International Policy, Core Faculty Member at the Institute for Defense and Business—a nonprofit education and research institute that delivers professional education programs to government agencies, military, non-governmental organizations, and private industries, former Deputy Director of the National Security Program at Third Way, former National Security Fellow at the Project on Government Oversight, former Instructor at the American Military University, holds a Ph.D. in Political Science from Texas A&M University, 2019 (“Following the Foreign-Policy Money Trail in Washington: How Middle Eastern Powers Fund Think Tanks,” TomDispatch, February 21st, Available Online at http://www.tomdispatch.com/blog/176530/, Accessed 06-27-2019)

I had cut several cards from Nazer before I read Freeman’s article (and several others documenting the same and similar examples). But while this conflict of interest seems clear, his evidence continued to be read in debates throughout the season. And while the affirmative would sometimes push back on his (and others’) credibility, that effort rarely seemed worthwhile: given the intense time constraints of a contest round, developing and proving these indictments traded off with other, more important arguments. By the end of the season, my sense was that these “disinformation campaign”-style arguments were only somewhat helpful — and affirmative teams therefore mostly abandoned them.

The way that these arguments played out during the season concerned me at the time, and it concerns me even more now. I can’t remember higher-quality indictments of commonly used evidence at any earlier point in my career. The closest example I can think of is from the military presence topic: in debates about Afghanistan, there were strong indictments of some of the neoconservatives who were often cited by the negative. But those indictments were more about whether the authors were right than whether they were bought and paid for, and they tended to be relatively successful for the affirmative. On the arms sales topic, even very high-quality indictments of particular sources weren’t particularly successful.

Even so, this alone is not a crisis. Some affirmative teams did find occasional success with “disinformation”-style arguments, and making those arguments easier to execute effectively might require only relatively small tweaks to the way judges treat evidence.

But with the publication of Rawnsley’s article, it has become clear that our community’s long-settled “presumption of credibility” for published evidence is facing a much more fundamental challenge.

This isn’t the first such challenge (even during my career). With the rise of internet sources in the late 1990s — and especially of blogs in the early 2000s — debate’s norms about published evidence were severely tested. The norm that had been imported from the pre-internet era — that published writing was credible until proven otherwise — started to make less sense once the barrier to publication was nearly (if not entirely) eliminated and the historic gatekeeping role of the editor or publisher was removed.

During that era, there was a significant backlash by some judges and coaches against “blog evidence” — even when the blogging was done by respected experts. In retrospect, that backlash seems both wrong and naive; from our perspective in 2020, the early- and mid-2000s blogosphere can seem like a golden age of high-quality debate research. But the skeptics of that era were asking the right questions, and they helped ensure that our community’s evidence standards held up reasonably well even as the availability of lower-quality publications exploded.

Eventually, something of a consensus developed. For the most part, evidence from blogs could be attributed to someone — and if that someone was a credible expert, their evidence was presumptively valid. Evidence from unattributable blogs or from blogs attributed to non-expert authors is still sometimes read, but it (rightly) faces a lot of skepticism from opponents and judges.

One of the reasons that this soft consensus has remained relatively stable for the last decade is that we (rightly, I think) assumed that attributable evidence was genuine. While there were occasional suspicions about “planted” evidence — articles written by debaters or debate coaches with the explicit goal of creating cards that would not otherwise exist, often published under pseudonyms — relatively few examples have been uncovered. Gordon Mitchell addressed this in his prescient 2009 post “Debate and authority 3.0,” which included a discussion of the infamous solar-powered satellites/Space Review/”John Marburry” scandal from earlier that year.

But in 2020, our shared presumption of genuineness no longer seems defensible. More than ever before, “fake” articles are being published — often in “fake” publications, but sometimes in legitimate publications — and attributed to “fake” personas. By “fake,” I mean that in the most literal sense: “not genuine; counterfeit” (Oxford Languages). The supposed author of the article does not exist. Their supposed credentials are fabricated. The organizations they work for and the publications they write for only exist “on paper” and online.

No one knows who really wrote these articles or why they wrote them, and it’s not easy to figure this out even if you’re an experienced journalist with lots of time to investigate. As professor Marc Owen Jones puts it in Rawnsley’s article, “It’s not just fake news we need to be wary of, but fake journalists.”

This phenomenon poses a serious challenge to our shared norms about debate evidence. If we can’t trust that the authors we cite — and the articles and publications we cite them from — are real, we need to revisit the foundational norm that treats published writing as credible until proven otherwise. This is easier said than done, especially because these fake sources are so convincing.

This might therefore be a good time to contemplate more broadly the appropriate role of evidence in debate. On this point, Travis Cram’s 2012 article “Putting Debate Back Into Debate: Digital Debate and Evidence” offers one way forward:

Digital debate has enabled debaters to seamlessly file, read, and utilize mass quantities of evidence at a level unthinkable under debate’s traditional paper medium. This shift has altered how evidence is both produced and tested. A dominant trend in debate has been to view evidence’s chief function as making claims. As a result, most debate research in a digital age is characterized by the “production” of specific claims; finding and constructing precise wordings to fit strategic expectations takes priority over providing and comparing the reasoning aspect of argumentation. This approach, which I dub “debate as information production,” leads students further away from developing the skills of critically evaluating and judging competing truth claims in evidence. In this paper, I attempt to rectify this imbalance by arguing for prioritizing the perspective of “debate as argument-judgment” over one of “debate as information production,” while articulating a set of best practices that educators can call upon to improve debates about evidence when moving forward.

Travis Cram, Director of Forensics at the University of Wyoming, Ph.D. Candidate in Communication at the University of Kansas, 2012 (“Putting Debate Back Into Debate: Digital Debate and Evidence,” Contemporary Argumentation and Debate, Volume 33, pp. 134-170)

Cram’s article was way ahead of its time; it recognized that “paperless debate and continued changes in web-based resources have accelerated and compounded several problematic trends in the use of evidence in debate” and accurately anticipated the ways that this would shape debate practices for the rest of the decade. But even Cram did not anticipate the emerging methods of deepfake disinformation uncovered by Rawnsley.

Now more than ever, Cram’s suggested paradigm shift from “debate as information production” to “debate as argument-judgment” is vital if we are to preserve debate’s important role in training students to become savvy, skeptical, and well-informed consumers of information. As it becomes harder and harder to distinguish “real” evidence from “fake” evidence, the answer might not be to continue trying to improve our standards for what counts as “real.” Instead, it might be time to put more emphasis on the validity of the argument rather than the validity of the evidence.

As Cram explains,

To a large extent, the solution to the dilemma of source authority is beyond the boundaries of this essay because this is a problem that is largely external to the issue of debate’s dominant relationship to evidence (though it certainly implicates it, as the previous section argued). Even if one entirely disagrees that “debate as information production” is a problem at all, the challenge of resolving the erosion of traditional standards of authority remains. Furthermore, even if the alternative of “debate as argumentative judgment” is incapable of resolving the present crisis of source authority, it fulfills its core objective if it creates a world where students are more actively involved in comparing reasons and warrants throughout the course of a debate.

Travis Cram, Director of Forensics at the University of Wyoming, Ph.D. Candidate in Communication at the University of Kansas, 2012 (“Putting Debate Back Into Debate: Digital Debate and Evidence,” Contemporary Argumentation and Debate, Volume 33, pp. 134-170)

In an era when we can’t tell the difference between genuine evidence and fake evidence — and when sophisticated techniques for fabrication are becoming ubiquitous — we can’t rely on the same shared norms about evidence that were developed in the 1990s and 2000s.

We’ve long known that fabricated evidence could break our debates, and we’ve mostly done a good job of preventing it. But now it’s not just our colleagues who could be fabricating evidence: it’s the authors of the evidence themselves and the publications that distribute their work. And they’re fabricating evidence not to win competitive contest debates, but to win the public debate about the issues our students are debating. This is a challenge that previous generations of teachers and coaches never needed to contemplate, but it is increasingly our reality.

I certainly don’t have all the answers, but I’m confident that Cram was right in 2012 — and that his advice applies with even greater force today. Debate will better serve its students — and by extension, the larger world they live in — if it is conceived as a way to teach “argument judgment” more than “information production.” The sooner we collectively shift our norms in that direction, the less vulnerable we will be to disinformation campaigns like the one uncovered by Rawnsley.

And while we can’t stop these trends from polluting the world’s information ecosystems, making these changes can prepare students to better cope with this new reality. For debate teachers and coaches, this seems likely to be one of the defining challenges of the 2020s.


If you would like to share your thoughts about evidence norms in the deepfake era, please do so in the comments below.