The Layne Kirshon Hypothetical: Resolving Un(der)-discussed Impacts

I was first introduced to this hypothetical scenario at NFL Nationals by Will Thibeau of Glenbrook South. Credited to Layne Kirshon of the Kinkaid School (though he was probably not the first to pose it), it provides an interesting litmus test for an individual’s judging philosophy.

The Hypothetical:

The affirmative reads a topical plan and argues that its adoption will trigger nuclear conflict. The 1AC isolates several internal links but does not articulate a terminal impact — its only contention is that the plan will trigger nuclear conflict. The 1NC “link turns” the case for eight minutes, answering the affirmative’s internal links and advancing several internal links of its own contending that the adoption of the plan will prevent nuclear conflict. The rest of the debate is narrowly focused on this nexus question: does the plan cause or prevent nuclear conflict? At the conclusion of the debate, the judge determines (based on the arguments advanced by both teams) that the plan’s adoption will in fact cause nuclear conflict, thereby agreeing with the affirmative’s contention. Should s/he vote affirmative or negative?

Post your answer in the comments along with the reasoning that brought you to it. Many people have already spent hours discussing and debating this hypothetical, so it seems like a perfect way to kick off the return of The 3NR after a much-needed vacation. Ready, set, go.

Reading Author Qualifications Aloud: A Response To The Critics

Several readers provided thoughtful commentary on my recent essay about evidence analysis, “Nudging Evidence Analysis In The Right Direction: The Case For Reading Author Qualifications Aloud In High School Policy Debate.” This post is an attempt to further develop the arguments advanced in the initial article while addressing the concerns of critics.

1. Normalizing the verbal citation of author qualifications will “nudge” the debate process in the direction of the development of new “metrics of scholarly authority.”

Much of the feedback regarding the article has centered on the competitive consequences of this change in citation norms: how will debates about qualifications be resolved? Which qualifications will be preferred? This largely misses the point: the function of the new norm is to emphasize the importance of these discussions and to encourage debaters and judges to address them explicitly.

Nick Bubb highlighted many of the issues in a thoughtful comment:

[M]any people perceive authors’ opinions to be a politically motivated response to a given issue, rather than an independent evaluation of the truth. … For example, do we minimize Howard Dean’s opinion on health care reform because he’s a democrat and advocates for health care reform? Or do we prefer his analysis because he knows the policy? Or minimize his opinion because he stands to gain politically from the enactment of health care reform? Or do we prefer his opinion because he’s a doctor? What about his opinion on the political implications of health care policy? There are fair arguments to be made on all of these questions, but the structure for interpreting who is qualified to speak to the truth of a given issue is difficult. Certainly some individuals are more qualified than others, but how can we answer that question? If you are to believe some aspects of a hermeneutical process, authors’ qualifications are really their biases and we as the listener have biases for/against their experiences. We can be jaded and dismiss them or we can listen to their reasoning. But which action corresponds with finding the truth? The answer can’t be as simple as to listen to everything, because that degrades back into the problems you’re attempting to address: the prevalence of questionable evidence quality.

There’s also something odd about needing qualifications to speak to an issue. You don’t need a degree from Harvard to talk about poverty. A narrative from a poor person may be equally as powerful. I suppose the “qualifications” can change depending on the context, but then what do qualifications mean?

[A]s a judge, I wouldn’t know how to handle comparative claims. Do I prefer evidence from an economics professor about poverty policy or is it more important to listen to the people that the policy affects?

This is exactly my point: these issues are difficult, but they are also important. In a world where students are exposed to ever-expanding volumes of information, learning to intelligently separate the good from the bad is essential to informed citizenship.

I do not pretend to know the answers to the questions that Nick has posed. I can offer no mechanism for cleanly separating the intellectual wheat from the chaff. But the current model we have adopted in debate is certainly subject to criticism: “if it’s published, it’s evidence” has absolved us of our responsibility to take issues of scholarly credibility seriously and to teach students to intelligently navigate the maze of information at their fingertips.

Effectively determining whom to believe—and more importantly, why to believe them—is arguably the most essential life skill that debate can teach. Perhaps better than any other activity, debate can effectively train students to think critically—to question others’ arguments and to evaluate their claims with skepticism. Working through the complicated business of analyzing sources and comparing qualifications is part and parcel of this facet of debate pedagogy.

The current norm—that evidence should be verbally cited only by the author’s last name and date of publication—hamstrings our ability to emphasize this aspect of critical thinking and in fact actively undermines it by framing the issue of qualification as separate from instead of intrinsic to the evidence itself.

As I argued in the article, this effect occurs at two levels:

  1. Excluding qualifications from verbal presentation implicitly de-values their importance when considering the quality of a piece of evidence. If the author(s)’ qualifications are not important enough to read aloud, after all, how important can they really be? …

  2. Requiring students to locate the qualifications of a given piece of evidence “privately”—during speech or prep time—prevents the judge from considering qualifications as part of their initial understanding of the evidence as it is being presented.

Shifting the norm to require verbal citation of author qualifications uniquely addresses these concerns.

Nudging Evidence Analysis In The Right Direction: The Case For Reading Author Qualifications Aloud In High School Policy Debate

The State of Evidence Evaluation In Debate

The recent discussions of evidence quality in high school policy debate have highlighted the need for debaters, coaches, and judges to revisit the prevailing assumptions about the proper role of cited material in our activity. While a drastic shift in the community’s approach to the evaluation of evidence remains exceedingly unlikely, there is an emerging consensus among debate educators that improving this facet of our pedagogy is both possible and necessary.

What is the problem? In short, the explosion of content enabled by new media has shattered traditional constraints on what constitutes “published” scholarship. While debaters in past decades were limited in their research to published books, journals/magazines, and newspapers, the debaters of today have access to a nearly limitless stream of information—all at their fingertips, and searchable in ways never before thought possible. As Gordon Mitchell describes in “Debate and authority 3.0,” the resulting information abundance has created a need for new ways of separating the good from the bad.

Publication, previously a one-to-many transaction, has become a many-to-many enterprise unfolding across a complex latticework of internetworked digital nodes. Now weblogs, e-books, online journals, and print-on-demand book production and delivery systems make it possible for a whole new population of prospective authors to publish material in what Michael Jensen (2008), National Academy of Sciences Director of Strategic Web Communications, calls an “era of content democracy and abundance.”

In content abundance, the key challenge for readers and referees has less to do with finding scarce information, and more to do with sorting wheat from the proverbial chaff (the ever-burgeoning surplus of digital material available online). The pressing nature of this information-overload challenge has spurred invention of what Jensen (2007) calls “new metrics of scholarly authority” – essentially, new ways of measuring the credibility and gravitas of knowledge producers in a digital world of content abundance.

Policy debate’s “metrics of scholarly authority” have developed slowly—changes in dominant assumptions about what constitutes “good evidence” have occurred over decades based on the organic back-and-forth of the contest round. At the high school level, the influence of summer debate institutes and the trickle-down from intercollegiate competition have played a major part in this evolution. While regional differences remain, the vast majority of those who participate in policy debate on the “national circuit” hold remarkably similar views about what makes a piece of evidence “good”. Indeed, the dominant conception of “good evidence” has become so normalized that it is often framed as self-evident: good evidence “speaks for itself”.

New Affirmatives and Source Credibility

In the first post on his blog (which finally inspired the creation of The3NR.com, an idea that had been kicking around for a while), Roy criticizes the recent trend toward breaking many poor-quality new affirmatives at the end-of-the-year national championships. He concludes:

I … hope … everyone agrees that it is becoming increasingly more common for affirmatives to be afraid of defending their “house.” We do a disservice to the debaters and the quality of the debate if we allow this to continue. If you are a coach, challenge your kids to find the best possible aff and learn everything about it. If you are a student, work hard; debate is most satisfying not just when you win but when you pour your heart out defending something and the work you’ve done translates into overall success. Judges, be willing to disregard bad evidence, and be sympathetic to good, smart arguments made by a team even if not evidenced. Winning is obviously an important function of debate, but if debate becomes a race to the bottom of crappy affirmatives, what is the point? We change topics yearly to learn about different arguments and issues, so why do some of the most important rounds at major tournaments ultimately get decided on generics that can be read year round vs. unsustainable new affs?

I agree 100%. One thing that I feel is important to add is that teams are far too afraid of negative teams finding the silver-bullet strategy, especially at a tournament like the TOC where hired-gun researchers are given way too much respect in terms of their ability to change the game. If you have read an affirmative before and you are confident that it is based on sound arguments and quality evidence, chances are good that other squads have spent some time researching it, too. Between your research and theirs, it seems exceedingly likely that the “silver bullet” strategy would have been uncovered if it did indeed exist. If you haven’t found it and you haven’t heard another team read it, the most likely reason is not that the collective research ability of the high school community is poor but that no such strategy exists.

What is the silver bullet strategy against an RPS affirmative? Is there really something written that Sovacool and the other aff authors just haven’t thought about and therefore haven’t written a response to? If you read the journals every month and subscribe to RSS feeds for the major search terms relevant to your case, do you really think you’re going to miss the big new thing that came out and which the negative will catch you unprepared for?

The only reason to be afraid is because you lack confidence in the quality of your preparation. Maybe you haven’t kept up with the journals and you haven’t read all of the latest articles about your case. If that’s true, then you don’t deserve to win affirmative debates against negative teams that have worked hard to prepare to engage your case… maybe you should be reading stupid new affs that the other team will be unprepared to debate. But that is the debate equivalent of the trick play and an explicit admission that you are not as good as your opponents and that they have outworked you. If that’s an admission you’re willing to make, then so be it. But every team should strive to be the most prepared team possible when it comes to their affirmative(s), and you should feel a sense of shame and disappointment with yourself if you don’t think that’s the case.

This is decidedly not an argument “against” new affirmatives. There are times when it makes sense to try to catch the opposition off-guard with a case that you haven’t read before, and sometimes it even makes sense to read a new affirmative only once based on the teams that you are debating. But breaking new affirmatives that lack the credibility to survive even minimal negative research is an unfortunate but growing trend. If season-ending championship tournaments become battles between terrible new affirmatives and generic critiques and process counterplans, what does that say about our activity? Instead of pushing our students to become experts in the issues that they discuss throughout the season, it seems in many ways that we are telling them to forget what they’ve learned because, as critics say about Billy Beane, “that [stuff] doesn’t work in the playoffs.”

The obvious rejoinder to this line of reasoning is that the poor quality of these new affirmatives should make it easy for the negative to win. While I agree with this principle in the abstract, it doesn’t seem to play out that way in practice because of the approach that a majority of judges take. As Roy argues:

Debaters are not good at calling people out for reading bad evidence, and judges have become too comfortable saying “Yeah, well, I agree it might not be qualified, it might be from a random blog, but I mean they’ve got a card.” It used to be only at the NDT in college that judges would use the “well, they have a card” guise for making decisions, but this has now reverberated to almost every debate judged. We’re told not to believe everything we read on the internet, but in debate rounds, a place for intellectual discussion of issues, we often settle for evidence from people who are less qualified than the kids debating the issue. Debaters: CALL OUT TEAMS FOR THE BAD EVIDENCE they read. Judges: BE WILLING TO SAY THAT DESPITE HAVING A CITE, TAG, AND URL, THE TEXT READ IS NOT EVIDENCE.

I’m probably one of the best judges one could find for these kinds of approaches/arguments (“their ev is garbage,” “this doesn’t make sense,” “prefer qualified academic scholarship,” etc.), and I constantly tell students that they would be rewarded if they were more diligent about taking this kind of approach. Even “mainstream” cases (e.g. not stuff about alien invasions of Iraq to steal antigravity technology) often contain “evidence” that I would gladly disregard out of hand if only the negative challenged it. The Bearden card? If the neg says “he is unqualified, he said it is already too late, and he said that our only hope is zero point energy,” then it goes away. “Still evaluate his warrants” is stupid in the worst sense of that word and an excuse for judges to avoid making judgments (which, of course, is the function of the judge) about what counts as evidence and what sources should be relied upon when crafting policy.

Debaters would be pleasantly surprised by the reception they would receive if they made a bigger deal out of source quality in their debates. While there is certainly a segment of the judging pool that adheres to the “but they’ve got a card” school, I do not think that it is the majority (or even close to it). In front of most judges, arguments about source quality and author credibility will receive a very favorable hearing—in many cases, you will be preaching to the proverbial choir. Remember, judges are the ones who have to listen to terrible evidence over and over again. After not very long, it gets old. Take advantage of that and challenge your opponents to justify the evaluation of the things they submit as “evidence” and you will win a lot more debates.

The Meaning Of “Offense/Defense: There’s Only A Risk”

Debaters say a lot of things in debates that are not arguments in themselves but which contain cues that trigger meaning in the minds of their audience (their opponents and, most importantly, the judge). As Roy discussed in an article about “Defense”, one such cue is used to frame the way the judge approaches his or her evaluation of the debate. In many 2NRs or 2ARs, the debater starts with something like this:

Evaluate this debate through an offense/defense paradigm—they only have defense so there’s only a risk that we outweigh.

In many cases, the opposing team does not refute this framing of the debate. In a few cases, they respond by insisting that the judge not evaluate the debate using an offense/defense paradigm and then extend their defensive arguments.

But what does it mean to evaluate a debate using the “offense/defense paradigm?” Distinguishing between offensive and defensive arguments is easy enough; categorizing arguments this way is indeed one of the most helpful ways for new debaters to conceptualize a round. Put most simply, offensive arguments are those that provide a reason to vote for you while defensive arguments are those that provide a reason not to vote against you. Easy enough.

Deploying this distinction between offensive and defensive arguments as a decision-making calculus, however, is a little more complicated. As Roy argues, too many judges use “they’ve only got defense” as an excuse not to make judgments about each team’s arguments. If the negative goes for a disadvantage and the affirmative goes for “this disadvantage does not make sense (because it is missing internal links, is empirically denied, links more to the status quo than our plan, etc.)”, it is nonsensical for the negative to implore the judge to employ an offense/defense paradigm and therefore exclude consideration of the affirmative’s responses to the disadvantage.

“Offense/defense—there’s only a risk” is not a reason to only evaluate offensive arguments. Offense/defense is a way of categorizing arguments, not resolving them. In the vast majority of debates, it does not provide any helpful guidance for judges as they evaluate the two teams’ arguments.

Instead of reciting this line at the top of the 2NR or 2AR, debaters should explicitly compare the offensive and defensive arguments made by both sides. If one’s best shot at winning is to minimize the importance of defensive arguments against a high-magnitude impact, one should make those arguments explicitly instead of relying on the “offense/defense” crutch. Separating offensive from defensive arguments is a helpful way to approach a rebuttal, but it does not obviate the need for debaters to win their framing of the impacts… it is a starting point, not the destination.

This should seem obvious to many readers, but it is important to unpack the meanings that we attach to certain phrases. “Offense/defense—there’s only a risk” is by no means the only instance in which a few words have come to mean much more than they say, but it is certainly one of the most frequently used.

The bottom line is that debaters should strive to make their impact arguments and framing of debates more sophisticated and judges should be leery of assigning meaning to utterances that do not fully communicate a complete argument.