Tag Archives: evidence

Norms Regarding Disclosure: Citations or Full Text?

There is an interesting discussion occurring on the Minnesota Debate Teachers Association Forum (the successor to the long-running and incomparable MN Debate Web) about the debate community’s norms regarding disclosure of citations. Dan Kauppi, the debate coach at Eden Prairie High School, proposes that the existing norm in favor of disclosure of citations be replaced with a norm in favor of disclosure of the full text of evidence.

Here’s an open question for the community: Why is it a convention that teams should be obligated to give cites to their opponents after a round, but not the full text of the cards?

If we believe in the open exchange of previously used evidence in the interest of improving the quality of future debates, why do we force each other’s squads to go through the wasted time and hassle of finding the original source materials when it’s just as easy to copy and paste the card in its entirety as it is to send someone “Smith, ‘Article Name,’ 76 Journal of Expensive Access 274, 1993”? To the extent that disclosing previously used evidence is beneficial (it increases equity and allows for more in-depth debate, preparation, and research), those advantages are much better accessed by simply handing over the evidence.

While some might not be persuaded by the “it takes too much time to look up” argument, figuring that part of the value of the debate is in learning research skills (regardless of how tedious that may be), I think another more serious issue is one of resource access. Lending privileges at university libraries, access to journals, Westlaw, and/or Lexis are all extraordinarily expensive and out of reach for most squads.

I’m not so concerned with cites that just have a webpage attached, but in my experience a lot of teams purposely cite to difficult sources (and I’ve seen camp lectures this year where the instructors tell students to choose sources that are hard to look up). A norm which includes cite disclosure but doesn’t require card disclosure really makes research unnecessarily difficult when you have expensive database access, and impossible when you don’t.

A very easy solution to implement would just be the purchase of a simple sheetfed scanner to be used after the round for whatever relevant evidence the debaters want for later examination. In addition to solving all the problems I mentioned above, it would save the requesting team the time and delay of waiting for opponents to fulfill requests for cites, and save those getting the requests the hassle of fulfilling them.

The responses to Kauppi’s initial post have been varied but mostly against the proposed change in norms. Do you have an opinion either way regarding this issue? Should teams be willing to share the full text of evidence they have read in debates instead of just a citation? Please take a look at the MDTA thread and share your thoughts.

Going Paperless: Can High School Programs Effectively Make The Transition?

Whitman’s nationally recognized debate team fundamentally changed what debating looks like this semester by ditching their 40-pound plastic tubs and thousands of pages of evidence for sleek Dell laptops. Whitman’s team is the first college to fully transition to paperless debate.

— Gary Wang, The Pioneer

While the University of Denver was the first team to go paperless way back in 2006, Whitman College’s decision to make the switch during the 2008-2009 college season has once again brought the issue of paperless debating to the attention of programs throughout the nation. Thus far, the bulk of the conversation has centered around the needs and concerns of college squads that compete in NDT and CEDA tournaments; the unique needs and concerns of the high school policy debate community have remained largely unaddressed. This article is an attempt to remedy this shortcoming.


Bad Cards #1: The “South China Morning Post ‘96” Disease Impact

Many of the pieces of evidence that students frequently read in debates are unquestionably terrible. Often, the desire to bolster an impact’s magnitude and raise it to extinction-level leads debaters to rely on evidence with a host of problems including but not limited to:

  • evidence used to advance arguments outside its intended context;
  • evidence citing unqualified, (functionally) anonymous, or even nefarious authors;
  • evidence culled from (typically internet or tabloid) sources that are at best unedited and at worst contemptible;
  • evidence advancing hyperbolic arguments supported by vitriolic and/or over-the-top language;
  • evidence so old that it no longer makes sense given subsequent events or changes in the topic it discusses; and
  • evidence which must be liberally interpreted in order for it to be used to support the desired conclusion.

The “Bad Cards” series is an attempt to highlight some of the most egregious examples of poor-quality evidence that is nonetheless commonplace in high school policy debates. It is not the author’s intention to “scold” or “shame” those who have read these pieces of evidence in the past or who will do so in the future. Instead, it is an attempt to influence the way that evidence is selected for inclusion in debate arguments by arming opposing students with the tools they need to defeat bad cards.


Interesting Quote re: Politics

http://bgdailynews.com/articles/2009/06/11/opinion/commentary/comment2.txt

Recall that Kennedy’s successor, with far more political capital than Kennedy had, promised to defeat poverty. Historian Steven Hayward notes that in 1966, Lyndon Johnson’s commander in the War on Poverty, Sargent Shriver, told Congress that the White House believed poverty in America would be eliminated within 10 years. “Why,” Hayward wryly asks, “should social science be more difficult than rocket science?”

I don’t know that one is more difficult than the other, but I do know that they are not interchangeable. Physics is good at figuring out how to split atoms. Sociology, not so much.

Reading Author Qualifications Aloud: A Response To The Critics

Several readers provided thoughtful commentary about my recent essay about evidence analysis, “Nudging Evidence Analysis In The Right Direction: The Case For Reading Author Qualifications Aloud In High School Policy Debate.” This post is an attempt to further develop the arguments advanced in the initial article while addressing the concerns of critics.

1. Normalizing the verbal citation of author qualifications will “nudge” the debate process in the direction of the development of new “metrics of scholarly authority.”

Much of the feedback regarding the article has centered on the competitive outcome of this change in citation norms: How will debates about qualifications be resolved? Which qualifications will be preferred? This largely misses the point: the function of the change is to emphasize the importance of these discussions and to encourage debaters and judges to address them explicitly.

Nick Bubb highlighted many of the issues in a thoughtful comment:

[M]any people perceive authors’ opinions to be a politically motivated response to a given issue, rather than an independent evaluation of the truth. … For example, do we minimize Howard Dean’s opinion on health care reform because he’s a democrat and advocates for health care reform? Or do we prefer his analysis because he knows the policy? Or minimize his opinion because he stands to gain politically from the enactment of health care reform? Or do we prefer his opinion because he’s a doctor? What about his opinion on the political implications of health care policy? There are fair arguments to be made on all of these questions, but the structure for interpreting who is qualified to speak to the truth of a given issue is difficult. Certainly some individuals are more qualified than others, but how can we answer that question? If you are to believe some aspects of a hermeneutical process, authors’ qualifications are really their biases and we as the listener have biases for/against their experiences. We can be jaded and dismiss them or we can listen to their reasoning. But which action corresponds with finding the truth? The answer can’t be as simple as to listen to everything, because that degrades back into the problems you’re attempting to address: the prevalence of questionable evidence quality.

There’s also something odd about needing qualifications to speak to an issue. You don’t need a degree from Harvard to talk about poverty. A narrative from a poor person may be equally as powerful. I suppose the “qualifications” can change depending on the context, but then what do qualifications mean?

[A]s a judge, I wouldn’t know how to handle comparative claims. Do I prefer evidence from an economics professor about poverty policy or is it more important to listen to the people that the policy affects?

This is exactly my point: these issues are difficult, but they are also important. In a world where students are exposed to ever-expanding volumes of information, learning to intelligently separate the good from the bad is essential to informed citizenship.

I do not pretend to know the answers to the questions that Nick has posed. I can offer no mechanism for cleanly separating the intellectual wheat from the chaff. But the current model we have adopted in debate is certainly subject to criticism: “if it’s published, it’s evidence” has absolved us of our responsibility to take these issues of scholarly credibility seriously and to teach students to intelligently navigate the maze of information at their fingertips.

Effectively determining whom to believe—and more importantly, why to believe them—is arguably the most essential life skill that debate can teach. Perhaps better than any other activity, debate can effectively train students to think critically—to question others’ arguments and to evaluate their claims with skepticism. Working through the complicated business of analyzing sources and comparing qualifications is part and parcel of this facet of debate pedagogy.

The current norm—evidence should be verbally cited only by author’s last name and date of publication—hamstrings our ability to emphasize this aspect of critical thinking and in fact actively undermines it by framing the issue of qualification as separate from instead of intrinsic to the evidence itself.

As I argued in the article, this effect occurs at two levels:

  1. Excluding qualifications from verbal presentation implicitly de-values their importance when considering the quality of a piece of evidence. If the author(s)’ qualifications are not important enough to read aloud, after all, how important can they really be? …

  2. Requiring students to locate the qualifications of a given piece of evidence “privately”—during speech or prep time—prevents the judge from considering qualifications as part of their initial understanding of the evidence as it is being presented.

Shifting the norm to require verbal citation of author qualifications uniquely addresses these concerns.


Nudging Evidence Analysis In The Right Direction: The Case For Reading Author Qualifications Aloud In High School Policy Debate

The State of Evidence Evaluation In Debate

The recent discussions of evidence quality in high school policy debate have highlighted the need for debaters, coaches, and judges to revisit the prevailing assumptions about the proper role of cited material in our activity. While a drastic shift in the community’s approach to the evaluation of evidence remains exceedingly unlikely, there is an emerging consensus among debate educators that improving this facet of our pedagogy is both possible and necessary.

What is the problem? In short, the explosion of content enabled by new media has shattered traditional constraints on what constitutes “published” scholarship. While debaters in past decades were limited in their research to published books, journals/magazines, and newspapers, the debaters of today have access to a nearly limitless stream of information—all at their fingertips, and searchable in ways never before thought possible. As Gordon Mitchell describes in “Debate and authority 3.0,” the resulting information abundance has created a need for new ways of separating the good from the bad.

Publication, previously a one-to-many transaction, has become a many-to-many enterprise unfolding across a complex latticework of internetworked digital nodes. Now weblogs, e-books, online journals, and print-on-demand book production and delivery systems make it possible for a whole new population of prospective authors to publish material in what Michael Jensen (2008), National Academy of Sciences Director of Strategic Web Communications, calls an “era of content democracy and abundance.”

In content abundance, the key challenge for readers and referees has less to do with finding scarce information, and more to do with sorting wheat from the proverbial chaff (the ever-burgeoning surplus of digital material available online). The pressing nature of this information-overload challenge has spurred invention of what Jensen (2007) calls “new metrics of scholarly authority” – essentially, new ways of measuring the credibility and gravitas of knowledge producers in a digital world of content abundance.

Policy debate’s “metrics of scholarly authority” have developed slowly—changes in dominant assumptions about what constitutes “good evidence” have occurred over decades based on the organic back-and-forth of the contest round. At the high school level, the influence of summer debate institutes and the trickle-down from intercollegiate competition have played a major part in this evolution. While regional differences remain, the vast majority of those that participate in policy debate on the “national circuit” hold remarkably similar views about what makes a piece of evidence “good”. Indeed, the dominant conception of “good evidence” has become so normalized that it is often framed as self-evident: good evidence “speaks for itself”.


Learning From Your Elders: How To Find and Use Published Scholarship To Improve Your Theory Debating

Policy debate is a specialized activity with a unique vocabulary and a rich history. Its evolution has been shaped in large part by the broader developments in argumentation and rhetoric that have taken place in the academic field of communication. For many years, this connection between contest round debating and the academy from whence it spawned was made explicit by the frequent publication of scholarly articles about debate theory and praxis. Communication scholars, many of whom served as directors of the nation’s leading debate programs, contributed to the development of the activity by authoring texts about the major issues faced by competitors, coaches, and judges.

While the heyday of academic scholarship about competitive debate has passed, its voluminous legacy remains a vibrant source of inspiration and knowledge for contemporary students. Tapping into this rich history of debate scholarship is a fruitful way for students to deepen their comprehension of key theoretical issues while improving their overall ability to debate them effectively in contest rounds.

This article provides advice for students wishing to leverage debate theory research toward improvements in their debating. First, it provides an overview of the sources accessible to most debaters. Second, it provides a list of suggestions for making use of these articles. It is my hope that this article will give interested students the basic guidance they need to dive head first into the world of academic debate scholarship.


The Cult of Evidence and the Importance of Source Quality

The discussion of new affirmatives and Scott’s most recent post about the SPS article controversy intersect at the issue of how we are teaching students to evaluate evidence. I will write more about this over the coming days, but I wanted to chime in with a few thoughts about this meta-issue before discussing more about new affirmatives or about the SPS article controversy in particular.

My agreement with Roy’s initial post was not intended as an indictment of new affirmatives. Instead, I think the proliferation of poor-quality new affirmatives at season-ending tournaments reveals something important about the state of our activity. In particular, the following questions come to mind:

  1. What does it say about the way we are teaching our students that breaking new affirmatives is seen as so strategic at end-of-the-year championships? Why is it that students feel that they have a better chance of winning when they break even a poor-quality new affirmative than they do when reading one of their existing affirmatives?

  2. Does this represent a positive or negative trend? What should we be doing to nudge the competitive advantage toward a style of debate that rewards engagement with the topic literature and the opposition’s arguments more than evasion and trickery?

I don’t think it was Roy’s intention to “call out” those teams that consistently broke new affirmatives at this year’s TOC or to discourage teams from reading new affirmatives in the future. As I have written, there are certainly strategic benefits to breaking new cases and it is good to encourage students to invest the effort required to write a new case and prepare to defend it.


New Affirmatives and Source Credibility

In the first post on his blog (which finally inspired the creation of The3NR.com, an idea that had been kicking around for a while), Roy criticizes the recent trend toward breaking many poor-quality new affirmatives at the end-of-the-year national championships. He concludes:

I … hope … everyone agrees that it is becoming increasingly common for affirmatives to be afraid of defending their “house.” We do a disservice to the debaters and the quality of the debate if we allow this to continue. If you are a coach, challenge your kids to find the best possible aff and learn everything about it. If you are a student, work hard; debate is most satisfying not just when you win but when you pour your heart out defending something and the work you’ve done translates into overall success. Judges, be willing to disregard bad evidence and be sympathetic to good, smart arguments made by a team even if they are not evidenced. Winning is obviously an important function of debate, but if debate becomes a race to the bottom of crappy affirmatives, what is the point? We change topics yearly to learn about different arguments and issues, so why do some of the most important rounds at major tournaments ultimately get decided on generics that can be read year-round versus unsustainable new affs?

I agree 100%. One thing that I feel is important to add is that teams are far too afraid of negative teams finding the silver bullet strategy, especially at a tournament like the TOC where hired-gun researchers are given way too much respect in terms of their ability to change the game. If you have read an affirmative before and you are confident that it is based on sound arguments and quality evidence, chances are good that other squads have spent some time researching it, too. Between your research and theirs, it seems exceedingly likely that the “silver bullet” strategy would have been uncovered if it did indeed exist. If you haven’t found it and you haven’t heard another team read it, the most likely reason is not that the collective research ability of the high school community is poor but that no such strategy exists.

What is the silver bullet strategy against an RPS affirmative? Is there really something written that Sovacool and the other aff authors just haven’t thought about and therefore haven’t written a response to? If you read the journals every month and subscribe to RSS feeds for the major search terms relevant to your case, do you really think you’re going to miss the big new thing that came out and which the negative will catch you unprepared for?

The only reason to be afraid is because you lack confidence in the quality of your preparation. Maybe you haven’t kept up with the journals and you haven’t read all of the latest articles about your case. If that’s true, then you don’t deserve to win affirmative debates against negative teams that have worked hard to prepare to engage your case… maybe you should be reading stupid new affs that the other team will be unprepared to debate. But that is the debate equivalent of the trick play and an explicit admission that you are not as good as your opponents and that they have outworked you. If that’s an admission you’re willing to make, then so be it. But every team should strive to be the most prepared team possible when it comes to their affirmative(s), and you should feel a sense of shame and disappointment with yourself if you don’t think that’s the case.

This is decidedly not an argument “against” new affirmatives. There are times when it makes sense to try to catch the opposition off-guard with a case that you haven’t read before, and sometimes it even makes sense to read a new affirmative only once based on the teams that you are debating. But breaking new affirmatives that lack the credibility to survive even minimal negative research is an unfortunate and growing trend. If season-ending championship tournaments become battles between terrible new affirmatives and generic critiques and process counterplans, what does that say about our activity? Instead of pushing our students to become experts in the issues that they discuss throughout the season, it seems in many ways that we are telling them to forget what they’ve learned because, as critics say about Billy Beane, “that [stuff] doesn’t work in the playoffs.”

The obvious rejoinder to this line of reasoning is that the poor quality of these new affirmatives should make it easy for the negative to win. While I agree with this principle in the abstract, it doesn’t seem to play out that way in practice because of the approach that a majority of judges take. As Roy argues:

Debaters are not good at calling people out for reading bad evidence, and judges have become too comfortable saying “Yeah, well, I agree it might not be qualified, it might be from a random blog, but I mean they’ve got a card.” It used to be that only judges at the NDT in college would use the “well, they have a card” guise for making decisions, but this has now reverberated to almost every debate judged. We’re told not to believe everything we read on the internet, but it seems like in debate rounds, a place for intellectual discussion of issues, we often settle for evidence from people who are less qualified than the kids debating the issue. Debaters: CALL OUT TEAMS FOR THE BAD EVIDENCE THEY READ. Judges: BE WILLING TO SAY THAT DESPITE HAVING A CITE, TAG, AND URL, THE TEXT READ IS NOT EVIDENCE.

I’m probably one of the best judges one could find for these kinds of approaches/arguments (“their ev is garbage,” “this doesn’t make sense,” “prefer qualified academic scholarship,” etc.), and I constantly tell students that they would be rewarded if they were more diligent about taking this kind of approach. Even “mainstream” cases (e.g. not stuff about alien invasions of Iraq to steal antigravity technology) often contain “evidence” that I would gladly disregard out of hand if only the negative challenged it. The Bearden card? If the neg says “he is unqualified, he said it is already too late, and he said that our only hope is zero point energy,” then it goes away. “Still evaluate his warrants” is stupid in the worst sense of that word and an excuse for judges to avoid making judgments (which, of course, is the function of the judge) about what counts as evidence and what sources should be relied upon when crafting policy.

Debaters would be pleasantly surprised by the reception they would receive if they made a bigger deal out of source quality in their debates. While there is certainly a segment of the judging pool that adheres to the “but they’ve got a card” school, I do not think that it is the majority (or even close to it). In front of most judges, arguments about source quality and author credibility will receive a very favorable hearing—in many cases, you will be preaching to the proverbial choir. Remember, judges are the ones who have to listen to terrible evidence over and over again. After not very long, it gets old. Take advantage of that and challenge your opponents to justify the evaluation of the things they submit as “evidence” and you will win a lot more debates.