Reading Author Qualifications Aloud: A Response To The Critics

Several readers offered thoughtful commentary on my recent essay about evidence analysis, “Nudging Evidence Analysis In The Right Direction: The Case For Reading Author Qualifications Aloud In High School Policy Debate.” This post further develops the arguments advanced in the initial article while addressing the concerns of critics.

1. Normalizing the verbal citation of author qualifications will “nudge” the debate process toward the development of new “metrics of scholarly authority.”

Much of the feedback regarding the article has centered on the competitive outcomes of this change in evidence citation norms: how will debates about qualifications be resolved, which qualifications will be preferred, and so on. This largely misses the point: the function of the new norm is to emphasize the importance of these discussions and to encourage debaters and judges to address them explicitly.

Nick Bubb highlighted many of the issues in a thoughtful comment:

[M]any people perceive authors’ opinions to be a politically motivated response to a given issue, rather than an independent evaluation of the truth. … For example, do we minimize Howard Dean’s opinion on health care reform because he’s a democrat and advocates for health care reform? Or do we prefer his analysis because he knows the policy? Or minimize his opinion because he stands to gain politically from the enactment of health care reform? Or do we prefer his opinion because he’s a doctor? What about his opinion on the political implications of health care policy? There are fair arguments to be made on all of these questions, but the structure for interpreting who is qualified to speak to the truth of a given issue is difficult. Certainly some individuals are more qualified than others, but how can we answer that question? If you are to believe some aspects of a hermeneutical process, authors’ qualifications are really their biases and we as the listener have biases for/against their experiences. We can be jaded and dismiss them or we can listen to their reasoning. But which action corresponds with finding the truth? The answer can’t be as simple as to listen to everything, because that degrades back into the problems you’re attempting to address: the prevalence of questionable evidence quality.

There’s also something odd about needing qualifications to speak to an issue. You don’t need a degree from Harvard to talk about poverty. A narrative from a poor person may be equally as powerful. I suppose the “qualifications” can change depending on the context, but then what do qualifications mean?

[A]s a judge, I wouldn’t know how to handle comparative claims. Do I prefer evidence from an economics professor about poverty policy or is it more important to listen to the people that the policy affects?

This is exactly my point: these issues are difficult, but they are also important. In a world where students are exposed to ever-expanding volumes of information, learning to intelligently separate the good from the bad is essential to informed citizenship.

I do not pretend to know the answers to the questions that Nick has posed. I can offer no mechanism for cleanly separating the intellectual wheat from the chaff. But the current model we have adopted in debate is certainly subject to criticism: “if it’s published, it’s evidence” has absolved us of our responsibility to take issues of scholarly credibility seriously and to teach students to intelligently navigate the maze of information at their fingertips.

Effectively determining whom to believe—and more importantly, why to believe them—is arguably the most essential life skill that debate can teach. Perhaps better than any other activity, debate can effectively train students to think critically—to question others’ arguments and to evaluate their claims with skepticism. Working through the complicated business of analyzing sources and comparing qualifications is part and parcel of this facet of debate pedagogy.

The current norm—that evidence should be verbally cited only by the author’s last name and date of publication—hamstrings our ability to emphasize this aspect of critical thinking. In fact, it actively undermines it by framing author qualifications as separate from, rather than intrinsic to, the evidence itself.

As I argued in the article, this effect occurs at two levels:

  1. Excluding qualifications from verbal presentation implicitly devalues their importance when considering the quality of a piece of evidence. If the author(s)’ qualifications are not important enough to read aloud, after all, how important can they really be? …

  2. Requiring students to locate the qualifications of a given piece of evidence “privately”—during speech or prep time—prevents the judge from considering qualifications as part of their initial understanding of the evidence as it is being presented.

Shifting the norm to require verbal citation of author qualifications uniquely addresses these concerns.

Nudging Evidence Analysis In The Right Direction: The Case For Reading Author Qualifications Aloud In High School Policy Debate

The State of Evidence Evaluation In Debate

The recent discussions of evidence quality in high school policy debate have highlighted the need for debaters, coaches, and judges to revisit the prevailing assumptions about the proper role of cited material in our activity. While a drastic shift in the community’s approach to the evaluation of evidence remains exceedingly unlikely, there is an emerging consensus among debate educators that improving this facet of our pedagogy is both possible and necessary.

What is the problem? In short, the explosion of content enabled by new media has shattered traditional constraints on what constitutes “published” scholarship. While debaters in past decades were limited in their research to published books, journals/magazines, and newspapers, the debaters of today have access to a nearly limitless stream of information—all at their fingertips, and searchable in ways never before thought possible. As Gordon Mitchell describes in “Debate and authority 3.0,” the resulting information abundance has created a need for new ways of separating the good from the bad.

Publication, previously a one-to-many transaction, has become a many-to-many enterprise unfolding across a complex latticework of internetworked digital nodes. Now weblogs, e-books, online journals, and print-on-demand book production and delivery systems make it possible for a whole new population of prospective authors to publish material in what Michael Jensen (2008), National Academy of Sciences Director of Strategic Web Communications, calls an “era of content democracy and abundance.”

In content abundance, the key challenge for readers and referees has less to do with finding scarce information, and more to do with sorting wheat from the proverbial chaff (the ever-burgeoning surplus of digital material available online). The pressing nature of this information-overload challenge has spurred invention of what Jensen (2007) calls “new metrics of scholarly authority” – essentially, new ways of measuring the credibility and gravitas of knowledge producers in a digital world of content abundance.

Policy debate’s “metrics of scholarly authority” have developed slowly—changes in dominant assumptions about what constitutes “good evidence” have occurred over decades through the organic back-and-forth of the contest round. At the high school level, the influence of summer debate institutes and the trickle-down from intercollegiate competition have played a major part in this evolution. While regional differences remain, the vast majority of those who participate in policy debate on the “national circuit” hold remarkably similar views about what makes a piece of evidence “good.” Indeed, the dominant conception of “good evidence” has become so normalized that it is often framed as self-evident: good evidence “speaks for itself.”
