Chaudoin Method Applied to Carroll Evidence

Another post from Stephen about the card at hand:

Let’s apply the Chaudoin method to the Carroll evidence that has been heralded as the bringer of death for Consult:

Before doing that, let’s observe that this “piece of evidence” fails the old debate test of “claim + warrant = argument.” I feel like a novice saying this, but there’s no warrant in that card. Also note that this is really unfair to Jamie because we’re talking about a footnote, not his actual argument. I blame the author of the post for this silliness in the first place.

1) Do the predictions logically flow from the assumptions?

No.

This footnote seems to conflate “allowing a veto” with “subservience of foreign policy to the whims of other countries.”  It also seems to ignore the potential for the US to decide when to consult and when not to consult.

2) Are real-world data consistent with these predictions?

No.

This footnote doesn’t mention any, so this is really a nonstarter anyway, but we could easily think of some pretty prominent examples where allowing other countries to veto foreign policy would have saved us some serious mockery. Had the US actually consulted the UN on Iraq II, Jon Stewart would be out of a job.

3) What are some more rigorous academic arguments related to the subject? (Khalilzad and a Friedman rant/op-ed don’t count.)

- We might check out the articles by Chapman and Reiter, or by Chapman alone, on the rally-round-the-flag effect and the international effects of consultation. We might develop a better notion of “cooperation” by reading Carrubba’s “Courts and Compliance in International Institutions,” etc. Ikenberry’s “After Victory” is about hegemons “smoothing” their power trends by binding themselves to particular institutions.

Here’s another quick way to apply the Chaudoin method.  Ask: “Does the piece of ‘evidence’ I’m reading have all the depth of a Fox News transcript or does it actually make an argument?”

Debate “Evidence” and Evaluating Theories

This is a post written by my former college debate partner Stephen Chaudoin (Emory alum 2006), a PhD candidate in the Department of Politics at Princeton University.

----

In academia, the term “evidence” means “observations about the world that may or may not be consistent with the hypothesis they’re designed to test.”  In debate, the term “evidence” means “some shit somebody got published.”

Observe the difference…

Professor: “I think that X causes Y and as evidence I have measured X and found it to be correlated with this measurement of Y.”

Debater: “I think that X causes Y and as evidence I present to you this article from Foreign Affairs that says ‘X causes Y.’”

It isn’t hard to tell which one I think is actual evidence and which one is paraphrasing someone else’s publication that may or may not contain evidence.
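
For readers who want the professor’s version made concrete, here is a minimal sketch in Python. This is my illustration, not anything from Stephen’s post, and every number in it is invented:

```python
# Toy sketch of the professor's notion of evidence: measure X, measure Y,
# and check whether the two sets of measurements are actually correlated.
# All numbers here are invented purely for illustration.
from scipy.stats import pearsonr

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]    # measurements of X
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.3]   # measurements of Y

r, p_value = pearsonr(x, y)
print(f"r = {r:.2f}, p = {p_value:.4f}")
# A large |r| with a small p-value is evidence *consistent with* "X causes Y";
# it is not the same thing as an author asserting that X causes Y.
```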

It isn’t accidental that debaters use the second interpretation as opposed to the first one:

Reason 1 (not debate’s fault): Debate is about prediction. “I think if you do policy X then Y will result in the future.” It is not about empirically testing hypotheses. “In the past, did policy X tend to result in Y or Z?” It is hard to predict the future, and doing so with empirics necessitates certain assumptions that may or may not be “true.” This is a fundamental problem that is not debate’s fault because assumptions aren’t testable.

Reason 2 (sorta debate’s fault, but not really): The core principles of debate do not lend themselves well to in-depth evaluation of evidence or to in-depth research sources. In 8 minutes, I can probably summarize the theory and evidence in a Foreign Affairs article. (Actually, I could probably explain the entire volume with “none and none,” but I digress.) I would be hard-pressed to do the same with an American Economic Review article. Google Scholar some and see if I’m wrong. I could give you the tagline, like “Risk aversion explains behavior in first-price auction laboratory settings better than prospect theory,” but I would not be able to cover the theories involved or the evidence, at least not so that you could reach the same point on the research frontier as the article.

I won’t say speed or emphasis on taglines are bad. They most certainly are not. Conciseness and organization are just as important as depth. (I vaguely remember some cards about speed and memory; irony much?) But realize that the setup of debate ensures a ceiling on the quality of evidence in debate.

Reason 3 (probably debate’s fault):  I’m going to assume the 3NR is at the frontier of debate thought both because I know (two of) the authors and because it seems pretty darn astute.  Bill, I’m sure you rock; we just haven’t met.  But even the frontier minds emphasize only one half of theory testing: logical consistency.

A theory can be evaluated in (at least) two ways:

1) Logical consistency:  I start with these assumptions.  I derive this prediction.  Logical consistency asks “does this prediction logically follow from these assumptions?”

This is the one that debate focuses on almost entirely, probably because we all possess good logic skills and that’s part of why we selected ourselves into the activity.

2) Empirical consistency: Are real-world data consistent or inconsistent with the hypotheses derived from the theory?

There are two words to look at: “data” and “consistency.” Bill and Paul’s responses to Roy’s Toulmin revision get at this nicely. “Data” refers to observations from the world used to measure a certain concept. “Consistency” refers to the way in which you think about sets of observations to determine whether or not they are consistent with a theory. Some potential objections are “the researcher didn’t measure something correctly,” “the researcher did not account for this other thing,” etc.
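
To make empirical consistency concrete, here is a toy sketch of my own (the “observations” are invented, and the setup is not from Stephen’s post): suppose the theory predicts that cases with consultation show more cooperation than cases without.

```python
# Toy sketch of empirical consistency: the theory predicts that cases with
# consultation show more cooperation than cases without. We check whether
# invented "observations" line up with that prediction.
from scipy.stats import ttest_ind

coop_with_consult = [7.1, 6.8, 7.5, 6.9, 7.3]  # cooperation scores, consulted
coop_without      = [5.9, 6.2, 5.7, 6.4, 6.0]  # cooperation scores, not consulted

t_stat, p_value = ttest_ind(coop_with_consult, coop_without)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Data consistent with the theory still don't prove it: objections like
# "cooperation was measured badly" or "the researcher ignored a confounder"
# remain fair game.
```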

Three easy solutions:

  1. Focus on implementing the Chaudoin method. (I don’t know who Toulmin was, but his name has the word “tool” built in, and he’s probably old and won’t care if I steal his method’s spotlight.) After reading evidence as a debater or judge, ask “does the claim follow logically from the assumptions used to generate it?” Next, ask “how convincing are the empirics used to test this theory?” I would be willing to bet that 50% of debate “evidence” fails the first test and 95% fails the second.
  2. Cut longer cards.  You don’t gotta read it in the round, but the judge probably will afterwards.
  3. Read journals that are more academic: Google something like “political science journal rankings” for a list of the top political science ones (APSR, AJPS, IO, etc.) or do the same for economics (AER, QJE, JPE, etc.) or for any other relevant disciplines, from sociology to biology.

Implementing the Chaudoin method will win you 50-60 more rounds next year.

Full disclosure: I debated competitively for a long time before moving on to a PhD program in Political Science. I study empirical methods and game theory, which for sure affects my opinions on this subject as well.

Also, I’m trying to get Roy to put me on as a guest writer on the 3NR, so maybe commenters should back me up.

The Lost Art of the Literature Review

Second, I would argue that the skill of literature searching, in general, may be approaching extinction. One would think that the wonderful improvements in databases and computer search engines over the past quarter century would have dramatically improved literature searching. Instead, they may have made it worse. My conclusion derives from reading several too many research articles on standardized testing with a wholly erroneous assessment of the research base on the topic.
