The latest edition of the 3NR podcast is now available for download and on iTunes (pending approval, probably). Remember, you can always subscribe to our feed in iTunes by clicking on the “Advanced” menu and choosing “Subscribe to Podcast”: simply copy-and-paste our regular RSS feed into the box and you’ll be good to go.
This week’s podcast features a discussion of several issues, including the 100-point scale, critiques of disadvantages (and how to answer them), politics theory arguments, the use of technology in debate (including the Synergy software), and a few other topics. We are planning to discuss negative strategies against critique affirmatives next week, but we would also love to hear from our listeners: if there’s something you’d like us to discuss, please let us know!
Some of the data that I talked about re: the 100-point scale:
David Mullins, the top speaker at Greenhill, received 115.5 high-low points, which works out to an average of 28.875 across the four rounds that count. That would translate to 88.75 on the 100-point scale. At St. Mark's, nine speakers averaged more than 88.75 per round (more than 355 total points after dropping the high and low rounds). Ellis Allen, the top speaker at St. Mark's, averaged 92.75 points; that would have been an average of 29.275 at Greenhill (or 117.1 high-low points). For comparison's sake, the point totals for the top speaker at St. Mark's for the past six years were (a quick sketch of the scale conversion follows the list):
2008 – Peyton Lee – 116.5
2007 – (can't find this one at the moment)
2006 – Stephen Weil – 115.5
2005 – Mat Andrews – 115.5
2004 – Sam Crichton – 115.0
2003 – Jake Ziering – 115.0
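For anyone trying to follow the arithmetic, here is a minimal sketch of the conversion implied by the numbers above. It assumes the 100-point scale maps a 30-scale score x to (x - 20) * 10 (so a 28 becomes an 80 and a 30 becomes a 100), and that high-low totals sum the four rounds left after dropping the high and low; both assumptions are inferred from the figures in this post, not from an official tournament rubric.

```python
# Sketch of the scale conversion implied above (inferred, not an official rubric).
# Assumes: 100-point score = (30-point score - 20) * 10, and high-low totals
# are summed over the four rounds that count after dropping high and low.

ROUNDS_COUNTED = 4  # six prelims minus the high and low scores

def to_100_scale(avg_30):
    """Convert an average on the 30-point scale to the 100-point scale."""
    return (avg_30 - 20) * 10

def to_30_scale(avg_100):
    """Convert an average on the 100-point scale back to the 30-point scale."""
    return avg_100 / 10 + 20

# Mullins: 115.5 high-low points -> 28.875 average -> 88.75 on the 100-point scale
mullins_avg = 115.5 / ROUNDS_COUNTED                    # 28.875
print(to_100_scale(mullins_avg))                        # 88.75

# Allen: 92.75 average on the 100-point scale -> about 29.275 -> 117.1 high-low points
allen_avg_30 = to_30_scale(92.75)                       # approximately 29.275
print(round(allen_avg_30 * ROUNDS_COUNTED, 1))          # 117.1
```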
Thanks to Tim Alderete, by the way, I now have access to a large collection of results packets dating back to the late 1980s. I am in the process of digitizing that information and will be posting about it occasionally; one thing it will definitely make possible is a study of point inflation over time.
The 2007 St. Mark's top speaker was Ryan Beiermeister from Kinkaid – 114.5.
http://www.joyoftournaments.com/tx/stmarks/2007/2…
As much as I want to say the 100-point scale produces clearly objective results, I think the 30-point decimal scale makes the most sense. It solves variance and flexibility for judges just as well but has a few key advantages, namely avoiding the psychological 90-point threshold and preventing judge confusion. Like Bill said, it's easy to label a 28.5 or 29 debater, but what the hell does an 86 mean? Are there any specific benefits to the 100-point scale that the 30-point decimal scale doesn't access?
Please read the Larson post on this issue. He did not offer a direct comparison of the two systems, but he did have some intelligent criticism of a decimal system. I hope his credibility on statistical matters forces people to read his post, at a minimum.
If some judges adopt the 30-point decimal scale, but others simply continue to cluster around 28/28.5, it's objectively no better a measure than a 30-point non-decimal scale. While the nuance may *feel* better for judges, because they feel all free and stuff, their false feeling of liberation bears no more weight for measurement than nuanced ballot comments.
To answer your specific question – a 100-point RKS scale is supposed to be better than a 30-point decimal scale for revaluation because it disrupts sedimentation. Lazy judges won't stop clustering unless a new system forces them to do so. It's also an opportunity for tournaments to use published rubrics, which should help to maintain (or establish?) the bell curve that we're all shooting for.
Where should I look to find the Larson article?
Anyways, I think this podcast makes the point that judge sedimentation is inevitable, even at a tournament like St. Mark's with a high-quality pool and a 100-point scale that includes an established rubric. For example, 90 points has some psychological resonance with judges that causes them to inflate their points and clump debaters together. Or maybe judge laziness is inevitable and they clump on any scale, in which case the 30-point decimal system is just as good.
Also, even if the 30-point decimal scale doesn't solve variance (after all, judges can just give 28.1, 28.2, 28.3, 28.4 every time), it still gives a better tiebreaker. Greenhill was frustrating because there was something like a 2-point spread (4 increments) between Mullins and the 20th speaker. Even if that gap doesn't grow under the 30-point decimal scale, it would technically be something like 20 increments instead of 4, so we wouldn't have to default to arbitrary tiebreakers like JVar.
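To make the increment math concrete, here is a minimal sketch, assuming half-point steps on the traditional scale and tenth-point steps on the decimal variant, of how many distinct values fit inside that 2-point spread:

```python
# How many distinct increments a 2-point spread allows under each step size.
# Assumes 0.5 steps for the traditional 30-point scale and 0.1 steps for the
# decimal variant; more distinct values means fewer exact ties to break.

def steps_in_spread(spread, increment):
    """Number of increments between the top and bottom of the spread."""
    return round(spread / increment)

print(steps_in_spread(2.0, 0.5))  # 4 increments (e.g., 28.0, 28.5, 29.0, 29.5, 30.0)
print(steps_in_spread(2.0, 0.1))  # 20 increments
```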
The Larson post is available on edebate.
http://www.ndtceda.com/pipermail/edebate/2005-Oct…
I don't really listen to the podcasts; they are quite long. I do listen to the songs, though.
I think the solution is more data – not reversion to a broken system.
Published rubrics are really the best way to do this.
*If* some judges adopt the nuance of a decimal scale while others just stay stuck in clusters, then JVar is actually much, much more meaningful. That's actually a fairly non-arbitrary measure; it's always the first measure I look at, ironically. It corrects for arbitrariness, to some small extent.
Relative merits of JVar aside, a decimal scale is not a meaningful tiebreaker if only some judges adopt it – it skews the scale by introducing more variation. If one judge shades into a 28.2 where another is stuck in the clusters and gives a 28, someone just got (very slightly) screwed.
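For readers who haven't seen a judge-adjusted measure before, here is a rough sketch of the general idea: re-express each score relative to the mean and spread of the points that particular judge awarded. This is a hypothetical illustration of that kind of adjustment, not the actual JVar formula used by any tab program.

```python
# Hypothetical sketch of a judge-adjusted (z-score style) point measure.
# Not the actual JVar computation; just the general idea of correcting for
# judges who cluster or who sit at different parts of the scale.

from statistics import mean, pstdev

def judge_adjusted(scores_by_judge):
    """Map each judge's raw scores to z-scores within that judge's own distribution."""
    adjusted = {}
    for judge, scores in scores_by_judge.items():
        mu = mean(scores)
        sigma = pstdev(scores) or 1.0  # avoid dividing by zero if a judge gives identical points
        adjusted[judge] = [(s - mu) / sigma for s in scores]
    return adjusted

# A clustering judge and a decimal-scale judge rate the same two debaters.
example = {
    "clustered_judge": [28.0, 28.5],
    "decimal_judge": [28.2, 28.9],
}
print(judge_adjusted(example))  # both judges' rankings become comparable once normalized
```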
Nooch,
With how much time you spend on this site, and discussing it on other sites, it's a shame you can't squeeze in the podcasts; they are very educational.
I will try. Thank you for the close reading; I applaud your efforts to promote intellectual diversity in debate.
In the pre-SM podcast, Mr. Levkovitz said “I think there’s a direct correlation between not being good and flowing on your laptops”, “we should put a poll up on the 3nr – do you flow on laptops and do you ever clear? I think you’ll see a direct correlation between lack of clearing and laptop flowing”, and something about teams that show up unprepared (i.e. without flow paper or pens) not being in more than 6 debates.
Not exactly sure how incriminating that evidence is, but thanks for the shout-out.
That seems to support my position that I did not say ALL teams flowing on laptops will not clear. Rishee, you've done well.
Bill,
Maybe I'm just stupid, but I can't access this condo lecture from Jarrod.
http://www.mediafire.com/billbatterman/ is not working.
Mediafire just upgraded their interface and it's being flaky, I think. The direct link to that lecture is:
http://www.mediafire.com/?mniokzmgzmn