Topic For Discussion: The NDCA Baker Award

Established in 2006, The David P. Baker Award for Season Long Excellence is presented at the National Debate Coaches’ Association Championship to the high school policy debate team with the highest point total using the tournament’s qualification system.

Modeled to some extent after college debate’s Copeland Award, the Baker is calculated based on a mathematical formula rather than on a poll of coaches or voters. This basic statistical approach to evaluating a debate team’s performance over the course of a season has been criticized by some participants and coaches who have advanced several critiques of the formula.

This article is an attempt to first explain the way that the Baker Award is calculated and then to highlight the major complaints that have been levied against it.

How The Baker Award Is Calculated

The Baker Award is calculated based on the same formula that determines admission into the NDCA Championship Tournament. Points are calculated for each tournament that a team attends during the regular season; the five highest point values count toward the team’s overall total.

The formula for the Baker Award is unique in that it does not subjectively assess the quality of a given tournament (in the way that the Tournament of Champions does by awarding bid levels based on expected tournament quality). Instead, the formula includes only four basic variables:

  1. The number of entries at the tournament. This number can be no greater than 100 (so tournaments with 100 entries or 300 entries count the same for this variable).

  2. The Diversity of Tournament Multiplier (DTM). This number reflects the number of states that are represented at the tournament. The number of entries at the tournament is multiplied by the DTM on the following scale: 1-2 states = 1, 3-5 = 1.2, 6-8 = 1.4, 9-12 = 1.6, 13-15 = 1.8, and 16+ = 2.0.

  3. The team’s preliminary round winning percentage. If a team’s record is 5-0, 6-0, or 7-0, their total is 1.0; if their record is 4-2, it is .667 (and so on).

  4. The Elimination Success Multiplier (ESM). The team’s winning percentage in the preliminary rounds is multiplied by the ESM on the following scale: didn’t clear = 1, clear but don’t win an elim debate = 1.1, octafinals = 1.2, quarterfinals = 1.4, semifinals = 1.6, finals = 1.8, and winning finals = 2.3.

The four numbers are multiplied together to establish a point total for a given tournament. For example, Glenbrook South’s victory at the Barkley Forum earned Richard Day and Will Thibeau the maximum point total available under the system: 460 points (100 (teams) x 2.0 (16+ states) x 1.0 (6-0 prelims) x 2.3 (winning finals) = 460).

A team that accumulates five maximum-value tournaments would finish with 2,300 total points (last year’s winner from Westminster finished with 1,956.7 points).
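For concreteness, the formula can be transcribed into a short script. This is just a sketch: the scale values are copied from the list above, and the function names are my own.

```python
# Sketch of the Baker Award point formula described above.

def dtm(states):
    """Diversity of Tournament Multiplier, per the article's scale."""
    scale = [(2, 1.0), (5, 1.2), (8, 1.4), (12, 1.6), (15, 1.8)]
    for limit, multiplier in scale:
        if states <= limit:
            return multiplier
    return 2.0  # 16+ states

# Elimination Success Multiplier, per the article's scale.
ESM = {
    "did not clear": 1.0,
    "cleared, no elim win": 1.1,
    "octafinals": 1.2,
    "quarterfinals": 1.4,
    "semifinals": 1.6,
    "finals": 1.8,
    "champion": 2.3,
}

def tournament_points(entries, states, prelim_wins, prelim_losses, finish):
    size = min(entries, 100)  # entries are capped at 100
    win_pct = prelim_wins / (prelim_wins + prelim_losses)
    return size * dtm(states) * win_pct * ESM[finish]

# Glenbrook South at the Barkley Forum: 100+ entries, 16+ states,
# 6-0 prelims, won the tournament.
print(round(tournament_points(150, 20, 6, 0, "champion"), 1))  # 460.0
```

A season total would then be the sum of a team's five highest `tournament_points` values.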

Criticisms Of The Baker Award Formula

Individuals have levied several criticisms of the formula used to determine the Baker Award. I will attempt to outline a few of the most common criticisms in hopes of spurring a discussion of the system and ways to improve it.

Criticism #1: It Undervalues Tough Tournaments

The formula determines the quality of competition at a given tournament only indirectly by multiplying the number of teams (no greater than 100) by the DTM (to account for tournaments attended by schools from multiple states). In reality, however, the size of the tournament and the number of states represented do not by themselves determine the quality of the competition.

This criticism is probably best described as The MBA Critique—the Southern Bell Forum, because it caps its entries at around 70 teams, counts for far fewer points than a tournament like Wake Forest or the University of Michigan. In reality, the competition at MBA is much more intense, and reaching the elimination rounds (much less the late elimination rounds) at MBA is far more impressive than reaching the equivalent round at Wake or Michigan.

Criticism #2: It Overvalues Preliminary Round Records

The formula includes a multiplier for preliminary round winning percentage, severely penalizing teams that lose prelim debates even if they eventually go on to win the tournament.

For example, if Glenbrook South had accumulated a 4-2 record in the prelims of the Barkley Forum (instead of the 6-0 record they racked up in reality), their total points for the tournament would have fallen from 460 to 307. Westminster’s total—400 (6-0 record and losing in finals)—would have actually been greater than Glenbrook South’s even though GBS emerged with the championship.

Criticism #3: It Doesn’t Include Round Robins

The formula was created to encourage the creation and maintenance of strong regional and local circuits. As a result, it was explicitly designed not to include teams’ performances at round robins. Some have argued that it is impossible to fairly judge the relative records of the country’s best teams without factoring in all of their debates against one another—many of which occur at round robins.

Criticism #4: It Removes The Human Element

The formula is statistical: it establishes a system to value certain aspects of debate performance and then objectively calculates the results. The formula itself, however, is not “objective”—it is value-laden in its construction. Many have argued that a superior system would replace this statistical formula with a subjective one that polls coaches or is determined by a committee of voters (as is done with college debate’s Copeland Award).

What are your thoughts about the Baker Award and the formula that is used to determine it? Is the system a good one? Can it be improved? Should it be replaced? Please share your thoughts by posting a comment.

22 thoughts on “Topic For Discussion: The NDCA Baker Award”

  1. David Heidt

    To me, criticism #2 is the most valid criticism, and #4 is a strength of the award, not a weakness. Any decision to vote is more political than a formula (note to the K sensitive: yes, the criteria for determining a formula are also political. But the degree of link matters). I wish the college community would adopt an NDCA style formula instead of relying on the current voting process.

    The inclusion of the prelim win-loss record at all makes sense to me only because a team that is struggling to clear at most tournaments still needs to be able to generate points for NDCA tournament eligibility. Beyond that, I don't think that prelims should count for very much in determining the strongest teams for any given year. Prelims are already reflected in elim wins by determining seeding – in that you are much more likely, though certainly not guaranteed, to have an easier elim draw if you have a 6-0 record than if you have a 4-2 record.

    The current formula weights a prelim loss heavily against a team, and, as Bill points out, a loss in prelims can have a greater effect on the total points than a loss in elims. This seems backwards, in that it is much harder to win a tournament, or be in late elims, than it is to be undefeated in prelims. The formula was modified after 2006 to attempt to correct this by adjusting the winning finals modifier from 2.0 to 2.3. In my opinion, this didn't go far enough, and ALL elims should count for more.

    #1 and #3 are very valid criticisms. There was talk at the end of last year towards modifying the formula to make MBA count for more. I don't know what became of that. Modifying the formula to account for round robins seems considerably harder, though not impossible. It would probably require a separate formula for calculating round robins that would be added on to someone's point total for regular tournaments.

  2. Bill Batterman Post author

    I did my best to present the arguments fairly—my hope is that those who believe in them strongly will develop them more fully in the comments. For the most part, however, I support the current NDCA formula with a few exceptions.

    First of all, the formula-based method is superior to a poll—on this I agree with David 100%. While there is merit in a "coaches' poll"-type system, I do not believe that it should be used to determine The Baker Award. The benefits of a human-generated ranking—most notably that it accounts for head-to-head matchups and the specific circumstances of each team's competitive record—are more theoretical than practical. In reality, very few high school debate coaches would be knowledgeable enough to assess the records of the country's top teams and even fewer would have the desire or ability to thoroughly analyze the data in a way that would produce predictable, accurate conclusions. Will Repko's system for college debate is wonderful, but I don't think it is portable to the high school level.

    Second, I propose the following tweaks to the system:

    A. Separate the formulas for "tournaments in which a team reaches the elimination rounds" and "tournaments in which a team does not reach elimination rounds." Preliminary round records would be included in the formula only for the former, not the latter. The maximum number of points that a team that does not reach the elimination rounds should receive should be less than the minimum number of points that a team receives for reaching the elimination rounds.

    B. The cap on "Size of Tournament Multiplier" should be reduced to fifty. This is a dramatic change—it would mean that most "medium" to "large" tournaments would be counted the same. The "Diversity of Tournament Multiplier" would remain the same; I might suggest adding an additional category (so that 16-25 and 25+ are differentiated) but I do not think this tweak is essential (although I may be wrong).

    C. The "Elimination Round Success Multiplier" should be doubled.

    This proposal would provide a formula that can simultaneously evaluate the performance of teams that do not reach the elimination rounds (in order to determine qualification for the NDCA Championship) and provide a better evaluation of teams that *do* reach the elimination rounds (in order to determine the Baker Award). It would dramatically increase the importance of winning elimination rounds while removing the penalty for preliminary round losses. Reducing the tournament entry size cap to 50 would place all tournaments with at least 50 entries (everything from MBA and Greenhill to regional tournaments) at an equal starting line while multiplying their importance by virtue of the number of states that attend them; this seems to be a more accurate way to objectively determine the most prestigious tournaments than the raw size of their pools.
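    One possible reading of these tweaks, sketched in code. The doubled ESM values and the exact treatment of non-clearing teams are my assumptions; the proposal above does not pin down the numbers.

```python
# Hypothetical sketch of the proposed tweaks -- NOT the official formula.
# Assumptions: "doubling" the ESM doubles its above-1.0 component, and
# teams that clear are scored on elims alone (prelim record ignored).

def dtm(states):
    """Diversity of Tournament Multiplier (unchanged from the current scale)."""
    scale = [(2, 1.0), (5, 1.2), (8, 1.4), (12, 1.6), (15, 1.8)]
    for limit, multiplier in scale:
        if states <= limit:
            return multiplier
    return 2.0

# Current ESM values with the above-1.0 component doubled (an assumption).
ESM = {"cleared": 1.2, "octafinals": 1.4, "quarterfinals": 1.8,
       "semifinals": 2.2, "finals": 2.6, "champion": 3.6}

def proposed_points(entries, states, prelim_wins, prelim_total, finish=None):
    base = min(entries, 50) * dtm(states)  # size cap reduced from 100 to 50
    if finish is None:
        # Did not clear: prelims only. The maximum multiplier here (1.0)
        # stays below the minimum for clearing (1.2), as the proposal requires.
        return base * prelim_wins / prelim_total
    return base * ESM[finish]  # cleared: prelim record no longer matters
```

    Under this sketch, a 4-2 champion and a 6-0 champion at the same tournament would earn identical points.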

    Finally, I do not think that round robins should be counted in the formula. All teams should have a fair opportunity to compete for The Baker Award, not just those whose connections and budgets allow them to attend these elite events. More fundamentally, adding round robins to the formula would require a substantial "except in the case of XYZ" amendment, something that would both undermine the formula's elegance and invite further exceptions that will inevitably invite controversy and accusations of politicking (e.g. give MBA a special multiplier, etc.).


  3. Tara Tate

    Long time reader, first time poster. 🙂

    This post was forwarded to the NDCA Board. I know that Tim Mahoney (the tournament director) as well as the Board will be keeping track of this discussion. We want this award to be considered one of the highest honors that can be reached in high school debate.

    It seems like the "over valuing" of prelim records is definitely one that the NDCA should take a look at. I know that when this formula was devised we consulted quite a few math gurus, so there may be a mathematical reason for this.

    In regards to criticisms #1 and #3, I believe they are warranted, but we have to balance that with one of the goals of the NDCA tournament/Baker Award. The tournament (which is why the qualification system was created) is supposed to support *regional* debate. Not that we don't love the TOC, but we wanted to make the NDCA tournament somewhat different. A school that wins a bunch of large, regional tournaments (i.e. those schools that can't consistently travel out-of-state) needed a place to play…a place to show off their skills and compete. If we start evaluating tournaments on a subjective approach based on difficulty, how is our tournament really any different from the TOC?

    Again, I am on board with the examination of prelim vs. elim wins/losses. I will need more persuasion that the tournament should start subjectively evaluating the "strength" of tournaments, which would inherently just continue ballooning national-circuit elitism.

    Tara Tate
    GBS Debate

  4. Tara Tate

    I think that is a good question that the NDCA should discuss. The formula is first and foremost meant to drive qualification for the tournament, which is intended to support regional debate. The Baker Award has always been framed as having the goal of rewarding the best. Hmmm….TT

  5. David Heidt

    I agree with Tara about #1 and #3; I don't see any problem with balancing goals, even if there is some conflict. My only objection to the formula, really, is #2.

  6. Roy Levkovitz Post author

    I have a longer post in the hopper about why I think that the "objective" nature of the Baker Award is bad (AKA I will prolly not find time to write it), but one quick question: how do size of tournament and DTM help smaller and more regional teams qualify for the tournament?

    People who can't go to large tournaments with max points like Blake, Glenbrooks, Harvard, and Michigan don't get max points, right?

    I do agree with scott (we had a gchat before he posted it) that the baker runs in conflict with the idea of inclusion, what if the formula was used to calculate qualification and then a panel or committee voted on the top 16 teams (first round style?)

  7. Tara Tate

    There are a lot of regional tournaments in debate-rich states like Texas, Kansas, and Missouri that have huge fields (100-150 teams). We did include the DTM in order to provide some sense of strength without making the formula overwhelmingly biased to the national circuit. The goal was that the NDCA tournament, unlike the TOC, would be able to include very talented teams that won tournaments in Kansas, etc. that dominate very large tournaments within their own state.

    Initially, I am not adamantly opposed to a different system for getting to the Baker Award than for getting to the tournament. However, I don't begin to claim that I speak for the NDCA or the other individuals who had a hand in devising the system. I don't think the Baker Award the past few years has been off-base, so I am not sure the system is that broken.

  8. David Heidt

    You're right that the DTM modifier may not help purely regional tournaments very much, but the size modifier easily could. And if a school doesn't travel nationally but is in the region of one of the high point but lower competition tournaments, they will benefit from it. That school might clear at a quarters or semis bid TOC tournament and get a lot of NDCA points even if they don't get a TOC bid.

  9. Michael Antonucci

    Social engineering is inevitable. The TOC, institutes, and the3nr all attempt to engineer outcomes.

    The best sort of social engineering, IMO, explicitly lays out its project and allows democratic, open contestation of its goals and methods.

    Many forms of debate social engineering over the years have been deleterious because they've disguised their goals under a (very thin) veneer of objectivity. Claims of pure and neutral competition or completely outcome-blind process should always be treated with suspicion.

  10. Roy Levkovitz Post author

    A couple of questions / comments.

    1.) Separating the Baker award specifically from qualifying for the tournament. Correct me if I'm wrong but do people currently not qualify for the tournament? I don't think the NDCA has reached capacity in the last 3 yrs? Miami, MSU and UNLV? I understand that there is the potential to fill up eventually but is there any possibility of just making it "CEDA" style where you get to take 2,3 or 4 teams per squad and that is it?

    2.) This "social engineering" thing. Scott does raise a valid point that part of the Baker Award does encourage teams to avoid the best teams and pick their best spots. The Harvard RR is hosting 14 top teams this year; a team in the Baker hunt who wants to win it could easily say "well, I like my chances at Berkeley a lot more" and go there. Both will be max points, and their chances go up significantly with 14 top teams debating each other at Harvard. Going to Wake over Greenhill, etc. are all decisions teams could make.

    While Berkeley could lose its octos bid and become a quarters bid, its size guarantees it max points permanently; the same is probably true for Wake. There needs to be a recognition that not all tournaments are created equal.

  11. Whit

    A couple of points…

    1) I like Bill's idea that prelim record percentage be used solely as a substitution for the outround multiplier in the event that a team does not clear.

    2) I don't see a need to change the size of tournament modifier. The tournament that most people point to as a reason to change this is MBA. I love the MBA tournament, and respect its unique traditions. It is certainly one of the most difficult tournaments of the year, but it should not be counted like other tournaments for several reasons:
    a) The size cap: It's not small because people don't want to come. The unique nature of the tournament means that a school can only enter one team. This means that even though the tournament is tough, some competitive teams are locked out. For example, a team like Hooch RZ who was in the Semis of the BF would not have gotten a chance to compete at MBA. Only tournaments that are open to a reasonably large talent pool should get max points.
    b) Limited Prep Time: The rules are different at the tournament. There is plenty of room for argument over whether this helps or hurts the quality of debates, but it certainly changes the game. Anyone who has kept up with the radical changes in college tournament structure knows that there is a lot of resentment to fiddling with the rules at a tournament that is important for ranking.
    c) Unique ranking system. The breaks at MBA are tough. The opp win plus speaks method they employ has created heartbreak for many teams throughout the years. A team that would have cleared at any other tournament should not be punished in their baker standing because of this unique method.

    Again, not hating on MBA. It's one of my favorite tournaments. It's a special tournament. Let's let it be a special tournament. No one is going to stop going just because it doesn't get max baker points. Just like no one is going to stop going to round robins. It's like Scott said, teams are attracted to competition, and MBA provides it (…and outstanding hospitality). Altering the rules to make a special case for MBA in the Baker standings doesn't seem necessary or justified.

    One question for Tara or someone else who knows the NDCA rules… The max points for a tournament are 460. What if two different teams achieve max points at 5 tournaments? Is there a tie breaker?

  12. Bill Batterman Post author

    There is definitely tension between “deciding the best team” and “promoting regional debate”. I think it is important to consider the powerful effect that this formula can have on tournament schedules—it can generate powerful incentives for teams to choose certain tournaments instead of others, and this effect will only increase as the prestige of the NDCA Championship increases. Travel schedules are currently determined in large part by the TOC—squads go (a) where the bids are and (b) where they think they can win them. The Baker Award formula could easily piggyback on the existing TOC system—replace the DTM with a “TOC Multiplier” (1.0 for not-TOC, 1.4 for octas bids, 1.3 for quarters bids, 1.2 for semis bids, and 1.1 for finals bids). Would that recognize “the best”? Maybe.

    But recognizing the best in that way also means encouraging an incentive system that many find objectionable/undesirable/unsustainable. If The Baker Award is calculated using a different formula that doesn’t place any added value on mega-tournaments, then it has the potential to change the way teams determine their travel schedule. Why take a three-hour flight to go to the tournament with 100 teams when you can get the same NDCA points at the tournament with 70 teams that you can drive to? There are obviously reasons—quality of competition/judging, value of the experience, etc.—but it would certainly encourage *some* teams on the margins to choose the regional tournament instead of the national one. If that occurs, we’re not far from a situation in which the “regional” tournament becomes nearly as competitive as the “national” tournament.

    Essentially, my argument is that “who is the best team?” begs the question “what are the best tournaments?”. But answering “what are the best tournaments?” without asking “*should these be the best tournaments*?” misses an important opportunity to nudge the debate community toward a more sustainable travel schedule. The mega tournaments *are* the best tournaments in the status quo *because people attend them*. And people attend them *because they have substantial incentives to do so*, one of which is artificial/engineered/centrally planned (TOC bid status). There’s no reason the NDCA can’t use its formula to incentivize a slightly different tournament schedule—and in doing so change the answer to the question about best tournaments.

  13. Scott Phillips


    Your comment is deeply troubling to me due to its explicit endorsement of social engineering. I would like to hear from heads of the NDCA if this is actually their goal as well.

    Top teams chase competition, not bids. They go to RR’s that have no bids, don’t go to easier bid tournaments; the list of proof could go on and on. TOC bids follow tournament strength, not vice versa. No tournament starts out as an octos bid and then gets competitors. The bid status of many tournaments has changed over the years based on level of competition. Obviously the bid allocation process is political/manipulated; I don’t see how this fixes that problem, though.

  14. Bill Batterman Post author

    I’m sorry to hear that you’re troubled by social engineering, Scott. If that is truly the case, you might want to rethink your judging philosophy incentives… you’re tricking the poor children into altering their behavior to suit your utopian vision. 🙂

    You’re not wrong to claim that “top teams chase competition,” but the decisions that squads make about their travel schedules are complicated and reflective of subtle and not-so-subtle incentive structures. Inertia plays a large part, of course. So does tournament quality, location, etc. But relative levels of “prestige” are socially constructed and self-fulfilling in many ways.

    Top teams go to round robins, yes. That obviously makes sense—it is a unique opportunity to get several rounds against other top teams and there is a lot of prestige attached to (most of) these events. It is unlikely that this will ever change: top teams will go to a few round robins, but these are usually offered in conjunction with an invitational.

    Invitational decisions are more complicated. Why do top teams not attend lower-bid-level (and regional, non-bid-level) tournaments? Because other top teams do not attend them. This has a trickle down effect—the second tier of teams wants to debate the top teams, so they follow them to the larger tournaments. A subset of teams values obtaining a bid more than they do the competition against the top tier, and so they attend the smaller tournaments.

    What would happen if a few top teams decided to attend a “mid-major” tournament? For instance, what if Westminster AT, Chattahoochee CR, MBA HM, and Woodward PP all decided to attend the Vestavia Hills tournament in Birmingham? I can guarantee that others will follow. Some might be deterred—”we can’t get a bid now b/c those teams will get them”—but that just demonstrates the significance of the TOC bid system in determining travel schedules. I think many more teams would respond by adding Vestavia to their travel schedules, at least those teams who can get to Birmingham relatively easily/inexpensively.

    And what happens to those teams? They get to debate four of the top tier teams, hopefully improving in the process.

    It would be possible to tweak the formula for the Baker Award to encourage teams to make travel decisions like this. Is this “social engineering” unethical? I guess that’s up for debate, but I don’t think so.

    BTW—I could be convinced that the Baker Award is the “wrong forum” for encouraging regional debate… the above (and the previous post) are just food for thought. My point is simply that there is an opportunity cost to resolving the conflict between “honoring the best team” and “encouraging regional debate” in favor of the former.

  15. gulakov

    I don’t have the requisite knowledge or position to say whether the criticisms of the NDCA system are valid; however, commenting from a purely technological standpoint, I think that there is a solution to the first three criticisms, as well as a way to make the system simpler. Fundamentally, the best team had the most success at the best tournaments. This implies that a global variable of Tournament Quality ought to be multiplied by each team’s individual variable for Tournament Success.

    Tournament Quality: The first and third criticisms underscore that size isn’t the best determinant of quality. Size is often assumed to be the best variable available, but this isn’t the case. Bruschke brought up this point when describing his rating system: “Simply make the weighting for each tournament depend on two things instead of one: The number of teams at the tournament AND the average point totals for the teams attending the tournament. Each factor could count equally or some unequal weighting could be generated (tournament weights could, for example, depend 75% on the average points of teams in attendance plus 25% based on size). This is not done here due mostly to my basic dislike for round robins, but analytically it poses no problem.” So, the points accumulated thus far by the teams at the tournament would represent part of the Tournament Quality score (the linked post has some ideas for dealing with technicalities, e.g. early-season tournaments aren’t penalized, since a team’s score is taken as a percentage of the top team’s score). However, this is a flexible formula and could (if one wished) incorporate other things like size, diversity, and other social engineering concerns.

    Competition quality makes more sense to me because when I peruse an entries list, what I look for isn’t size or number of states but whether a few specific teams are going; for a local tournament, you’d look at whether the top teams from your rival schools were in attendance. If you’re a highly competitive team, you’re going to look for your rivals, not the large number of teams you assume pose no threat to you. Thus, it’s logical that the algorithm should mimic the way many of us judge how hard a given tournament is going to be. Yet for every debate coach to sum up a hundred teams and look up each team’s score at that point would be impractical.

    So, to make it easier, the NDCA website should post the tournament quality totals for each tournament. This would require some very basic database technology that the directors could use to track the points for teams and add up the cumulative score based on the entries. This would also carry the secondary benefit of making the team ratings automatic. Certain people and websites release NDCA rankings every month or so, but this could automate the system and rank all teams. It would be easier for debate coaches to figure out tournament quality if it’s posted in one place, and there might be some friendly competition between tournament hosts with this scoreboard. Coaches would pay even more attention to NDCA ratings, such as when deciding between two tournaments, if those rankings were more aligned with how they and their debaters sized up the tournament, which I argue is based on how good the teams are rather than size. Finally, while RRs would be included in this by default and count their fair share, one could exclude them if one wished to do this sort of social engineering. I’m sure this competition quality ranking idea has been discussed before, so I hope more is done with it.

    Tournament Success: The ideal formula here would work well both for teams that clear and for those that do not. Bill brought up the idea of having two different formulas, but I think there is a simpler way that accomplishes essentially what Bill is proposing while still appearing as a single, simple formula. Look fundamentally at how debaters describe their success: when you get to quarters of a tournament, that is what you mention, not what your prelim record was. When you don’t clear, you say you were 3-3, or 4-2 and missed on speaks, etc. We’d mention how impressive it is that a small team unexpectedly got to semis of a national tournament, not that they went 5-1 if they then lost in the first break round. If you break, the prelim record is not as significant or emphasized. So the algorithm ought to reflect how debaters speak about their success, because that’s essentially what we use to size up other teams.

    The idea here is to have a single number as your tournament success variable: a scale of your success, with winning the tournament counting as 1. Assume a tournament with 6 prelims that breaks to octos (4 elims). If you clear, you have at least 0.6; if you went 2-4, you have 2/6 of 0.6, which is 0.2; if you lose in quarters (you won 1 of 4 elims), you have 0.7; if you lost in finals, you have 0.9. There are two totals, the number of prelims and the number of elims, and 0.6 is the dividing line. If you didn’t break, you take your prelim wins as a percentage of the 0.6. If you did break, you take the percentage of elim rounds you won (out of the total that there were) as a percentage of the upper 0.4, so you always have at least 0.6 if you broke. This single number is designed to measure how far you got into the tournament, which is the best measure of success, as indicated by how we speak to each other when asked how we did at a given tournament.

    I realize this point is somewhat essentialist, but that is outweighed by the simplicity of equating “how far in you got” with your success, as well as by allowing the same simple formula to serve as two different formulas for clearing and non-clearing teams. Teams that clear are already going to be differentiated by how far they get in elim rounds, which generally reflects their prelim performance anyway. But prelim performance isn’t always indicative, since there is less head-to-head competition: you can sometimes avoid debating the better teams, and the way high-high/low, pullups, presets, number of rounds, MPJ, etc. vary by tournament affects your prelim road. Making the prelim performance count another time, with an equal weight for all teams, just introduces an extra element that could affect the team’s score in a way many might consider inaccurate.
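    The single-number success scale described above is easy to sketch (assuming, per the comment, 0.6 as the dividing line between clearing and non-clearing teams):

```python
def success(prelim_wins, prelim_total, elim_wins, elim_total, cleared):
    """Tournament success on a 0-to-1 scale: non-clearing teams scale their
    prelim record into [0, 0.6]; clearing teams scale their elimination
    wins into [0.6, 1.0]."""
    if not cleared:
        return 0.6 * prelim_wins / prelim_total
    return 0.6 + 0.4 * elim_wins / elim_total

# The worked examples from the comment (6 prelims, octos break = 4 elim rounds):
print(round(success(2, 6, 0, 4, cleared=False), 3))  # 2-4, missed the break: 0.2
print(round(success(5, 6, 1, 4, cleared=True), 3))   # lost in quarters: 0.7
print(round(success(6, 6, 3, 4, cleared=True), 3))   # lost in finals: 0.9
print(round(success(6, 6, 4, 4, cleared=True), 3))   # won the tournament: 1.0
```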

    This method isn’t necessarily better than the current one; I am posting it in the hope that it could be useful as a solution to the criticisms, if those criticisms are indeed valid.

  16. Michael Antonucci

    Why not use a version of reflective equilibrium? Take the data from a year. Apply several different formulas. See what variations they produce, and whether those variations conform to your own instincts as to who the “best” teams were. If you have a formula that fixes an empirical problem, then it may be compelling.
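    For concreteness, that check might look something like this; the season data and the alternative formula are invented purely for illustration:

```python
# Toy reflective-equilibrium check: score the same (invented) season
# results under two candidate formulas and compare the rankings.

# Each result: (capped size, DTM, prelim win pct, elim multiplier).
season = {
    "Team A": [(100, 2.0, 1.0, 1.8), (70, 1.4, 0.833, 2.3)],
    "Team B": [(100, 2.0, 0.667, 2.3), (70, 1.4, 1.0, 1.6)],
    "Team C": [(50, 1.2, 1.0, 1.4), (70, 1.4, 0.833, 1.8)],
}

def current(size, dtm, win_pct, esm):
    return size * dtm * win_pct * esm  # the formula as it stands

def elims_only(size, dtm, win_pct, esm):
    return size * dtm * esm  # candidate tweak: ignore prelim record

def ranking(formula):
    totals = {team: sum(formula(*result) for result in results)
              for team, results in season.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Compare the orderings each formula produces against your own sense
# of which teams were actually the best that year.
print(ranking(current))
print(ranking(elims_only))
```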

    This method is neither a big revelation nor a deductive guarantee. It seems preferable to any pure mathiness, though. Any possible changes to the squo should be filtered through concrete references to a data set. Put more simply, if you’re going to advance an aff, you should read a card, and that’s a good minimum standard for a card.

    Postseason formula tweaking seems to incorporate most of the benefits of first-round style voting while avoiding the significant politicization DA.

  17. Nick Fiori

    Some of the previous posts have referred to making the Baker into something similar to the first round bid process for the NDT. One element that I think is missing from the formula discussion is a consideration of quality of wins. When college coaches go about submitting their first round list, head-to-heads are a major factor in differentiating those that make the top 16 from those that don't. One of the reasons I think the prelim record multiplier is unfair now is not just that it matters how far you get in the out rounds, but that it also matters who you lose to in prelims. If you lose to the winner of the tournament in prelims, it should not be as punishing as losing to a team that doesn't clear. Likewise, a 16 seed that beats a 1 seed and then loses in quarters should probably be rewarded.
    This could potentially be a factor that serves to differentiate the best teams from those that are dominating regional tournaments. I'm not saying I have any idea how to implement this mathematically, but it could potentially be a factor added in after all eligible tournaments are completed, and what constitutes a quality win could be determined based on those initial rankings.

  18. Roy Levkovitz Post author


    I agree completely. Part of the post I plan on writing does deal with that. All tournaments are not created equally and all wins are not equal either. If we plan on crowning someone as the top team in the country it is important to analyze head to head records, and who they beat to win those tournaments. If some people are afraid of "politicization" I'm guessing there is some way to include a measure or metric that records a team's win / loss vs the top 5, 10, or 16 baker points teams.

  19. John Smith

    What do people think about making the formula similar to that of the BCS? (I know, toxic to even mention.) I think taking the good parts of the BCS, in this context, to determine a regular season champion external to a competition that decides the “national champion” could quite possibly work here. Basically, take the “computers”, or in this instance the formula, and have that be one component of the overall standings, but also have some Copeland-esque coaches’ poll decide another component. These can be weighted so that one has more input than the other, like 75%/25%, but I think this allows for objectivity as well as the recognition that, hey, team A has gone 5-0 against team B.

  20. Pingback: The 3NR » Unofficial 2009-2010 Baker Award Standings

Comments are closed.