
Welcome To The Matrix: The Computers Run The World
This wasn't supposed to happen.
In the '80s and '90s, the traditional bowl system consistently failed to pair the top two teams, leading to controversy over which should be national champion. That led to the Bowl Alliance. But the stodgy Rose Bowl refused to join, and so we still had the controversies, such as when Michigan could not play Nebraska for the 1997 national championship. That led to the Bowl Championship Series. It would pick the top two teams. But how?
The other problem in the history of college football, according to many, was that national champions were not determined in any "objective" fashion - such as a playoff. They were determined by polls. If that wasn't bad enough, there were the occasional stories of coaches "phoning in" their votes without doing their homework, or writers unfairly boosting or demoting a team to exert more influence in the overall ranking. How could the BCS use the fallible polls to determine the top two teams?
Enter the BCS formula. The BCS formula, in its original incarnation, had four components: Traditional Rankings, Losses, Strength Of Schedule, and Computer Rankings. The first component takes an average of the traditional polls; losses adds one point for every loss; strength of schedule measures how tough a team's competition was, based on the records of a team's opponents (and, less importantly, the records of those opponents' opponents); and the computer rankings sample some number of computer formulae and average their rankings. The higher the total, the poorer the team fares.
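To make the arithmetic concrete, here is a minimal sketch of that composite in Python. It is an illustration of the structure described above, not the official specification: the two-thirds/one-third opponent weighting and the divide-by-25 scaling of the schedule rank are assumptions on my part, and the real coefficients were tweaked over the years.

    from statistics import mean

    def strength_of_schedule(opp_win_pct, opp_opp_win_pct):
        # Opponents' records count more than opponents' opponents' records.
        # The 2/3 and 1/3 weights are assumed for illustration; the schedule
        # rank below comes from sorting every team by this weighted figure.
        return (2 / 3) * opp_win_pct + (1 / 3) * opp_opp_win_pct

    def bcs_score(ap_rank, coaches_rank, computer_ranks, sos_rank, losses):
        # Lower total is better; the two lowest totals play for the title.
        poll_component = mean([ap_rank, coaches_rank])    # traditional polls
        computer_component = mean(computer_ranks)         # averaged computers
        schedule_component = sos_rank / 25.0              # assumed scaling
        loss_component = losses                           # one point per loss
        return (poll_component + computer_component
                + schedule_component + loss_component)

    # e.g. a one-loss team ranked 2nd and 3rd in the polls, averaging 2.6 in
    # the computers, with the 10th-toughest schedule:
    print(bcs_score(2, 3, [2, 3, 4, 1, 3], 10, 1))  # -> 6.5

Notice that strength of schedule enters this total directly, on top of whatever weight the polls and the computers already give it - which is exactly the multiple-counting I complain about next.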
I have written about the flaws inherent in this system before (see BCS or Just BS?). Most notably, the four categories are designed to be equally important, but in actuality, losses and strength of schedule are dominant. That's okay for losses - it should be the most important component - but it far over-emphasizes strength of schedule and makes your opponents' opponents' records germane in such a salient way that it leaves the system open to ridicule. The reason is that the traditional rankings and the computer systems also take strength of schedule (and losses) into account. Why else is Boise State ranked below Oklahoma? Of course that is valid, but why must Boise State get dinged three times - once by the traditional rankings, once by the strength of schedule component, and once by the computers - all for the same reason? Or, if that's not a convincing argument, why should Southern Cal?
The idea of an "objective" ranking was to do what the AP writers and coaches do, but eliminate the biases and unfairness that sometimes creep in. Ironically, though, people want the BCS rankings to conform to the traditional rankings. When they don't, the fault is invariably placed with the computers. We hate the fallibility of human beings, but time and again we are faced with the truism that human beings rank college football teams far more fairly than any static formula. Or, if you want to cling to the fallibility axiom, then you'll have to deal with this unfortunate fact: college football fans are human beings too, and they reason the same way the traditional pollsters do, so if the system is actually right and all of us are wrong, we would never know it.
The BCS formula has been tweaked repeatedly. We created an objective system, and whenever that system fails to match our subjective analysis, the objective system changes. There is, of course, the Miami Rule, instituted after the Hurricanes finished just below Florida State, a team they had beaten, and were denied a shot at Oklahoma in the 2000 national championship game. Rather than simply making an exception for head-to-head opponents (like the SEC's conference championship tie-breaker), the formula added an unfair and unwieldy quality-win component. Other complaints have been registered, such as when Oregon was snubbed in 2001 in favor of Nebraska, a team that did not even play for its conference championship.
In each case, the tweaks, or calls for tweaks, were prompted by our objective system not meeting our subjective desires.
See, we want to be objective. But we don't really want to. And I'm in the same boat. I think Miami got jobbed in 2000, I think Oregon got jobbed in 2001, and I think Southern Cal got jobbed in 2003. I think the human brain does the figuring better than the computers. Sure, there's still room for argument. Oklahoma does have the best resume, but many see the conference championships (rightly) as the first round of a playoff, a must-win for a would-be national champion. Sure, Southern Cal does have the weakest schedule and their loss came to the lowest-ranked team, but Southern Cal has shown more consistency and dominance than LSU and made a better-faith effort to schedule tough teams (Notre Dame's 2002 record and Auburn's preseason ranking attest to that; LSU played the Louisiana directional schools out of conference). In fact, I can see a valid argument for any of the three matchups.
That's why we have polls, not one person deciding. Throw all the fallible human rankers together and take the average. It's called the central limit theorem. It generates a pretty reliable ranking - as our quest to make the BCS look like the traditional polls testifies.
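A quick simulation makes the point. This is a sketch, not data: the "true" quality numbers, the bias, and the noise levels below are invented for illustration. Each voter sees every team through a personal bias plus random error, yet averaging the ballots recovers a stable order.

    import random

    # Invented "true" qualities, for illustration only.
    TRUE_QUALITY = {"Oklahoma": 95.0, "LSU": 94.5, "Southern Cal": 94.0}

    def one_ballot():
        # Each voter has an overall bias plus a per-team error.
        bias = random.gauss(0, 2.0)
        return {team: q + bias + random.gauss(0, 3.0)
                for team, q in TRUE_QUALITY.items()}

    def consensus(n_voters):
        # Sum every ballot and rank the teams by their pooled score.
        totals = dict.fromkeys(TRUE_QUALITY, 0.0)
        for _ in range(n_voters):
            for team, score in one_ballot().items():
                totals[team] += score
        return sorted(totals, key=totals.get, reverse=True)

    print(consensus(1))    # a single voter is often wrong
    print(consensus(500))  # the pooled poll almost always gets the order right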
I think it is time to scrap the BCS and give the human beings the credit they deserve. They already weigh losses and strength of schedule. They also factor in other information that is pertinent: head-to-head matches, home vs. away performance, performance at the end of the year, performance relative to key injuries, and a number of other subtleties. I propose that they be given information: go ahead and compute the strength of schedule statistic, for example, and publish it on a readily accessible web page, but then sit back and let the human beings do what they do. Make the vote of every writer and every coach equal (trim the number of writers if you want each poll to have an equal say). Take the top two teams from there.
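Mechanically, the selection could be as simple as the sketch below: pool every individual ballot with equal weight and take the two teams with the most points. The Borda-style scoring and the 25-team ballot length are my assumptions for illustration, not part of the proposal itself.

    def combine_ballots(ballots, ballot_length=25):
        # Every writer's and every coach's ballot counts exactly the same.
        points = {}
        for ballot in ballots:                  # a ballot lists teams, best first
            for place, team in enumerate(ballot):
                points[team] = points.get(team, 0) + (ballot_length - place)
        return sorted(points, key=points.get, reverse=True)

    def title_game(writer_ballots, coach_ballots):
        standings = combine_ballots(writer_ballots + coach_ballots)
        return standings[:2]    # the top two play for the championship

The strength of schedule numbers get published alongside, as information for the voters rather than as a component of the score.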
Let's follow in Keanu Reeves' footsteps. Let's yank out the cables and kick some virtual ass.