As I am writing this, this week's Bowl Championship Series ranking has not yet been released, but there is certain to be some shuffling at the top. Two weeks ago, UCLA was named #1 despite the fact that Ohio State was the consensus #1 in the AP and ESPN polls. The next week, UCLA struggled against Stanford and took a tumble in the BCS rankings, driven by a fall to #4 in the traditional polls.
At the time I was suspicious of that. Had the AP voters and the coaches purposely knocked UCLA down to #4, despite the win, precisely because they didn't like the BCS rankings usurping their authority? In other words, did they rank UCLA lower than they otherwise might have, to try to effect a change in the BCS rankings?
This week, Ohio State lost to Michigan State and will deservedly fall to the lower reaches of the top 10. Any chance of UCLA rising from the ashes will most likely be quashed by their second straight poor performance against a Pac-10 doormat (a last-minute touchdown to beat Oregon State). And so ends a month of whining and pleading from Knoxville and Manhattan: Tennessee and Kansas State, formerly seeking a miracle, formerly planning their protests about finishing the year 12-0 only to be sent to a lesser BCS bowl, are suddenly on top, looking down. Now it will be time for fans of those two schools to callously shrug off the earnest protests of Wisconsin, UCLA, and Tulane boosters. We had a right to complain, they'll think, but come on! Wisconsin? Who have they played? UCLA? They need miracles every week!
Funny how things change!
So what about these BCS rankings? Are they any good? I have two main problems with them. One is the human factor. The AP poll and the coaches poll each carry a one-in-eight weighting in the overall BCS rankings. My worry is that if pollsters perceive an unfairness in the BCS rankings, as I believe they did when it looked like an 11-0 Ohio State could be denied a chance to win the national championship, they might change their rankings to offset the perceived inequity. If Florida beats Florida State and ends up with a higher BCS index than an undefeated Wisconsin and Kansas State, and therefore portends a perhaps unpalatable Tennessee-Florida rematch in an all-SEC East national championship game, might the voters not rank Florida 9th, 10th, or 11th to overcome the weighting and force a Wisconsin-Tennessee matchup? Maybe that's fair, but if it is, why not throw out the other three factors in the BCS formula altogether?
The second problem is the redundancy in the formula. The formula has four components: the traditional polls (AP and coaches), the computer rankings, strength of schedule, and losses.
Now here's what I mean by redundancy. If the idea is that these four components should have equal weight, then the system is a failure. Strength of schedule appears three times: the traditional pollsters figure it in (why else is Tulane ranked lower than UCLA?), the computer rankings figure it in, and there is an independent strength-of-schedule component in the BCS formula.
Same with losses. Losses have their own component in the formula, but obviously they also go into the AP and coaches rankings and the computer rankings.
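The double-counting is easy to see if you write the index out. Here is a minimal sketch of a BCS-style index; the exact component definitions and the 1/25 scaling on schedule strength are my assumptions for illustration, not the official formula. Lower is better, and notice that a single loss moves three of the four terms at once.

```python
def bcs_index(poll_ranks, computer_ranks, sos_rank, losses):
    """Hypothetical BCS-style index: lower is better.

    poll_ranks:     ranks in the AP and coaches polls
    computer_ranks: ranks from the computer systems
    sos_rank:       national rank of strength of schedule (1 = toughest)
    losses:         number of losses
    """
    poll_component = sum(poll_ranks) / len(poll_ranks)
    computer_component = sum(computer_ranks) / len(computer_ranks)
    sos_component = sos_rank / 25  # assumed scaling
    # Losses get their own term here *and* are already baked into the
    # poll and computer ranks above -- the redundancy in question.
    return poll_component + computer_component + sos_component + losses

# A one-loss team ranked 2nd in both polls, 3rd by the computers,
# with the 10th-toughest schedule:
# components: 2.0 (polls) + 3.0 (computers) + 0.4 (schedule) + 1 (losses)
print(bcs_index([2, 2], [3], 10, 1))
```

The point of the sketch is not the particular weights but the structure: a loss drags down the poll ranks, drags down the computer ranks, and then gets counted again on its own.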
What's the solution? I hate to say it, but the solution is to go to the only computer and the only set of algorithms that completely incorporates all the features important in determining a team's ranking: the human brain. I started doing top 20s and top 25s because I was perennially disappointed by the AP and coaches polls. But I still believe that a poll of many human minds is the purest reflection of how teams stack up. The human mind takes all the relevant factors into consideration, and by polling across a bunch of minds, outliers are minimized in importance. Just take an average of the AP and coaches polls, or better, take each voter in each poll as an independent vote and merge the polls that way for the BCS.
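Merging the two polls at the ballot level is only a few lines of code. This sketch assumes each voter submits an ordered ballot and simply ranks teams by their average ballot position across every voter in both polls; handling teams left off some ballots is ignored here for simplicity, and all the ballots are made up.

```python
from collections import defaultdict

def merge_polls(*polls):
    """Treat every voter in every poll as one independent ballot and
    rank teams by average ballot position (lower is better)."""
    positions = defaultdict(list)
    for poll in polls:            # a poll is a list of ballots
        for ballot in poll:       # a ballot is an ordered list of teams
            for pos, team in enumerate(ballot, start=1):
                positions[team].append(pos)
    averages = {team: sum(p) / len(p) for team, p in positions.items()}
    return sorted(averages, key=averages.get)

# Two hypothetical AP ballots and one hypothetical coaches ballot:
ap = [["Tennessee", "Kansas State", "UCLA"],
      ["Kansas State", "Tennessee", "UCLA"]]
coaches = [["Tennessee", "UCLA", "Kansas State"]]
print(merge_polls(ap, coaches))  # ['Tennessee', 'Kansas State', 'UCLA']
```

Because every ballot counts equally, one coach with an axe to grind moves a team's average far less than he moves a two-poll average of just two numbers.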
But there is one success in the BCS: a flawed system will bring about a playoff quickly. And then we can have fun writing commentaries about the flawed selection process for that!