Explanation and Accuracy of the Rankings


How are the ratings calculated?

Each team's rating can be thought of as that team's average performance throughout the year in all games for which a score is listed on this site. A team's performance in a game is defined as the rating of the opponent plus or minus the goal differential in that game. Therefore, the fewer the game scores that go into a team's rating, the less likely that rating is to truly reflect the team's season-average performance. That sounds simple enough; the only catch is that, by this definition, every team's rating depends on all of its opponents' ratings. The solution is found with an iterative algorithm that performs hundreds of passes until it converges on the set of ratings with the smallest possible cumulative error between actual and predicted goal differentials. For 2001-2002, the goal differential considered for the rankings from any HS game has been capped at 4. This results in a higher past "predictive" accuracy and a higher weight on strength of schedule. Hopefully, playoff predictive accuracy will not be negatively affected. Experimentation has shown that a cap of a 7-goal spread works better for the college rankings, so that is what is used there.
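The circular definition above (each rating depends on the opponents' ratings) can be sketched as a simple fixed-point iteration. This is only an illustration of the idea, not the site's actual implementation: the team names, scores, and iteration count below are invented, and only the 4-goal cap comes from the text.

```python
# A minimal sketch of the iterative rating idea described above.
# Team names and scores are made up; the site's real code is not published.

GOAL_DIFF_CAP = 4  # per-game goal differential cap for the 2001-02 HS rankings

# Each game: (team_a, team_b, goals_a, goals_b) -- hypothetical scores.
games = [
    ("Edina", "Blaine", 5, 2),
    ("Blaine", "Roseville", 3, 3),
    ("Edina", "Roseville", 8, 1),   # raw differential 7, capped to 4
    ("Roseville", "Edina", 2, 3),
]

def capped_diff(goals_for, goals_against, cap=GOAL_DIFF_CAP):
    """Goal differential, clamped to +/- cap."""
    return max(-cap, min(cap, goals_for - goals_against))

def compute_ratings(games, iterations=500):
    """Iterate until each rating equals the team's average game
    performance (opponent's rating plus the capped goal differential)."""
    teams = {t for g in games for t in g[:2]}
    ratings = {t: 0.0 for t in teams}
    for _ in range(iterations):
        new = {}
        for team in teams:
            perfs = []
            for a, b, sa, sb in games:
                if team == a:
                    perfs.append(ratings[b] + capped_diff(sa, sb))
                elif team == b:
                    perfs.append(ratings[a] + capped_diff(sb, sa))
            new[team] = sum(perfs) / len(perfs)
        # Ratings are only determined up to an additive constant, so
        # re-center the average at zero; only differences are meaningful.
        mean = sum(new.values()) / len(new)
        ratings = {t: r - mean for t, r in new.items()}
    return ratings

ratings = compute_ratings(games)
```

With these made-up scores the iteration settles on Edina about 1.8, Blaine about -1.0, and Roseville about -0.8; note the 8-1 game contributes only a 4-goal differential because of the cap.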

How accurate are the rankings?

The past predictive accuracy, the ability of the ratings to explain winners of games that have already taken place, is usually in the 80-90% range (excluding ties). In 2000-01, for example, the past predictive values for Minnesota and Wisconsin (two of the states for which the scores were the most complete) were 84.1% and 89.4%, respectively. The future predictive accuracy, the ability of the ratings to predict the outcome of future games, is generally about the same to a couple of percentage points lower. But perhaps the best indication of how accurate the ratings actually are is not a prediction percentage but a direct comparison between the ratings' predictions and the coaches' playoff seedings. In every case over the last four seasons in Minnesota and Wisconsin (except 1999-2000 in Wisconsin), these ratings (frozen before the start of the playoffs) have beaten the coaches' seedings in predicting playoff game winners, usually by a comfortable margin, despite the inherent disadvantage of having home ice assigned based on the coaches' seedings. In 2000-01 the higher-ranked team won 61 of 70 games in Wisconsin (four more than the seedings) and 108 of 142 games in Minnesota (two more than the seedings).
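The "past predictive accuracy" measure above can be sketched as the share of decided (non-tie) games in which the higher-rated team won. The ratings and scores below are invented for illustration; they are not the site's real data.

```python
# Hypothetical final ratings and game scores -- not the site's real data.
ratings = {"Edina": 1.8, "Blaine": -1.0, "Roseville": -0.8}
games = [
    ("Edina", "Blaine", 5, 2),
    ("Blaine", "Roseville", 3, 3),   # tie: excluded, as the text notes
    ("Roseville", "Edina", 1, 4),
    ("Roseville", "Blaine", 4, 2),
    ("Blaine", "Edina", 3, 2),       # an upset relative to the ratings
]

# Past predictive accuracy: fraction of decided games in which the
# higher-rated team actually won.
decided = [g for g in games if g[2] != g[3]]
correct = sum(1 for a, b, sa, sb in decided
              if (ratings[a] > ratings[b]) == (sa > sb))
accuracy = correct / len(decided)  # 3 of 4 decided games here, i.e. 75%
```

The single tie is dropped before dividing, matching the "(excluding ties)" qualifier in the text.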

What do the numerical values mean?

The GmPerf value is simply the average goal differential for that team in all games for which a score is listed on this site. The Sched value is the average of all of a team's opponents' Total values, weighted by the number of games played against each opponent. The Total value is the sum of the GmPerf and Sched values. It should be noted that all of these values are in goal units, but the Sched and Total values only have meaning within each particular ranking (i.e., you cannot compare these numbers between two different state rankings or between state and national rankings), and only the difference in values between two listed teams is meaningful. The difference between two teams' Total values is the difference in their average game performance, in goal units. Note that when two teams are more than three goals apart in the HS rankings, the spread becomes somewhat understated due to the goal differential caps used in creating the rankings.
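As a small worked example of reading these columns (the GmPerf and Sched numbers below are made up; only the identity Total = GmPerf + Sched and the interpretation of the difference come from the text above):

```python
# Hypothetical ranking rows: team -> (GmPerf, Sched). Values invented.
rows = {
    "Edina":  (2.1, 0.4),
    "Blaine": (-0.5, 0.9),   # weaker record but a tougher schedule
}

# Total is simply the sum of GmPerf and Sched.
total = {team: gmperf + sched for team, (gmperf, sched) in rows.items()}

# The difference between two teams' Total values is the expected goal
# spread between them (somewhat understated past about 3 goals because
# of the per-game differential caps).
spread = total["Edina"] - total["Blaine"]   # about 2.1 goals
```

Note that neither team's Sched or Total value means anything on its own or across rankings; only the 2.1-goal difference between the two listed teams is meaningful.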

last updated March 8, 2001