Thursday, July 05, 2007

Building a Better Mousetrap: Adjusted Pythagorean Winning Percentage

Time and again I've espoused on this blog the virtues of using a team's Pythagorean winning percentage to more accurately project how it will perform in the upcoming season. For the uninitiated, the formula is as follows:

(Points Scored^2.37) / (Points Scored^2.37 + Points Allowed^2.37)

The resulting number is a team's Pythagorean winning percentage. Multiplying it by the number of games played gives a reasonable estimate of how many games the team should have won. However, the formula is not without its flaws. For starters, blowouts, especially extreme ones, can artificially inflate or deflate a team's Pythagorean record, depending on whether the team doled out or received the beating. The solution? Compute the Pythagorean winning percentage on a game-by-game basis, add up the totals, and divide by games played. This way each game counts the same and the effect of blowouts is lessened. Here is a hypothetical example of the adjusted formula in action for Eponymous State University.

ESU Results


ESU went 8-3 while scoring 312 points and allowing 198. Their Pythagorean winning percentage is .746, for an expected record of 8.21-2.79. Their actual record aligns pretty well with their Pythagorean record. However, one game sticks out like a sore thumb and is unjustly influencing the numbers: in the fourth game, ESU dropped 70 on their opponent. Perhaps the opponent was a Division III school, maybe they were a Division IA school with a slew of injuries, maybe they turned the ball over nine times, maybe ESU simply ran up the score. Whatever the reason, we need a way to lessen that game's impact. Say, for example, ESU had stopped scoring after 30 points. They still win the game rather easily, but their seasonal Pythagorean winning percentage drops to .680 (7.48-3.52), a decrease of almost three-quarters of an expected win. If instead we determine the Pythagorean winning percentage of each game, add them up, and divide by 11, we get an adjusted Pythagorean winning percentage of .669 (7.36-3.64).
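For anyone who wants to play along at home, here's a minimal Python sketch of both calculations. The function names are my own, and the season-level check uses the ESU totals above:

```python
EXP = 2.37  # the exponent used throughout this post

def pyth(points_for, points_against, exp=EXP):
    """Pythagorean winning percentage from points scored and allowed."""
    return points_for**exp / (points_for**exp + points_against**exp)

def adjusted_pwp(games, exp=EXP):
    """Adjusted version: average the per-game Pythagorean percentages."""
    return sum(pyth(pf, pa, exp) for pf, pa in games) / len(games)

# Season-level check against the ESU example (312 scored, 198 allowed, 11 games)
season = pyth(312, 198)
print(round(season, 3))       # 0.746
print(round(season * 11, 2))  # 8.21 expected wins
```

To get the adjusted figure you'd pass `adjusted_pwp` the eleven individual (points for, points against) score pairs, which the summary table above doesn't list.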

When we compute the Pythagorean winning percentage on a per-game basis, the difference between beating a team 30-3 and beating them 70-3 is less than 4/1000 of a point in winning percentage.
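A quick check of that claim (again, `pyth` is just my name for the formula):

```python
EXP = 2.37

def pyth(pf, pa, exp=EXP):
    """Pythagorean winning percentage for a single game."""
    return pf**exp / (pf**exp + pa**exp)

# A 30-3 win and a 70-3 win are nearly identical on a per-game basis.
print(round(pyth(30, 3), 4))               # 0.9958
print(round(pyth(70, 3), 4))               # 0.9994
print(round(pyth(70, 3) - pyth(30, 3), 4)) # 0.0037
```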



Seventy points is just piling on. Each additional score beyond a certain point increases the odds of winning only negligibly. This, in effect, puts a proverbial 'cap' on margin of victory.

Now the important part: is the adjusted Pythagorean winning percentage a decent predictor of a team's fortunes? Here are the r-squared values for how well three 2005 statistics predicted a team's 2006 winning percentage (BCS schools and Notre Dame only--sample size 66).

2005 winning percentage: .352
2005 Pythagorean winning percentage: .3653
2005 adjusted Pythagorean winning percentage: .39

All three measures were reasonable predictors, with adjusted Pythagorean winning percentage being the best. It should be noted that in a post last offseason we found a team's 2004 Pythagorean winning percentage (again, BCS schools and Notre Dame only) to be a much better predictor of its 2005 winning percentage than its 2004 winning percentage was. Part of the reason for the lowered predictive power of both measures could be the added 12th game in 2006. The majority of the time the 12th game features a BCS school taking on a low-level Division IA or non-Division IA school for a guaranteed victory, thus boosting the team's winning percentage.
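For readers who want to reproduce the comparison, the r-squared here is just the square of the Pearson correlation between last year's statistic and this year's winning percentage. A sketch of the calculation (the two lists are hypothetical stand-ins for a team-by-team pairing of 2005 and 2006 winning percentages, not the real data):

```python
def r_squared(xs, ys):
    """Square of the Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov * cov / (varx * vary)

# Hypothetical data: 2005 win% vs. 2006 win% for a handful of teams
wp_2005 = [0.750, 0.583, 0.500, 0.917, 0.333]
wp_2006 = [0.667, 0.500, 0.583, 0.833, 0.417]
print(round(r_squared(wp_2005, wp_2006), 3))
```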


Sam said...

I have a computer program with access to the last four seasons' college football results and I used it to figure out the differences between Actual Win%, Pythagorean Win% (PWP), and the Adjusted PWP. (No games against 1-AA teams were considered.)

(Note: the differences mentioned below were obtained by subtraction (using absolute values), not by a percent-difference calculation. They are differences between winning percentages, not percent differences of the winning percentages, so to try to avoid confusion I'm not going to use the word percent again.)

Over the last four seasons, PWP and adjusted PWP have differed from one another by an average of 5.6, with a standard deviation of about 3.7, a high of 18.5, and a low of 0.1. What I've noticed just from looking at the data is that in almost every case where the difference between PWP and adjusted PWP is greater than 10, the adjusted PWP was significantly closer to the true Win%. When compared against the true Win%, PWP differed on average by 7.7, while adjusted PWP differed by 7.4. Not a very large margin.

The biggest difference I noticed between the two PWPs is that adjusted PWP constrains the maximum and minimum values of the set more than PWP does. For example, in 2005 Texas's PWP was 95.1% while their adjusted PWP was 88.9%; they effectively lost a win. The same thing occurs at the bottom of the barrel: Duke's 2005 PWP was 7.6% while their adjusted PWP was 14.9%, a net gain of about one win. As a result, the highest average difference between true Win% and adjusted PWP occurs in the highest and lowest quarters of the teams. This is the opposite of what I observed with standard PWP, which has the highest difference in the middle 50% of teams. So it seems the adjusted PWP formula sacrifices some information about the best and worst teams to gain information about the bulk of the teams in the middle.
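The summary numbers described above (average gap, standard deviation, high, low) are straightforward to compute once you have the per-team gaps in hand. A sketch with made-up gap values standing in for the real four-season data:

```python
import statistics

# Hypothetical per-team gaps |PWP - adjusted PWP|, in percentage points.
# These are illustrative values only, not the actual four-season data.
gaps = [0.1, 2.4, 3.1, 4.8, 5.6, 6.2, 7.9, 9.3, 18.5]

print(round(statistics.mean(gaps), 1))    # average gap
print(round(statistics.pstdev(gaps), 1))  # standard deviation
print(max(gaps), min(gaps))               # high and low
```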

I tried to compare the predictive ability of the standard and adjusted PWPs with that of Win% alone. My numbers don't seem to match the ones in your previous post; I may not have done the r-squared calculation right in Excel. (Hey, I'm a programmer, not a statistician!)

Here's what I found:
2003 to 2004
Standard PWP: 0.43205
Adjusted PWP: 0.42508
True Win %: 0.43507

2004 to 2005
Standard PWP: 0.34931
Adjusted PWP: 0.34256
True Win %: 0.30365

2005 to 2006
Standard PWP: 0.32519
Adjusted PWP: 0.31176
True Win %: 0.29400

The trend seems to be that adjusted PWP is slightly less predictive than standard PWP, and both are a little better than Win% alone. Again, I could've done this all wrong, so these numbers might be meaningless. It also bothers me that there is a decrease from year to year.

matt said...

Hey Sam, thanks for the post. I think our numbers differ because I only used data from the 65 BCS teams (plus Notre Dame) instead of all 119 teams. I used BCS teams only because, well, I'm lazy and don't have the programming ability, so I had to enter everything into Excel by hand from websites that keep all-time databases of college football scores.

Sam said...

OK, so I ran it again, and my 2004-to-2005 numbers were almost the same as yours. (I'm missing a few games from the 2004 season and I didn't include any 1-AA games, which probably accounts for the slight differences.)

Here are the new results:

2003 to 2004
Standard PWP: 0.40671
Adjusted PWP: 0.38241
True Win %: 0.40581

2004 to 2005
Standard PWP: 0.51715
Adjusted PWP: 0.47422
True Win %: 0.40862

2005 to 2006
Standard PWP: 0.39204
Adjusted PWP: 0.39140
True Win %: 0.34331

It seems to me that the adjusted PWP isn't any better at predicting future success than PWP alone. But I think in general it's a good metric for gauging a team's consistency over the course of a season.

Compare LSU's and Michigan's 2006 seasons. Both finished 11-2. LSU's PWP was 91.1% and their adjusted PWP was 74.4%; Michigan's PWP was 80.8% and their adjusted PWP was 79.3%. I've listed both teams' margins of victory for each of their games below. (If the table doesn't show up right, blame the font.)

LSU | UM
49 | 31
42 | 26
42 | 24
42 | 20
32 | 18
31 | 14
27 | 14
14 | 14
5 | 14
4 | 8
3 | 7
-4 | -3
-13 | -14

LSU played four close games, going 3-1 in them; Michigan played two close games, going 1-1. Despite LSU's average margin of victory being eight points greater than Michigan's, Michigan was more consistently dominant.

(P.S., this blog is great)