A few years ago (seven to be exact) I developed an adjustment to the Pythagorean Record for college football. Instead of using points scored and allowed, the new statistic, dubbed the Adjusted Pythagorean Record (or APR), used only a team’s offensive touchdowns scored and offensive touchdowns allowed. Touchdowns scored by the defense or special teams, field goals, safeties, extra points, and two-point conversions were ignored. The thinking was that the best teams were those that scored and prevented touchdowns at the best rates. While defensive scores, clutch place-kicking, and crafty two-point conversion plays can dramatically alter the result of any single football game, scoring touchdowns with your offense and preventing your opponent from doing the same is a better long-term predictor of success. I have compiled in-conference college football APR data back to 2005 and over the past four offseasons have posted an APR breakdown of each FBS conference (hold your applause). I also wanted to eventually conduct an APR analysis of the NFL, and well, you are in luck. I have calculated APR data back to 1970 (the first year after the AFL and NFL merged) and will be making sporadic posts this summer using that data. This first post will examine the accuracy of the APR in the NFL and research what happens to teams that win significantly more or less than their APR would lead us to expect. Enjoy.
As stated previously, I have calculated APR data going back to 1970. However, in the interest of focusing this particular post on the modern NFL, the data reported here will only go back to 2002. Why 2002? Well, that was the last time the NFL expanded and realigned. The league added the Houston Texans in 2002 and went from two conferences with three unbalanced divisions apiece to two conferences with four divisions of four teams apiece. While 2002 may not seem that long ago, it gives us a sample size of 17 seasons and 544 individual team seasons. Perhaps just as important, there has not been a major work stoppage in that span, so each of those 544 teams played a full sixteen-game regular season and no adjustments to the raw data are needed. Also, it bears mentioning that APR includes only regular season games. Preseason (duh) and postseason games are not included.
Let’s use a 2018 team whose final record closely matched its APR as a starting point. In 2018, the Cincinnati Bengals finished 6-10 (and finally fired Marvin Lewis). Over sixteen regular season games, the Bengals scored 40 offensive touchdowns and allowed 49. Their APR was calculated as follows.
(40^2.37) / ((40^2.37) + (49^2.37)) = 0.382023
Their expected winning percentage was just north of 38%. This value is then multiplied by sixteen to yield 6.11 expected wins. As a shorthand, we say their APR was 6.11.
Obviously, NFL teams cannot win one tenth of a game (although they can win a half, as ties count in my calculations as half a win), so Cincinnati finished 0.11 games short of where we would expect based on their APR, which is pretty darn close.
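For anyone who wants to run this themselves, the calculation above can be sketched in a few lines of Python. The function name and signature are my own, not anything official; the exponent of 2.37 and the sixteen-game schedule come straight from the example.

```python
def apr(td_for: int, td_against: int, games: int = 16, exponent: float = 2.37) -> float:
    """Expected wins from offensive touchdowns scored and allowed (APR).

    Uses the Pythagorean-style formula from the post: expected winning
    percentage = TDfor^exp / (TDfor^exp + TDagainst^exp), times games played.
    """
    expected_pct = td_for**exponent / (td_for**exponent + td_against**exponent)
    return games * expected_pct

# 2018 Bengals: 40 offensive touchdowns scored, 49 allowed
print(round(apr(40, 49), 2))  # 6.11 expected wins
```

Swapping in any team's offensive touchdown totals gives its expected win count, which you can then compare to its actual record (counting ties as half a win).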
How often do teams finish with records that closely match their APR? Pretty often.
So the APR looks like a solid measure of team strength in the NFL, and perhaps more importantly, it might be able to identify a few teams poised for regression or progression the next season. Which teams exceeded or failed to meet their APR in 2018? Ah, you’ll have to wait until next week to find out. See you then. 😀