Thursday, May 28, 2020

2019 Yards Per Play: Sun Belt

This week, our final YPP takes us to the Sun Belt.

Here are the Sun Belt standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Sun Belt team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. 
In the 2019 season, which teams in the Sun Belt met this threshold? Here are Sun Belt teams sorted by performance over what would be expected from their Net YPP numbers.
Georgia Southern significantly exceeded their expected record based on YPP thanks to a solid mark in close conference games (3-1) and the best in-conference turnover margin in the Sun Belt (+8). The Eagles were very protective of the ball in conference play, committing just two turnovers over their eight-game conference schedule. Meanwhile, South Alabama undershot their expected record based on YPP. The Jaguars were not exceptionally bad in close games (1-2 in one-score conference games), but their poor offense coupled with a lackluster turnover margin (-4 in Sun Belt play) kept them in the league basement despite decent peripherals.

Arkansas State and Preseason College Football
Watch any college football pregame show in late August or early September and you'll no doubt hear the talking heads emphasizing the fact that, unlike the NFL, college football has no dress rehearsal. There are no preseason games to work out the kinks. Come Labor Day weekend (and maybe a little before), the games count and teams have to work through their issues in real time. While this is technically true, Arkansas State head coach Blake Anderson has done a good job of manufacturing preseason games for his Red Wolves.

Anderson has been the head coach in Jonesboro for six seasons. During that span, his teams have been to six bowl games and won a pair of Sun Belt titles. And after three consecutive years of one-and-done head coaches, he has provided some much-needed stability for the program. The Red Wolves have been very successful in the Sun Belt, posting a 36-12 conference record under Anderson, second only to Appalachian State (41-7) in that span. However, the Red Wolves have not played very well against non-Sun Belt opponents. Excluding games against FCS teams (they did lose once), the Red Wolves are 4-13 against non-conference opponents in the regular season under Anderson. Obviously, some of those games came against Power Five opponents that Arkansas State was not expected to win. So let's remove those from the equation. When we remove the eight games against Power Five opponents, Arkansas State looks a little better. Their record of 4-5 against other Group of Five opponents is mediocre, but a far cry from their .750 conference winning percentage.
Let's examine their non-conference record another way: through the lens of the point spread. Their Against the Spread (ATS) record against G5 regular season opponents is one game worse than their actual record at 3-6. Their conference ATS record is also worse than their straight-up mark, but they still covered the number more than 60% of the time in Sun Belt play.
It should also be noted that three of Arkansas State's four G5 victories have come against teams that have finished 4-8 or worse (Tulsa and UNLV twice).
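For anyone unfamiliar with how an ATS record is tallied, here is a minimal sketch using hypothetical games (these are not actual Arkansas State results):

```python
# Sketch of tallying an Against the Spread (ATS) record. Each game is
# (points_for, points_against, spread), with the spread from the team's
# perspective (negative = favored). All results here are hypothetical.
games = [
    (27, 24, -7.0),  # won by 3 as a 7-point favorite: no cover
    (31, 13, -3.5),  # won by 18 as a 3.5-point favorite: cover
    (20, 23, 2.5),   # lost by 3 getting 2.5 points: no cover
]

covers = sum((pf - pa) + spread > 0 for pf, pa, spread in games)
losses = sum((pf - pa) + spread < 0 for pf, pa, spread in games)
pushes = len(games) - covers - losses
print(f"ATS record: {covers}-{losses}-{pushes}")  # ATS record: 1-2-0
```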

The Sun Belt is typically the worst FBS conference, but using the point spread to level the playing field, Arkansas State has played much better against their conference foes than against other G5 opponents. It sure seems like there is more at play here than mere randomness. My theory? Anderson knows his team has no shot to make the College Football Playoff and only an infinitesimal chance to play in a New Year's Six Bowl. Thus, he treats the non-conference schedule, even against teams Arkansas State can theoretically compete with, as a chance to iron things out in preparation for the conference season. All that matters is Sun Belt play and a chance to win a conference title. In all six seasons, the schedule has played out the same way. Arkansas State opens with four non-conference games (three in 2017 thanks to a cancellation) and then plays eight Sun Belt contests. Despite the occasional rough start in the non-conference, by the time league play starts, Arkansas State is hitting their stride.

Thursday, May 21, 2020

2019 Adjusted Pythagorean Record: SEC

Last week we looked at how SEC teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.
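The exact formula isn't spelled out here, but a Pythagorean-style conversion of touchdown totals into a winning percentage looks something like the sketch below. The exponent is a placeholder assumption borrowed from common football Pythagorean variants, not necessarily the one APR actually uses:

```python
# Pythagorean-style expectation from touchdown totals. The 2.37 exponent is a
# placeholder assumption; the actual APR exponent is not specified in the post.
def pythagorean_win_pct(tds_scored, tds_allowed, exponent=2.37):
    """Convert a ratio of touchdowns scored to allowed into a winning pct."""
    return tds_scored**exponent / (tds_scored**exponent + tds_allowed**exponent)

# A team scoring 30 offensive TDs and allowing 20 over eight conference games
expected_pct = pythagorean_win_pct(30, 20)
print(f"expected pct: {expected_pct:.3f}, expected wins: {expected_pct * 8:.1f}")
```

Multiplying the expected percentage by the number of conference games yields the expected win total that the tables below compare against actual wins.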

Once again, here are the 2019 SEC standings.
And here are the APR standings with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, SEC teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
Ole Miss and Tennessee were the two teams that saw their actual record differ significantly from their APR. Ole Miss undershot their APR as well as their expected record based on YPP, and we went over some reasons for that last week. Meanwhile, Tennessee won nearly two more games than we might expect based on their ratio of touchdowns scored to touchdowns allowed. The Vols were 2-0 in one-score conference games, beating both Kentucky and Missouri by four points, but they weren't exceptionally lucky in close games. The bigger culprit is their performance in their three conference losses. The Vols lost to Alabama, Florida, and Georgia (the three best teams on their schedule) by a combined 82 points. While three of their conference wins did come by double digits, the combined margin in their five SEC victories was only 56 points.

Whatever Happened to the Conference Championship Game Shockers?
When it comes to conference championship games, the SEC is the OG. When the league expanded to twelve teams in 1992, it also instituted a divisional structure and created a conference title game to match up those two division champs. The Big 12 followed suit a few years later, and after the turn of the century, the ACC got in on the fun. Post-realignment, the Big 10 and Pac-12 also added title games. Thirty years ago, the idea of such a game was a novelty, but now it is an accepted part of college football. Every FBS conference puts on a title game the first weekend in December. With the title game ensconced in the college football zeitgeist, I thought now would be a good time to examine the results of all the title games for the Power Five conferences and see if there was anything to be gleaned from the data. As the SEC has the most robust back catalog of championship games, they are the obvious place to start. The table below lists the following vital statistics about the SEC Championship Game: the straight-up record of the favored team, the average spread, the largest spread, the largest upset relative to the spread, and the most recent upset.
Favored teams have done pretty well in the SEC, winning a little more than 82% of the time. The average spread being nearly ten points surprised me, as did the most recent upset (I thought for sure Auburn was favored in both 2013 and 2017, but they were not). While the most recent upset was just six seasons ago, that spread was very tight. Prior to that, the next most recent upset came in 2009 in a matchup of undefeated teams. The largest upsets came two seasons apart, with Freddie Milons out-rushing Shaun Alexander in a beatdown of the Gators in 1999 and LSU ending Tennessee's national title hopes in 2001.

The record of the favorite in the SEC seems pretty good and the average spread seems pretty high, but without something to compare them to, those numbers mean little. So let's look at the other Power Five conferences, starting with the Big 12.
Suddenly the SEC's average spread doesn't look that big. The average Big 12 spread has been nearly twelve points! In addition, while the first Big 12 title game featured the biggest upset, three of the four upsets have come from double-digit underdogs, with Kansas State being both a victim (1998) and a perpetrator (2003). That Kansas State victory also marked the last time an underdog won with the favorite (usually Oklahoma) riding a ten-game winning streak.

Now here is the ACC.
The ACC favorite has posted an overall record and an average spread similar to the Big 12's. The average spread has increased significantly over the past four seasons, with Clemson favored by more than twenty points on average in their past four trips to the title game. The Tigers are also the last team to pull off an outright upset.

Next up, the Big 10.
While the Big 10 favorite has won just over half the time, I think that can be attributed to the narrow margin of the average spread. Prior to the last two games (in which Ohio State was a double-digit favorite), the average spread was under four and a half points.

Finally, here are the numbers for the Pac-12.
Favored teams have done quite well in the nine Pac-12 title games, with Oregon's victory over Utah representing one of only two times an underdog has won outright (Stanford over Arizona State in 2013 was the other).

So now we can try to answer the question I posed earlier. Where have all the upsets gone? For starters, there haven't been that many to begin with. Overall, favored teams are 61-18 in Power Five conference title games (just north of a 77% winning percentage). Favorites of at least a touchdown are 41-8 (nearly 84%) and double-digit favorites are 26-4 (almost 87%) with the last double-digit underdog to win outright being Florida State in 2005.

Also, consider the four teams to lose as double-digit favorites: Kansas State, Nebraska, Oklahoma, and Virginia Tech. Nebraska (at least in 1996) and Oklahoma are powerhouses, but Kansas State and Virginia Tech arguably possessed less raw talent than the teams they were favored to crush (Texas A&M and Florida State).

I have a limited understanding of statistics, but I think what may be going on here is a Poisson distribution. It's not deadly and doesn't involve the thousands of women Bret Michaels slept with, but instead refers to the probability of a given number of events occurring. For the reasons outlined above (and a whole lot of randomness), there was a cluster of upsets between 1996 and 2005 (seven favorites of at least a touchdown lost in that span) and we've just been going through a dry spell since. Similarly, in college basketball, four fifteen seeds defeated two seeds between 1991 and 2001. Then we went more than a decade without one before having two in 2012 and another in 2013, followed by another in 2016. There have been a few close calls since Clemson pulled the last upset as a touchdown underdog in 2011 (Georgia on two occasions against Alabama, Baylor taking Oklahoma to overtime last season, and Georgia Tech against Florida State, to name a few). It will happen. Just give it some time. You probably won't see it coming, which will make it even better.
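As a quick illustration of the Poisson idea, here is a sketch using an assumed long-run rate of roughly seven touchdown-plus upsets over 25 years of title games. The rate is an approximation for demonstration, not a parameter fitted to the actual results:

```python
# Illustrative Poisson model of upset clustering. The rate is an assumption
# (about seven touchdown-plus upsets over 25 years, scaled to a decade),
# not fitted to the actual championship game data.
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k events given an average rate lam."""
    return lam**k * exp(-lam) / factorial(k)

lam_per_decade = (7 / 25) * 10  # ~2.8 big upsets per decade
p_zero = poisson_pmf(0, lam_per_decade)
p_four_plus = 1 - sum(poisson_pmf(k, lam_per_decade) for k in range(4))
print(f"P(no big upsets in a decade): {p_zero:.3f}")
print(f"P(4+ big upsets in a decade): {p_four_plus:.3f}")
```

Even with a constant underlying rate, both upset-heavy decades and quiet stretches come up with non-trivial probability, which is the point: clusters followed by droughts don't require any explanation beyond randomness.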

Thursday, May 14, 2020

2019 Yards Per Play: SEC

This week, our offseason sojourn takes us to the SEC, the home of the reigning national champs.

Here are the SEC standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each SEC team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. 
In the 2019 season, which teams in the SEC met this threshold? Here are SEC teams sorted by performance over what would be expected from their Net YPP numbers.
Ole Miss was the only SEC team that saw their actual record differ significantly from their expected record based on YPP. The Rebels finished 0-3 in one-score conference games, including a memorable one-point loss in the Egg Bowl that may have cost Matt Luke his job. The Rebels also dropped two non-conference games by one score, bringing their overall close game mark to 0-5. Lane Kiffin is actually stepping into a better situation than you might otherwise expect in taking over a 4-8 team in college football's toughest division. If the 2020 season is actually played, a bowl bid for the Rebels wouldn't surprise me.

The Worst SEC Team Ever (In the Past Fifteen Years)
It can be tough to realize we are witnessing history in the moment. Even though the 2019 college football season is still a vivid memory for many fans, I think there is a reasonable case to be made it featured the worst SEC team of the last decade and a half. And no, I'm not referring to the one that lost at home to San Jose State and was blown out by Western Kentucky.

Despite winning a conference game in 2019, Vanderbilt put up the worst Net YPP numbers I have on record (since 2005). While YPP is a useful tool for rating teams, I know it's not perfect and far from the definitive word on team strength. With that in mind, I decided to look at SEC teams through that lens in addition to a few others and see where last year's Vanderbilt team ranked. Were they really that bad, or were their numbers artificially depressed by the strength of the conference overall? Read on to find out.

As I mentioned previously, Vanderbilt had the worst Net YPP of any SEC team since 2005. Here are the other four SEC teams that make up the bottom five of Net YPP.
Vanderbilt is the lone team to finish three yards per play underwater and they were more than a half yard worse than the second worst team (Houston Nutt's final Ole Miss squad).

Regular readers know another metric I like to use to rate teams is the Adjusted Pythagorean Record (APR). While that post will be going up next week, I can give you a sneak peek of where Vanderbilt ranked in that category in 2019. Last. However, they did finish better than two other SEC teams since 2005.
Note that two Derek Mason coached Vanderbilt teams appear on this list. I know the Vandy job is tough, but he has put some really bad teams on the field in his six seasons in Nashville.

YPP and APR only include data from conference games in their ratings. However, I think it's also important to look at how SEC teams did in non-conference games to get a better idea of how good (or in this case, bad) they were historically. The Simple Rating System (SRS) from Sports Reference rates teams according to how many points they are above or below the average team. Using this metric, here are the bottom five SEC teams since 2005.
Once again, last year's Vanderbilt team comes out on top. How did they rate so low despite beating a decent Missouri team? Take a look at that non-conference schedule. They lost by double digits to a bad Purdue team, barely escaped a MAC team at home, and were blown out at home by the second worst team in the Mountain West. The SEC was a great conference in 2019, but all Vanderbilt's losses came by at least seventeen points and their only victory by more than a touchdown came against East Tennessee State. As no other team appears in the bottom five of all three metrics, much less the very bottom of two, I christen the 2019 incarnation of Vanderbilt the worst SEC team since 2005.

Thursday, May 07, 2020

2019 Adjusted Pythagorean Record: Pac-12

Last week we looked at how Pac-12 teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.

Once again, here are the 2019 Pac-12 standings.
And here are the APR standings with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, Pac-12 teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
Both schools from the Evergreen State significantly under-performed relative to their APR. Washington State finished 1-3 in one-score conference games and also finished dead last in in-conference turnover margin (-13). Their brethren from Seattle were even worse in one-score Pac-12 games, finishing 0-4. By contrast, each of Washington's four league wins came by at least twelve points. No team significantly exceeded their APR, but Colorado came close. Perhaps Mel Tucker saw the writing on the wall in regards to how good Colorado actually was and this contributed to him taking the Michigan State job.

When the Meek Inherit the Earth
Last week I used the 247 Talent Composite to try and find value when elite teams were underdogs to non-elites. For the past five years, there has not been any value to be had. This week, I will be doing the same thing, but different. Or maybe doing a different thing the same way. Instead of looking at elite teams as underdogs, I wanted to look at what happens when the dregs of college football are favored. The methodology is explained in the next paragraph, so if you only want the results, skip down.

Using the 247 Talent Composite, I calculated the mean and standard deviation talent ratings for all Power Five teams (and Notre Dame) for the years 2015, 2016, 2017, 2018, and 2019. As I mentioned last week, I ignored the Group of Five teams because, with the exception of about five teams per season, Power Five teams all have more 'talent' than Group of Five teams. Using the mean and standard deviation, I determined which Power Five teams were more than one standard deviation below average in terms of raw talent. Whereas about two teams per season were at least two standard deviations above average, no team was more than two standard deviations below average in any season. On average, about eleven teams per season were at least one standard deviation below average. Once I determined the teams, I looked at all instances where these 'dregs' of the Power Five were favored against any other Power Five team that was not a dreg and calculated how they fared against the spread. I also separated out instances where they were extreme favorites (favored by at least three points on the road, six at a neutral site, or nine at home). With that out of the way, here are the results.
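The one-standard-deviation cutoff described above can be sketched in a few lines. The ratings below are invented for illustration, not actual 247 Talent Composite values:

```python
# Sketch of the one-standard-deviation 'dreg' cutoff. The ratings below are
# invented for illustration, not actual 247 Talent Composite values.
from statistics import mean, stdev

talent = {
    "Team A": 980.0, "Team B": 850.0, "Team C": 700.0, "Team D": 640.0,
    "Team E": 610.0, "Team F": 560.0, "Team G": 430.0, "Team H": 400.0,
}

mu, sigma = mean(talent.values()), stdev(talent.values())
dregs = [team for team, rating in talent.items() if rating < mu - sigma]
uber_elite = [team for team, rating in talent.items() if rating > mu + 2 * sigma]
print(f"mean {mu:.1f}, std dev {sigma:.1f}, dregs: {dregs}")
```

The same mean and standard deviation, flipped to the high side, produce the elite and uber-elite groupings used in last week's post.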

Seventeen schools have been one standard deviation below average in raw talent at least once in the past five seasons. Some of them are obvious (Wake Forest) and others are a little shocking (Arizona -- RIP Kevin Sumlin). They are listed in the table below.
Five schools have been one standard deviation below average in all five seasons. Four of those schools have been at least moderately successful in that span (Boston College, Kansas State, Wake Forest, and Washington State) and the other is Kansas.

So how did these dregs of Power Five football (talent-wise) fare against the spread when they were favored against more talented Power Five teams? As you might expect, not that great.
They covered 42.5% of the time as favorites and were marginally better as extreme favorites (44.4%). Here is how the individual teams performed as favorites.
When you break things down by team, the Washington State Cougars stand out. They were favored 23 times in the past five seasons against more talented Power Five teams, more than double the total of the second most favored team (Virginia). They were an extreme favorite 14 times, more than three times as often as any other team. They are also one of three teams (Wake Forest and Indiana are the other two) to post a winning ATS record as a favorite. Some statistically inclined folks might call them an outlier. When we remove them from the equation, the record of these Power Five dregs as favorites comes into clearer focus.
Excluding Washington State, these talent-challenged Power Five teams covered less than 37% of the time and just under 32% of the time as extreme favorites. When filling out a parlay card, you may have intuitively believed betting against a team like Purdue as a favorite was a good idea. Data has confirmed your intuitions. When a team with inferior talent is favored against a more talented team, betting against them has proven profitable at the window the past five seasons, especially when you don't bet against Mike Leach.

Thursday, April 30, 2020

2019 Yards Per Play: Pac-12

We stay out west this week where we review the Mountain West's big brother, the Pac-12.

Here are the Pac-12 standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Pac-12 team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. 
In the 2019 season, which teams in the Pac-12 met this threshold? Here are Pac-12 teams sorted by performance over what would be expected from their Net YPP numbers.
After reviewing the Mountain West, where the actual standings were quite different from the YPP and APR numbers, the Pac-12 represents a return to normalcy. No team significantly over or under-performed relative to their YPP numbers. So let's move to something a little more interesting.

When David is Favored Over Goliath
Think back if you can to a different time. It was early December. The holidays were approaching, we were blissfully ignorant of the calamity that was headed our way in 2020, and Utah was favored over Oregon in the Pac-12 Championship Game. The Utes were certainly formidable, boasting an 11-1 record heading into the game, with their lone defeat coming on a Friday night early in the season at Southern Cal. Ten of their eleven victories had come by at least eighteen points and their last three had all come by at least four touchdowns. Still, even a casual college football fan knew the Utes did not recruit at an elite level like Alabama or Ohio State. Their baseline talent was in the middle of the pack for Power Five programs. They relied on coaching and development to achieve success more so than raw talent. Yet here they were, favored on a neutral field against a team that had played for two national titles the previous decade. Well, you know how things shook out. This got me to thinking: how often are programs with elite talent underdogs to teams without it, and more importantly, how do they fare in that role? The next paragraph will go into detail about how I set about answering that question, so if you are pressed for time or have a short attention span, skip on down.

To identify the Goliaths (elite recruiters) in this study, I used the 247 Sports Talent Composite for the years 2015, 2016, 2017, 2018, and 2019. The talent composite uses the recruiting star ratings to give a numerical value to each FBS and FCS team. In 2019, the top team was Alabama with a rating of 984.96 and the lowest rated team was Bryant with a scant 6.30 talent rating. Instead of arbitrarily setting a threshold for what constituted an elite recruiter, I let math do the job for me. Initially, I loaded all 130 FBS teams (for 2019) into an Excel spreadsheet and calculated the mean and standard deviation of their talent rating. I classified teams that were at least two standard deviations above average as uber-elite and those that were at least one standard deviation above average as elite. However, a few problems quickly surfaced. While there were only two or three uber-elite teams each season, there were close to 30 teams that qualified as elite. To me, this seemed like too many. Can nearly a quarter of FBS really be considered elite? I don't think so. Thus, I decided to only include Power Five teams (and Notre Dame) in the calculation. Each season there are a few Group of Five teams that rank ahead of Power Five teams in talent rating (never more than five for any season from 2015 through 2019), but for the most part, FBS teams are segregated along those conference lines. Using only Power Five teams in the analysis yielded the same number of uber-elites, but cut the elites down to about twelve per season. Intuitively, this seems much more in line with how college football actually functions. So, once I had the teams classified as elite and uber-elite, I looked at instances where they entered a game as underdogs against teams in another grouping.
In other words, I threw out games where two uber-elites were facing off because obviously one has to be favored (unless the line is a pick), where two elites were playing, where an uber-elite was favored against any team, or where an elite was favored against a non-elite. Confused? Here it is a little simpler. I looked at games where an uber-elite was an underdog (provided they weren't facing another uber-elite) and games where an elite was an underdog (provided they were not playing an uber-elite or another elite). In addition to looking at instances of an elite or uber-elite being an underdog, I also categorized some of their underdog roles as extreme. This included games where they were a three point or greater home underdog, a six point or greater neutral underdog, or a nine point or greater road underdog. Finally, I did not include bowl games in the analysis, as those can be rather funky with motivation, players sitting out, and numerous other angles impacting the game.
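The venue-dependent "extreme" rule is easy to express as a small helper. Spreads here are from the underdog's perspective, i.e., the points they are getting:

```python
# The venue-dependent 'extreme underdog' rule from the post as a small helper.
# The spread is from the underdog's perspective: the points they are getting.
def is_extreme_underdog(spread, venue):
    """True for a 3+ point home dog, 6+ point neutral dog, or 9+ point road dog."""
    thresholds = {"home": 3, "neutral": 6, "road": 9}
    return spread >= thresholds[venue]

print(is_extreme_underdog(3.5, "home"))   # True: home dog getting 3.5
print(is_extreme_underdog(7.5, "road"))   # False: a road dog needs 9+ to qualify
```

The thresholds scale with the conventional three-point value of home field: a three-point home dog, six-point neutral dog, and nine-point road dog all imply roughly the same gap in perceived team strength.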

With the methodology out of the way, I have provided a table listing teams that were either elite or uber-elite between 2015 and 2019 as well as the number of times they appeared in that category. I considered listing out the elite and uber-elite for each season, but thought that would be too messy. If you would like to see that table, hit me up and I can provide it for you.
Eleven teams were either elite or uber-elite all five seasons: Alabama, Auburn, Clemson, Florida State, Georgia, LSU, Michigan, Notre Dame, Ohio State, Southern Cal, and ummmm Tennessee? Well, I'm sure you can spot the outlier. You may have noticed the team that prompted this analysis, Oregon, is nowhere to be seen. I thought they would have classified as elite for at least one season, but alas, the numbers never lie.

So how did these teams fare against the spread as underdogs? Surprisingly, not that well.
As you would probably have guessed, the uber-elites were rarely underdogs to teams not in their weight class. It only happened six times in the past five seasons and the underdogs were 3-3 ATS overall and 1-1 in the role of an extreme underdog. With a .500 ATS mark (in a very limited sample), they did fare better than the elites.
Elite teams covered just under 47% of the time when they were underdogs to the middle and lower classes of college football. In extreme situations, they covered at a similar rate (48%). I am actually a little shocked by these numbers. I expected that elite underdogs would be solid bets. For the past five seasons, that has not been the case.

Here is how the individual teams performed as underdogs.
Southern Cal, Tennessee, and Texas (three historical programs suffering through down times) are the only teams that were underdogs a significant number of times over the past five seasons. Tennessee and Texas actually performed quite well in the role, but Southern Cal, yikes.

If you had bet against all these elite and uber-elite teams in all situations, you would have made a little money (52.4% is the break-even point assuming -110 per bet), but that is hardly a significant advantage for this sample size. With the cover percentage of betting against elite underdogs hovering right around the break-even line, this is not a trend I would recommend pursuing unless other aspects of your handicap identify an advantage. Oh well, I apologize for making you read all those words and not providing any type of gambling nugget. But, if you are patient, there might be something for you next week. In the business, I believe they call that a tease. See you next Thursday.
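For reference, that 52.4% break-even figure falls straight out of standard -110 pricing (risk $110 to win $100); a quick sketch:

```python
def breakeven(risk=110, win=100):
    """Cover rate needed to break even when risking `risk` units to win `win`."""
    return risk / (risk + win)

# At standard -110 juice: 110 / 210, which is approximately 0.524,
# the 52.4% break-even point cited above.
rate = breakeven()
```

Change the odds and the break-even rate moves with them, which is why a cover percentage "right around" 52.4% is no edge at all after the vig.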

Thursday, April 23, 2020

2019 Adjusted Pythagorean Record: Mountain West

Last week we looked at how Mountain West teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.
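The conversion itself follows the familiar Pythagorean form; here is a minimal sketch, with the caveat that the exponent (2.37) is an illustrative assumption on my part, not necessarily the one used in the APR write-up linked above:

```python
def apr(off_tds, tds_allowed, exponent=2.37):
    """Convert offensive touchdowns scored and touchdowns allowed into an
    expected winning percentage via the Pythagorean formula. The exponent
    here is an illustrative assumption, not the official APR value."""
    return off_tds ** exponent / (off_tds ** exponent + tds_allowed ** exponent)

# A team scoring exactly as many touchdowns as it allows projects to .500;
# outscore your opponents on touchdowns and the expected percentage rises.
```

Multiply the resulting percentage by games played and you get the expected win total that the standings below are compared against.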

Once again, here are the 2019 Mountain West standings.
And here are the APR standings with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, Mountain West teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
Seven teams saw their expected record differ significantly from their actual record using yards per play, and when it comes to APR, the results were quite similar. Six teams saw their APR differ from their actual win total by at least a game and a half. Last week, we discussed some reasons why Boise State, Nevada, and Utah State exceeded their expected record and why Fresno State and New Mexico under-performed. However, a new entrant, Wyoming, boasted the largest negative differential between their APR and actual record. Thanks to a fantastic defense, the Cowboys finished just behind Air Force in APR last season. Unfortunately, the Cowboys were 0-3 in one-score conference games and finished 4-4, losing three league games by twelve total points. By contrast, their smallest margin of victory in Mountain West play was ten points.

Requiem for Rocky
In early 2020, Rocky Long resigned as head coach of San Diego State. You may have missed this in the college football news cycle as Clemson and LSU were slated to play in the College Football Playoff a few days later. Since Long spent most of his coaching career at mid-major jobs west of the Mississippi, many casual college football fans probably don't know who he is. Well, I aim to change that. Over the next 10,000 words or so, my four regular readers will learn the hagiography of Rocky Long.

Long is not dead by the way. He's not even retired. He is the defensive coordinator at his alma mater, the University of New Mexico. He was also the head coach of the Lobos for eleven seasons before relocating to beautiful San Diego. He finished his head coaching career at New Mexico with a losing record, but the Lobos were bowl eligible for seven straight seasons (2001-2007) and he is arguably the most successful coach in school history (at least based on what he did in the Land of Enchantment). But we're mostly going to focus on what he did at San Diego State the past eleven seasons (nine as head coach).

Before we start praising Long too much, let's delve into a mild criticism. Prior to the 2012 season (his second in charge in San Diego), Long opined that his team would be more aggressive in their fourth down attempts. Did his team become more aggressive in terms of fourth down attempts in 2012? Compared to the previous season, not really.
But whoa, they did get very aggressive in 2013, leading the nation in total fourth down attempts. So Long basically became an amalgamation of Mike Leach and Doug Pederson from then on, right?
After being moderate to very aggressive in terms of fourth down attempts in his first three seasons, Long retreated into a shell for a half decade or so. This metric is not perfect as it is devoid of context (I didn't sift through the play-by-play to determine how many fourth and shorts San Diego State faced, compare it to the national average, and adjust for situational aspects like time and score), but it shows a pretty drastic shift in thinking. Why? Well, the answer is pretty simple.
The Aztecs gave the ball up on a lot of those fourth down gambles in 2013 and Long apparently decided, like Goldwater, that conservatism was the best path forward.

Oh well. Rocky achieved great success at San Diego State (two Mountain West titles) despite not being at the cutting edge in terms of analytics. But let's give Long credit for something the Aztecs did well during his tenure: play fantastic defense. Long became defensive coordinator at San Diego State in 2009, when he was hired by Brady Hoke (who, after a winding road through Michigan, Oregon, Tennessee, and the NFL, is now the head coach once again). Prior to Long's arrival as defensive coordinator, the Aztecs had ranked dead last in the Mountain West in yards allowed per play for two consecutive seasons. After a sixth place finish in his first season on the job, they ranked in the top three of the conference for the next decade and have not allowed more than five yards per play to conference foes in the past six seasons.
Long called his own defensive plays in San Diego once he became head coach (in 2011), so he shoulders a great deal of credit for those fantastic numbers. In addition to posting great defensive numbers, San Diego State usually boasted a great running game to complement it. Three San Diego State running backs were drafted during Long's tenure (Ronnie Hillman, Rashaad Penny, and Nick Bawden, who was actually a fullback in college) while Donnel Pumphrey left San Diego State as the NCAA's all-time leading rusher (with an asterisk of course). In addition, since Long took over in 2011, there have been fifteen instances of a running back finishing with at least 250 carries and twenty touchdowns while averaging at least six yards per carry (three quarterbacks have also done it -- Jordan Lynch, Lamar Jackson, and Malcolm Perry). I think this arbitrary combination of numbers does a good job of identifying backs with the qualities of both explosiveness and work-horsery. San Diego State backs have two of those seasons.
The only other schools with multiple seasons are one known for their backs and beefy offensive linemen and another that is annually one of the most talented teams in the nation. The running game struggled a great deal in 2019 (though San Diego State still won ten games and received a few votes in the final AP Poll), so maybe Long got out one season too soon rather than one season too late.

Rocky Long will never be a household name among college football fans, but he did great work at two places that did not have a winning tradition when he arrived. His (likely) final act will be attempting to return his alma mater to respectability as defensive coordinator. Will he succeed? The odds are probably stacked against him, but I wouldn't call it a Longshot.

Thursday, April 16, 2020

2019 Yards Per Play: Mountain West

This week we head west to try and rid ourselves of some of our east coast bias. Welcome to the Mountain West review.

Here are the Mountain West standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Mountain West team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. 
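The regression step described above amounts to ordinary least squares with a single predictor. Here is a sketch using invented data points (the real analysis uses every conference team-season since 2005, and the hypothetical `over_performed` helper mirrors the .200 threshold):

```python
# Sketch of the Net YPP regression described above. These data points are
# invented for illustration, not actual 2019 Mountain West numbers.
net_ypp = [1.2, 0.8, 0.3, 0.0, -0.4, -1.1]            # hypothetical Net YPP
win_pct = [0.875, 0.750, 0.625, 0.500, 0.375, 0.125]  # hypothetical records

n = len(net_ypp)
mean_x = sum(net_ypp) / n
mean_y = sum(win_pct) / n

# Ordinary least squares slope and intercept
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(net_ypp, win_pct))
sxx = sum((x - mean_x) ** 2 for x in net_ypp)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

def predicted_win_pct(net):
    """Conference winning percentage predicted from Net YPP."""
    return slope * net + intercept

def over_performed(actual, net, threshold=0.200):
    """True if a team beat its YPP-predicted record by more than the threshold."""
    return actual - predicted_win_pct(net) > threshold
```

Run against real conference data, the correlation between the two columns is the roughly .66 figure mentioned above, and teams flagged by the threshold check populate the over/under-performer tables in these posts.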
In the 2019 season, which teams in the Mountain West met this threshold? Here are Mountain West teams sorted by performance over what would be expected from their Net YPP numbers.
Seven teams saw their actual record differ significantly from their expected record based on YPP. Boise State, Nevada, and Utah State exceeded their expected record while Colorado State, Fresno State, New Mexico, and San Jose State under-performed relative to their YPP numbers. Close game record does a good job of explaining the over-performance. Boise State, Nevada, and Utah State combined to go 8-1 in one-score conference games. And while the Broncos went undefeated, the Wolfpack and Aggies were blown out in most of their league losses. Of Nevada's four conference defeats, three came by at least 26 points while both of Utah State's conference losses came by at least 24 points. For the underachievers, Fresno State and San Jose State can blame close games, as they went a combined 2-6 in one-score conference games. New Mexico didn't play any conference games decided by less than eleven points, but they did have the worst in-conference turnover margin of -11. However, Colorado State is the real odd duck, or Ram if you will. They had the third best per-play differential in the conference, but won just three of their eight league games. They were only 0-1 in one-score conference games and their turnover margin was underwater (-4), but hardly debilitating. I couldn't really come up with an explanation for their struggles. As it stands, Steve Addazio will likely be the beneficiary of their positive regression while Mike Bobo will have to settle for working for Will Muschamp.

Largest Average Discrepancy
You may have noticed this past season's Mountain West featured an abnormally large number of teams that saw their actual record differ significantly from their expected record based on YPP. With YPP data going back to 2005, I wanted to see if it had the largest average discrepancy (by absolute value). It did, narrowly edging out a conference from fifteen years ago. Before we get to splitting decimals though, here are the other conferences with the largest average disparity between their teams' actual record and expected record based on YPP.
The Sun Belt looked a lot different in 2011 than it does today. The conference had only nine teams, making it the smallest of our top five. With only nine teams, the high variance is also slightly less impressive as a few large outliers can have an outsized influence on the average. However, even though the conference was only nine deep, more than half the teams saw their expected record differ by more than .200 (the standard I use to rate a difference as 'significant').
If your memory of college football seasons runs together, 2015 was the year Michigan State stole a conference title and playoff bid from Ohio State. You can actually read the YPP recap here.
The Mountain West holds two of the top three spots on our list. This one is also recent enough that you can actually read the YPP recap here.
Then known as the Pac-10, the conference of champions is our surprise runner-up. The average difference was just .0001 less than this past year's Mountain West, our overall winner for largest average discrepancy.
In its twenty-year history, the Mountain West has had better years, but none where the standings and per play differentials were so mismatched. Put that on a trophy!