Wednesday, May 24, 2017

2016 Adjusted Pythagorean Record: Sun Belt

Last week, we looked at how Sun Belt teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually. Once again, here are the 2016 Sun Belt standings.
And here are the APR standings with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only.
Finally, Sun Belt teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as a line of demarcation to determine if teams drastically over- or under-perform their APR. By that standard, no team saw their record differ significantly from their APR. Idaho and Georgia State were the biggest over- and under-achievers respectively, but we already touched on some reasons for that last week.
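If you want to see the conversion step spelled out, here is a minimal sketch. The Pythagorean exponent is a fitted constant; 2.37 is a common football value used here as a placeholder, not necessarily the exact one behind APR.

```python
def pyth_win_pct(off_tds: int, tds_allowed: int, exponent: float = 2.37) -> float:
    """Convert offensive TDs scored and TDs allowed into an expected
    winning percentage via the Pythagorean formula."""
    return off_tds ** exponent / (off_tds ** exponent + tds_allowed ** exponent)

# A team that scores 30 offensive TDs and allows 20 in conference play
# projects to roughly a .723 winning percentage, or about 5.8 expected
# wins over an eight-game conference schedule.
print(round(pyth_win_pct(30, 20), 3))      # 0.723
print(round(pyth_win_pct(30, 20) * 8, 1))  # 5.8
```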

Georgia Southern enjoyed a successful, perhaps the most successful, transition from FCS to FBS. In their first two seasons in the new classification, the Eagles went 18-7, won an outright conference title, won a bowl game, and played three Power 5 schools to within one score on the road. Despite their success, head coach Willie Fritz made what amounts to a lateral move (or perhaps even a downgrade) to Tulane. In his stead, the Eagles made an interesting hire. They tabbed Tyson Summers, a Georgia native who had spent just one season in Statesboro. That one season (2006) happened to be the worst in school history. Summers also came from the defensive side of the ball, as he had most recently been the defensive coordinator at UCF and Colorado State before coming to Georgia Southern. Summers did keep the run-first option attack the team had utilized to great success under his predecessor, but the performance left a lot to be desired. After averaging over six yards per carry in both 2014 and 2015 and scoring 109 combined touchdowns on the ground, the Eagles averaged under four and a half yards per carry and scored just 24 rushing touchdowns in 2016. This decline in production contributed to a 5-7 record in Summers' first season, in which the Eagles had to upset Troy in their final game just to get to five wins. Summers enters 2017 on the proverbial hot seat. So what are his chances of surviving and getting the Eagles back to a bowl? To answer that question, I looked at all first-year coaches since 2006 who oversaw a decline of at least three regular season wins and an increase of at least three regular season losses. That query produced a sample of 38 coaches. How did those coaches perform the following season? The results are summarized below.
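As an aside, with a flat coach-season table, a query like that might look something like the sketch below. The column names and the Georgia Southern rows are illustrative assumptions, not the actual dataset behind the table.

```python
import pandas as pd

# Hypothetical coach-season table; columns and values are illustrative.
df = pd.DataFrame({
    "team":       ["Georgia Southern", "Georgia Southern"],
    "year":       [2015, 2016],
    "coach":      ["Willie Fritz", "Tyson Summers"],
    "reg_wins":   [8, 5],
    "reg_losses": [4, 7],
}).sort_values(["team", "year"])

g = df.groupby("team")
df["new_coach"] = df["coach"] != g["coach"].shift()   # first-year coach
df["win_drop"]  = g["reg_wins"].diff() <= -3          # at least 3 fewer wins
df["loss_gain"] = g["reg_losses"].diff() >= 3         # at least 3 more losses

sample = df[df["new_coach"] & df["win_drop"] & df["loss_gain"]]
print(sample[["team", "year", "coach"]])              # -> 2016 Tyson Summers
```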
Overall, the teams improved by an average of about two regular season wins the next year. More than two thirds of the teams improved the next season, and only about ten percent saw a further erosion of their record. More than a quarter of the teams in the sample improved by at least three games, so there is hope for a large positive shift in fortunes for concerned Georgia Southern fans. However, I don't want to give those fans false hope. There is a decent, perhaps good, chance that Summers is not the right man for the job. Of that sample of 38 coaches, 29 are no longer with their teams. Of those 29, only five had a winning record at the end of their tenure with the team. If we include the coaches who are still active, 9 of 38 have winning records with their team. There are some success stories to point to like PJ Fleck, Skip Holtz, and Butch Jones, but while the odds of Georgia Southern improving in 2017 are good, the odds of them retaining Summers for the long haul may not be.

Thanks for reading my 2016 YPP and APR posts. It's been almost 20 weeks since Clemson upset Alabama, but we still have about 14 more weeks before college football season kicks off in earnest. Over the summer, this blog will add new content, but it won't be as frequent as the weekly updates you have (hopefully) been enjoying. I have some studies in the queue on polls and their accuracy and biases, so if that interests you, check back every now and then. I'll also be making another Vegas trip and documenting my college football investments. Once the season starts, I'll continue with my weekly picks column and perhaps add more original posts here and there as the spirit so moves me. Have a great summer. I know we can get through the rest of this long offseason together.

Wednesday, May 17, 2017

2016 Yards Per Play: Sun Belt

Nine conferences down, and now we move to our last FBS conference! Where did the time go? This week, we examine the Sun Belt. Here are the Sun Belt standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Sun Belt team. This includes conference play only. The teams are sorted by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign and a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP.

Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66.

Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight-game conference schedule and 1.8 games over a nine-game one. Over- or under-performing by more than a game and a half in a small sample seems significant to me. In the 2016 season, which teams in the Sun Belt met this threshold? Here are Sun Belt teams sorted by performance over what would be expected from their Net YPP numbers.
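If you want to play along at home, here is a minimal sketch of the regression and the .200 flag. The numbers are made up for illustration, not the actual 2016 Sun Belt figures.

```python
from scipy.stats import linregress

# Illustrative (team, Net YPP, actual conference win%) tuples.
teams = [
    ("Team A",  1.2, 0.750),
    ("Team B",  0.3, 0.625),
    ("Team C", -0.4, 0.125),
    ("Team D", -1.1, 0.500),
]

fit = linregress([t[1] for t in teams], [t[2] for t in teams])
print(f"r = {fit.rvalue:.2f}")  # correlation between Net YPP and win%

for name, ypp, actual in teams:
    expected = fit.intercept + fit.slope * ypp
    diff = actual - expected
    if abs(diff) > 0.200:  # the post's threshold for 'significant'
        tag = "over-performed" if diff > 0 else "under-performed"
        print(f"{name}: actual {actual:.3f}, expected {expected:.3f} ({tag})")
```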
Idaho, in their penultimate season before moving down to FCS, over-performed relative to their expected record, while FBS newcomer Georgia State under-performed. Idaho finished with the best in-conference turnover margin in the Sun Belt at +11 and was a solid 2-0 in one-score conference games. That little bit of good fortune coupled with a powerful passing attack helped the Vandals win six conference games for the first time in school history. Georgia State can also blame turnovers for their poor record. The Panthers posted an in-conference turnover margin of -9 (which, remarkably, was still not the worst in the league) and were also shoddy in the kicking game, making just 7 of 13 field goals in Sun Belt play.

If you were looking at the disparity between Sun Belt teams’ records and their expected YPP records and thought: ‘Hmmm. Georgia State sure did miss their expected record by a wide margin. I wonder if it was the widest margin ever.’ Well, I am here to answer that question. Here are the top (or bottom) ten mid-major teams since 2005 ranked by the largest disparity between their actual record and their expected record based on YPP.
Some notes on the table:

  • In an interesting statistical coincidence, 2016 produced the two teams that missed their expected record by the widest margin. Utah State and Georgia State, along with SMU in 2007, were the only teams to miss their expected record by at least .400 (a little more than three wins in an eight-game conference season). 
  • Georgia State actually appears on this list twice, which is perhaps one reason why Trent Miles is no longer coaching the team. 
  • Four of the ten teams on this list (Georgia State, SMU, New Mexico, and FIU) ended up losing their coach either via firing (sometimes at midseason) or resignation. 
  • Chris Petersen’s best Boise State team probably does not belong on this list, but the Broncos were so dominant (+3.68 YPP in a pretty good WAC) that the regression analysis believed they should have won more than all their games.

Wednesday, May 10, 2017

2016 Adjusted Pythagorean Record: SEC

Last week, we looked at how SEC teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually. Once again, here are the 2016 SEC standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, SEC teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as a line of demarcation to determine if teams drastically over- or under-perform their APR. By that standard, LSU significantly under-performed relative to their APR. There is not a lot of mystery as to why that is the case. LSU won five conference games. Four of those wins came by an average of 24 points. The other was a narrow victory over Mississippi State. Their three losses came by a combined 21 points, with the 10-0 loss to Alabama being their largest margin of defeat.

Two weeks ago in the APR write-up of the Pac-12, I looked at how well Pac-12 coaches over- or under-performed relative to their APR and expected YPP records. In that post, I only looked at the Pac-12 since it expanded to twelve teams (2011-2016). Taking inspiration from Senator Blutarsky, who noted Jim McElwain exceeded his YPP significantly in his first two seasons at Florida, I decided to do the same with the SEC. However, since it just means more in the SEC, I decided to go back as far as I have APR and YPP data - 2005. To qualify for inclusion on the leaderboards, coaches had to have at least three full SEC seasons under their belts in the twelve-year period from 2005 through 2016. These criteria produced 27 coaches ranging from greats like Saban and Meyer to forgotten men like Croom and Dooley. Finally, before we get to the actual tables, just a housekeeping note. For coaches who did not finish a season (see Les Miles in 2016), I used the full season difference and credited each coach with a share proportional to the percentage of the season he coached. For example, LSU was 1.88 games worse than expected in APR in 2016. Miles coached two (of eight) conference games, so he receives 'credit' for negative 0.47 wins (-1.88*.25). The rest goes to Ed Orgeron, who receives 'credit' for negative 1.41 wins (-1.88*.75). Without further ado, here are SEC coaches ranked by the average number of wins per season they exceeded their APR.
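In code, that proration rule is just a weighted split. A minimal sketch, using the LSU example from the housekeeping note above:

```python
def prorated_wins(season_diff: float, games_coached: int, conf_games: int = 8) -> float:
    """Credit a coach with a share of a season's wins-above-APR in
    proportion to the conference games he coached."""
    return season_diff * games_coached / conf_games

# LSU 2016 was 1.88 games worse than its APR:
print(round(prorated_wins(-1.88, 2), 2))  # Miles, 2 of 8 games   -> -0.47
print(round(prorated_wins(-1.88, 6), 2))  # Orgeron, 6 of 8 games -> -1.41
```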
Chizik only coached for four seasons, so sample size is obviously an issue here. His national championship team was almost three wins better than their APR, and his next team finished .500 in conference play despite an APR of only about two wins. His other two teams were pretty neutral with regard to their APR, but that sixteen-game run puts him in front. Pinkel and Tuberville had similarly short tenures (remember this only includes Tuberville's stint on The Plains from 2005-2008), but fielded several teams that were better than their APR. For coaches with a significant tenure, Miles and Richt exceeded their expected records by about two fifths of a win on average. At the other end of the spectrum, you will find a plethora of Vanderbilt coaches consistently under-performing their peripherals. James Franklin, Bobby Johnson, and especially Derek Mason all occupy space in the bottom quartile. Houston Nutt coached at two schools during this time period, and while his three Arkansas teams under-performed by about half a win per season, his four Ole Miss squads were even worse, as they averaged more than a full win less than their APR!

Now let's look at Yards Per Play. Here are the SEC coaches ranked by the average amount they exceeded their expected record based on YPP. Keep in mind that while APR was based on wins (i.e. +.500 equals half a win greater than expected), YPP is based on winning percentage. Thus, Tommy Tuberville's +.135 translates to a little more than one win (.135*8 = 1.08) per conference season.
Once again, we see some familiar faces at the top. Chizik and Tuberville were first and third respectively in APR and are second and first in YPP. Miles is once again the longest-tenured coach in the top five, while Richt is closer to average here. The bottom of the list also looks very similar, with Ed Orgeron bringing up the rear. If you look back at the APR numbers, he was also fifth from the bottom there. I was a big fan of Orgeron's hiring last season, but these tables give me pause. On the one hand, Orgeron's teams have under-performed in each of his nearly four full seasons in charge. In addition, while the 2016 team was not totally his, keep in mind LSU was a consistent over-achiever, at least relative to their peripheral stats, under Miles. On the other hand, Orgeron went a decade between head coaching jobs, appears to have matured in the interim, and guided another team to a solid finish after a mid-season firing. Perhaps we shouldn't judge 2016 too harshly with all the turmoil surrounding the program. However, if 2017 plays out like 2016 did, with LSU blowing out five conference opponents while losing a semi-competitive game to Alabama and two other close games in the SEC, not only will we have further evidence of a trend, but Orgeron will also find himself squarely on the hot seat.

Wednesday, May 03, 2017

2016 Yards Per Play: SEC

Hard to believe, but we only have two more conferences to review. This week, we head south to the SEC. Here are the SEC standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each SEC team. This includes conference play only, with the championship game not included. The teams are sorted by division and then by Net YPP, with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign and a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP.

Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66.

Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight-game conference schedule and 1.8 games over a nine-game one. Over- or under-performing by more than a game and a half in a small sample seems significant to me. In the 2016 season, which teams in the SEC met this threshold? Here are SEC teams sorted by performance over what would be expected from their Net YPP numbers.
Arkansas was the lone SEC team to significantly over-perform relative to their expected record. Arkansas was not particularly lucky in terms of one-score games (1-1) or turnover margin (-6) in SEC play. No, the Hogs had the profile of a one-win conference team thanks to an abysmal defense. The Hogs allowed nearly eight yards per play to SEC opponents and really had only one good defensive showing. They held a limited Florida offense to ten points and under five yards per play. If we remove that game, the Hogs allowed over 8.2 yards per play to their other seven conference opponents. Two teams averaged north of ten yards per play against Arkansas, and six conference opponents averaged more than seven yards per play against them.

A few years ago, I penned this Pulitzer-caliber post about Les Miles and his uncanny inability to cover the spread (or, more accurately, his teams' inability to cover the spread). Since we are looking at the SEC this week and since Miles' SEC career is over, I decided to reexamine his performance against the spread relative to his conference contemporaries. Let's get degenerate.

Miles coached at LSU for eleven full seasons, and parts of a twelfth, beginning in 2005. In that span, 22 other coaches have spent at least four seasons as SEC head coaches. The following table lists those 23 coaches ranked by their winning percentage against the spread (ATS) in conference games (championship and bowl games excluded). I cheated a little and included Ed Orgeron even though he does not quite have four full seasons under his belt, since he did succeed Miles at LSU.
A few observations.

  • Despite being forced to pay a premium as the most recognized team in college football, backing Nick Saban and Alabama has been a winning proposition for gamblers. Since coming to Tuscaloosa, Saban has covered over 58% of the time against SEC opponents. 
  • Look at the three Auburn coaches since 2005 with the exact same ATS records. Eerie. 
  • While he never quite had the reputation in gambling circles of Miles, Mark Richt didn't exactly inspire a lot of ATS confidence for Georgia backers. 
  • Miles had a winning ATS conference record in just one season, but it was quite a doozy. His 2011 team went 7-1 ATS. If we remove that outlier year, his ATS conference record drops to 29-49-4 (a .372 winning percentage; see the sketch after this list for how pushes figure into that math). 
  • While Miles does not quite bring up the rear, his tenure is more than double the length of those of the two men with a worse conference ATS winning percentage. 
  • And speaking of the guy in last place, he may join Miles in the unemployment line soon if the Aggies continue to struggle relative to their expectations.
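As promised above, here is a minimal sketch of the ATS percentage math. The push-excluded convention is inferred from the 29-49-4 example; the function name is mine.

```python
def ats_win_pct(wins: int, losses: int, pushes: int = 0) -> float:
    """ATS winning percentage with pushes thrown out, which appears to
    be the convention here (29-49-4 works out to .372)."""
    return wins / (wins + losses)

print(round(ats_win_pct(29, 49, 4), 3))  # 0.372
```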
So we know Miles was not good at covering the spread, but what if we break things down further? How did his teams perform in different roles? The following table lists LSU's performance ATS versus SEC foes under Miles in the roles of favorite, double-digit favorite, and underdog.
Miles was a little better as a favorite, but you would have still made money betting against him in both roles. If nothing else, his teams were consistent, as there was not a great deal of difference in their ATS numbers as a standard favorite and a double-digit favorite.

Let's look at one more angle. How did his teams perform ATS at home and on the road against SEC opponents?
So much for that Death Valley aura. On the road, his teams were basically a coin flip to cover, but the Tigers were horrible at home under Miles, covering just over 36% of the time. In fact, in the first four years of his tenure at LSU, his teams covered just once in home conference games!

Miles entertained college football fans for over a decade in Baton Rouge. He brought us a two-loss national champion, one of the best teams to not win the national title, the Tennessee Waltz Game, an attempt to call a timeout on a change of possession (this was one season before the ill-fated rule that mandated the game clock start when the play clock started between possessions went into effect, so maybe he was just ahead of the curve), a lot of grass eating, and of course, the final play (thus far) of his coaching career. I could never tell if he was a college football genius or the football equivalent of Homer Simpson, living a charmed life despite being overwhelmingly incompetent. The truth was probably somewhere in the middle, but regardless, college football won't be as fun without him around.

Wednesday, April 26, 2017

2016 Adjusted Pythagorean Record: Pac-12

Last week, we looked at how Pac-12 teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.

Once again, here are the 2016 Pac-12 standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, Pac-12 teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as a line of demarcation to determine if teams drastically over- or under-perform their APR. By that standard, Stanford exceeded their expected record and UCLA under-performed relative to their expected record. Last week, we went over some reasons regarding UCLA's poor record, so let's focus on Stanford. The Cardinal were not particularly lucky in one-score games, as they played in just one and lost it to Colorado. Of course, while the final score may not say so, their game against UCLA was for all intents and purposes a one-score affair. The Cardinal scored late to take the lead and then scored a defensive touchdown on the final play of the game for a misleading final score. It's just too bad Brent Musburger was not calling the game. What about turnover margin? No, Stanford was middling in that regard, posting a -1 in-conference turnover margin. So how did Stanford manage to drastically exceed their expected record? The answer lies in their two games with the Washington schools. Over an eight-day stretch in late September and early October, Stanford lost to Washington and Washington State by a combined margin of 64 points. These two blowouts dragged down their scoring margin and tarnished their overall profile despite the Cardinal winning six of their other seven conference games.

I have been blogging about Yards Per Play and APR for FBS conferences for nearly two years now, and I have data going back to 2005. With that in mind, I wanted to see if some coaches had any particular knack for exceeding or failing to exceed their expected record based on YPP and APR. As many coaches have come and gone in the twelve seasons for which I have data, I decided to limit the scope to Pac-12 teams (for obvious reasons) and to seasons since 2011, when the conference last expanded. I arbitrarily decided to only look at coaches who coached for at least three seasons so that no one-year wonders are unfairly represented. Obviously, using three years and beginning in 2011 means we miss out on quite a few notable Pac-10/12 coaches, including Pete Carroll, Chip Kelly, and Jeff Tedford among many others. Perhaps a future post will be more inclusive. Anyway, here are all Pac-12 coaches since 2011 who have coached at least three seasons in the conference, ranked by the average number of wins per season they exceeded their APR.
I placed an asterisk by Steve Sarkisian as I elected not to count any of his games from 2015, when he was relieved of his duties after three conference games. APR and YPP numbers are done at the season level, so it made sense to ignore the three games he coached in 2015. Stanford had the largest positive difference of any Pac-12 team between their APR and actual record in 2016, and that is nothing new. Since 2011, David Shaw leads all Pac-12 coaches in the average number of wins by which he exceeds his APR. Todd Graham is a distant second, with the recently fired Mark Helfrich third and Rich Rodriguez fourth. On the other end of the spectrum, Jim Mora, Sonny Dykes, Chris Petersen, and Mike MacIntyre have averaged more than half a win fewer than expected based on their APR. Some of this can be blamed on small sample size, as Washington was nearly two and a half games below their APR last season, which significantly skews the data considering Petersen has only been in Seattle for three years. Similarly, UCLA was more than two and a half games worse than their APR in 2016, which negatively impacts Jim Mora's overall numbers.

Let's now turn our attention to Yards Per Play. Using the same criteria previously outlined, here are the Pac-12 coaches sorted by the average amount they exceeded their expected record based on YPP.

Keep in mind that while APR was based on games (i.e. David Shaw's +.603 means his teams exceeded their APR by six tenths of a game on average), the expected record based on YPP is based on winning percentage. Thus, Todd Graham's +.142 translates to about 1.3 games over a nine-game conference season (.142*9). Once again, Graham and Shaw are at the top of the list. And again, Mora, Dykes, and Petersen are near the bottom, while MacIntyre has been more average by this measure. As I mentioned earlier, sample size is an issue when looking at these numbers.

So what do these numbers mean? Is Chris Petersen overrated as a coach because his teams appear to under-perform their records based on things like YPP and APR? Maybe there is something systemic to his teams that makes them under-perform. Or maybe this is all 'noise' and the result of one bad season in a sample size of three. I would think Washington fans would be happy with his teams under-performing in these metrics as long as Washington continues to contend for Pac-12 titles and College Football Playoff appearances. On the other hand, David Shaw and Todd Graham do seem to always have their teams winning more games than we might otherwise suspect based on their per play averages and the number of touchdowns they score and allow. Many factors influence which teams win football games, but it appears, at least on the surface, that David Shaw and Todd Graham have found something on the margins that allows their teams to win more than we might otherwise expect.

Wednesday, April 19, 2017

2016 Yards Per Play: Pac-12

Seven conferences down, three to go. This week, we head even further west and examine the Pac-12. Here are the Pac-12 standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Pac-12 team. This includes conference play only, with the championship game not included. The teams are sorted by division and then by Net YPP, with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign and a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP.

Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66.

Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight-game conference schedule and 1.8 games over a nine-game one. Over- or under-performing by more than a game and a half in a small sample seems significant to me. In the 2016 season, which teams in the Pac-12 met this threshold? Here are Pac-12 teams sorted by performance over what would be expected from their Net YPP numbers.
Four Pac-12 teams saw their expected record differ significantly from their actual record. Colorado and Washington State vastly over-performed relative to their expected records while Arizona and UCLA under-performed. Colorado went from the basement to the penthouse thanks to better play, but also thanks to a 3-1 record in one-score conference games, including a 10-5 baseball slugfest against Stanford that featured a Todd Helton grand slam. Washington State opened 2016 with a loss to an FCS opponent for the second consecutive season, but went 3-0 in one-score conference games to put themselves in contention for the North division title. Arizona suffered through a great deal of injuries en route to their worst conference record since 2003, but they were not particularly unlucky in one-score games. No, the Wildcats can blame turnovers. They forced only six in Pac-12 play while committing 19. Their in-conference turnover margin of -13 was eight worse than that of the team with the second-worst in-conference turnover margin, Oregon. Finally, UCLA began the year in the top 20, but finished just 4-8 (2-7 in Pac-12 play). The Bruins also suffered through their share of injuries, but were mostly done in by their poor record in close games (0-3 in one-score conference games).

The 2016 Pac-12 Championship Game was a flashback of sorts. Former powers Colorado, a school that had not won a conference title since 2001, and Washington, a school without a conference title since 2000, faced off. That comparison is a little misleading though. While Washington has been a consistent bowl team since the arrival of Steve Sarkisian in 2009, Colorado had only played in one bowl in the last decade and had not finished with a winning record since 2005. Colorado improved significantly in 2016, going from 1-8 in Pac-12 play the previous year to 8-1. The improvement of seven games in conference play is one of the largest in college football history. Meanwhile, Washington came into 2016 with a great deal of hype. The Huskies were far better than their 4-5 conference record indicated in 2015 and were highly thought of by many statistical measures, including two on this very blog. Washington lived up to the hype, doubling their conference win total, winning the Pac-12 championship, and advancing to the College Football Playoff. The two Pac-12 Championship Game participants improved by a combined total of eleven conference wins in 2016. This got me wondering about the largest combined improvement by championship game participants. I looked at every FBS conference that had divisional play, from the SEC to the WAC (moment of silence), and calculated the combined conference play improvement of the championship game participants. Surely this Pac-12 Championship Game was historic. Offhand, I could not remember a larger combined improvement. Ah, but my memory seems to have abandoned me in my old age. Here are the five largest combined increases in conference wins by championship game participants.
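For the curious, the combined-improvement number is simple to compute. Here is a minimal sketch using the Colorado and Washington figures cited above; the table layout and column names are illustrative, not the format of the underlying data.

```python
import pandas as pd

# Conference wins by season for the two participants.
recs = pd.DataFrame({
    "team":      ["Colorado", "Colorado", "Washington", "Washington"],
    "year":      [2015, 2016, 2015, 2016],
    "conf_wins": [1, 8, 4, 8],
})

def combined_improvement(participants, year, records):
    """Sum of year-over-year changes in conference wins for the
    championship game participants."""
    total = 0
    for team in participants:
        wins = records[records["team"] == team].set_index("year")["conf_wins"]
        total += wins[year] - wins[year - 1]
    return total

print(combined_improvement(["Colorado", "Washington"], 2016, recs))  # 11
```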
I had already forgotten about the 2013 SEC Championship Game. Missouri rebounded from a rough start to their SEC membership by winning seven conference games and capturing the East. Auburn fired Gene Chizik after a winless conference campaign and rode some Gus Malzahn magic all the way to the final BCS National Championship Game. It has been smooth sailing for both Tigers ever since. The aforementioned Pac-12 Championship Game comes in second, a game behind the SEC. Conference USA has two entries tied for third. In 2005, the first year of divisional play in Conference USA, Tulsa doubled their conference win total from the previous season (when they were in the WAC). Another newcomer, Central Florida, won seven more conference games than the previous season (when they were in the MAC). Ten years later, Western Kentucky rode one of the best offenses in Conference USA history to a four-game improvement and a league title. They faced a Southern Miss team that won seven conference games after winning just two over the previous three seasons. Finally, the 2010 MAC season saw Miami of Ohio improve by six wins and pull off a massive upset in the championship game against a Northern Illinois team that went unbeaten in league play. While it wasn’t quite the most historic conference championship game of all time, the Pac-12 can take solace that its title game matchup was among the most unusual college football events in recent memory.

Wednesday, April 12, 2017

2016 Adjusted Pythagorean Record: Mountain West

Last week, we looked at how Mountain West teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.

Once again, here are the 2016 Mountain West standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, Mountain West teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as a line of demarcation to determine if teams drastically over- or under-perform their APR. By that standard, New Mexico outperformed their APR and Utah State underperformed. Both teams also over- and under-performed, respectively, their expected records based on YPP, and we discussed some of the reasons last week. Read last week’s post if you’re interested in why the Lobos and Aggies boasted records that differed from what we might expect.

Take another look at the Mountain West standings. I’ll wait. Did you notice that almost every team in the Mountain Division had a winning conference record? Yep. Five of the six teams won at least five of their eight conference games and three won six. Is this unique? How often does one division produce so many teams with winning records? I’m glad you asked.
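A minimal sketch of how you might scan for this, assuming a flat standings table; the win totals below mirror the 2016 Mountain Division pattern, but the layout and column names are illustrative.

```python
import pandas as pd

# One division-season of illustrative standings rows.
standings = pd.DataFrame({
    "year":        [2016] * 6,
    "conference":  ["MWC"] * 6,
    "division":    ["Mountain"] * 6,
    "conf_wins":   [6, 6, 6, 5, 5, 1],
    "conf_losses": [2, 2, 2, 3, 3, 7],
})

standings["winning"] = standings["conf_wins"] > standings["conf_losses"]
counts = standings.groupby(["year", "conference", "division"])["winning"].sum()
print(counts[counts >= 5])  # divisions where 5+ teams had winning records
```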

Thankfully, divisional play only goes back a quarter century, so I didn’t have to look back too far. It began in 1992, when the SEC opened Pandora's box by expanding to twelve teams, splitting into two divisions, and hosting a championship game. Several conferences followed suit throughout the ’90s, and now every conference except the Big 12 and Sun Belt features two divisions. I looked at all those FBS divisions since 1992 and discovered this phenomenon has only happened four times, but each occurrence has come in the past three seasons. Here is each instance listed chronologically.

Pac-12 South 2014
The Pac-12 plays nine conference games, which means teams cannot finish with a .500 conference record. Depending on how the schedule shakes out, this gives a team that might have only finished 4-4 an opportunity to steal an extra game and eke out a winning record. It also helps when the last place team in the division does not win a game and in effect donates a victory to the other five members. Perhaps the Pac-12 can develop a plaque or award to honor the 2014 South Division as the first to feature five teams with winning conference records.

Mountain West Mountain 2015
2016 was not the first time the Mountain West boasted five teams with winning conference records. It also happened the year before. While the last place team in the division did manage a pair of conference wins, they came in games against the West Division, so the Cowboys did not win a game against their division foes.

Big 10 West 2016
Personally, I think this one deserves an asterisk, and perhaps two, because this division has seven teams and the conference plays nine games. Despite winning three conference games between them, Illinois and Purdue did not beat any of the other five teams in the division. They merely provided them with a combined ten conference wins.

Mountain West Mountain 2016
Not only did the Mountain West pull off this improbable accomplishment two seasons in a row, they did it with a nice ‘worst to first’ story as Wyoming won the division after finishing last in 2015. Again, the last place team in the division, Utah State, did not win a single game against divisional foes.

The past two seasons have truly been unique in the Mountain West. The Mountain Division has produced five teams with winning conference records both seasons despite not having the presence of a seventh team or a ninth conference game. It doesn’t mean a great deal in the grand scheme of things. In fact, most people probably didn’t even realize it happened, but it shows that if you pay close attention you might see something historic.