Wednesday, May 04, 2016

2015 Yards Per Play: SEC

We are closing in on the home stretch. Just two conferences to go. This week we examine the SEC. Here are the 2015 SEC standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA), and Net Yards Per Play (Net) numbers for each SEC team. This includes conference play only, with the championship game not included. Within each division, teams are sorted by Net YPP, with overall conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign and a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s Yards per Play (YPP). Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards per Play and Yards per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight-game conference schedule and 1.8 games over a nine-game one.
Over or underperforming by more than a game and a half in a small sample seems significant to me. In the 2015 season, which teams in the SEC met this threshold? Here are the SEC teams sorted by performance over what would be expected from their Net YPP numbers.
The SEC saw a pair of teams fall below and a pair of teams exceed their YPP expectations. For Missouri and South Carolina, the two teams that failed to reach their expected record based on YPP, the culprit was simple: close game results. Missouri and South Carolina finished a combined 0-6 in one-score conference games. While both teams were especially bad on one side of the ball (offense for Missouri and defense for South Carolina), it took several bad bounces for them to finish a combined 2-14 in SEC play. The two teams that vastly exceeded their YPP results also happened to meet in the SEC Championship Game. A confluence of factors allowed the Gators to win the SEC East in their first season under Jim McElwain. Florida finished 3-1 in one-score conference games, boasted the best in-conference turnover margin (+8), and scored three non-offensive touchdowns. For Alabama, the results are more mystifying. The Crimson Tide lost only once all season (by six points), and won just a single conference game by fewer than thirteen points. The Tide were not especially buoyed by turnover margin either, finishing a respectable, but hardly superb +1 in SEC play. However, the Tide did take advantage of unconventional touchdowns. They returned four interceptions for touchdowns in SEC play, including three against Texas A&M. The Tide continued to score in unconventional ways in the playoffs, returning a punt for a touchdown against Michigan State (not needed for the victory) and a kickoff for a touchdown against Clemson (vital to the win).
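For readers who want to see the mechanics, the regression-and-threshold procedure described above can be sketched in a few lines of Python. The league below is hypothetical (it is not the actual 2015 SEC data), and the simple least-squares fit merely stands in for whatever regression tool was actually used:

```python
# Hypothetical eight-game league: (team, Net YPP, conference win %).
league = [
    ("Team A", 1.5, 0.875), ("Team B", 0.8, 0.750), ("Team C", 0.2, 0.500),
    ("Team D", -0.3, 0.625), ("Team E", -0.9, 0.125), ("Team F", -1.3, 0.125),
]

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def flag_outliers(teams, threshold=0.200):
    """Flag teams whose actual win % differs from the regression's
    prediction by more than the threshold (1.6 games over eight)."""
    slope, intercept = fit_line([t[1] for t in teams], [t[2] for t in teams])
    return [(name, round(slope * net + intercept, 3), actual)
            for name, net, actual in teams
            if abs(actual - (slope * net + intercept)) > threshold]

print(flag_outliers(league))  # [('Team D', 0.416, 0.625)]
```

With this toy league, only Team D clears the .200 threshold, winning roughly 1.7 more games than its Net YPP predicts.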

2015 closed the book on the Steve Spurrier era at South Carolina. As a Columbia resident and degree holder from the university (I try to keep this a secret), I decided to conduct a review of Mr. Spurrier’s overall body of work in the SEC. We’ll start with some basic housekeeping. While Spurrier retired/resigned/quit halfway through the season, we are going to credit the win over Vanderbilt and the three other conference losses under Shawn Elliott to his account. This was his team after all, even if he couldn’t stand to see them play defense either.

Let’s start by looking at his SEC record. The following table lists the conference record for each SEC team since the 2005 season. Newcomers Missouri and Texas A&M are listed at the bottom.
Among the twelve tribes of the SEC, South Carolina ranks almost exactly in the middle over the last eleven years. They also rank third in the East, significantly behind Florida and Georgia, but also significantly ahead of once (and perhaps future) power Tennessee. One interesting, non-Gamecock related observation: Kentucky should be very ashamed. Their league record is worse than Vanderbilt’s over the past eleven seasons!

Let’s break that record down a little further. How have the Gamecocks performed against every SEC team? The following table lists South Carolina’s record against every SEC team since 2005. The table is sorted alphabetically with SEC East teams appearing first.
A few observations. Based on historical precedent, the Gamecocks performed well against the ‘Big 3’ in the East, compiling a 15-18 record with identical 5-6 marks against each team. That being said, it is easy to see where Spurrier padded his record. The Gamecocks went a combined 17-5 against Kentucky and Vanderbilt, collecting more than a third of his SEC victories (and exactly half of his wins versus the East) against that pair. Finally, outside of a perfect record against the Mississippi schools, the Gamecocks had a horrible track record against SEC West schools. Take away the wins against Ole Miss and Mississippi State, and the Gamecocks were just 5-18 against the West.

Finally, how did Spurrier perform as a favorite and as an underdog? The following table lists Spurrier’s record against conference foes in both roles. These results are straight up, not against the spread.
While this is not a perfect measure (I made no distinction between being a twenty-point favorite and a two-point favorite), it can give you an idea of how South Carolina was perceived by the tools bookmakers use to set the point spread and by the public once those spreads are made. For example, in his first season (2005), the Gamecocks were only favored in a pair of conference games. Contrast that with 2011, when the Gamecocks were favored in every league game save one. Overall, especially prior to the wheels sort of coming off at the end of his tenure, Spurrier was very good as a favorite (32-6 through 2013). By the same token, he was pretty bad as an underdog, especially in his final season. I’ll leave you with this bit of statistical minutiae: If Spurrier had won every SEC game in which the Gamecocks were favored and lost each one in which they were an underdog, he would have ended up with the same record he actually achieved: 45-43. Eerie.

Tuesday, April 26, 2016

2015 Adjusted Pythagorean Record: Pac-12

Last week, we looked at how Pac-12 teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.
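As a sketch of that conversion: with a standard Pythagorean form, the touchdown ratio maps to a winning percentage as below. The exponent is my assumption here (2.37 is a value often used for football Pythagorean records); the exponent actually used for APR may differ.

```python
# Sketch of the APR conversion, assuming a standard Pythagorean form.
# The exponent is an assumption, not necessarily the one APR uses.

def apr(off_td, td_allowed, exponent=2.37):
    """Touchdown ratio converted to a Pythagorean winning percentage."""
    return off_td ** exponent / (off_td ** exponent + td_allowed ** exponent)

def expected_wins(off_td, td_allowed, games, exponent=2.37):
    """Expected conference wins implied by APR."""
    return apr(off_td, td_allowed, exponent) * games

# Hypothetical team: 30 offensive TDs scored, 20 allowed, 8 league games.
print(round(apr(30, 20), 3))
print(round(expected_wins(30, 20, 8), 2))
```

Under these assumptions, a team scoring 30 touchdowns while allowing 20 over an eight-game league slate projects to roughly 5.8 wins.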

Once again, here are the 2015 Pac-12 standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, Pac-12 teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
A pair of Pac-12 teams saw their APR numbers differ significantly from their actual records. Oregon finished 7-2 in the Pac-12 despite allowing 40 touchdowns in league play! Meanwhile, Washington finished with a losing conference record despite scoring nine more touchdowns in Pac-12 action than their opponents. Close games played a small role in the disparity. Oregon went 3-1 in one-score games, while Washington was 1-2. However, neither team was especially lucky or unlucky in close games. Turnovers don’t do anything to augment the story either, as both had positive differentials in conference play. No, dominating wins and losses explain most of the disparity. While Oregon was still figuring things out early in the season, they lost by 42 points to Utah! That defeat tamps down their numbers. Couple that with the fact that the Ducks were unusually permissive on defense, and you can see why Oregon posted middling APR numbers. On the other hand, while Washington won just four of nine conference games, they were absurdly dominant in three of those wins. The Huskies defeated Arizona, Oregon State, and Washington State by a combined 126 points, with no win coming by less than five touchdowns!

Awkward segue.

One of the more interesting developments that has coincided with the Pac-12’s expansion has been the absolute disappearance of Colorado’s homefield advantage since joining the league. The table below lists Colorado’s home and road splits in conference play from their final six seasons in the Big 12 alongside the same splits from their first five seasons in the Pac-12.
While they were mediocre at home during their last few seasons in the Big 12, their home record was far superior to the duds they consistently laid on the road. However, since joining the Pac-12, their home and road records are effectively the same. Now, some might point out that Colorado’s move to the Pac-12 has corresponded with a cratering of the football program. And to that point, I would 100% agree. In the interest of analyzing this further, I decided to look at a metric that takes into account how well a team is ‘supposed’ to perform and judges them based on expectations, not raw results. I am talking, of course, about the Las Vegas line or point spread. For the uninitiated (and churchgoing) audience, the point spread is an unbiased look at who should win a certain game, and more importantly, by how much. Using the point spread, I calculated the Spread Adjusted Margin (SAM) for each Colorado conference game from 2005-2015 and determined the per-game averages for their home and away contests. The SAM is pretty easy to calculate. Here are two examples. Say Colorado is expected to beat a team by 3 points. They win by 7. Their SAM for this game is +4. That is, the margin they were supposed to win by (3) subtracted from how much they actually won by (7). Now say Colorado is expected to win by 10 points. Instead they lose by 2 points. Their SAM for this game is -12. That is, the margin they were expected to win by (10) subtracted from how much they actually won by (-2). Easy right? Here are the results separated by their conference affiliation.
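The SAM arithmetic from those two examples, with home and road splits reduced to per-game averages, looks like this:

```python
# Spread Adjusted Margin (SAM): actual margin of victory minus the margin
# the point spread expected. Positive means beating expectations.

def sam(expected_margin, actual_margin):
    return actual_margin - expected_margin

def average_sam(games):
    """Per-game average SAM over (expected, actual) margin pairs."""
    return sum(sam(e, a) for e, a in games) / len(games)

print(sam(3, 7))    # favored by 3, won by 7  -> 4
print(sam(10, -2))  # favored by 10, lost by 2 -> -12
```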
As you can see, there does appear to have been a real change since the Buffaloes joined the Pac-12. In their final six Big 12 seasons, the Buffaloes produced a SAM that was almost five points per game better at home. However, in Pac-12 play, the Buffaloes have about the same SAM at home and on the road. I have my own theories (cough cough Marijuana) as to why this might have occurred, but I’d be interested to hear if any readers have (preferably crackpot) theories on why this might be.

Wednesday, April 20, 2016

2015 Yards Per Play: Pac-12

Seven conferences down, three to go. This week we go over the Pac-12. Here are the 2015 Pac-12 standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA), and Net Yards Per Play (Net) numbers for each Pac-12 team. This includes conference play only, with the championship game not included. Within each division, teams are sorted by Net YPP, with overall conference rank in parentheses.
The methodology is the same as in the previous YPP recaps: Net YPP is a solid predictor of a team’s conference record (the correlation with conference winning percentage is around .66), so I flag any team whose actual winning percentage differs from the one predicted by their Net YPP by more than .200, or roughly a game and a half. In the 2015 season, which teams in the Pac-12 met this threshold? Here are the Pac-12 teams sorted by performance over what would be expected from their Net YPP numbers.
Stanford was the sole Pac-12 team to significantly overshoot or undershoot their expected record based on YPP. How did the Cardinal manage to do this? Well, it is a bit of a mystery. The Cardinal won eight conference games, with seven coming by at least ten points, which indicates they were pretty dominant. Their lone conference loss came by just two points, and they actually had a negative in-conference turnover margin. So why don’t their YPP numbers indicate a dominant team? I think the answer lies in the one game they lost. While the Cardinal fell to Oregon by just two points on a failed two-point conversion, the YPP numbers in the game were much starker. Oregon averaged over nine yards per play and outgained the Cardinal by three yards per play on average. Remove that game and Stanford’s Net YPP is +1.15 instead of their solid, but hardly elite, +0.74.
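Note that removing a game from a Net YPP figure works on the underlying yards and play totals, not on per-game averages. Here is a sketch of that mechanic; all of the totals below are hypothetical, since the post only reports Stanford's resulting net figures (+0.74 with the Oregon game, +1.15 without):

```python
# Recomputing Net YPP with one game excluded. All yardage and play totals
# here are hypothetical.

def net_ypp(yards_for, plays_for, yards_against, plays_against):
    """Yards per play minus yards per play allowed."""
    return yards_for / plays_for - yards_against / plays_against

def net_ypp_excluding(season, game):
    """Net YPP with one game's yards and plays removed from the totals."""
    return net_ypp(season["yf"] - game["yf"], season["pf"] - game["pf"],
                   season["ya"] - game["ya"], season["pa"] - game["pa"])

season = {"yf": 3250, "pf": 500, "ya": 2750, "pa": 550}   # nets to +1.50
bad_game = {"yf": 250, "pf": 50, "ya": 275, "pa": 55}     # one poor outing
print(round(net_ypp(season["yf"], season["pf"], season["ya"], season["pa"]), 2))
print(round(net_ypp_excluding(season, bad_game), 2))
```

Dropping the one lopsided game lifts the hypothetical net figure from +1.50 to +1.67, the same mechanic behind Stanford's jump from +0.74 to +1.15.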

Speaking of Oregon and Stanford, even casual college football observers know they have dominated the Pac-12 North since the conference split into a pair of divisions beginning in 2011. Either the Ducks or Cardinal have won the North each season and gone on to defeat the South champion. The South, on the other hand, has never had a repeat champion (UCLA appeared in the title game for two consecutive years thanks to postseason ineligibility at the other Los Angeles school), and four of the division’s six schools have represented the South in the title game in five seasons of divisional play. A half-decade into the new incarnation of the Pac-12, I wanted to take a look back and see which school has been the best in the South. Pac-12 teams play nine conference games, which includes four games against teams from the North. As those games can vary from very difficult (Oregon and Stanford) to piece of cake (Oregon State in 2015), I decided to look at only intra-divisional games. In other words, I only looked at contests involving Pac-12 South teams. The year-by-year and overall results are listed below.
There has been a great deal of parity at the top of the Pac-12 South. Arizona State, Southern Cal, and UCLA are separated by just a single game with the Bruins ranking as the best of the Pac-12 South five years in. Arizona has been middling, while conference newbies Colorado and Utah have struggled, especially the Buffaloes. After beating Utah to end the 2011 regular season, Colorado is in the midst of a 20-game losing streak to divisional foes. Yikes! Somewhere Gary Barnett is smiling. Or more likely, looking the other way while sexual assaults happen.

Thursday, April 14, 2016

2015 Adjusted Pythagorean Record: Mountain West

Last week, we looked at how Mountain West teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.

Once again, here are the 2015 Mountain West standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, Mountain West teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
Colorado State was the only Mountain West team to significantly outperform their APR numbers. The Rams actually allowed more touchdowns than they scored, but managed to win five of eight league games (their bowl game against conference rival Nevada is excluded from this analysis). The Rams won both of their one-score league games and were outscored by significant margins in each of their league losses. Boise State, San Diego State, and Utah State beat the Rams by a combined 70 points.

Since APR didn’t give us a whole lot to discuss, let’s talk about something else, namely the Mountain West’s standing among mid-major (Group of Five) leagues. With the advent of the College Football Playoff, the current structure of haves (Power Five) and have-nots (Group of Five) has been in place for just two seasons. The table below lists how each Group of Five conference, as well as the Group of Five independents (BYU is considered Group of Five here), fared against the others (regular-season games only) in 2014 and 2015.
After faring well against their fellow mid-majors in 2014, the Mountain West took a step back in 2015, winning just a third of their games against other Group of Five members. This could well be a one-year blip, so let’s look at things another way. This next table lists every current Mountain West team alphabetically and indicates the last season they finished ranked in the AP Poll and the last season during which they appeared in the AP Poll at any point.
Outside of Boise State, the results for the other Mountain West teams are not too great. San Jose State and Utah State each registered historic seasons in 2012, and while they have been contending for bowl games in the three seasons since, those were clearly outlier years. Similarly, Nevada rode Colin Kaepernick to a top-10 finish in 2010, but has not sniffed the polls since. While Fresno State flirted with a BCS bowl bid in 2013, they have not finished a season ranked in over ten years. I could go on. If the Mountain West wants to assert itself as the best mid-major conference, it needs a team to join Boise State in the national consciousness. San Diego State ran roughshod over the Mountain West in 2015, but the Aztecs stumbled in the non-conference, losing to California and Penn State as well as to South Alabama of the Sun Belt. Group of Five teams are at a disadvantage when they go up against Power Five schools, but Mountain West teams can improve their national profile by winning more games in the non-conference against teams of similar regard.

Wednesday, April 06, 2016

2015 Yards Per Play: Mountain West

Only four more conferences to go in our 2015 recap. This week we go over the Mountain West. Here are the 2015 Mountain West standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA), and Net Yards Per Play (Net) numbers for each Mountain West team. This includes conference play only, with the championship game not included. Within each division, teams are sorted by Net YPP, with overall conference rank in parentheses.
The methodology is the same as in the previous YPP recaps: Net YPP is a solid predictor of a team’s conference record (the correlation with conference winning percentage is around .66), so I flag any team whose actual winning percentage differs from the one predicted by their Net YPP by more than .200, or roughly a game and a half. In the 2015 season, which teams in the Mountain West met this threshold? Here are the Mountain West teams sorted by performance over what would be expected from their Net YPP numbers.
Only one team saw their actual record differ significantly from their expected record based on their YPP differentials. And boy did it ever differ. For the second time in three seasons, Hawaii finished winless in Mountain West play. The 2013 team also posted decent YPP numbers and improved to three league wins in 2014, so there is hope for new coach Nick Rolovich. So how did Hawaii go about losing all their conference games despite bad, but not horrendous, YPP numbers? Close games are not the culprit, as Hawaii lost only a single one-score league game (by a single point at New Mexico). No, turnovers told the story for the Warriors. Hawaii turned the ball over 26 times in their eight conference games, while only forcing six of their own. Their in-conference turnover margin of -20 was by far the worst in the conference (13 turnovers worse than second-to-last Wyoming). We’ll discuss more about this historic margin later. I am not making the argument that Hawaii was a ‘good’ team in 2015. They were shut out in each of their first three road games (albeit against strong competition in Ohio State, Wisconsin, and Boise State). However, they did show signs of life early in the year by beating Colorado of the Pac-12, thanks to a little assist from some bumbling officials. If their turnover margin was merely bad and not historically poor, the Warriors probably would have scrounged up at least one and potentially a pair of conference wins.

Hawaii was not the only Mountain West team with a historic in-conference turnover margin in 2015. While Hawaii was struggling with a -20 margin, San Diego State was rolling through the conference en route to an undefeated record and league title with a +19 turnover margin (excluding their championship game win over Air Force). To get a handle on what one might expect from these two polar opposites going forward, I decided to look at teams with similar extreme in-conference turnover margins and see how they performed the following year. Here we’ll define extreme as averaging two more or two fewer turnovers per game than your opponents. For an eight-game conference schedule, that would equate to a turnover margin of +16 or -16. Similarly, for a nine-game schedule that would be either +18 or -18. Hawaii and San Diego State were certainly unique in 2015. Only ten other teams since 2005 posted such an extreme turnover margin (Arkansas State also fit the criteria, but like the Warriors and Aztecs, their follow-up performance is unknown at this time). We’ll start with the teams with extreme negative turnover margins.
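That cutoff reduces to a one-line check (the Wyoming figure below is implied by the post's note that Hawaii's -20 was 13 turnovers worse than the second-to-last team):

```python
# The 'extreme' cutoff: averaging at least two more (or two fewer)
# turnovers per game than your opponents in conference play.

def is_extreme(turnover_margin, conference_games):
    """+/-16 over an eight-game schedule, +/-18 over a nine-game one."""
    return abs(turnover_margin) >= 2 * conference_games

print(is_extreme(-20, 8))  # Hawaii 2015: True
print(is_extreme(19, 8))   # San Diego State 2015: True
print(is_extreme(-7, 8))   # Wyoming 2015: False
```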
Obviously, teams with extremely poor turnover margins don’t tend to win a lot of games. These four squads combined to go 2-28 in conference play. Perhaps not surprisingly, three of the teams had new head coaches the following year. Oklahoma State elected to retain their head coach (probably since he was in his first season), and that has worked out pretty well for the Cowboys. New Mexico State is a bit of an oddity, as they became a college football independent after their turnover plagued final season in the WAC. That is why there is no follow up conference record listed for them. As for the other three teams, well, they all improved their conference record by at least two games with Oklahoma State and Wyoming qualifying for bowls. For what it’s worth, New Mexico State’s overall record improved from 1-11 to 2-10, so each member of the quartet improved the next season. A pessimist might point out each team had nowhere to go but up after their poor showings and a statistician might highlight the small sample size here. However, another statistician might bring up something about regression (or in this case progression) to the mean and opine that an extremely poor turnover performance is unlikely to be repeated. As for me, I would set the over/under on league wins at two for Hawaii in 2016.

Now let’s look at the other side of the coin. Here are teams with extreme positive turnover margins.
In a not too surprising development, teams with historically great turnover margins tend to have good records. These six teams combined to go 47-3, with Oregon and Toledo being the only teams to not win a conference title. Unlike the poor turnover teams, there was continuity at the head coaching position the following season, with only Oregon losing their head coach (to the NFL no less). For the most part, each team remained strong the next year, with only Kansas State falling out of contention for a conference title. Each team qualified for a bowl game in the following season, and two-thirds of the teams declined by one game or less. An optimist would likely say these teams had nowhere to go but down after compiling their pristine conference records, and once again, a point can be made about small sample sizes. However, as with Hawaii, San Diego State is unlikely to see their extreme turnover margin repeated. The Aztecs did not throw a single interception in conference play last year! While I expect them to be contenders in the conference in 2016, expecting another undefeated, scorched-earth run through the Mountain West is likely folly.

Thursday, March 31, 2016

2015 Adjusted Pythagorean Record: MAC

Last week, we looked at how MAC teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.

Once again, here are the 2015 MAC standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, MAC teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
Kent State was the lone MAC school to see a significant difference between their APR and their actual record. Once you look at their underlying offensive performance, the reason for this disparity is quite simple. Kent State scored six, yes six, offensive touchdowns in their eight conference games. This kind of futility often results in a one- or zero-win campaign. However, Kent State actually opened MAC play 2-1 by clustering their touchdowns at favorable times and playing decent defense. Despite their 2-1 MAC record, the Golden Flashes had already been outscored by 20 points. Over their final five conference games, only one would be decided by fewer than 13 points, and the Golden Flashes would be outscored by more than 18 points per game.

Despite their historic offensive ineptitude, Kent State fans might have at least a little reason for optimism heading into 2016. The following table lists the other MAC teams that have failed to score more than 10 offensive touchdowns in conference play and their follow-up performance the next year.
Based on an admittedly small sample size, it appears quite difficult to perform so poorly offensively for two consecutive seasons. Each team that scored 10 or fewer offensive touchdowns rebounded to score at least 23 in their epilogue. Three out of four schools also saw their conference record improve. This is perhaps not too surprising since their offenses returned from the abyss. In the interest of curbing the enthusiasm of Kent State fans, it should be noted that three of the four teams also felt compelled to change coaches after their dreadful offensive showings. Eastern Michigan was the only school to retain their coach, while the other three brought in fresh blood (or old fresh blood) to revitalize their teams. Barring an unforeseen set of circumstances, Kent State will be led by Paul Haynes (don’t worry if you didn’t know who their coach was) for the fourth consecutive year in 2016. We’ll see if he is able to coax a similar offensive improvement out of the Golden Flashes.

Wednesday, March 23, 2016

2015 Yards Per Play: MAC

Our 2015 conference recaps now take us to the Big Ten's little brother, the MAC. Here are the 2015 MAC standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each MAC team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s Yards per Play (YPP). Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards per Play and Yards per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. 
Over- or under-performing by more than a game and a half in a small sample seems significant to me. In the 2015 season, which teams in the MAC met this threshold? Here are the MAC teams sorted by performance over what would be expected from their Net YPP numbers.
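The prediction step described above amounts to a simple least-squares fit of conference winning percentage against Net YPP, then flagging teams whose actual record strays from the fitted line by more than .200. Here is a minimal sketch of that idea; the Net YPP and winning-percentage figures below are made up for illustration, not the actual 2015 MAC data:

```python
# Sketch of the Net YPP regression described above, using hypothetical
# (Net YPP, conference winning percentage) pairs for five fake teams.
import numpy as np

net_ypp = np.array([1.2, 0.6, 0.1, -0.4, -1.1])
win_pct = np.array([0.875, 0.625, 0.500, 0.250, 0.125])

# Least-squares fit: win_pct is approximately slope * net_ypp + intercept
slope, intercept = np.polyfit(net_ypp, win_pct, 1)
predicted = slope * net_ypp + intercept

# Flag teams whose actual record deviates from the prediction by more
# than the .200 threshold (1.6 games over an eight-game schedule)
THRESHOLD = 0.200
flags = np.abs(win_pct - predicted) > THRESHOLD
print(flags)
```

With evenly behaved inputs like these, no team trips the threshold; a team like Ohio would show up as `True` because its actual winning percentage sits well above the value the fitted line predicts from its Net YPP.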
Only two teams in the MAC met the threshold, and both just barely eclipsed the magic number. Ohio had the statistical profile of a slightly below average MAC team, but managed to win more than half their games and finish second in the MAC East. The Bobcats were 2-0 in one-score league games, but were hardly extremely lucky in that category. No, the most likely explanation for Ohio exceeding their YPP numbers is that they played well, but not great, in most of their wins, while they were absolutely destroyed in each of their three losses. Ohio won five league games, three of them by double digits, with an average margin of victory of just over 16 points. Meanwhile, each of their three league losses came by at least 24 points, and two came by at least 35.

On the other side of the coin, Massachusetts, in their MAC swan song, had a better statistical profile than Ohio, but won less than half as many games as the Bobcats. The Minutemen were a little unlucky, going 1-3 in one-score MAC games, but not significantly so. Whereas Ohio played horrendously in their three losses (outscored by 97 points), Massachusetts was competitive in almost all their games, dropping their six league losses by a total of just 66 points. The Minutemen were consistently below average, but probably deserved an extra win or two based on how they played. They end their disappointing four-year sojourn in the MAC with a 7-25 league record.

Frank Solich is the dean of MAC coaches, having joined the Bobcats prior to the 2005 season. Under his guidance, the Bobcats have experienced great success. They have played in three MAC Championship Games, made seven bowl appearances, and spent time in the top 25 of the AP Poll. However, the one accomplishment that has eluded Solich during his tenure is a MAC title. Here are the cumulative MAC standings since Solich has been in Athens, Ohio.
The Bobcats are tied for fourth overall in MAC winning percentage (and tied for first among teams from the East with Bowling Green) since 2005. However, while the three teams ahead of and tied with them have combined for eight titles, the Bobcats have not been able to break through. Meanwhile, Buffalo, Miami, and Akron have combined to win about a third of their league games since 2005, but own three league championships! As a wise man once said: I’d rather be lucky than good.

In another interesting bit of statistical minutiae, Toledo does not even have a MAC Championship Game appearance despite posting the third-best league mark since 2005! Part of the reason is that they play in the stronger MAC West, where Northern Illinois has won six consecutive division titles under three different head coaches.