Thursday, June 06, 2019

2018 Adjusted Pythagorean Record: Sun Belt

Last week we looked at how Sun Belt teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.
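To make the conversion concrete, here is a minimal sketch of the APR calculation in Python. The Pythagorean exponent (2.37, a common value in football analytics) and the touchdown totals are purely illustrative assumptions; the post does not specify the exact exponent APR uses.

# Minimal sketch of the APR idea: convert the ratio of offensive touchdowns
# to touchdowns allowed into an expected winning percentage, then into wins.
# The exponent of 2.37 is a common football Pythagorean value used here only
# for illustration; the actual APR exponent is not given in this post.

def apr_expected_wins(off_tds, tds_allowed, games, exponent=2.37):
    """Expected wins from a Pythagorean expectation on touchdowns."""
    win_pct = off_tds ** exponent / (off_tds ** exponent + tds_allowed ** exponent)
    return win_pct * games

# Hypothetical team: 30 offensive touchdowns scored, 20 allowed, eight
# conference games played.
print(round(apr_expected_wins(30, 20, 8), 2))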

Once again, here are the 2018 Sun Belt standings.
And here are the APR standings with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, Sun Belt teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as a line of demarcation to determine whether or not a team significantly over or under-performed relative to their APR. By that standard, Troy significantly over-performed and both Georgia State and Texas State under-performed. Troy was a little lucky to finish 7-1 in conference play as they were 2-0 in one-score games. Georgia State was 0-1 in one-score games and they finished next to last in turnover margin in Sun Belt play (-6). However, the main reason their APR is significantly better than their actual record is their performance in their lone conference win. In their Sun Belt opener, the Panthers rolled Louisiana-Monroe 46-14. In their other seven conference games (all losses of course), the Panthers permitted their opponents an average of five offensive touchdowns per game! Meanwhile, Texas State also under-performed relative to their YPP numbers and we discussed some reasons for that last week, so peruse the back catalog of posts to catch up.

Head Coaching Turnover in the Sun Belt
The Sun Belt will welcome four new head coaches in 2019. Appalachian State, Coastal Carolina, Texas State, and Troy will all be taking the field without the men who led them in 2018. Some of those teams lost coaches thanks to their great success (Appalachian State and Troy), others thanks to their lack of success (Texas State), and others thanks to retirement (Coastal Carolina). That head coaching turnover is tied for the most the Sun Belt has experienced since the end of the 2005 season. However, the turnover in the Sun Belt is even more significant when you consider that the other year the league welcomed four new coaches was 2018. Even if you disregard Coastal Carolina’s coaching change entering the 2018 season (when Joe Moglia returned from a one-year sabbatical to deal with health issues), that still means seven of the league’s ten teams have changed coaches in the past two seasons. Blake Anderson (Arkansas State) and Matt Viator (Louisiana-Monroe) are the resident deans of Sun Belt coaches as they enter their sixth and fourth years respectively. To give you an idea of just how much turnover there has been, take a look at the following table. It lists all the Sun Belt coaching changes since the end of the 2005 season (counted in the table as the beginning of the 2006 season) along with the reason for the change.
So what does this mean for the Sun Belt in 2019? It means the league is in transition. Appalachian State and Troy, two programs that have won at least a piece of the last three Sun Belt titles and finished a combined 41-7 in Sun Belt play since 2016, lost their head coaches. In addition, Coastal Carolina, a perennial FCS playoff participant, will also be without their head coach as they enter their third season at the FBS level. If a lower-level team were to make a run at the Sun Belt title, this might be their best shot. The Sun Belt has been won by a (current) team other than Appalachian State, Arkansas State, Georgia Southern, or Troy just once since 2006 (Louisiana-Lafayette tied for the title in 2013 with Arkansas State).

Well, that does it for our conference recaps. Unfortunately, we still have about twelve weeks before the season gets started in earnest (only eleven until some sweet Week Zero action). In the intervening weeks, I'll be making another trip to Vegas with a requisite betting recap post. In addition, I will also have some intermittent posts on the NFL. More specifically, the Adjusted Pythagorean Record (APR) in the NFL. Be on the lookout for that this summer. And, as always, thanks for reading.

Thursday, May 30, 2019

2018 Yards Per Play: Sun Belt

At long last, we come to the final conference in our offseason recaps. This week, we will be reviewing the Sun Belt.

Here are the Sun Belt standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Sun Belt team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. In the 2018 season, which teams in the Sun Belt met this threshold? Here are Sun Belt teams sorted by performance over what would be expected from their Net YPP numbers.
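As a rough illustration of that method, the sketch below fits the regression and applies the .200 threshold to a made-up six-team league. All numbers are placeholders, and numpy's polyfit simply stands in for whatever regression tool is actually used.

import numpy as np

# Hypothetical conference: (team, Net YPP, actual conference winning pct)
teams = [
    ("Team A",  1.4, 0.875),
    ("Team B",  0.9, 0.625),
    ("Team C",  0.1, 0.625),
    ("Team D", -0.3, 0.500),
    ("Team E", -0.8, 0.250),
    ("Team F", -1.3, 0.125),
]

net_ypp = np.array([t[1] for t in teams])
actual = np.array([t[2] for t in teams])

# Linear regression: predicted winning pct = slope * Net YPP + intercept
slope, intercept = np.polyfit(net_ypp, actual, 1)

# Flag teams whose actual record differs from the prediction by more than .200
for name, ypp, act in teams:
    diff = act - (slope * ypp + intercept)
    if abs(diff) > 0.200:
        label = "over-performed" if diff > 0 else "under-performed"
        print(f"{name} {label} by {abs(diff):.3f} ({diff * 8:+.1f} games over eight)")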
Coastal Carolina, in their second year of play at the FBS level, significantly exceeded their expected YPP record and nearly finished bowl eligible. The Chanticleers were a decent, but hardly amazing, 2-1 in close Sun Belt games; the bigger factor was that their YPP numbers were wretched thanks to multiple blowouts, which dragged their expected record well below their actual one. Their other five Sun Belt losses all came by double digits, with four coming by twenty or more points. On the other end of the spectrum, Texas State significantly under-performed thanks to an 0-3 mark in close conference games. A little bit of offense would have gone a long way for the Bobcats as they scored fourteen or fewer points in five of their seven conference losses.

Sun Belt Ranked Teams
In 18 seasons of existence at the FBS level, the Sun Belt has never had a team finish the season ranked in the top 25 of the AP Poll. A Sun Belt team has managed to climb into the regular season AP poll twice, but both times, they lost their very next game. Following the 2015 season, a former Sun Belt team did manage a spot in the final poll, but the conference itself is still searching for its first ranked finish. Appalachian State came agonizingly close this past season, finishing 26th, and the conference as a whole finished with an unprecedented three teams receiving votes in the final AP Poll. The following table lists every Sun Belt conference team that received at least one vote in the final iteration of the AP Poll for a given season.
While the conference has had just eight teams receive votes in the final AP Poll, five of those instances have occurred in the past four seasons, with three of those spots belonging to teams that were playing at the FCS level six seasons ago (Appalachian State and Georgia Southern). Sun Belt teams have almost no margin for error when it comes to finishing ranked (as evidenced by the 2018 Appalachian State team that finished 11-2 with an overtime loss at Penn State), but the conference has improved in the past half-decade. Before the next round of realignment cranks up and changes college football as we know it, I predict we will see a Sun Belt team finish in the top-25.

Thursday, May 23, 2019

2018 Adjusted Pythagorean Record: SEC

Last week we looked at how SEC teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.

Once again, here are the 2018 SEC standings.
And here are the APR standings with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, SEC teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as an arbitrary line of demarcation and by that standard, Texas A&M significantly over-performed relative to their APR. The Aggies also exceeded their expected record based on YPP and we went over some reasons for that last week, so I won’t rehash that here. On the other end of the spectrum, Missouri and Mississippi State significantly under-performed relative to their APR. Neither team was significantly unlucky (combined 1-3 in one-score conference games), but they both showed signs of dominance. Three of Missouri’s four conference wins came by at least three touchdowns while all of Mississippi State’s conference wins came by at least fourteen points.

Mississippi State and the Dominance of a .500 Record
Speaking of Mississippi State…
The Bulldogs undershot their APR by over two and a half games. Going back to 2005, that is one of the largest negative discrepancies between a team’s actual record and their APR (in the BCS/Power Five).
That in itself is a pretty amazing ‘accomplishment’, but when I looked closer at the numbers I was shocked at Mississippi State’s performance in their conference wins and their conference losses.
In their four conference losses, Mississippi State managed a solitary offensive touchdown. However, the defense was still solid. Extrapolated to a full conference season, their average of 2.25 touchdowns allowed per game (18 over an eight-game season) would have ranked fifth in the SEC in 2018. If the offense had shown up even a little, the Bulldogs could have split those four games and been eyeing a top-ten finish in their bowl game.

Obviously the level of competition is a factor in their divergent performance in the wins and losses. Their four conference losses all came to teams that finished the season ranked in the AP top fifteen while two of their four wins came against the dregs of the SEC West (Arkansas and Ole Miss). Still, the Bulldogs also notched convincing wins against Auburn and Texas A&M and held Alabama to their lowest regular season point total so the potential for a memorable season was there. Taken together, the first year of the Joe Moorhead era feels like a waste. With an opportunity to salvage a ranked finish in the Outback Bowl, the Bulldogs somehow lost to Iowa despite allowing just 199 yards of total offense. The Bulldogs were much better than your typical five-loss also-ran (12th in SRS, 8th in S&P+), but coaches are remembered for wins and losses, not the wet dreams of stat nerds. I don’t believe Moorhead is in trouble by any means, but he missed a great opportunity to extend the honeymoon period in Starkville.

Thursday, May 16, 2019

2018 Yards Per Play: SEC

We are entering the home stretch. Just two more conferences to go. This week we examine our final Power Five conference, the SEC.

Here are the SEC standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each SEC team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. In the 2018 season, which teams in the SEC met this threshold? Here are SEC teams sorted by performance over what would be expected from their Net YPP numbers.
Texas A&M and Ole Miss were the two SEC teams that saw their actual record diverge significantly from their expected record based on YPP. The Aggies exceeded their expected record thanks to a 4-1 mark in one-score conference games. Ole Miss was 1-2 in one-score conference games, which is hardly historically unlucky. However, their defense was historically bad (allowed at least 31 points to every conference opponent) which prevented them from winning games despite an above-average offense.

The Best SEC Offense
For most of the 2018 season, the Alabama offense looked like an unstoppable juggernaut. Until their lackluster showing in the national championship game against Clemson (when they still moved the ball effectively), the Tide had scored at least 24 points in each game and topped 30 points twelve times in their first fourteen games. Even counting their loss to the Tigers, Alabama still averaged over 45 points per game in 2018. That loss obviously left a bad taste in the mouth of Alabama (and SEC) fans, but viewed holistically, the Tide were one of the most dominant offenses of all time. How do they compare to other recent SEC offenses in the two statistics we hold dear around these parts, Yards per Play and APR?

Let’s start with Yards per Play. Here are the top six SEC offenses in terms of raw yards per play since 2005. These numbers include conference play only with the SEC Championship Game excluded (where applicable).
The Tide sure seem like one of the best modern SEC offenses by this metric. They are one of only five SEC offenses to average north of seven yards per play and they are nearly a quarter of a yard per play clear of the second place team (another incarnation of Alabama that did not win the national title).

Now let’s look at the offensive component of APR, offensive touchdowns. Once again, here are the top six SEC offenses in terms of offensive touchdowns. These numbers include conference play only with the SEC Championship Game excluded (where applicable).
Once again, Alabama ranks number one in this metric, with Tim Tebow’s national title-winning Gators the only team in the same neighborhood. The four teams tied for third all averaged a full touchdown less per game than Alabama. Well, that’s a pretty open-and-shut case, right? Alabama was clearly the best SEC offense of the past decade and a half. No question about it. Just to be sure, let me run the numbers again, but this time taking into account the scoring environment each team played in. Here are the Yards per Play numbers along with the SEC average for that season. Instead of being sorted by raw totals, the teams are sorted by the difference between their yards per play and the SEC average.
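The adjustment itself is simple. Here is a tiny sketch, with placeholder numbers rather than the actual table values: rank each offense by how far it sat above the conference average for its own season.

def adjusted_margin(team_ypp, conference_avg_ypp):
    """How far above (or below) its own season's offensive environment a team sat."""
    return team_ypp - conference_avg_ypp

# Hypothetical comparison: an offense at 7.0 YPP in a 5.0 YPP league grades out
# ahead of one at 7.2 YPP in a 6.0 YPP league once the environment is considered.
print(round(adjusted_margin(7.0, 5.0), 2))  # 2.0
print(round(adjusted_margin(7.2, 6.0), 2))  # 1.2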
When we adjust for the offensive environment, Alabama still stands out, but another team joins them at the top of the heap. Look at how defenses dominated the SEC in 2008. The average SEC team gained just five yards per play. That’s more than half a yard less than they gained in 2018!

Now let’s do the same thing with offensive touchdowns. Here are the offensive touchdown numbers along with the SEC average for that season. Once again, the teams are sorted by the difference between their offensive touchdowns and the SEC average.
Alabama stands out, but they still can’t shake that 2008 Florida team. SEC teams scored nearly half a touchdown more per game in 2018 than they did in 2008 so Florida’s 44 total touchdowns more than doubled the production of the average SEC offense in 2008!

I went into this analysis thinking Alabama’s 2018 team would rate out as the best SEC offense of recent vintage. However, once an adjustment is made for the uptick in quality of SEC offenses (or downgrade in quality of SEC defenses), the 2008 Florida team is at worst the equal of the Tide and probably a hair better.

Thursday, May 09, 2019

2018 Adjusted Pythagorean Record: Pac-12

Last week we looked at how Pac-12 teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.

Once again, here are the 2018 Pac-12 standings.
And here are the APR standings with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, Pac-12 teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as an arbitrary line of demarcation and by that standard, Colorado was the only team that saw their actual record differ significantly from their APR. They also under-performed relative to their YPP numbers and we went over some reasons for that last week, so there is no need to rehash that here.

Problems in the Pacific Northwest
Oregon State has endured a rough few years since Mike Riley departed. In the four seasons since Riley left to take the Nebraska job, the Beavers are just 9-39 overall and 4-32 in Pac-12 play. The Beavers did technically improve their won/loss record under first year head coach Jonathan Smith, but they were among the worst BCS/Power 5 teams since 2005 in terms of Net Yards Per Play.
The Beavers were only ‘bested’ by Paul Wulff’s first two horrendous Washington State teams, which managed to finish more than three yards per play under water in back-to-back seasons. Now, the all-important question is, how much improvement is possible in one season? To answer that question, we first need to define what we mean by improvement. Do we mean playing better, regardless of the change (or lack thereof) in won/loss record, or are we only concerned with winning more games? I’ll try to answer both queries. First, here are the teams from the previous chart (minus Oregon State of course) with their Net YPP, their Net YPP the next season, and the difference between the two numbers.
This is good news for Oregon State. The Beavers are likely to improve. Eight of the previous nine worst teams in Net YPP improved the following season, with Kansas being the only team to decline (and they pretty much stayed the same). On average, the teams improved by about nine-tenths of a yard per play with Ole Miss seeing the largest single season improvement (more on them later) at +2.30.

Now let’s see how each team’s conference record changed in the following season.
Unlike the YPP table, this is more of a mixed bag. Four of the nine teams improved, three stayed the same, and two declined. However, despite improved conference records for four of the nine teams, the starting point was so low that only two of the nine finished with more than one conference win. And those can be explained with some extenuating circumstances. Rutgers was still quite bad in 2017, winning three conference games by a total of 20 points. Their 'improvement' didn't last and they were once again winless in conference play in 2018. Meanwhile, Ole Miss was (quite obviously in hindsight) cheating like hell. Unless Jonathan Smith is calling hookers on his state-issued cell phone, expectations for the Beavers in 2019 should be muted. If Oregon State can improve to around -2.00 in net yards per play and hold steady in the conference win column, 2019 should be considered a success in Corvallis.

Thursday, May 02, 2019

2018 Yards Per Play: Pac-12

After nearly two months of G5 posts interspersed with some basketball musings, we return to the Power Five. At least in name. This week we examine the Pac-12.

Here are the Pac-12 standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Pac-12 team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. In the 2018 season, which teams in the Pac-12 met this threshold? Here are Pac-12 teams sorted by performance over what would be expected from their Net YPP numbers.
Washington State and Colorado were the only Pac-12 teams that saw their actual record differ significantly from their expected record based on YPP. Washington State exceeded their expected record thanks to a solid record in close games (3-1 in conference play) and the second-best in-conference turnover margin (+5). Meanwhile, Colorado was 1-2 in close conference games and had a -7 turnover margin in conference play. The Buffaloes were particularly bad down the stretch, posting a -12 turnover margin over their final three games. The Buffaloes lost those three games by an average of nearly twenty points, despite a relatively modest per play differential (-.39).

Maybe Pump the Brakes on Washington State
The 2018 season was one of the best in Washington State history. Their eleven wins were the most in school history, their SRS of 12.19 was the eighth best in school history (fifth best since World War II), and their final AP rank of tenth was tied for the third-best finish in school history. And the Cougars accomplished all this despite dubious preseason expectations. The Cougars were not ranked in the preseason poll and some of the sharpest prognosticators in the college football business (yours truly included) thought they would lose their season opener to Wyoming. A few weeks ago, I offered fans of Auburn, Miami, and Wisconsin hope heading into 2019. Teams that start the season ranked in the top-ten of the AP Poll and wind up unranked tend to bounce back the following season. Now I want to look at the converse (or is it inverse? I never paid close attention in geometry or philosophy). How do teams that start out unranked and finish in the top-ten perform the next season?

From 2005-2017, 27 teams began the year unranked and finished in the top-ten of the final AP Poll. They are listed in the table below along with their regular season record in the year they finished ranked, their regular season record the next season, and the difference in record between the two seasons.
The obvious takeaway from this table is that regression is a cruel mistress. As a general rule, those teams that start out unranked and finish in the top-ten tend to regress the next season. For your convenience, I have summarized the results.
Nearly three quarters of those teams declined the following season, about twenty percent held steady, and less than ten percent improved. Teams were over twice as likely to decline by at least two games (59%) as they were to stay the same or improve (26%). Cumulatively, the teams saw their regular season record decline by a little more than two games the next season. With this in mind, it might be a good idea to downgrade the Cougars a bit when projecting their 2019 record. But the Cougars were not the only team to finish in the top-ten despite modest preseason expectations.
The way a team finishes the season significantly impacts the way we view their body of work. Florida closed the 2018 season winning four straight, with the last two coming in blowout fashion against Florida State and Michigan. It’s easy to forget that Florida also lost to Kentucky for the first time since the Reagan administration, was not even ranked for good until the first week of October, and lost by three touchdowns at home to Missouri. The Gators recruit significantly better than Washington State, so their program has a sturdier frame, but they also play in the SEC which means they will face a more arduous schedule. I expect to hear and read a lot of Florida darkhorse hype over the summer, but I would be wary of proclamations of their return to glory just yet.

Thursday, April 25, 2019

2018 Adjusted Pythagorean Record: Mountain West

Last week we looked at how Mountain West teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.

Once again, here are the 2018 Mountain West standings.
And here are the APR standings with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, Mountain West teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
Using a game and a half as a line of demarcation, Hawaii was the lone team that saw their actual record differ significantly from their APR. The Warriors also exceeded their expected record relative to their YPP and we discussed a few reasons for this last week. Rather than rehash that, let's delve into another characteristic Hawaii has managed to maintain for the last decade or so.

Consistently Inconsistent
Hawaii’s appearance in the 2018 postseason was shocking. The Warriors entered 2018 fresh off a 3-9 season where they won just a single time in Mountain West play. Yet the Warriors opened the season by winning a road game for the first time in nearly a calendar year. They followed that road upset with a pair of home wins and, after a body-clock loss at Army, won three in a row to stand 6-1. Their wins were a little fluky, and the Warriors came back to earth over the second half of the season, losing four of six to finish the regular season 8-5. Still, eight wins was a massive improvement and the Warriors ended up finishing 5-3 in the improved Mountain West. However, perhaps it shouldn’t have been so shocking. Two years earlier, the Warriors were starting the 2016 campaign fresh off a winless conference season with a first-year head coach. They managed to grind out four conference wins and a surprise bowl appearance (and victory). In fact, the Warriors have been treating (torturing?) their fans with a year-to-year roller coaster ride for the past decade or so. Based on year-to-year differences in conference victories, Hawaii has been the most inconsistent mid-major team since 2007. To illustrate this, I have charted their number of conference wins below.
From 2007 to 2008, the Warriors went from eight conference wins to five, so their absolute difference in wins was three. From 2008 to 2009 they went from five conference wins to three, so their absolute difference in wins was two. Adding these absolute differences together produces a total absolute difference of five. Using this formula through the 2018 season yields an absolute difference of 33, which equates to an average of three wins per year difference in conference record. Only one other mid-major school has come close to being as inconsistent as Hawaii.
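In code, the metric looks something like the sketch below. Only the 2007-2009 conference win totals are listed above, so the series here is truncated; the full 2007-2018 series is what produces the total of 33.

def total_absolute_difference(wins_by_year):
    """Sum of the absolute year-to-year changes in conference wins."""
    return sum(abs(later - earlier) for earlier, later in zip(wins_by_year, wins_by_year[1:]))

hawaii_partial = [8, 5, 3]  # conference wins in 2007, 2008, and 2009
print(total_absolute_difference(hawaii_partial))  # 5, matching the worked example above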
So the Mountain West is home to the most inconsistent team of the last decade. It is also home to the two most consistent teams, at least in terms of average difference between conference wins.
Boise State has consistently finished near the top of whatever conference they happen to be a part of, never winning fewer than five conference games since 1998. Nevada has consistently finished within one game of .500 in conference play with a few exceptions sprinkled in when Colin Kaepernick was under center (or more precisely in The Pistol). Heading into 2019, expect Boise State and Nevada to finish within about a game of their 2018 conference record. As for Hawaii, they should either go undefeated and challenge for a playoff spot or finish winless and be in the market for a new head coach.

Thursday, April 18, 2019

2018 Yards Per Play: Mountain West

Our offseason sojourn keeps churning through the Group of Five. This week we are examining the Mountain West.

Here are the Mountain West standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Mountain West team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. In the 2018 season, which teams in the Mountain West met this threshold? Here are Mountain West teams sorted by performance over what would be expected from their Net YPP numbers.
Boise State and Hawaii were the Mountain West teams that significantly exceeded their expected record based on YPP. The reason for Hawaii’s resurgence was simple: they were great in close games. The Warriors were 4-0 in one-score conference games and their other conference win came by just nine points. Contrast that to their losses which all came by at least eighteen points. For Boise State, no one thing stands out, but coupled together, several small factors helped them exceed their expected record. The Broncos were a solid 2-1 in one-score conference games, had a +6 turnover margin in conference play, and were +3 in non-offensive touchdowns in conference play. One non-offensive touchdown proved to be the difference in them winning the division. A 99-yard interception return against Nevada provided the margin of victory in their win against the Wolf Pack and allowed them to steal the division when they beat Utah State in the regular season finale.

A Step-Back for Boise?
After compiling the YPP data for 2018, I was surprised Boise State ranked so low. The Broncos were not a bad team, but they did not have the profile of a dominant team either. Yet on the first weekend in December, they were once again hosting the Mountain West Championship Game for the second year in a row and third time in the game’s six-year existence. To contextualize just how much Boise State’s actual record exceeded their expected record based on YPP, I went back and looked at all mid-major (non-BCS and Group of Five) teams since 2005 that exceeded their expected record by at least .300 (remember, I consider a difference of .200 significant) and finished with a conference winning percentage of at least .875 (7-1 in an eight game conference schedule). I added this extra qualifier because I wanted to look at teams that not only exceeded their expected record, but were also in contention for their conference title. In other words, I wanted average teams that finished with great records instead of bad teams that finished with average records. So how unique was Boise State in 2018? They were only the sixth mid-major team to exceed their expected record by at least .300 and win at least seven of their eight conference games. Boise and the other five teams are listed below along with their conference records the following season.
Obviously, this reeks of small sample size, but the results should not be encouraging for Boise State fans. The other five teams on this list all declined by at least two games in conference play the next season and the average decline was over three games (3.1). The Broncos do have a better and longer track record than these five teams. All five were coming off losing seasons when they significantly exceeded their expected record, with UCF fresh off a winless campaign when they surprised the nation by nearly winning Conference USA in 2005. Meanwhile, the Broncos have not won fewer than eight games in two decades. Still, they are losing a four-year starter at quarterback and their defensive YPP was the worst (in terms of actual yards per play and conference rank) it has been in the fourteen years I have been tracking YPP data. The Broncos will likely open as the Mountain West favorite when odds are released in the coming months, and they will be an even more prohibitive favorite to win the Mountain Division. I wouldn’t place a bet on one of the other teams in the division (Air Force, Colorado State, New Mexico, Utah State, Wyoming) to win it individually, but collectively, I would take the field over the Broncos to represent the Mountain half of the conference in the championship game.
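For reference, the screen described above is easy to express in code. The records below are placeholders rather than the real dataset; the two thresholds (.300 over expectation, .875 conference winning percentage) are the ones from the post.

# Flag mid-major teams that beat their YPP-expected conference winning
# percentage by at least .300 while also winning at least .875 of their
# conference games.
seasons = [
    # (team, year, actual conference win pct, expected win pct from Net YPP)
    ("Team A", 2018, 0.875, 0.540),
    ("Team B", 2018, 0.750, 0.400),
    ("Team C", 2018, 1.000, 0.760),
]

contenders = [
    (team, year)
    for team, year, actual, expected in seasons
    if actual - expected >= 0.300 and actual >= 0.875
]

print(contenders)  # only Team A clears both thresholds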

Thursday, April 11, 2019

2018 Adjusted Pythagorean Record: MAC

Last week we looked at how MAC teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.

Once again, here are the 2018 MAC standings.
And here are the APR standings with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, MAC teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as an arbitrary measure to determine if a team over or under-performed relative to their APR. By that standard, in 2018, no MAC team significantly over or under-performed so we'll move on to a more interesting conversation.

The Dean of MAC Coaches
Mid-major (Group of Five) coaches at the FBS level typically follow one of two career paths. They succeed and move up to a bigger job (another mid-major or Power Five) or they fail, get fired, and return to their previous life as an assistant coach. There are exceptions to the rule of course. Sometimes these coaches leave of their own accord to become assistants in the NFL or at a better college job, but for the most part these guys are promoted or fired depending on their success at the mid-major school. For this reason, you rarely see ‘lifers’ at mid-major programs. But in the closing act of Frank Solich’s career, he has become a mid-major lifer.

Beginning a half-century ago, Frank Solich certainly appeared to be a lifer. A Nebraska lifer. Solich played for legendary Nebraska head coach Bob Devaney in the 1960s, coached high school football in Nebraska until the late 1970s, and then became an assistant under another legendary Nebraska coach, Tom Osborne. When Osborne retired following the 1997 season, after winning three national titles in his final four years, Solich succeeded him as head coach. Despite three top-ten finishes and a conference title in six seasons, Solich was Gene Bartowed following a 9-3 regular season in 2003. After a gap year, Solich returned to coaching in 2005 at Ohio. The Bobcats had gone just 11-35 in four seasons under Brian Knorr, but Solich had them in the MAC Championship Game in just his second season. The Bobcats dipped a bit in his third and fourth seasons (combined 10-14 record), but beginning in 2009, Solich has had the Bobcats bowl-eligible each of the past ten years. In that span, the Bobcats have made three additional appearances in the MAC Championship Game, won their first bowl game in school history (plus three more), and briefly appeared in the AP Poll for the first time since 1968.

While 2005 may not seem like that long ago, Solich is very close to becoming the longest-tenured MAC coach ever. Only four other men in history have lasted more than ten seasons as head coach at a MAC school. They are listed below.
First off, Ohio fans, don’t @ me. I know Bill Hess was the coach at Ohio from 1958-1977. However, in those first four seasons, Ohio was not an FBS school and the MAC was not an FBS conference. Northern Illinois fans, don’t @ me either. The Huskies were an FBS independent in Joe Novak’s first season (1996). Solich has been Ohio’s head coach for fourteen seasons, so if he is still the head coach in December of 2020, he will tie Hess and Herb Deromedi from Central Michigan as the longest-tenured MAC coach. In addition, thanks to the twelve-game regular season, potential conference championship games, and the proliferation of bowl games, Solich already owns the MAC record for games coached and is just four wins shy of tying Deromedi for most wins all time as a MAC coach. However, thanks to Deromedi’s phenomenal conference record, Solich would probably need to coach at least four more seasons to tie or break the record for most MAC conference wins (he would be 78 in December 2022).

Solich has yet to reach the MAC mountaintop at Ohio, and realistically, he is probably running out of time to win Ohio’s first MAC title since 1968. I’d love to see the Bobcats win the MAC in the next year or two while simultaneously making Solich the dean of MAC coaches, not just in the present, but of all time.

Thursday, April 04, 2019

2018 Yards Per Play: MAC

After a two week break where we talked about basketball, this blog makes its triumphant return to the only sport that really matters - football. This week, we will be examining the MAC.

Here are the MAC standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each MAC team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. In the 2018 season, which teams in the MAC met this threshold? Here are MAC teams sorted by performance over what would be expected from their Net YPP numbers.
Buffalo was the lone MAC team to significantly exceed their expected record based on YPP. The Bulls were not exceptionally lucky (1-0 in one-score games), nor were they fueled by a fluky turnover margin (+1 in conference play). They simply were not quite as dominant as we might expect from a team that lost just once in eight conference games. On the flip side, three MAC teams significantly under-performed relative to their expected record based on YPP. Bowling Green, Central Michigan, and Kent State finished a combined 1-8 in one-score conference games and the lone victory came in a game played between these teams (Kent State edged Bowling Green in October). Those three teams were bad, they just weren’t quite as bad as their respective records indicated.

The MAC LEast
While the MAC East champion lost in the MAC title game for the third straight year and seventh time in the last ten seasons, the division as a whole finally enjoyed a modicum of success against the MAC West. For the first time since 2009, the MAC East did not post a losing record against the MAC West in inter-division play.
Which MAC East teams have been most responsible for this ghastly record over the past nine seasons? It really has been an equal-opportunity effort among them. Between 2010 and 2018, eight teams spent time in the MAC East. Akron, Bowling Green, Buffalo, Kent State, Miami, and Ohio were fixtures in the division, but Massachusetts (2012-2015) and Temple (2010-2011) also vacationed there. How did those teams do against the MAC West?
No MAC East team posted a winning record against the MAC West. Ohio carried water best for the MAC East squads (although their bucket was littered with holes) as they almost won half their games against the MAC West. I find it interesting that Bowling Green is the lone MAC East team with multiple conference titles since 2010 (which obviously means they were able to beat a MAC West team in the conference title game), yet they have the worst record against the MAC West. 

Was 2018 an outlier or will the MAC East continue to close the gap on the MAC West and usher in a new era of competitive inter-division play? Stay tuned to ESPN2 on November weeknights to find out.