## Wednesday, February 24, 2016

### 2015 Yards Per Play: Big 12

After dispensing with the Big 10, we shift our attention to the other big conference, the Big 12. Here are the Big 12 standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Big 12 team. This includes conference play only. The teams are sorted by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s Yards per Play (YPP). Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards per Play and Yards per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. 
Over or under-performing by more than a game and a half in a small sample seems significant to me. In the 2015 season, which teams in the Big 12 met this threshold? Here are the Big 12 teams sorted by performance over what would be expected from their Net YPP numbers.
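For readers curious about the mechanics, the predicted-versus-actual comparison described above boils down to a simple linear regression. Here is a minimal sketch; the team names and numbers are made up purely for illustration, while the real analysis uses each team's actual conference-only figures.

```python
# Sketch of the Net YPP vs. winning percentage check described above.
# Teams and numbers below are hypothetical, for illustration only.

def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# (team, Net YPP, actual conference winning percentage) -- hypothetical
teams = [
    ("Team A",  1.5, 0.875),
    ("Team B",  0.5, 0.500),
    ("Team C", -0.5, 0.625),  # wins more than its play suggests
    ("Team D", -0.5, 0.250),
    ("Team E", -1.0, 0.125),
]

slope, intercept = fit_line([t[1] for t in teams], [t[2] for t in teams])

# Flag teams whose actual record differs from the predicted record
# by more than the .200 winning-percentage threshold.
for name, net_ypp, actual in teams:
    predicted = slope * net_ypp + intercept
    diff = actual - predicted
    flag = " <-- significant" if abs(diff) > 0.200 else ""
    print(f"{name}: predicted {predicted:.3f}, actual {actual:.3f}, diff {diff:+.3f}{flag}")
```

With these contrived numbers, only Team C clears the .200 bar, which is the same kind of over-performance the posts highlight.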
Only two teams boasted a record that was significantly different than what would have been predicted based on their YPP numbers. Both Oklahoma State and Kansas State exceeded what their record should have been according to the numbers. For Oklahoma State, the reason was simple. The Cowboys were 4-0 in one-score games, beating a quartet of middling outfits (Iowa State, Kansas State, Texas, and West Virginia) by a combined 16 points. Three of those games did come on the road, so give them credit for winning away from Stillwater, but the Cowboys did not have the bona fides of an elite team. Kansas State, on the other hand, is a different story. Statistically, the Wildcats were the second worst team in the Big 12 by a significant margin. Their offense was better only than that of their in-state brethren, and they compounded matters by also fielding one of the worst defenses in the Big 12. So how did a team that was consistently outplayed on a down-to-down basis manage to qualify for a bowl game? The Wildcats actually sported a slightly below average 2-3 record in one-score Big 12 games, so we can’t attribute the difference to their record in close games. We’ll have to look elsewhere. One area where Kansas State gained an edge was special teams. The Wildcats returned a punt and three kickoffs for touchdowns in their Big 12 games. Those scores provided the margin of victory in a three point win over Iowa State and a one point win over West Virginia. Another way Kansas State was able to remain competitive despite their poor overall play was their slow pace.
Based on the plays they ran and the plays their defense faced, the Wildcats saw the least ‘action’ of any Big 12 team in conference play. The Wildcats saw about twelve fewer plays per game than the average Big 12 team and 25 fewer than Texas Tech, the Big 12 leader in that category. When your offense and defense are both inefficient, shortening the game can pay dividends. Finally, Kansas State benefited from their home schedule. Two of their three league wins came at home (with their only road win coming against perennial punching bag Kansas) and while their loss to Oklahoma was not competitive, the Wildcats lost by just seven points each to TCU and Baylor in Manhattan.

One of the major storylines in college football since 2010 has been realignment. Every FBS conference has seen some kind of membership change since the end of the 2010 season. In fact, some conferences have ceased to exist entirely. The Big 12 has been one of the more interesting cases, as three other power conferences raided it. The Big 12 in turn plundered the Big East and Mountain West to steady its membership. When the dust had cleared, the Big 12 had lost four teams and added two, bringing its total membership to ten. The marquee programs in the conference post-realignment are, of course, Oklahoma and Texas. While the Sooners have pulled their weight in the conference and on the national level, Texas has struggled. The Longhorns have endured a pair of losing seasons and have not won the conference since 2009. Amid the Longhorns’ struggles, a quartet of non-traditional powers has emerged to ensure the Big 12 remains a player on the national stage.

Between 1980 and 2011, Texas and Oklahoma combined for three national titles, 19 conference titles (in the Big 8, Southwest, and Big 12 conferences), 21 top-ten finishes in the AP Poll, and 41 top-25 AP Poll finishes. Those are pretty good numbers. In that same time span, Baylor, Kansas State, Oklahoma State, and TCU combined for zero national titles, 12 conference titles (with most coming courtesy of TCU in mid-major leagues), nine top-ten finishes in the AP Poll, and 31 top-25 AP Poll finishes. The table below summarizes what I just typed.
However, since 2012, Baylor, Kansas State, Oklahoma State, and TCU have been torchbearers for the Big 12. Oklahoma and Texas have combined for two conference titles, three top-ten finishes in the AP Poll, and four top-25 AP Poll finishes. Oklahoma has contributed all the conference titles, all three of the top-ten finishes, and three of the four top-25 finishes. Meanwhile, Baylor, Kansas State, Oklahoma State, and TCU have combined for four conference titles (with only Oklahoma State failing to register at least a shared title), three top-ten finishes in the AP Poll, and ten top-25 AP Poll finishes. Again, take a gander at the table that summarizes these results.
If we look at just Big 12 performance, the results are even more revealing.
While Oklahoma has the best Big 12 record in that span, Baylor is a close second, with Kansas State and Oklahoma State tied for third. TCU is tied with Texas for fifth, but the Horned Frogs have been more dominant of late, going 15-3 over the past two seasons after a 6-12 start to major conference life.

What does all this mean for the Big 12? The good news is that other teams have risen while Texas has fallen. While Oklahoma remains the bell cow for the conference, others have popped up intermittently to keep the Big 12 on the national radar. The bad news is that these teams may not have staying power. Baylor was an abject dumpster fire until Art Briles got there. The Bears had some moderate success in the 80s and early 90s, but from the Big 12’s inception they were the weakest link. The infrastructure has improved, but how far will they fall once Briles leaves? Similarly, Kansas State may have been the worst FBS program when Bill Snyder arrived in the late 80s. During his brief retirement, the Wildcats were middling at best. Snyder will be vacating Manhattan sooner rather than later. What will the Wildcats do when he is gone (for good this time)? Oklahoma State had some good teams under Pat Jones in the 80s (and briefly under Les Miles), but Mike Gundy has raised the program to new heights. With that T. Boone Pickens money, the Cowboys may be well positioned for success when Gundy leaves, but it is far from guaranteed. Finally, TCU has exceeded their historical levels under coach Gary Patterson. He has been in Fort Worth for a decade and a half and has seen the Horned Frogs go from mid-major power to Big 12 contender. How will the program fare when he leaves? The success of these upstarts has played a key role in keeping the Big 12 relevant in the national picture during uncertain times. However, relying on them to remain prosperous after their regimes change may not be prudent. Perhaps the best case scenario for the Big 12 is a return to glory for another old money program: Texas.

## Wednesday, February 17, 2016

### 2015 Adjusted Pythagorean Record: Big 10

Last week, we looked at how Big 10 teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.
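For a rough sense of the conversion, here is a minimal sketch of the APR idea in code. The Pythagorean exponent is an assumption on my part (values around 2.37 are commonly used for football); the linked APR write-up defines the exact form used on this blog.

```python
# Minimal sketch of APR: convert a team's ratio of offensive touchdowns
# scored to touchdowns allowed into an expected winning percentage,
# Pythagorean style. The exponent here is an assumption (2.37 is a
# commonly cited football value), not necessarily the one this blog uses.

def apr(td_scored, td_allowed, exponent=2.37):
    """Expected winning percentage from offensive TDs scored and allowed."""
    return td_scored ** exponent / (td_scored ** exponent + td_allowed ** exponent)

def expected_wins(td_scored, td_allowed, games, exponent=2.37):
    """Expected win total over a given number of conference games."""
    return apr(td_scored, td_allowed, exponent) * games

# A hypothetical team scoring 30 offensive TDs and allowing 20
# over an eight-game conference slate:
print(round(apr(30, 20), 3))
print(round(expected_wins(30, 20, 8), 2))
```

Comparing that expected win total to the team's actual win total is what produces the over/under-performance column discussed below.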

Once again, here are the 2015 Big 10 standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, the Big 10 teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
Northwestern was the only Big 10 team with a record substantially different than their expected record based on APR. There was also a positive disconnect between the Wildcats’ record and their expected record based on YPP. We covered that last week, so unlike Marco Rubio, we won't repeat ourselves here. Instead, we'll take a closer look at Paul Chryst and his first Wisconsin team.

Last week I introduced a new throwaway metric to measure the impact of a coach changing jobs at the FBS level. Since Paul Chryst was the reason I researched the issue, I decided to name the metric after him. Now this week, I want to take a look at how the Badgers performed offensively under Chryst in his first season at the helm. I really have no personal vendetta against Paul Chryst. These are merely observations.

I have touchdown and yards per play data (in conference play) for every FBS team going back to 2005. The chart below lists the offensive touchdowns scored and yards per play averaged by the Badgers in Big 10 play since 2005 with their rank in the conference in parentheses. For easy reference, the chart is color coded based on who was coaching the team. Four different gentlemen have guided the Badgers on the gridiron during this time span: Barry Alvarez (2005), Bret Bielema (2006-2012), Gary Andersen (2013-2014), and Paul Chryst (2015).
Obviously, Chryst’s struggles stand out like a sore thumb. The Badgers endured their lowest touchdown and yards per play output under his watch. Not only were their raw numbers the lowest in the period examined, but their place among the other Big 10 teams was also at its nadir. Sure, the Badgers suffered injuries in 2015, but their precipitous decline from their historical baselines, and particularly from the prior season (when they were second in touchdowns and first in yards per play), has to be at least a little disturbing for Badger fans. Chryst was the team’s offensive coordinator from 2006-2011, when they were running roughshod over the rest of the Big 10 with their power running game and solid, but unspectacular, quarterback play (with the exception of their one-year rental of Russell Wilson), so maybe things will turn around. Or maybe Chryst is in over his head as a head coach, as could be inferred from his wholly mediocre three seasons at Pitt. In all likelihood, we will get to find out.

I’ll close with a little more statistical minutiae regarding the impotence of the Wisconsin offense in 2015. For the first time since 2004, Wisconsin failed to have a single back top 1000 yards rushing. Dare Ogunbowale led the Badgers with 819 yards on the ground in 2015. That total would have ranked behind the second leading rusher for Wisconsin teams in 2014, 2013, 2010, and 2008 (and just 13 yards more than the second leading rusher in 2012). From 2005-2014, six Badgers rushed for over 1000 yards in a season. Those six backs (Brian Calhoun, PJ Hill, John Clay, Montee Ball, James White, and Melvin Gordon) combined for twelve 1000 yard seasons. Four of them, Calhoun, Ball, White, and Gordon, were drafted by NFL teams.

## Thursday, February 11, 2016

### 2015 Yards Per Play: Big 10

Two conference reviews down, eight to go. We move on to the B's now. Here are the Big 10 standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Big 10 team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s Yards per Play (YPP). Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards per Play and Yards per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. 
Over or under-performing by more than a game and a half in a small sample seems significant to me. In the 2015 season, which teams in the Big 10 met this threshold? Here are the Big 10 teams sorted by performance over what would be expected from their Net YPP numbers.
The Big 10 saw a large number of teams (six) finish with records that did not match their YPP numbers. And let’s deal with the elephant in the room. Yes, the numbers say Penn State was the second best team in the league. If you look closely though, you will see that outside of Ohio State, there were no dominant teams in the Big 10 this year. However, some did post dominant records. More on that in a moment. Illinois, Minnesota, and Maryland under-performed based on their YPP numbers while Iowa, Northwestern, and Michigan State produced better records than one would expect. Illinois began the year with an interim head coach after allegations of player abuse cost Tim Beckman his job just before the season started. The Illini cannot blame close losses for the disparity between their record and their expected record. The Illini actually won their only close conference game, edging Nebraska 14-13. Despite finishing with a 5-7 record, Illinois elected to retain coach Bill Cubit. Not all were pleased with this decision. Like Illinois, Minnesota also ended the year with an interim coach, and they too decided to keep him on despite a losing record. Jerry Kill’s health issues resurfaced in 2015 and his abrupt retirement meant Tracy Claeys was now in charge. The Gophers lost a pair of tight games to good teams (Michigan and Iowa) en route to their 2-6 conference finish and were marginally competitive against both Ohio State and Wisconsin. Maryland, like the Gophers and Illini (sensing a theme here?), also ended the year with an interim coach. Randy Edsall was fired after a 2-4 start and disgraced former New Mexico head coach Mike Locksley replaced him. Locksley guided the Terrapins to just one win in their final six games, but that was half as many as he had in nearly five times as many games in the Land of Enchantment. And he avoided a sexual harassment scandal to boot.
Maryland was more competitive under Locksley, losing one-score games to both Penn State and Wisconsin under his guidance. For the triumvirate of teams that exceeded their YPP numbers, close games told the story. Iowa, Northwestern, and Michigan State finished a combined 12-1 in one-score conference games, with the only loss coming in controversial fashion by the Spartans. Iowa also posted a +13 turnover margin in Big 10 play (tops in the conference). Iowa, Northwestern, and Michigan State produced gaudy regular season records, but in their bowl games, they were beaten by a combined score of 128-22, providing further ammunition for the argument that they were not quite as good as their records indicated.

Now, I am going to throw some shade toward Mr. Paul Chryst.

Around midseason, when Pitt began to look like a contender in the ACC Coastal Division, it appeared they had made a coaching upgrade when their former head coach, Paul Chryst, took the Wisconsin job. Obviously, except in extremely rare instances, one season does not serve as the final evaluation of the success or failure of a head coach. Still, I thought it would be interesting to look at coaches who change jobs and see how both their former and current teams performed with them at and not at the helm. I decided to call my little throwaway metric ‘The Chryst Index’ or TCI. Basically, what TCI measures is how much worse the coach’s old team got when he left combined with how much better his new team got when he arrived. Here is a quick rundown on how it is calculated.

1. For starters, TCI can only be measured for coaches who move from one FBS job to another.
2. Start with the coach’s final season at his old job. Subtract the final regular season win total of this season from the final regular season win total under the new coach.
3. Next, move on to the coach’s first season at his new job. Take the final regular season win total of this season and subtract the final regular season win total of the previous season (the last under the previous coach).
4. Subtract the value from step 2 from the value in step 3. This is the TCI number for the coach. As in most things, more is better.
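The four steps above reduce to a few lines of arithmetic. Here is a minimal sketch using the Chryst numbers from the text (Pitt: 6-6, then 8-4 without him; Wisconsin: 10-2, then 9-3 with him):

```python
# The TCI calculation from the four steps above.
# All inputs are final regular season win totals.

def tci(old_team_last_wins, old_team_next_wins,
        new_team_prev_wins, new_team_first_wins):
    """Chryst Index: new team's change with the coach minus
    old team's change without him. Higher is better."""
    old_team_change = old_team_next_wins - old_team_last_wins   # step 2
    new_team_change = new_team_first_wins - new_team_prev_wins  # step 3
    return new_team_change - old_team_change                    # step 4

# Paul Chryst, 2015: Pitt went 6-6 then 8-4 without him;
# Wisconsin went 10-2 then 9-3 with him.
print(tci(6, 8, 10, 9))  # -3
```

A coach whose old team collapses while his new team surges piles up TCI from both directions, which is exactly the pattern the high-TCI list below exhibits.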

I know that might be a little confusing, but here is how the math plays out for the eponymous Chryst in 2015. His last Pitt team went 6-6 in 2014. Pitt improved to 8-4 in their first season without him. Subtracting 6 from 8 gives us 2. Pitt improved by two games without Chryst, which reflects negatively on him. His first team at Wisconsin went 9-3. Wisconsin went 10-2 in the regular season before Chryst’s arrival. Subtracting 10 from 9 gives us -1. Wisconsin declined by one game when Chryst arrived. Again, this reflects negatively on him. When we subtract the first value (2) from the second (-1), we get -3. Only four coaches could be evaluated by TCI for the 2015 season. They are listed below.
In leading Florida to the SEC East crown, Jim McElwain was the only FBS coach to change jobs who had a net positive impact on his teams, both new and old. Chryst ranks last among the quartet of coaches who changed jobs in 2015, but his TCI of -3 is far from the worst of the last decade. Before we get to those esteemed gentlemen, let’s look at those coaches who produced the highest TCI since 2006.
Aside from Gus Malzahn, who improved Auburn by an amazing eight games, most of these coaches benefited from the teams they left careening off a cliff. Some of this is probably by design. Hoke, Fedora, Sumlin, and Kelly had all been at their respective schools for at least three years and had been building toward a season whose success was out of step with the school’s historical standards. When they left, their former teams lost not only the coach but often a lot of really good players from those rosters. As for whether a high TCI is a portent of future success, well, that is a mixed bag. Brian Kelly has to be considered a success at Notre Dame and Hoke was certainly successful at San Diego State, but Malzahn and Sumlin will enter 2016 on pretty warm seats. Before winning the ACC Coastal in 2015, Fedora was also feeling the heat in Chapel Hill. Now on to the coaches who produced the worst TCI numbers since 2006. Alas, Chryst does not quite make the cut.
Aside from Dan Hawkins, who saw Boise go from a cute mid-major to a burgeoning national power upon his departure, no other coach on this list saw his former team drastically improve. No, they ‘earned’ their position because their new teams struggled. Unlike a high TCI, a low TCI seems to herald trouble for a new coach. Dan Hawkins stuck around Colorado for parts of five seasons, but guided the Buffaloes to only one bowl game and zero winning seasons. Steve Kragthorpe lasted three years at Louisville and produced no winning seasons. Tim Beckman almost made it to his fourth season at Illinois, but also failed to produce a winning season. Dave Doeren has been moderately successful since his disastrous initial campaign at NC State, but the Wolfpack are just 2-16 against ACC teams not located in Winston-Salem or Syracuse. Skip Holtz is by far the biggest success story, rebounding from a poor first season to post back-to-back nine win campaigns at Louisiana Tech.

TCI is not the final word on rating a new football coach, but it can be a useful, if flawed, tool to examine how a coach performed in his first season.

Next week, the Big 10 gets the APR treatment, and we'll take a closer look at Chryst's first Wisconsin team.

## Wednesday, February 03, 2016

### 2015 Adjusted Pythagorean Record: ACC

Last week, we looked at how ACC teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.

Once again, here are the 2015 ACC standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, the ACC teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
The ACC saw four teams with significant differences between their APR and actual record. Three of those teams (Boston College, Georgia Tech, and Miami) also saw large differences between their actual record and their record as predicted by their YPP differential. We discussed those three teams last week, so we won’t rehash it here. The other team with a significant difference between their APR and actual record was Duke. The Blue Devils did not have a phenomenal record in close games (3-2) to pump up their place in the standings, but they did have a few blowout losses (by 35 points to North Carolina and 18 points to Pitt) that tamped down their APR. In fact, the margin in their 35 point loss to the Tar Heels was more than the combined margin in their four conference victories (24 total points).

Segue.

If you watched any Boston College football games this season, you may have noticed the Eagles didn’t put a lot of points on the board. Yes, they had a harder time scoring than a pimply hunchback at a Kappa Kappa Gamma mixer (thanks, I’m here all weekend). Seriously, though, their offense may have been the worst in college football. However, despite their offensive struggles, the Eagles were competitive in most of their games thanks to a strong defense. This combination of ineptitude and competence got me thinking, and out of that thinking I created something I deemed the ‘Excitement Index’. The concept is pretty simple. Most people watch football for the scoring, more specifically, the touchdowns (no one likes field goals). Maybe you are a football snob and you enjoy watching pulling guards smash into linebackers, but I would argue most fans are not that nuanced (or sober). No, they want to see touchdowns. I have APR data for each FBS conference going back to 2005, so I looked at the total number of offensive touchdowns scored and allowed for each team in every conference game dating back more than a decade. I simply added up the offensive touchdowns each team scored with the touchdowns they allowed and divided by the number of conference games played. Why did I use conference games? Well, I have that data readily available. So how does Boston College rate in the ‘Excitement Index’? They are the least exciting team since at least 2005. The average Boston College game in 2015 saw just two and a half combined offensive touchdowns. Boston College as well as the other least exciting teams since 2005 are listed below.
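The Excitement Index is simple enough to express in a few lines. The game-by-game numbers below are hypothetical, but I contrived them to average the same two and a half combined offensive touchdowns per game the 2015 Eagles managed.

```python
# The 'Excitement Index' described above: combined offensive touchdowns
# (scored plus allowed) per conference game. The slate below is
# hypothetical, built to average 2.5 combined TDs per game.

def excitement_index(games):
    """games: list of (off_tds_scored, off_tds_allowed), one per conference game."""
    total = sum(scored + allowed for scored, allowed in games)
    return total / len(games)

# A hypothetical eight-game conference slate for a low-scoring team:
slate = [(1, 2), (0, 1), (2, 2), (1, 1), (0, 3), (2, 1), (1, 2), (0, 1)]
print(excitement_index(slate))  # 2.5
```

Note that the metric is symmetric on purpose: a team can be "exciting" either by scoring a lot or by allowing a lot, since casual viewers presumably don't care which end zone the touchdowns land in.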
Let me be clear, this is not a slight at Boston College. For my money, their 3-0 ‘boring’ loss to my alma mater (Wake Forest) was a very exciting game. However, I doubt many folks who are not fans of either team switched over to that game. Had the score been something like 70-66, it might have garnered the attention of a few more casual fans. So Boston College ranked dead last of all teams since 2005 in the ‘Excitement Index’. Did any team from 2015 rank first? No. But one team did rank second all time in the measure. You’ll have to wait until we get to that conference before I divulge their identity. That’s what we in the business call a tease. I’ll give you a (probably not needed) hint. They play in the Big 12.