Two conference reviews down, eight to go. We move on to the B's now. Here are the Big 10 standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards per Play (YPP), Yards per Play Allowed (YPA), and Net Yards per Play (Net) numbers for each Big 10 team. These figures include conference play only; the championship game is not included. The teams are sorted by Net YPP within each division, with overall conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign and a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play.

In those games, we can learn a lot from a team’s Yards per Play (YPP). Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards per Play and Yards per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert: it is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66.

Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight-game conference schedule and 1.8 games over a nine-game one. Over- or under-performing by more than a game and a half in a small sample seems significant to me. In the 2015 season, which teams in the Big 10 met this threshold? Here are the Big 10 teams sorted by performance over what would be expected from their Net YPP numbers.
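Before digging into the results, a quick aside for the statistically inclined: here is a minimal sketch of how the predicted records and the .200 flag could be computed. The numbers below are made up purely for illustration (the real inputs are each team’s conference-only Net YPP and winning percentage), and the variable names are my own:

```python
# Minimal sketch: regress conference winning percentage on Net YPP,
# then flag teams whose actual record misses the prediction by .200+.
import numpy as np

# Hypothetical conference-only figures for a seven-team example league
net_ypp = np.array([1.2, 0.6, 0.3, 0.1, -0.2, -0.5, -0.9])
win_pct = np.array([.875, .750, .625, .500, .375, .250, .125])

# Correlation coefficient (around .66 for the real data cited above)
r = np.corrcoef(net_ypp, win_pct)[0, 1]

# Least-squares fit: predicted winning percentage from Net YPP
slope, intercept = np.polyfit(net_ypp, win_pct, 1)
predicted = slope * net_ypp + intercept

# Over/under-performers: actual minus predicted beyond the .200 threshold
residual = win_pct - predicted
flagged = np.abs(residual) > 0.200
```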
The Big 10 saw a large number of teams (six) finish with records that did not match their YPP numbers. And let’s deal with the elephant in the room: yes, the numbers say Penn State was the second best team in the league. If you look closely, though, you will see that outside of Ohio State, there were no dominant teams in the Big 10 this year. However, some did post dominant records. More on that in a moment. Illinois, Minnesota, and Maryland under-performed based on their YPP numbers, while Iowa, Northwestern, and Michigan State produced better records than one would expect.

Illinois began the year with an interim head coach after allegations of player abuse cost Tim Beckman his job just before the season started. The Illini cannot blame close losses for the disparity between their record and their expected record; they actually won their only close conference game, edging Nebraska 14-13. Despite finishing with a 5-7 record, Illinois elected to retain coach Bill Cubit. Not all were pleased with this decision.

Like Illinois, Minnesota also ended the year with an interim coach, and they too decided to keep him on despite a losing record. Jerry Kill’s health issues resurfaced in 2015, and his abrupt retirement meant Tracy Claeys was now in charge. The Gophers lost a pair of tight games to good teams (Michigan and Iowa) en route to their 2-6 conference finish and were marginally competitive against both Ohio State and Wisconsin.

Maryland, like the Gophers and Illini (sensing a theme here?), also ended the year with an interim coach. Randy Edsall was fired after a 2-4 start, and disgraced former New Mexico head coach Mike Locksley replaced him. Locksley guided the Terrapins to just one win in their final six games, but that was half as many as he had in nearly five times as many games in the Land of Enchantment. And he avoided a sexual harassment scandal to boot. Maryland was more competitive under Locksley, losing one-score games to both Penn State and Wisconsin under his guidance.

For the triumvirate of teams that exceeded their YPP numbers, close games told the story. Iowa, Northwestern, and Michigan State finished a combined 12-1 in one-score conference games, with the only loss being a controversial one suffered by the Spartans. Iowa also posted a +13 turnover margin in Big 10 play (tops in the conference). All three produced gaudy regular season records, but in their bowl games, they were beaten by a combined score of 128-22, providing further ammunition for the argument that they were not quite as good as their records indicated.
Now, I am going to throw some shade toward Mr. Paul Chryst.
Around midseason, when Pitt began to look like a contender in the ACC Coastal Division, it appeared the Panthers had made a coaching upgrade after their former head coach, Paul Chryst, left to take the Wisconsin job. Obviously, except in extremely rare instances, one season does not serve as the final evaluation of a head coach’s success or failure. Still, I thought it would be interesting to look at coaches who change jobs and see how both their former and current teams performed with and without them at the helm. I decided to call my little throwaway metric ‘The Chryst Index’, or TCI. Basically, TCI measures how much worse the coach’s old team got when he left combined with how much better his new team got when he arrived. Here is a quick rundown on how it is calculated.
1. For starters, TCI can only be measured for coaches who move from one FBS job to another.
2. Start with the coach’s final season at his old job. Subtract that season’s regular season win total from the regular season win total his old team posted the following year under its new coach.
3. Next, move on to the coach’s first season at his new job. Take that season’s regular season win total and subtract the regular season win total of the previous season (the last under the previous coach).
4. Subtract the value from step 2 from the value in step 3. This is the TCI number for the coach, as sketched in the code below. As in most things, more is better.
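To make the arithmetic concrete, here is a minimal sketch of the calculation. The function name and argument names are my own labels for illustration; the inputs are the regular season win totals described in the steps above:

```python
def chryst_index(old_last, old_next, new_prev, new_first):
    """The Chryst Index (TCI) for a coach moving between FBS jobs.

    old_last:  regular season wins in his final year at the old school
    old_next:  the old school's wins the next year under its new coach
    new_prev:  the new school's wins the year before he arrived
    new_first: his win total in year one at the new school
    """
    old_change = old_next - old_last    # step 2: old team without him
    new_change = new_first - new_prev   # step 3: new team with him
    return new_change - old_change      # step 4: more is better

# Paul Chryst in 2015: Pitt went 6-6, then 8-4 without him;
# Wisconsin went 10-2, then 9-3 with him.
print(chryst_index(6, 8, 10, 9))  # prints -3
```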
I know that might be a little confusing, but here is how the math plays out for the eponymous Chryst in 2015. His last Pitt team went 6-6 in 2014. Pitt improved to 8-4 in their first season without him. Subtracting 6 from 8 gives us 2. Pitt improved by two games without Chryst, which reflects negatively on him. His first team at Wisconsin went 9-3. Wisconsin went 10-2 in the regular season before Chryst’s arrival. Subtracting 10 from 9 gives us -1. Wisconsin declined by one game when Chryst arrived. Again, this reflects negatively on him. When we subtract the first value (2) from the second (-1), we get -3. Only four coaches could be evaluated by TCI for the 2015 season. They are listed below.
In leading Florida to the SEC East crown, Jim McElwain was the only FBS coach to change jobs who had a net positive impact on his teams, both new and old. Chryst ranks last among the quartet of coaches who changed jobs in 2015, but his TCI of -3 is far from the worst of the last decade. Before we get to those esteemed gentlemen, let’s look at those coaches who produced the highest TCI since 2006.
Aside from Gus Malzahn, who improved Auburn by an amazing 8 games, most of these coaches benefited from the teams they left careening off a cliff. Some of this is probably a little by design. Hoke, Fedora, Sumlin, and Kelly had all been at their respective schools for at least three years and had been building toward a season whose success was out of line with the school’s historical standards. Not only did those schools lose their coach, they often lost a lot of really good players from those teams as well. As for whether a high TCI is a portent of future success, well, that is a mixed bag. Brian Kelly has to be considered a success at Notre Dame, and Hoke was certainly successful at San Diego State, but Malzahn and Sumlin will enter 2016 on pretty warm seats. Before winning the ACC Coastal in 2015, Fedora was also feeling the heat in Chapel Hill. Now on to the coaches who produced the worst TCI numbers since 2006. Alas, Chryst does not quite make the cut.
Aside from Dan Hawkins, who saw Boise go from a cute mid-major to a burgeoning national power upon his departure, no other coach on this list saw his former team drastically improve. No, they ‘earned’ their position because their new teams struggled. Unlike a high TCI, a low TCI seems to herald trouble for a new coach. Dan Hawkins stuck around Colorado for parts of five seasons, but guided the Buffaloes to only one bowl game and zero winning seasons. Steve Kragthorpe lasted three years at Louisville and produced no winning seasons. Tim Beckman almost made it to his fourth season at Illinois, but also failed to produce a winning season. Dave Doeren has been moderately successful since his disastrous initial campaign at NC State, but the Wolfpack are just 2-16 against ACC teams not located in Winston-Salem or Syracuse. Skip Holtz is by far the biggest success story, rebounding from a poor first season to post back-to-back nine win campaigns at Louisiana Tech.
TCI is not the final word on rating a new football coach, but it can be a useful, if flawed, tool to examine how a coach performed in his first season.
Next week, the Big 10 gets the APR treatment, and we'll take a closer look at Chryst's first Wisconsin team.