Wednesday, January 25, 2017

2016 Yards Per Play: ACC

Next up on our review tour is the ACC, home of the current national champions. Here are the ACC standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA), and Net Yards Per Play (Net) numbers for each ACC team. This includes conference play only, with the championship game not included. The teams are sorted by division, then by Net YPP, with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign and a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight-game conference schedule and 1.8 games over a nine-game one. Over- or under-performing by more than a game and a half in a small sample seems significant to me.
In the 2016 season, which teams in the ACC met this threshold? Here are the ACC teams sorted by performance over what would be expected from their Net YPP numbers.
No team really stood out as vastly outperforming their underlying statistics. Two of the poorer teams in the conference did manage to win a pair of games each despite YPP numbers that are often correlated with winless or one-win seasons. However, the team that produced the biggest disparity between its record and its expected record was Louisville. The Cardinals put up such strong YPP numbers they basically broke the regression analysis, as it is obviously impossible to win more than 100% of your games, though I assume Nick Saban is hard at work attempting to do so. Still, I think it is useful to look at Louisville’s season. I trust numbers and rating systems more than I trust my own eyes or some expert’s opinion, but they are far from infallible. Whatever rating system one employs will either over- or underestimate certain teams thanks to a small sample size (an eight or nine game conference season) and a plethora of other variables related to the fact that teams are dynamic. Better players progress as they gain experience. Poorer players stay the same or get worse. Some players get injured and miss time. Coaches break down film and try to identify weaknesses. Travel and fatigue play a role in a team’s overall numbers. The list goes on. Louisville appears to be a team that is overrated, perhaps extremely overrated, when judged by YPP numbers. While the Cardinals did win seven of their eight league games, with their only loss coming in a tight game against eventual national champion Clemson, there were signs this team might not be as strong as some of their blowout wins portended. Consider that Louisville needed a roughing-the-kicker penalty to put Duke away at home in a ten-point win. They also needed a late touchdown pass to escape on the road against a Virginia team that did not win a home conference game.
Plus, while they beat Wake Forest by more than 30 points at home, even with advance knowledge of what was coming, the Cardinals actually trailed the Deacons at the start of the fourth quarter. Once conference play ended, Louisville delivered arguably three of their worst performances of the season. Against a Houston team that would allow 46 points to Navy, 38 points to SMU, and 48 points to Memphis, Louisville, with the eventual Heisman winner, netted ten points in an embarrassing defeat. In their next game, the offense returned to form, but the defense, ranked first in the ACC in yards allowed per play, permitted Kentucky to top 40 points against a Power 5 opponent for the first time in over two years. Finally, in their bowl game, Louisville could not move the ball against one of the SEC’s best defenses and lurched into the offseason on a three-game skid. Couldn’t happen to a nicer guy.

Despite the fact that Louisville may have ‘fooled’ the conference YPP statistics, those numbers did a good job of identifying one team in particular that was overrated based on national rankings. Boston College had a truly elite defense in 2015, holding ACC opponents to 4.41 yards per play and allowing just twelve offensive touchdowns. However, thanks to a poor offense and some bad luck, they did not scrounge up a single conference victory. If you just look at their raw 2016 defensive numbers, you might be inclined to believe they were elite again. You would be wrong. In 2016, Boston College allowed 314 yards per game, 5.09 yards per play, and 25 points per game. Those statistics ranked ninth, 25th, and 44th nationally in their respective categories. However, if we look at the eight ACC games the Eagles played, they allowed over six yards per play, a figure that ranked tenth in the fourteen-team league, and 37 offensive touchdowns, which ranked twelfth. How then did they convince the nation they were good? The answer is their non-conference schedule. The Eagles played Buffalo, Connecticut, Massachusetts, and Wagner in non-conference play. Buffalo and Connecticut are mid-major teams that ranked ninth in their respective leagues in yards per play. Massachusetts does not play in a conference, but they ranked 94th nationally in yards per play. And Wagner is, of course, an FCS school. Against those four overmatched opponents, Boston College allowed 104 yards per game, 2.12 yards per play, and twenty total points. Since those games represented a third of their regular season schedule, they elevated their national numbers in much the same way that picking all four number 1 seeds to win their first-round games artificially inflates accuracy in NCAA tourney pools. It’s an easy choice that almost everyone gets right and not very useful in determining your real quality.
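The dilution effect is easy to see with a back-of-the-envelope blend of the splits quoted above. This weights by games rather than plays, so it only approximates the official season-long figure, but the direction of the effect is clear.

```python
# Rough blend of Boston College's 2016 defensive splits (figures from
# the text above). Weighting by games rather than plays, so the result
# will not match the official 5.09 season-long number exactly.
conf_games, conf_ypa = 8, 6.0         # "over six yards per play" in ACC play
nonconf_games, nonconf_ypa = 4, 2.12  # vs. Buffalo, UConn, UMass, Wagner

blended = (conf_games * conf_ypa + nonconf_games * nonconf_ypa) / (
    conf_games + nonconf_games
)
print(f"blended yards per play allowed ≈ {blended:.2f}")
```

Four cupcake games pull a tenth-place conference defense down to a number that looks respectable nationally, which is exactly how the Eagles fooled the raw rankings.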

Wednesday, January 18, 2017

2016 Adjusted Pythagorean Record: AAC

Last week, we looked at how AAC teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.
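The conversion works like a Pythagorean expectation applied to touchdowns rather than points. A minimal sketch is below; the 2.37 exponent is an assumed, commonly cited football value, and the author's exact exponent may differ.

```python
# Adjusted Pythagorean Record: convert a team's ratio of offensive
# touchdowns scored to touchdowns allowed into an expected winning
# percentage. The 2.37 exponent is an assumption (a common football
# Pythagorean value), not a figure quoted in the post.
def apr_win_pct(off_tds: int, tds_allowed: int, exponent: float = 2.37) -> float:
    return off_tds**exponent / (off_tds**exponent + tds_allowed**exponent)

# Example: a team scoring 28 offensive TDs while allowing 14 over an
# 8-game conference slate projects to roughly 6.7 expected wins.
expected_wins = apr_win_pct(28, 14) * 8
print(f"expected wins: {expected_wins:.1f}")
```

Multiplying the resulting percentage by the number of conference games gives the expected win totals used in the tables below.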

Once again, here are the 2016 AAC standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, the AAC teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as a line of demarcation to determine if teams drastically over- or under-performed their APR. By that standard, South Florida and Tulane were the only teams with actual records far removed from their APR. South Florida finished ranked for the first time in school history, but was a bit fortunate to go 7-1 in league play. The Bulls finished 3-0 in one-score conference games and will probably need a little defensive improvement to remain near the top of the East division next season. On the other end of the spectrum, Tulane did not become a triple-option machine in Willie Fritz’s first season in charge. They won only a single conference game, matching their conference win total from 2015, but were a bit unlucky. They were 0-2 in one-score games and were -4 in net non-offensive touchdowns, with three defensive returns turning a tight contest against Central Florida into a laugher.

Regular readers of this blog know one of the things I am most interested in is a coach/team’s record in close games. Since the college football season is so short, close games play an outsized role in determining how we evaluate and remember certain seasons. What I had been meaning to do for many years was determine if certain coaches possessed an ability to win close games. While I believe that close games involve a great deal of randomness, I think coaches can have a modest impact on these close games, both positively and negatively (though probably smaller than the media and most fans attribute). In the interest of furthering this research, I looked at every coach who was active at the beginning of the 2016 season and who had at least ten years of experience as an FBS head coach by the end of the 2016 season. In other words, if a coach had at least ten years of experience prior to the 2016 season and was fired before it was over (Les Miles), they are included. Similarly, if 2016 represented their tenth year on the job and they made it to the end, they are also included (Butch Jones). I chose ten years as an arbitrary cutoff point so that each coach would have a significant sample size of close games. Certainly one or two years into a coaching career is not enough data to evaluate whether a coach possesses an ability (if such an ability exists) to win close games. I chose ten because it was a nice round number and because it produced 36 coaches who met the criteria, which I thought was a decent enough sample. So what did I find? Glad you asked.
For starters, there does seem to be a relationship between good, or at least long-tenured, coaches and an above-average record in close games. The 36 coaches in this cohort combined to win nearly 56% of their more than 2200 one-score games (those decided by eight points or fewer). If close games were ultimately decided by random chance, we would expect their winning percentage to be much closer to 50%. This likely means one of two things. Either good (in this case I am equating long-tenured with good) coaches do have a positive ability to influence close games. Or, close games are random, and as such, if the fates do not smile upon you, you are unlikely to last very long as a head coach. I am inclined to believe the former. Anyway, that is the aggregate data. But let’s look closer at some micro-level data. Which coach has performed the best in close games? I began this data excursion expecting to find Bill Snyder as the guru of close wins. Snyder has been solid in close games in his quarter century on the sidelines in Manhattan, Kansas (posting a 55-40 mark in one-score games), but I was surprised at one of the names near the top of the leaderboard. Since this is appearing in the post about the American Athletic Conference, perhaps you can guess who it is. Here are the top coaches sorted first by overall winning percentage and then by games above .500 in one-score contests.

That’s right. The Riverboat Gambler (and birther) himself, Tommy Tuberville, is one of the best close-game coaches of his generation. The top two coaches on this list give credence to the idea that winning close games is a skill, as both Meyer and Tuberville have accomplished this at four different schools. Meyer was 4-2 in close games at Bowling Green, 5-1 at Utah, 12-8 at Florida, and is currently an amazing 17-3 at Ohio State. Meanwhile, Tuberville was 11-5 at Ole Miss and 32-17 at Auburn. Even though his teams were not as successful at Texas Tech and Cincinnati, he still managed a 9-4 and 9-6 close-game mark in Lubbock and the Queen City respectively. Tuberville had a losing record in close games just three times in 21 seasons as an FBS head coach. Meyer has also had a losing record in close games just three times in fifteen seasons as an FBS head coach.

It does appear that winning close games is a repeatable skill, at least to some extent, and not merely a sequence of random coin flips. However, judging by the names at the top of the leaderboard, perhaps these close games should serve as a sort of indictment against these coaches. Take Meyer and Miles for example. Meyer is 29-11 in one-score games as a head coach at Florida and Ohio State while Miles was 40-20 in one-score games at LSU. I didn’t dig through all the data, but I would assume Meyer and Miles were favorites, and perhaps large ones at that, in a majority of those one-score games. Thus, they probably had more talented teams and probably should not have been in as many one-score games to begin with. Think of winning these one-score games as more relief (pulling one out of the fire) than cause for celebration (winning a close game against a better opponent). Just something to chew on.
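A quick sanity check on the aggregate 56% figure from earlier: under a fair-coin null, how far away is it? A normal approximation in Python, using the round numbers from the text (roughly 2,200 one-score games at a 56% win rate):

```python
import math

# Normal approximation to the binomial: is a 56% win rate over ~2,200
# one-score games distinguishable from a fair coin?
n = 2200                  # approximate number of one-score games (from the text)
p_hat = 0.56              # combined winning percentage (from the text)
se = math.sqrt(0.25 / n)  # standard error of a proportion under the 50/50 null
z = (p_hat - 0.5) / se
print(f"z = {z:.1f}")     # about 5.6 standard errors above chance
```

A z-score of roughly 5.6 corresponds to a vanishingly small p-value, consistent with the conclusion that this cohort's close-game edge is not pure coin-flipping, whatever mix of skill and survivorship produces it.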

Wednesday, January 11, 2017

2016 Yards Per Play: AAC

Happy 2017! Loyal readers and those redirected here via spam, the long and arduous offseason is now upon us. To help pass the time to better days, I’ll be reviewing each FBS conference season from 2016. Each of the ten FBS conferences will get two posts (one per week) reviewing the season that was in terms of Yards Per Play and Adjusted Pythagorean Record. We won't play favorites in terms of Power Five versus Group of Five, as we'll tackle each league in alphabetical order. That will move us into May, where we will still have more than three months until real games begin. Hopefully we can find something to occupy our time with this summer. We begin with the American Athletic Conference. Here are the AAC standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA), and Net Yards Per Play (Net) numbers for each AAC team. This includes conference play only, with the championship game not included. The teams are sorted by division, then by Net YPP, with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign and a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight-game conference schedule and 1.8 games over a nine-game one. Over- or under-performing by more than a game and a half in a small sample seems significant to me.
In the 2016 season, which teams in the AAC met this threshold? Here are the AAC teams sorted by performance over what would be expected from their Net YPP numbers.
For the second year in a row, Navy vastly outperformed their middling peripherals and earned a trip to the AAC Championship Game because of it. The Midshipmen were able to do this by hiding their porous defense with an efficient offense that also limited the number of possessions their opponents had. Navy’s defense faced the second-fewest plays of any AAC team, about six fewer per game than the average AAC team. Those six fewer plays, combined with Navy’s weak defense, meant opponents missed out on about 40 extra yards per game. Navy also finished 4-1 in one-score conference games, further bolstering their record. Connecticut and Cincinnati finished with the biggest negative disparity between their YPP numbers and their actual records, and those two teams also happen to be replacing their head coaches going into 2017. Neither the Huskies nor the Bearcats were particularly poor in close games (a combined 0-3 record in one-score contests), nor did they have horrendous turnover margins (a combined -4 margin in conference play). However, the Bearcats did finish with a -4 margin in non-offensive touchdowns (they allowed four and did not score any) in AAC play.
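The ~40-yard estimate for Navy above is simple arithmetic. The six fewer plays per game comes from the text; the yards-allowed-per-play figure below is an assumed round number, not one quoted in the post.

```python
# Back-of-the-envelope check of the "about 40 extra yards" claim.
fewer_plays_per_game = 6          # from the text: ~6 fewer defensive plays/game
assumed_yards_allowed_per_play = 6.7  # assumption: a weak defense's rough YPA

yards_saved_per_game = fewer_plays_per_game * assumed_yards_allowed_per_play
print(f"roughly {yards_saved_per_game:.0f} yards per game")
```

In other words, pace alone hid a defense's worth of yardage, which is how an efficient but ball-hogging offense can paper over weak peripherals.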

Before bowl season, and all the randomness inherent in a long break, including coaching changes and other assorted distractions, the AAC enjoyed a fine season. They were easily the best Group of Five conference from top to bottom and boasted their fair share of Power Five scalps. As is the nature of the game at the mid-major level, though, several AAC schools had their head coaches scooped up by Power Five programs. In fact, three AAC coaches left for Power Five jobs. Matt Rhule left the champion Temple Owls to rebuild the mess at Baylor, Willie Taggart left his Gulf Coast offense at South Florida to take over Oregon, and the crown jewel of available coaches, ‘Urban Saban’ Tom Herman, left Houston for the job at Texas (more on him in the Big 12 piece in a few weeks). If losing three coaches to Power Five jobs seems like a lot, well, it is. I looked at all coaching changes beginning with those occurring after the 2000 regular season and determined the number of times coaches have gone from Group of Five/non-BCS jobs to Power Five/BCS jobs. The record for the number of coaches to leave a conference in one offseason is three, which has occurred three times, most recently with the AAC. Here is each instance.
The AAC is tied with the MAC in 2010 (which means following the 2010 regular season) and the WAC in 2012. However, if we look at percentages, the AAC has not been the most impressive. In 2012, the WAC only had seven teams, so nearly half their coaches were poached by the big boys!

Of course, just because you get poached does not necessarily mean you will last a long time in the Power Five grinder. Of the MAC coaches from 2010, not a single one is still at the school that hired them away. Mike Haywood, from champion Miami, did not coach a game for Pittsburgh after he was fired for a domestic disturbance. Jerry Kill, from Northern Illinois, had success at Minnesota, but was forced to resign in 2015 due to health reasons. And Al Golden, from Temple, was fired in his fifth season at Miami after failing to return The U to its former glory. As for the WAC fraternity, Sonny Dykes from Louisiana Tech was just fired at Cal after one bowl appearance in four seasons. Gary Andersen from Utah State voluntarily left Wisconsin after just two years to attempt to rebuild Oregon State. Finally, Mike MacIntyre from San Jose State endured three very bad years at Colorado before leading them to a division title in 2016. Check back in six years and there is a good chance that neither Herman, Rhule, nor Taggart is still at his respective school.