Thursday, February 22, 2018

2017 Yards Per Play: Big 12

Three conferences down, seven to go. Next up is the Big 12. Here are the 2017 Big 12 standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Big 12 team. This includes conference play only, championship game excluded. The teams are sorted by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. 
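For the curious, the mechanics of that check are simple enough to sketch in a few lines of Python. The numbers below are made up for illustration (they are not the actual 2017 Big 12 figures); the idea is just fitting the regression and flagging anyone more than .200 away from their predicted record:

```python
import numpy as np

# Hypothetical Net YPP and conference winning percentages (illustration only)
teams = ["Team A", "Team B", "Team C", "Team D", "Team E"]
net_ypp = np.array([1.2, 0.5, 0.1, -0.4, -1.4])
win_pct = np.array([0.889, 0.667, 0.333, 0.556, 0.222])

# Fit a simple linear regression: predicted win pct = m * Net YPP + b
m, b = np.polyfit(net_ypp, win_pct, 1)
predicted = m * net_ypp + b

# Flag teams whose actual record differs from the prediction by more than .200
for team, actual, pred in zip(teams, win_pct, predicted):
    diff = actual - pred
    if abs(diff) > 0.200:
        label = "over-performed" if diff > 0 else "under-performed"
        print(f"{team}: {label} by {abs(diff):.3f}")
```

With these invented numbers, only "Team C" clears the threshold; with the real data, of course, you would plug in each team's actual Net YPP and record.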
In the 2017 season, which teams in the Big 12 met this threshold? Here are Big 12 teams sorted by performance over what would be expected from their Net YPP numbers.
No team significantly over or under-performed their YPP numbers in 2017. Oh well. However, that does give us more bandwidth to discuss other facets of the Big 12.

The Big 12 began play in 1996 and continued along in flyover country unfettered until 2010. By that point, the conference was in turmoil. Concerned that the Big 12 was looking out for Texas at the expense of the other members, several teams left the conference. Colorado, Missouri, Nebraska, and Texas A&M all left prior to the 2012 season and the Big 12 replaced them with TCU and West Virginia. The current ten-team version of the Big 12 has existed for six seasons and since a reasonable amount of time has passed, I thought it would be interesting to look at how the member teams have performed. Here are the cumulative Big 12 conference standings since 2012.
Even casual fans could have predicted the Sooners would have the best conference record in that span, but some may have been surprised that their in-state rivals have the second best record, and perhaps even more surprised that five teams have a better conference record than Texas. In addition, the depths to which the Jayhawks have fallen are stark when laid out next to their conference mates. Three conference wins in six seasons is bordering on relegation territory. But a simple aggregation is not all I wanted to examine here. You could easily pull those numbers and record them yourself. No, I want to dig a little deeper. And the shovel we are going to use is the NFL draft.

To win games in any sport, you need good players. Recruiting rankings do a great job in the aggregate of predicting the best teams in college football. Of course, some teams tend to over or under-perform their respective recruiting rankings. Other authors on the internet have examined these teams and you may want to give them a read (after this one of course). However, there is another proxy for talent, but this one comes on the back end, instead of the front end. If you read the previous paragraph, you obviously know I am referring to the NFL draft. Here are the total numbers of draft picks for each current Big 12 team since 2013 (the first draft after the 2012 season).
Once again, Oklahoma is the king of the hill in the Big 12. The Sooners have had the most players drafted since 2013. However, while the Sooners have had 25 total players drafted since 2013, they have only had one player selected in the first round. Meanwhile, West Virginia has had nine fewer players selected than Oklahoma, but they have had three times as many first round picks. We need a way to quantify how valuable each of these picks is. In the interest of simplicity, I came up with this system: since there are seven rounds in the NFL draft, a first round pick is worth seven points, a second round pick is worth six points, and so on, until the chaff in the seventh round are worth just a single point. Obviously, this system is not perfect, as adjacent picks can be worth different amounts. The last pick in the first round would be worth seven points while the first pick in the second round would be worth six points. Another issue is that the first pick in the draft is worth the same as the last pick in the first round. Of course, we are not attempting to create a perfect value system, but rather a proxy for estimating how much talent each team had on campus. With those caveats out of the way, here are the Big 12 teams ranked by Draft Points since the 2013 NFL draft.
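In case the scoring system is unclear, here it is in code form. The sample rounds are hypothetical, not any particular team's actual draft history:

```python
def draft_points(rounds):
    """Score a list of draft rounds: round 1 = 7 points, ..., round 7 = 1 point."""
    return sum(8 - r for r in rounds)

# A hypothetical team with one 1st-rounder, two 3rd-rounders, and a 7th-rounder
example = [1, 3, 3, 7]
print(draft_points(example))  # 7 + 5 + 5 + 1 = 18
```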
So, the next step is obvious, right? Let’s run a regression analysis and see how well this Draft Points metric predicts conference record. One step ahead of you. Conference record is somewhat positively correlated with Draft Points, with an R squared value of .477, meaning roughly 47.7% of the variation in conference record is explained by Draft Points. So now, let’s use Draft Points to predict conference record and see which teams have over or under-performed relative to the talent on their roster according to NFL evaluators.
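That regression step can be sketched in a few lines as well. The Draft Points totals and records below are invented for illustration, not pulled from the actual tables:

```python
import numpy as np

# Hypothetical Draft Points totals and conference winning percentages
draft_pts = np.array([90, 70, 55, 40, 20])
win_pct = np.array([0.80, 0.55, 0.60, 0.45, 0.15])

# Fit the regression, then compute R squared from the residuals
m, b = np.polyfit(draft_pts, win_pct, 1)
predicted = m * draft_pts + b
ss_res = np.sum((win_pct - predicted) ** 2)
ss_tot = np.sum((win_pct - win_pct.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.3f}")
```

The gap between each team's actual winning percentage and `predicted` is exactly the over/under-performance figure discussed here.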
Oklahoma State and to a lesser extent Kansas State have won more games than we would expect from their talent. If I were into trite platitudes, I might say: ‘Their whole is greater than the sum of their parts’. On the other end of the spectrum, Kansas had more players drafted than conference wins. Ouch. Of course, Kansas is not the only team with reason to be ashamed. West Virginia has produced the second most Draft Points since joining the conference, but they have managed just a .500 record in league play.

This analysis has some shortcomings. Foremost, the 2018 draft has not happened yet, and several Oklahoma State players are likely to hear their names called, so this is a little biased in favor of Mike Gundy. Secondly, some solid contributors or even great players may find their skills do not translate to the next level. Under this metric, their coaches and teams receive no credit for finding and developing an obvious college talent who just happens to lack the requisite skills to play professionally. Thirdly, NFL talent evaluators are fallible. Maybe that first round pick is not really any good. Similarly, maybe that undrafted player has the necessary skills to contribute or even be a star. Finally, as I mentioned previously, Draft Points are a flawed proxy for talent. Is a first round pick worth seven times as much as a seventh round pick? I don’t have any idea. I was shooting for speed and comfort instead of a long-lasting metric. I’m sure there are other issues with the analysis, but I’ll leave them for you to critique. Still, I think this exercise was valuable. The two coaches who exceeded their expected record based on Draft Points ('Solomon' Gundy and Bill Snyder) are the best in the respective histories of their schools while the coach who most failed to meet expectations (non-Kansas edition) has been on the hot seat since his team joined the Big 12.

Thursday, February 15, 2018

2017 Adjusted Pythagorean Record: Big 10

Last week, we looked at how Big 10 teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.
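If you want to play along at home, here is a rough sketch of a Pythagorean-style conversion. The exact formula and exponent APR uses are spelled out in the linked post; the 2.37 exponent below is just a commonly cited football value, so treat this as an approximation rather than the real APR calculation:

```python
def pythagorean_pct(td_for, td_against, exponent=2.37):
    """Convert a touchdown ratio into an expected winning percentage."""
    return td_for ** exponent / (td_for ** exponent + td_against ** exponent)

def expected_wins(td_for, td_against, games):
    """Scale the expected winning percentage to a conference schedule."""
    return pythagorean_pct(td_for, td_against) * games

# A team scoring 30 offensive TDs and allowing 20 over 9 conference games
print(round(expected_wins(30, 20, 9), 2))  # about 6.5 expected wins
```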

Once again, here are the 2017 Big 10 standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, Big 10 teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as a line of demarcation to determine if teams drastically over or under perform their APR. By that standard Michigan State and Rutgers exceeded their APR while Iowa fell short. Last week, I touched on why Michigan State was able to win seven of their nine conference games despite middling peripherals, but I’ll once again highlight their 5-1 record in one-score conference games. Rutgers was 2-0 in one-score conference games, but the reason their APR is so low lies in the other seven conference games, in particular, their six losses. Rutgers' three conference wins came by a combined 20 points. Five of their six conference losses came by more than that. Their ten point loss to Nebraska represented their best performance in a losing effort. Their average margin of defeat in the other five games was 36 points! Consistently getting blown out will tamp down your APR. Finally, Iowa's APR score is due to a confluence of factors. Their inexplicable performance against Ohio State somewhat inflates their season numbers, but the Hawkeyes were also a slightly unlucky 1-3 in one-score conference games. Change a few plays here or there, especially in the Penn State or Northwestern games, and Iowa’s record is much closer to their APR.

Once you become an adult, it’s no secret that life comes at you fast. One minute the president is an erudite, well-spoken intellectual capable of making you believe anything is possible. The next minute he’s a racist septuagenarian. But I digress. I mention that because Urban Meyer has now coached more games at Ohio State than he did at Florida. I know. I’ll give you a second to collect yourself after that bombshell. What’s even more amazing is how well his Ohio State teams have performed in the clutch. The following table lists each Big 10 team’s record in one-score conference games since Meyer’s arrival in 2012.
The table is sorted by games above .500 instead of winning percentage to give you an idea of just how much Ohio State has owned close games. Ohio State’s margin of +13 is greater than the cumulative margin of the other five teams that have managed a winning record in close games (+12). In fact, if you think hard enough, you can probably recall the two instances in which Ohio State came out on the short end of a close game (this does not include Big 10 Championship Games, where Ohio State is 1-1 in close games). If close games are indeed a 50/50 proposition, then the chances of one team winning 15 of 17 is roughly 1 in 964.
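That 1 in 964 figure is just the binomial probability of winning exactly 15 of 17 coin flips, which you can verify in a couple of lines:

```python
from math import comb

# P(exactly 15 wins in 17 games) assuming each close game is a 50/50 coin flip
p = comb(17, 15) / 2 ** 17
print(round(1 / p))  # roughly 1 in 964
```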

As a comparison, how did Meyer do in close games at Florida? I’m glad you asked.
His Florida teams were not nearly as dominant (Les Miles, Tommy Tuberville, and Gene Chizik must have all had a rabbit’s foot), but the Gators still won more than they lost over a decent sample size. Meyer’s teams win close games at a rate that suggests it is more than just random chance. I wouldn’t expect his teams to continue winning close games at a nearly 90% clip, but the Buckeyes should continue to perform well in close games for the duration of Meyer’s tenure.

Thursday, February 08, 2018

2017 Yards Per Play: Big 10

We head to the heartland of America this week and examine the Big 10. Here are the Big 10 standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Big 10 team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. 
In the 2017 season, which teams in the Big 10 met this threshold? Here are Big 10 teams sorted by performance over what would be expected from their Net YPP numbers.
The Big 10 saw three teams finish with records vastly different from their expected records. Michigan State and Northwestern over-performed relative to their expected records while Illinois under-performed. The Spartans and Wildcats rebounded from somewhat (Northwestern) and very (Michigan State) disappointing 2016 campaigns by coming through in the clutch. Michigan State and Northwestern combined to go 8-1 in one-score conference games with the Spartans (5-1) consistently edging their opponents. In fact, the lone close loss between these two came in their October matchup when the Wildcats upset the Spartans in triple overtime. Outside of that game, the teams combined for a perfect 7-0 record in one-score conference games. The Spartans and Wildcats also finished first and second in in-conference turnover margin (+8 for Michigan State and +7 for Northwestern). Meanwhile, Illinois cannot blame close games as only one of their nine conference defeats came by fewer than ten points. The Illini did finish last in in-conference turnover margin (-7), and as we will soon see, they are no stranger to under-performing their peripherals.

Illinois finished winless in Big 10 play in 2017 and just 2-10 overall. This was obviously a huge disappointment to Illini fans and anyone foolish enough to bet on them winning more than 3.5 games. Generally though, teams with the Illini’s YPP profile tend to win about a quarter of their conference games. Based on those YPP numbers, the Illini undershot their conference winning percentage by about .275. If you’ll recall, the Illini failed to hit their expected conference winning percentage last season as well. They didn’t miss the mark by quite as much, but their under-performance by about .195 was among the worst in the Big 10. Cumulatively, the Illini have undershot their expected conference winning percentage by .470 in Lovie Smith’s two seasons (roughly four wins). How (poorly) does this stack up in recent history? I have YPP data going back to 2005, so I decided to look at all BCS/Power 5 teams that have undershot their expected conference winning percentage by at least .400 over two consecutive seasons. The results are summarized below. Two teams met the threshold this season, with Arizona joining Illinois.
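For clarity, the cumulative figure is just the sum of the two seasonal gaps, translated into games over the Big 10's nine-game conference schedule:

```python
# Gaps between expected and actual conference winning percentage
gap_2016 = 0.195
gap_2017 = 0.275
cumulative = gap_2016 + gap_2017
print(round(cumulative, 3))      # 0.47
print(round(cumulative * 9, 1))  # ~4.2 games over a nine-game schedule
```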
Since 2005, twenty teams have under-performed by at least .400 over two consecutive seasons and Illinois has accounted for a quarter of those instances! Arizona and South Florida are the only other schools with multiple appearances (remember South Florida was in a BCS conference during this time period) and both are here primarily on the strength of one horrendous season bookended by other slightly below average seasons. Meanwhile, the Illini have done this multiple times under four different head coaches!

Alright, so the Illini have a penchant for under-performing, but what can we expect going forward? Are the Illini due for an incredible rebound? Excluding the current Illinois and Arizona teams, I looked at how the other 18 teams performed in the year immediately following their two consecutive years of under-performance. I compared this to the average conference record the team put together in the under-performing years and calculated the difference. The results are summarized below chronologically.
Illinois fans can take heart from this exercise. Thirteen teams improved their conference win total from the average of the previous two seasons, three saw no change, and only two declined. The average improvement was about one and a third games. Illinois has averaged one conference win over the previous two seasons (2-7 in 2016 and 0-9 in 2017), so I would set expectations at around two and a half conference wins in 2018. I don’t know how the Lovie Smith hire will work out in the long run (it seemed pretty unimaginative at the time), but if Smith doesn’t manage at least three conference wins in 2018, I would be inclined to fire him and start over.

Thursday, February 01, 2018

2017 Adjusted Pythagorean Record: ACC

Last week, we looked at how ACC teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.

Once again, here are the 2017 ACC standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, ACC teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as a line of demarcation to determine if teams drastically over or under perform their APR. By that standard, Georgia Tech was the lone ACC team that saw their record differ significantly from their APR. The Yellow Jackets were a little unlucky in one-score games, going 1-2 in such contests. They also posted a poor, but hardly abysmal, in-conference turnover margin of -5. Perhaps the biggest contributor to their bowl-less season was the vagaries of opposing field goal kickers. In ACC play, opponents attempted 17 field goals against Georgia Tech and made 14. An 82% conversion rate is not otherworldly, but in their three close conference games, Georgia Tech opponents made all eight of their attempts! Those made kicks, especially in the close losses to Miami and Virginia, kept the Yellow Jackets home for the holidays for the second time in three seasons.

2017 was a special season for the Clemson Tigers. Despite key losses from their title team, the defending national champions qualified for the College Football Playoff for the third consecutive season and in the process won their third consecutive ACC title. While the playoff is a relatively recent innovation, the ACC has been around for over 60 years, and in that time period, there have only been seven instances of a team winning three consecutive outright ACC titles.
Obviously, the qualifiers in the previous sentence are very important in framing this accomplishment. Florida State won nine consecutive ACC titles from their first year in the conference (1992) through 2000. However, a pair of shared titles in the middle of their run (1995 with Virginia and 1998 with Georgia Tech) prevented them from winning more than three outright titles in a row. Plus, with the addition of the ACC Championship Game in 2005, the conference now crowns only an outright champion, whereas shared titles were possible in its first 50+ years. Despite those qualifiers, this is still quite an accomplishment, and in 2018 the Tigers will take aim at becoming the first ACC team ever to win four consecutive outright conference titles.