Wednesday, May 24, 2017

2016 Adjusted Pythagorean Record: Sun Belt

Last week, we looked at how Sun Belt teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually. Once again, here are the 2016 Sun Belt standings.
And here are the APR standings with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only.
Finally, Sun Belt teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as a line of demarcation to determine whether teams drastically over- or under-performed their APR. By that standard, no team saw their record differ significantly from their APR. Idaho and Georgia State were the biggest over- and under-achievers respectively, but we already touched on some reasons for that last week.
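The touchdown-ratio-to-winning-percentage conversion behind APR follows the familiar Pythagorean expectation. Here is a minimal sketch; note that the exponent this blog actually uses is not stated in the post, so the classic football value of 2.37 is assumed purely for illustration, as are the touchdown totals:

```python
def apr_expected_wins(off_tds, tds_allowed, games, exponent=2.37):
    """Convert the ratio of offensive TDs scored to TDs allowed into an
    expected win total, Pythagorean-style. The exponent is an assumption."""
    win_pct = off_tds ** exponent / (off_tds ** exponent + tds_allowed ** exponent)
    return win_pct * games

# Hypothetical team: 30 offensive TDs, 20 allowed, 8 conference games
print(round(apr_expected_wins(30, 20, 8), 2))
```

A team that scores and allows touchdowns at the same rate comes out at exactly half its games, regardless of the exponent chosen.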

Georgia Southern enjoyed a successful, perhaps the most successful, transition from FCS to FBS. In their first two seasons in the new classification, the Eagles went 18-7, won an outright conference title, won a bowl game, and played three Power 5 schools to within one score on the road. Despite that success, head coach Willie Fritz made what amounts to a lateral move (or perhaps even a downgrade) to Tulane. In his stead, the Eagles made an interesting hire. They tabbed Tyson Summers, a Georgia native who had spent just one previous season in Statesboro as an assistant. That season (2006) happened to be the worst in school history. Summers also came from the defensive side of the ball, having most recently been the defensive coordinator at UCF and Colorado State before coming to Georgia Southern. Summers did keep the run-first option attack the team had utilized to great success under his predecessor, but the performance left a lot to be desired. After averaging over six yards per carry in both 2014 and 2015 and scoring 109 combined touchdowns on the ground, the Eagles averaged under four and a half yards per carry and scored just 24 rushing touchdowns in 2016. This decline in production contributed to a 5-7 record in Summers' first season, in which the Eagles had to upset Troy in their final game just to get to five wins. Summers enters 2017 on the proverbial hot seat. So what are his chances of surviving and getting the Eagles back to a bowl? To answer that question, I looked at all first-year coaches since 2006 who oversaw a decline of at least three regular season wins and an increase of at least three regular season losses. That query produced a sample of 38 coaches. How did those coaches perform the following season? The results are summarized below.
Overall, the teams improved by an average of about two regular season wins the next year. More than two-thirds of the teams improved the next season, and only about ten percent saw a further erosion of their record. More than a quarter of the teams in the sample improved by at least three games, so there is hope for a large positive shift in fortunes for concerned Georgia Southern fans. However, I don't want to give those fans false hope. There is a decent, perhaps good, chance that Summers is not the right man for the job. Of that sample of 38 coaches, 29 are no longer with their teams. Of those 29, only five had a winning record at the end of their tenure with the team. If we include the coaches who are still active, 9 of 38 have winning records with their team. There are some success stories to point to like PJ Fleck, Skip Holtz, and Butch Jones, but while the odds of Georgia Southern improving in 2017 are good, the odds of them retaining Summers for the long haul may not be.

Thanks for reading my 2016 YPP and APR posts. It's been almost 20 weeks since Clemson upset Alabama, but we still have about 14 more weeks before college football season kicks off in earnest. Over the summer, this blog will add new content, but it won't be as frequent as the weekly updates you have (hopefully) been enjoying. I have some studies in the queue on polls and their accuracy and biases, so if that interests you, check back every now and then. I'll also be making another Vegas trip and documenting my college football investments. Once the season starts, I'll continue with my weekly picks column and perhaps add more original posts here and there as the spirit so moves me. Have a great summer. I know we can get through the rest of this long offseason together.

Wednesday, May 17, 2017

2016 Yards Per Play: Sun Belt

Nine conferences down, and now we move to our last FBS conference! Where did the time go? This week, we examine the Sun Belt. Here are the Sun Belt standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Sun Belt team. This includes conference play only. The teams are sorted by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. 
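The regression just described can be sketched in a few lines. The data points below are invented purely for illustration; the real analysis is fit on every conference season since 2005:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical Net YPP values and conference winning percentages for five teams
net_ypp = [-1.2, -0.5, 0.0, 0.4, 1.1]
win_pct = [0.250, 0.375, 0.500, 0.625, 0.750]

slope, intercept = fit_line(net_ypp, win_pct)
# A team 'significantly' over- or under-performed if actual minus
# predicted winning percentage exceeds .200 in absolute value
outliers = [i for i, (x, y) in enumerate(zip(net_ypp, win_pct))
            if abs(y - (slope * x + intercept)) > 0.200]
```

With made-up numbers this tightly aligned, no team clears the .200 bar; with real data, a handful of teams per conference per year typically do.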
In the 2016 season, which teams in the Sun Belt met this threshold? Here are Sun Belt teams sorted by performance over what would be expected from their Net YPP numbers.
Idaho, in their penultimate season before moving down to FCS, over-performed relative to their expected record, while FBS newcomer Georgia State under-performed. Idaho finished with the best in-conference turnover margin in the Sun Belt at +11 and was a solid 2-0 in one-score conference games. That little bit of good fortune coupled with a powerful passing attack helped the Vandals win six conference games for the first time in school history. Georgia State can also blame turnovers for their poor record. The Panthers posted an in-conference turnover margin of -9 (which was not quite bad enough for last place) and were also shoddy in the kicking game, making just 7 of 13 field goals in Sun Belt play.

If you were looking at the disparity between Sun Belt teams’ records and their expected YPP records and thought: ‘Hmmm. Georgia State sure did miss their expected record by a wide margin. I wonder if it was the widest margin ever.’ Well, I am here to answer that question. Here are the top (or bottom) ten mid-major teams since 2005 ranked by the largest disparity between their actual record and their expected record based on YPP.
Some notes on the table:

  • In an interesting statistical coincidence, 2016 produced the two teams that missed their expected record by the widest margins. Utah State and Georgia State, along with SMU in 2007, were the only teams to miss their expected record by at least .400 (a little more than three wins in an eight-game conference season). 
  • Georgia State actually appears on this list twice, which is perhaps one reason why Trent Miles is no longer coaching the team. 
  • Four of the ten teams on this list (Georgia State, SMU, New Mexico, and FIU) ended up losing their coach either via firing (sometimes at midseason) or resignation. 
  • Chris Petersen’s best Boise State team probably does not belong on this list, but the Broncos were so dominant (+3.68 YPP in a pretty good WAC) that the regression analysis believed they should have won more than all their games.

Wednesday, May 10, 2017

2016 Adjusted Pythagorean Record: SEC

Last week, we looked at how SEC teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually. Once again, here are the 2016 SEC standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, SEC teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as a line of demarcation to determine whether teams drastically over- or under-performed their APR. By that standard, LSU significantly under-performed relative to their APR. There is not a lot of mystery as to why. LSU won five conference games. Four of those wins came by an average of 24 points. The other was a narrow victory over Mississippi State. Their three losses came by a combined 21 points, with the 10-0 loss to Alabama their largest margin of defeat.

Two weeks ago in the APR write up of the Pac-12, I looked at how well Pac-12 coaches over- or under-performed relative to their APR and expected YPP records. In that post, I only looked at the Pac-12 since it expanded to twelve teams (2011-2016). Taking inspiration from Senator Blutarsky, who noted Jim McElwain significantly exceeded his expected YPP record in his first two seasons at Florida, I decided to do the same with the SEC. However, since it just means more in the SEC, I decided to go back as far as I have APR and YPP data - 2005. To qualify for inclusion on the leaderboards, coaches had to have at least three full SEC seasons under their belts in the twelve-year period from 2005 through 2016. These criteria produced 27 coaches ranging from greats like Saban and Meyer to forgotten men like Croom and Dooley. Finally, before we get to the actual tables, just a housekeeping note. For coaches who did not finish a season (see Les Miles in 2016), I used the full season difference and credited them with the percentage they coached. For example, LSU was 1.88 games worse than expected in APR in 2016. Miles coached two (of eight) conference games so he receives 'credit' for negative 0.47 wins (-1.88*.25). The rest go to Ed Orgeron who receives 'credit' for negative 1.41 wins (-1.88*.75). Without further ado, here are SEC coaches ranked by the average number of wins per season by which they exceeded their APR.
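That proration rule is simple enough to state as code. Here is a sketch using the 2016 LSU example from the post (the function and dictionary names are my own):

```python
def prorate(season_diff, games_by_coach, season_games):
    """Split a season's APR over/under-performance among coaches
    in proportion to the conference games each one coached."""
    return {coach: season_diff * games / season_games
            for coach, games in games_by_coach.items()}

# 2016 LSU: -1.88 APR wins, Miles coached 2 of 8 SEC games, Orgeron 6 of 8
credit = prorate(-1.88, {"Miles": 2, "Orgeron": 6}, 8)
# Miles: -1.88 * 0.25 = -0.47; Orgeron: -1.88 * 0.75 = -1.41
```

By construction, the prorated credits always sum back to the full-season difference.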
Chizik only coached for four seasons, so sample size is obviously an issue here. His national championship team was almost three wins better than their APR and his next team finished .500 in conference play despite an APR of only about two wins. His other two teams were pretty neutral with regard to their APR, but that sixteen-game run puts him in front. Pinkel and Tuberville had similarly short tenures (remember this only includes Tuberville's stint on The Plains from 2005-2008), but fielded several teams that were better than their APR. For coaches with a significant tenure, Miles and Richt exceeded their expected records by about two-fifths of a win on average. At the other end of the spectrum, you will find a plethora of Vanderbilt coaches consistently under-performing their peripherals. James Franklin, Bobby Johnson, and especially Derek Mason all occupy space in the bottom quartile. Houston Nutt coached at two schools during this time period, and while his three Arkansas teams under-performed by about half a win per season, his four Ole Miss squads were even worse as they averaged more than a full win less than their APR!

Now let's look at Yards Per Play. Here are the SEC coaches ranked by the average amount by which they exceeded their expected record based on YPP. Keep in mind that while APR was based on wins (i.e. +.500 equals half a win greater than expected), YPP is based on winning percentage. Thus, Tommy Tuberville's +.135 translates to a little more than one win (.135*8 = 1.08) per conference season.
Once again, we see some familiar faces at the top. Chizik and Tuberville were first and third respectively in APR and are second and first in YPP. Miles is once again the longest tenured coach in the top five while Richt is closer to average here. The bottom of the list also looks very similar with Ed Orgeron bringing up the rear. If you look back at the APR numbers, he was also fifth from the bottom there. I was a big fan of Orgeron's hiring last season, but these tables give me pause. On the one hand, Orgeron's teams have under-performed in each of his nearly four full seasons in charge. In addition, while the 2016 team was not entirely his, keep in mind that LSU was a consistent over-achiever, at least relative to their peripheral stats, under Miles. On the other hand, Orgeron went a decade between head coaching jobs, appears to have matured in the interim, and guided another team to a solid finish after a mid-season firing. Perhaps we shouldn't judge 2016 too harshly given all the turmoil surrounding the program. However, if 2017 plays out like 2016 did, with LSU blowing out five conference opponents while losing a semi-competitive game to Alabama and two other close games in the SEC, not only will we have further evidence of a trend, but Orgeron will also find himself squarely on the hot seat.

Wednesday, May 03, 2017

2016 Yards Per Play: SEC

Hard to believe, but we only have two more conferences to review. This week, we head south to the SEC. Here are the SEC standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each SEC team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. 
In the 2016 season, which teams in the SEC met this threshold? Here are SEC teams sorted by performance over what would be expected from their Net YPP numbers.
Arkansas was the lone SEC team to significantly over-perform relative to their expected record. Arkansas was not particularly lucky in terms of one-score games (1-1) or turnover margin (-6) in SEC play. No, the Hogs had the profile of a one win conference team thanks to an abysmal defense. The Hogs allowed nearly eight yards per play to SEC opponents and really had only one good defensive showing. They held a limited Florida offense to ten points and under five yards per play. If we remove that game, the Hogs allowed over 8.2 yards per play to their other seven conference opponents. Two teams averaged north of ten yards per play against Arkansas and six conference opponents averaged more than seven yards per play against them.

A few years ago, I penned this Pulitzer-caliber post about Les Miles and his uncanny inability to cover the spread (or more accurately his team's inability to cover the spread). Since we are looking at the SEC this week and since Miles' SEC career is over, I decided to reexamine his performance against the spread relative to his conference contemporaries. Let's get degenerate.

Miles coached at LSU eleven full seasons and parts of a twelfth beginning in 2005. In that span, 22 other coaches have spent at least four seasons as SEC head coaches. The following table lists those 23 coaches ranked by their winning percentage against the spread (ATS) in conference games (championship and bowl games excluded). I cheated a little and included Ed Orgeron, even though he does not quite have four full seasons under his belt, since he did succeed Miles at LSU.
A few observations.

  • Despite being forced to pay a premium as the most recognized team in college football, backing Nick Saban and Alabama has been a winning proposition for gamblers. Since coming to Tuscaloosa, Saban has covered over 58% of the time against SEC opponents. 
  • Look at the three Auburn coaches since 2005 with the exact same ATS records. Eerie. 
  • While he never quite had the reputation in gambling circles of Miles, Mark Richt didn't exactly inspire a lot of ATS confidence for Georgia backers. 
  • Miles had a winning ATS conference record in just one season, but it was quite a doozy. His 2011 team went 7-1 ATS. If we remove that outlier year, his ATS conference record drops to 29-49-4 (a .372 winning percentage). 
  • While Miles does not quite bring up the rear, his tenure was more than double the length of those of the two men with a worse conference ATS winning percentage. 
  • And speaking of the guy in last place, he may join Miles in the unemployment line soon if the Aggies continue to struggle relative to their expectations.
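One housekeeping note on the percentages in this table: ATS records conventionally treat pushes as no-action, so the winning percentage divides wins by decided games only. A quick sketch using the Miles figure quoted above:

```python
def ats_pct(wins, losses, pushes=0):
    """ATS winning percentage with pushes excluded from the denominator."""
    return wins / (wins + losses)

# Miles' conference ATS record with his outlier 2011 season removed
print(round(ats_pct(29, 49, 4), 3))  # the post quotes .372
```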
So we know Miles was not good at covering the spread, but what if we break things down further? How did his teams perform in different roles? The following table lists LSU's performance ATS versus SEC foes under Miles in the role of favorite, double-digit favorite, and underdog.
Miles was a little better as a favorite, but you would have still made money betting against him in both roles. If nothing else, his teams were consistent as there was not a great deal of difference in their ATS numbers as a standard favorite and a double-digit favorite.

Let's look at one more angle. How did his teams perform ATS at home and on the road against SEC opponents?
So much for that Death Valley aura. On the road, his teams were basically a coin flip to cover, but the Tigers were horrible at home under Miles, covering just over 36% of the time. In fact, in the first four years of his tenure at LSU, his teams covered just once in home conference games!

Miles entertained college football fans for over a decade in Baton Rouge. He brought us a two-loss national champion, one of the best teams to not win the national title, the Tennessee Waltz Game, an attempted timeout on a change of possession (this was one season before the ill-fated rule that mandated the game clock start when the play clock started between possessions went into effect, so maybe he was just ahead of the curve), a lot of grass eating, and of course, the final play (thus far) of his coaching career. I could never tell if he was a college football genius or the football equivalent of Homer Simpson, living a charmed life despite being overwhelmingly incompetent. The truth was probably somewhere in the middle, but regardless, college football won't be as fun without him around.

Wednesday, April 26, 2017

2016 Adjusted Pythagorean Record: Pac-12

Last week, we looked at how Pac-12 teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.

Once again, here are the 2016 Pac-12 standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, Pac-12 teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as a line of demarcation to determine whether teams drastically over- or under-performed their APR. By that standard, Stanford exceeded their expected record and UCLA under-performed relative to theirs. Last week, we went over some reasons for UCLA's poor record, so let's focus on Stanford. The Cardinal were not particularly lucky in one-score games, as they played in just one and lost it to Colorado. Of course, while the final score may not say so, their game against UCLA was for all intents and purposes a one-score affair. The Cardinal scored late to take the lead and then scored a defensive touchdown on the final play of the game for a misleading final score. It's just too bad Brent Musburger was not calling the game. What about turnover margin? No, Stanford was middling in that regard, posting a -1 in-conference turnover margin. So how did Stanford manage to drastically exceed their expected record? The answer lies in their two games with the Washington schools. Over an eight-day stretch in late September and early October, Stanford lost to Washington and Washington State by a combined margin of 64 points. These two blowouts dragged down their scoring margin and tarnished their overall profile despite the Cardinal winning six of their other seven conference games.

I have been blogging about Yards Per Play and APR for FBS conferences for nearly two years now and I have data going back to 2005. With that in mind, I wanted to see if some coaches had any particular knack for exceeding or failing to exceed their expected record based on YPP and APR. As many coaches have come and gone in the twelve seasons I have data, I decided to just look at Pac-12 teams (for obvious reasons) and just look at teams since 2011, when the conference last expanded. I arbitrarily decided to only look at coaches who coached for at least three seasons so that no one-year wonders are unfairly represented. Obviously, using three years and beginning in 2011 means we miss out on quite a few notable Pac-10/12 coaches including Pete Carroll, Chip Kelly, and Jeff Tedford among many others. Perhaps a future post will be more inclusive. Anyway, here are all Pac-12 coaches since 2011 who have coached at least three seasons in the conference ranked by the average number of wins per season they exceed their APR.
I placed an asterisk by Steve Sarkisian, as I elected not to count any of his games from 2015, when he was relieved of his duties after three conference games. APR and YPP numbers are done at the season level, so it made sense to ignore the three games he coached in 2015. Stanford had the largest positive difference of any Pac-12 team between their APR and actual record in 2016, and that is nothing new. Since 2011, David Shaw leads all Pac-12 coaches in average number of wins by which he exceeds his APR. Todd Graham is a distant second, with the recently fired Mark Helfrich third, and Rich Rodriguez fourth. On the other end of the spectrum, Jim Mora, Sonny Dykes, Chris Petersen, and Mike MacIntyre have averaged more than half a win fewer than expected based on their APR. Some of this can be blamed on small sample size, as Washington was nearly two and a half games below their APR last season, which significantly skews the data considering Petersen has only been in Seattle for three years. Similarly, UCLA was more than two and a half games worse than their APR in 2016, which negatively impacts Jim Mora's overall numbers.

Let's now turn our attention to Yards Per Play. Using the same criteria previously outlined, here are the Pac-12 coaches sorted by the average amount they exceed their expected record based on YPP.

Keep in mind while APR was based on games (i.e. David Shaw's +.603 means his teams exceeded their APR by six tenths of a game on average), the expected record based on YPP is based on winning percentage. Thus, Todd Graham's +.142 translates to about 1.3 games over a nine game conference season (.142*9). Once again, Graham and Shaw are at the top of the list. And again, Mora, Dykes, and Petersen are near the bottom, while MacIntyre has been more average by this measure. As I mentioned earlier, sample size is an issue when looking at these numbers.

So what do these numbers mean? Is Chris Petersen overrated as a coach because his teams appear to under-perform their records based on things like YPP and APR? Maybe there is something systemic to his teams that makes them under-perform. Or maybe this is all 'noise' and the result of one bad season in a sample size of three. I would think Washington fans would be happy with his teams under-performing in these metrics as long as Washington continues to contend for Pac-12 titles and College Football Playoff appearances. On the other hand, David Shaw and Todd Graham do seem to always have their teams winning more games than we might otherwise suspect based on their per play averages and the number of touchdowns they score and allow. Many factors influence which teams win football games, but it appears, at least on the surface, that David Shaw and Todd Graham have found something on the margins that allows their teams to win more than we might otherwise expect.

Wednesday, April 19, 2017

2016 Yards Per Play: Pac-12

Seven conferences down, three to go. This week, we head even further west and examine the Pac-12. Here are the Pac-12 standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Pac-12 team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. 
In the 2016 season, which teams in the Pac-12 met this threshold? Here are Pac-12 teams sorted by performance over what would be expected from their Net YPP numbers.
Four Pac-12 teams saw their expected record differ significantly from their actual record. Colorado and Washington State vastly over-performed relative to their expected record while Arizona and UCLA under-performed. Colorado went from the basement to the penthouse thanks to better play, but also thanks to a 3-1 record in one-score conference games, including a 10-5 baseball slugfest against Stanford which featured a Todd Helton grand slam. Washington State opened 2016 with a loss to an FCS opponent for the second consecutive season, but went 3-0 in one-score conference games to put themselves in contention for the North division title. Arizona suffered through a great deal of injuries en route to their worst conference record since 2003, but they were not particularly unlucky in one-score games. No, the Wildcats can blame turnovers. They forced only six in Pac-12 play while committing 19. Their in-conference turnover margin of -13 was eight worse than the team with the second worst in-conference turnover margin, Oregon. Finally, UCLA began the year in the top-20, but finished just 4-8 (2-7 in Pac-12 play). The Bruins also suffered through their share of injuries, but were mostly done in by their poor record in close games (0-3 in one-score conference games).

The 2016 Pac-12 Championship was a flashback of sorts. Former powers Colorado, a school that had not won a conference title since 2001, and Washington, a school without a conference title since 2000, faced off. That comparison is a little misleading though. While Washington has been a consistent bowl team since the arrival of Steve Sarkisian in 2009, Colorado had only played in one bowl in the last decade and had not finished with a winning record since 2005. Colorado improved significantly in 2016, going from 1-8 in Pac-12 play the previous year to 8-1. The improvement of seven games in conference play is one of the largest in college football history. Meanwhile, Washington came into 2016 with a great deal of hype. The Huskies were far better than their 4-5 conference record indicated in 2015 and were highly thought of by many statistical measures, including two on this very blog. Washington lived up to the hype, doubling their conference win total, winning the Pac-12 championship, and advancing to the College Football Playoff. The two Pac-12 Championship Game participants improved by a combined total of eleven conference wins in 2016. This got me wondering about the largest combined improvement by championship game participants. I looked at every FBS conference that had divisional play from the SEC to the WAC (moment of silence) and calculated the combined conference play improvement of the championship game participants. Surely this Pac-12 Championship Game was historic. Offhand, I could not remember a larger combined improvement. Ah, but my memory seems to have abandoned me in my old age. Here are the five largest combined increases in conference wins by championship game participants.
I had already forgotten about the 2013 SEC Championship Game. Missouri rebounded from a rough beginning to their SEC membership by winning seven conference games and capturing the East. Auburn fired Gene Chizik after a winless conference campaign and rode some Gus Malzahn magic all the way to the final BCS National Championship Game. It has been smooth sailing for both Tigers ever since. The aforementioned Pac-12 Championship Game comes in second, a game behind the SEC. Conference USA has two entries tied for third. In 2005, the first year of divisional play in Conference USA, Tulsa doubled their conference win total from the previous season (when they were in the WAC). Another newcomer, Central Florida, won seven more conference games than the previous season (when they were in the MAC). Ten years later, Western Kentucky rode one of the best offenses in Conference USA history to a four-game improvement and a league title. They faced a Southern Miss team that won seven conference games after winning just two over the previous three seasons. Finally, the 2010 MAC season saw Miami of Ohio improve by six wins and pull off a massive upset in the championship game against a Northern Illinois team that went unbeaten in league play. While it wasn't quite the most historic conference championship game of all time, the Pac-12 can take solace that its title game matchup was among the most remarkable college football events in recent memory.

Wednesday, April 12, 2017

2016 Adjusted Pythagorean Record: Mountain West

Last week, we looked at how Mountain West teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.
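For those who skipped the link, the conversion works like a standard Pythagorean expectation. A minimal sketch, with one caveat: the exponent below is an assumed placeholder (football Pythagorean formulas typically use something in the 2-to-3 range), not necessarily the exact value APR uses.

```python
# Pythagorean-style conversion of touchdowns scored and allowed into an
# expected winning percentage. Non-offensive touchdowns, field goals,
# extra points, and safeties are excluded before the counts come in.

def apr_win_pct(off_tds, tds_allowed, exponent=2.37):
    """Expected winning percentage from offensive TDs and TDs allowed.
    The exponent here is an illustrative assumption."""
    return off_tds ** exponent / (off_tds ** exponent + tds_allowed ** exponent)

def apr_expected_wins(off_tds, tds_allowed, games, exponent=2.37):
    """Expected wins over a given number of conference games."""
    return apr_win_pct(off_tds, tds_allowed, exponent) * games
```

A team that scores and allows the same number of touchdowns projects to a .500 record; the gap between these expected wins and actual wins is what the tables below sort on.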

Once again, here are the 2016 Mountain West standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, Mountain West teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as a line of demarcation to determine if teams drastically over or under perform their APR. By that standard, New Mexico outperformed their APR and Utah State underperformed. Both teams showed the same over- and under-performance relative to their YPP-based expected records; read last week's post if you're interested in why the Lobos and Aggies finished with records that differed from what we might expect.

Take another look at the Mountain West standings. I’ll wait. Did you notice that almost every team in the Mountain Division had a winning conference record? Yep. Five of the six teams won at least five of their eight conference games and three won six. Is this unique? How often does one division produce so many teams with winning records? I’m glad you asked.

Thankfully, divisional play only goes back a quarter century so I didn’t have to look back too far. It began in 1992 when the SEC opened Pandora's Box by expanding to twelve teams, splitting into two divisions, and hosting a championship game. Several conferences followed suit throughout the 90’s and now every conference except the Big 12 and Sun Belt features two divisions. I looked at all those FBS divisions since 1992 and discovered this phenomenon has only happened four times, but each occurrence has come in the past three seasons. Here is each instance listed chronologically.

Pac-12 South 2014
The Pac-12 plays nine conference games, which means teams cannot finish with a .500 conference record. Depending on how the schedule shakes out, this gives a team that might have only finished 4-4 an opportunity to steal an extra game and eke out a winning record. It also helps when the last place team in the division does not win a game and in effect donates a victory to the other five members. Perhaps the Pac-12 can develop a plaque or award to honor the 2014 South Division as the first to feature five teams with winning conference records.

Mountain West Mountain 2015
2016 was not the first time the Mountain West boasted five teams with winning conference records. It also happened the year before. While the last place team in the division did manage a pair of conference wins, they came in games against the West Division, so the Cowboys did not win a game against their division foes.
Big 10 West 2016
Personally, I think this one deserves an asterisk, and perhaps two, because this division has seven teams and the conference played nine games. Despite winning three conference games between them, Illinois and Purdue did not beat any of the other five teams in the division. They merely provided them with a combined ten conference wins.

Mountain West Mountain 2016
Not only did the Mountain West pull off this improbable accomplishment two seasons in a row, they did it with a nice ‘worst to first’ story as Wyoming won the division after finishing last in 2015. Again, the last place team in the division, Utah State, did not win a single game against divisional foes.

The past two seasons have truly been unique in the Mountain West. The Mountain Division has produced five teams with winning conference records both seasons despite not having the presence of a seventh team or a ninth conference game. It doesn’t mean a great deal in the grand scheme of things. In fact, most people probably didn’t even realize it happened, but it shows that if you pay close attention you might see something historic.

Wednesday, April 05, 2017

2016 Yards Per Play: Mountain West

This week, we examine the Mountain West Conference. Here are the Mountain West standings.
So we know what each team achieved, but how did they perform? To answer that, here are the Yards Per Play (YPP), Yards Per Play Allowed (YPA) and Net Yards Per Play (Net) numbers for each Mountain West team. This includes conference play only, with the championship game not included. The teams are sorted by division by Net YPP with conference rank in parentheses.
College football teams play either eight or nine conference games. Consequently, their record in such a small sample may not be indicative of their quality of play. A few fortuitous bounces here or there can be the difference between another ho-hum campaign or a special season. Randomness and other factors outside of our perception play a role in determining the standings. It would be fantastic if college football teams played 100 or even 1000 games. Then we could have a better idea about which teams were really the best. Alas, players would miss too much class time, their bodies would be battered beyond recognition, and I would never leave the couch. As it is, we have to make do with the handful of games teams do play. In those games, we can learn a lot from a team’s YPP. Since 2005, I have collected YPP data for every conference. I use conference games only because teams play such divergent non-conference schedules and the teams within a conference tend to be of similar quality. By running a regression analysis between a team’s Net YPP (the difference between their Yards Per Play and Yards Per Play Allowed) and their conference winning percentage, we can see if Net YPP is a decent predictor of a team’s record. Spoiler alert. It is. For the statistically inclined, the correlation coefficient between a team’s Net YPP in conference play and their conference record is around .66. Since Net YPP is a solid predictor of a team’s conference record, we can use it to identify which teams had a significant disparity between their conference record as predicted by Net YPP and their actual conference record. I used a difference of .200 between predicted and actual winning percentage as the threshold for ‘significant’. Why .200? It is a little arbitrary, but .200 corresponds to a difference of 1.6 games over an eight game conference schedule and 1.8 games over a nine game one. Over or under-performing by more than a game and a half in a small sample seems significant to me. 
In the 2016 season, which teams in the Mountain West met this threshold? Here are Mountain West teams sorted by performance over what would be expected from their Net YPP numbers.
2016 was a unique year for the Mountain West as nearly half the league saw their actual record differ from their predicted YPP record. Air Force, New Mexico, and Wyoming exceeded their expected records while Fresno State and Utah State fell well short of theirs. Air Force, New Mexico, and Wyoming combined to go 11-3 in one-score conference games, with each team making just enough plays in crunch time to produce 17 conference wins between them. Air Force and New Mexico also did a great job of using their unique offenses to minimize their per play differential. The Falcons and Lobos keep the ball on the ground for the most part with their option-based attacks. Air Force ran the ball more often than any team in 2016 and New Mexico was fourth in run plays per game. Their methodical offenses allowed them to beat their opponents with volume. Air Force ran about seven and a half more plays per game than their conference opponents and New Mexico ran nearly five more plays per game than their Mountain West foes. New Mexico was particularly adept at using their offense to hide their bad defense, as the Lobos defense faced about seven fewer plays per game than the average Mountain West team. Fresno State and Utah State can blame bad luck in close games (combined 0-7 in one-score conference games) and turnovers (last and second-to-last in turnover margin in conference games) for their poor league records. You can also use the 2016 Mountain West as the counterpoint to the 'defense wins championships' narrative. Air Force, New Mexico, and Wyoming each finished below average on defense in the Mountain West, with Wyoming and New Mexico finishing second-to-last and last, respectively, but combined to go 17-7 in league play. Meanwhile, Utah State and Fresno State finished third and fourth in the conference in yards allowed per play, but the lone conference win they combined for came when they played each other.

Regression may be in store for Wyoming in 2017 and not just because they were not quite as good as their record in 2016. The Cowboys may run headfirst into the Plexiglass Principle. Bill James is usually given credit for coining the phrase, which holds that teams that drastically improve in one season tend to decline the next season. Wyoming drastically improved in 2016, going from two conference wins in 2015 to six and a division crown. To ascertain what we might expect from the Cowboys in 2017, I looked at all mid-major teams since 2005 that saw their conference win total increase by at least four games from one season to the next and looked at how they did in conference play in the follow-up season. The results are summarized in the table below.
Twenty-two teams met the criteria of improving by at least four conference games, and those twenty-two teams proceeded to lose about half of their gains the next season. Collectively, teams combined to win about 1.7 fewer conference games. Sixteen teams saw their win total decline, four teams saw their win total remain the same, and only two improved. Perhaps more troubling for Wyoming fans is that more than a third of the twenty-two teams (eight) saw their conference win total decline by three or more games. The Mountain Division is quite strong, so if Wyoming gets back to the Mountain West Championship Game, Craig Bohl should definitely be the coach of the year.
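The query behind that table is simple enough to sketch. The records below are made up for illustration; the real data set covers mid-major conference win totals from 2005 onward.

```python
# Find Plexiglass Principle candidates: seasons where a team's conference
# win total jumped by at least four, and what happened the following year.

def plexiglass_candidates(records, min_jump=4):
    """records maps team -> list of conference win totals by season.
    Returns (team, season_index, follow_up_change) tuples, where
    follow_up_change is the next season's win change (None if the
    jump came in the most recent season on record)."""
    results = []
    for team, wins in records.items():
        for i in range(1, len(wins)):
            if wins[i] - wins[i - 1] >= min_jump:
                follow_up = wins[i + 1] - wins[i] if i + 1 < len(wins) else None
                results.append((team, i, follow_up))
    return results
```

Averaging the follow-up changes over the real sample is what produces the roughly 1.7-game decline cited above.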

Saturday, April 01, 2017

Football Expansion is Making March Less Mad

Gonzaga, a small school with a unique name that first entered the national consciousness nearly two decades ago, will be playing in their first Final Four tonight. The Bulldogs are the seventh team since 2006 to win their region and advance to the Final Four while playing in a 'mid-major' conference. Gonzaga was seeded significantly higher than most of the other mid-major teams that advanced this far. While this is quite an achievement for Gonzaga, the Bulldogs do not have any mid-major company in the Final Four and were also the only mid-major still playing after the first two rounds of the tournament. Part of that is obviously because mid-majors are not as good, on average, as power/major conference teams, but it is also due to the fact that they have a much harder time getting into the tournament provided they do not win their conference tournaments and garner an automatic bid. Before we delve into the plight of mid-major basketball programs, let's define what a mid-major is.

Since 2011, the NCAA tournament has included 68 teams. Eight teams play ‘First Four’ (don’t call them play-in) games with the winners advancing to the ‘real’ tournament field of 64 teams. In seven seasons under the current format, fifteen conferences have received at least one at-large (i.e. not automatic) bid. The following table lists the fifteen conferences in order of the average number of at-large bids they have received each season. I also included the percentage of times each conference received at least one at-large bid and the average seed their at-large teams received.
Based on this data, I would argue there are six major conferences, two pseudo major conferences, and perhaps a handful of (but probably only two or three) true mid-major conferences. We'll start with the easy part. The Power Five football conferences (ACC, Big 10, Big 12, Pac-12, and SEC) are all major conferences. Each has averaged at least three at-large bids per season and has garnered an at-large bid each year. The Big East, a conference that no longer exists in college football, is also a power conference. While the membership has changed drastically since 2011, going from an amalgam of public and private universities with about half the members fielding an FBS team to a coalition of private Catholic universities with zero FBS playing members, the Big East has consistently put multiple teams in the NCAA tournament. Since Extreme Makeover: Big East Edition began in 2014, the conference has averaged 4.5 at-large bids per season compared to 8.3 from 2011-2013. While that may seem like a significant drop, keep in mind the new Big East has only ten members while the old Big East had sixteen.

After the Power Six, the Atlantic-10 and the American Athletic Conference are a notch below. Both conferences have received at least one at-large bid every season since 2011 (the American has only existed since 2014) and the Atlantic-10 has only received one fewer at-large bid in that time frame than the SEC. The Atlantic-10 lacks the heft at the top that the Power Six conferences have, but they are deep and consistently send multiple teams to the NCAA tournament. As for the American, while it is new on the college basketball scene, it does feature four teams with basketball pedigrees (Cincinnati, Connecticut, Memphis, and Temple) as well as a program with historical success (Houston), an up-and-comer (SMU), and a mixture of everything in between (East Carolina, Tulsa, UCF, etc.).

Outside of the Pseudo Duo in the American and the Atlantic-10, mid-major basketball is pretty much confined to three conferences: the Missouri Valley, the Mountain West, and Gonzaga’s home, the West Coast Conference. Each conference has garnered at least one at-large bid four or more times in the past seven seasons. However, the recent trend has not been positive. Here are the at-large bids that have been earned by teams not in the Power Six or Pseudo Duo conferences over the past seven years.
The past two seasons, a total of two mid-majors have received at-large bids. Wichita State was one of the last four at-large teams in the 2016 field, receiving an 11 seed and a trip to the First Four, and St. Mary's received a 7 seed as the lone mid-major at-large recipient in 2017.

Why have mid-major teams been squeezed out of the tournament over the past few seasons? You can blame football, and more specifically conference expansion. It began with a small ripple in the middle of the aughts when the ACC grabbed Boston College, Miami, and Virginia Tech from the Big East. The Big East responded by raiding Conference USA, Conference USA took some teams from the WAC and the MAC, the WAC stole some teams from the Sun Belt, and the MAC and Sun Belt pretty much stood pat. This is an abbreviated retelling, but that’s most of the important stuff. Aside from a few teams joining FBS, things were quiet for about five seasons, but then there was a seismic shift.

Beginning with the 2011 season, the Big 10, Pac-10, and SEC brought the Big 12 to the brink of extinction. The Big 10 added Nebraska, the Pac-10 added Colorado and also called up Utah from the Mountain West to get to twelve teams, and the SEC poached Missouri and Texas A&M. To survive, the Big 12 added West Virginia from the Big East and called up TCU from the Mountain West. Elsewhere in the major conference landscape, the Big 10 eventually added Maryland from the ACC and Rutgers from the Big East, while the ACC further depleted the Big East by adding Louisville, Pittsburgh, and Syracuse. The Big East again bolstered their membership by grabbing schools further down the food chain from Conference USA. The Big East eventually ceased to exist after the 2012 season, but was rechristened as the American Athletic Conference. Conference USA again seized teams from the Sun Belt, the Sun Belt acquired teams from the WAC, as did the Mountain West, which also lost BYU to independence. With no pipeline to replenish their lost members, the WAC went extinct and exists solely as a basketball conference now.

While these changes were driven by football, they also had and continue to have a profound impact on college basketball. When football teams change conferences, the basketball programs often move as well. While college football only has ten conferences (formerly eleven when expansion began) at the FBS level, college basketball has 32 leagues in its ecosystem. Changes at the top trickle down to the mid and low-major conferences. For some teams, this has been beneficial as they have been called up or graduated to major conferences and seen their profile expand. However, one needn't ask Kirk Cameron what life is like for those left behind.

When programs graduate to better leagues, it makes it even harder for the remaining mid-majors to garner at-large bids. Take Illinois State for example. This season, the Redbirds lost just two games after Christmas and ranked in the top-50 of KenPom and the RPI when the selection committee chose the teams for the NCAA tournament. Yet, the Redbirds were one of the first teams left out, thanks in part to the relative weakness of the Missouri Valley. Wichita State was the only 'valuable' team the Redbirds were able to play once the conference season started. However, a few years ago, the Redbirds would have had another opportunity for a quality win. Creighton, a small Jesuit school in Omaha with a solid basketball history over the past twenty years, was a member of the Missouri Valley for nearly 40 years, but left after the 2013 season to join the new Big East. The Blue Jays earned a six seed in this year's NCAA tournament and would have provided the Redbirds with at least two opportunities for a quality win, and perhaps more importantly, no opportunities for a bad loss. Further compounding the issue is that the Missouri Valley elected to replace Creighton to keep their membership at ten teams. Their replacement, Loyola-Chicago, has averaged a KenPom finish of 165, while Creighton has averaged a finish of 41 in the new Big East. Not only have Missouri Valley teams lost a chance at a quality win, they have added an opportunity for a bad loss.

The Missouri Valley is not the only conference that has lost good programs and with them opportunities for at-large bids. Remember Butler? The Bulldogs made back-to-back national finals as members of the Horizon. While the Horizon did not send multiple teams to the tournament in either year that Butler advanced to the Final Four, the Bulldogs would have received an at-large bid had they stumbled in the conference tournament. Where is Butler now? The Bulldogs are officially big time. They joined Creighton in the new Big East in 2014 and share a conference with traditional powers like Georgetown, Marquette, and Villanova. The Horizon could have used a second quality team in 2016. Valparaiso was 26-6 and in the KenPom top-40 when the NCAA tournament began, but with Butler no longer in the conference, the Crusaders did not have an opportunity for a quality win once conference play started. When they lost in the conference tournament, you know how this story ends. Valparaiso was relegated to the NIT, where they advanced to the Final Four.

In 2011, the Colonial sent three teams to the NCAA tournament, and one famously advanced to the Final Four. All three of those teams are no longer in the conference. George Mason and Virginia Commonwealth are now in the Atlantic-10 and Old Dominion made a lateral move to Conference USA to accommodate their football team. The Colonial sent multiple teams to the NCAA tournament in 2006, 2007, and 2011, and nearly garnered an at-large bid in 2012 (Drexel), but has not come close to producing an at-large team since. To replenish their membership, the Colonial raided the Southern Conference, taking the College of Charleston and Elon. This resulted in a weakened Colonial and a weakened Southern Conference. Speaking of the Southern, while the conference has never produced an at-large team, their odds of doing so now are infinitesimal as their best and most famous program, Davidson, is now in the Atlantic-10.

The Mountain West, which sent multiple teams to the tournament every season from 2002-2015, including five in 2013, has been a single bid league for the past two years. Part of that can be blamed on losing solid basketball programs like Utah and BYU for football reasons. Conference USA, a league that has seen its champion win a game in the tournament each of the past three years, has not received an at-large bid since 2012. This season, Middle Tennessee State, a team in the top-50 of KenPom and winner of two games against SEC teams in the regular season, would likely have been left out had they stumbled in the Conference USA tournament. The primary reason is that only one other conference team ranks in the KenPom top-100. Former Conference USA teams like Houston, SMU, and UCF ranked in the top-100 this season, but now play in the American Athletic Conference. Of course, Middle Tennessee State is only in Conference USA because that league raided their former home, the Sun Belt.

And who can forget about WAC basketball? When the NCAA tournament expanded to 64 teams in 1985, the WAC sent multiple teams to the tournament for 18 consecutive years and produced a national finalist in 1998. However, the conference has received just one at-large bid in the last decade. The Mountain West formed when half of the WAC split off at the end of the last century, and the WAC was dealt a football death blow when the Mountain West raided it again a few seasons ago. Now the WAC houses teams like Grand Canyon and Cal State Bakersfield that were not playing DI basketball ten years ago, as well as itinerants with nowhere else to go.

Conference expansion, primarily driven by football, has resulted in strong mid-major programs moving up to major or pseudo-major conferences. The mid-major leagues have responded by restocking their leagues with low-major programs. The resulting talent drains mean deserving mid-major programs do not have additional opportunities to improve their strength of schedule once conference play begins. This equates to more at-large bids for major and pseudo-major conferences. When the college basketball season begins, you can pretty much count on one hand the number of mid-major programs with realistic at-large aspirations (Gonzaga, St. Mary's, and Wichita State). And with Wichita State potentially moving to the American, the number could shrink even further. To me, this is a bad development for college basketball. Southern Cal is not my idea of a March 'Cinderella'. I'll hold out hope that as the selection committee uses more advanced metrics, deserving mid-majors will start getting the benefit of the doubt.

Wednesday, March 29, 2017

2016 Adjusted Pythagorean Record: MAC

Last week, we looked at how MAC teams fared in terms of yards per play. This week, we turn our attention to how the season played out in terms of the Adjusted Pythagorean Record, or APR. For an in-depth look at APR, click here. If you didn’t feel like clicking, here is the Reader’s Digest version. APR looks at how well a team scores and prevents touchdowns. Non-offensive touchdowns, field goals, extra points, and safeties are excluded. The ratio of offensive touchdowns to touchdowns allowed is converted into a winning percentage. Pretty simple actually.

Once again, here are the 2016 MAC standings.
And here are the APR standings sorted by division with conference rank in offensive touchdowns, touchdowns allowed, and APR in parentheses. This includes conference games only with the championship game excluded.
Finally, MAC teams are sorted by the difference between their actual number of wins and their expected number of wins according to APR.
I use a game and a half as a line of demarcation to determine if teams drastically over or under perform their APR. Ball State was the lone MAC team that saw their actual record differ significantly from their APR. And the culprit(s) are pretty easy to identify: close games and turnovers. The Cardinals were 0-4 in one-score MAC games, with four of their seven conference defeats coming by a combined 18 points. Ball State also had the worst in-conference turnover margin of any MAC team, losing nine more turnovers than they forced.  
Life as an FBS coach has probably not gone the way Lance Leipold had hoped. Leipold left Division III power Wisconsin-Whitewater after winning six national championships in eight seasons and owning a career winning percentage north of .900 (109-6). In his two seasons at Buffalo, Leipold has already lost nearly triple the games he lost at Whitewater. Perhaps even more troubling, his second Buffalo team was much worse than his first, winning three fewer games and rating out as one of the MAC's worst teams by almost any statistical measure. Is there any hope for the Bulls in 2017, or is 2018 the earliest Buffalo can expect a return to the postseason? To answer this question, I looked at every FBS coach hired since 2006 who had produced consecutive losing regular seasons to begin his tenure. If a coach finished 6-6 or 6-7 with the benefit of a bowl game, he is not included here. This query produced a decent sample of 54 coaches. How did those coaches perform in their third season? The following table lists the average regular season improvement from Year 2 to Year 3 for these coaches.
Over the course of a twelve game regular season, coaches that began their career with back-to-back losing campaigns picked up nearly two extra wins in their third season. Two wins would be nice, but they won’t send Buffalo back to the postseason. Plus, roughly two wins is just an average. What are the odds Buffalo improves in 2017? Once again, I examined the 54 coaches and identified whether they improved, declined, or stayed the same in their third season.
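The classification itself is straightforward. A sketch of it, using a handful of made-up win totals rather than the actual 54-coach sample:

```python
# Tally how coaches fared from Year 2 to Year 3: improved, declined,
# stayed the same, and (separately) improved significantly (3+ wins).

from collections import Counter

def classify_third_seasons(coaches):
    """coaches is a list of (year2_wins, year3_wins) regular-season
    win totals. Returns a Counter of outcome categories."""
    tally = Counter()
    for y2, y3 in coaches:
        if y3 > y2:
            tally["improved"] += 1
        elif y3 < y2:
            tally["declined"] += 1
        else:
            tally["same"] += 1
        # Significant improvement is counted in addition to "improved".
        if y3 - y2 >= 3:
            tally["improved by 3+"] += 1
    return tally
```

Running this over the real sample is what produces the improved/declined/same percentages discussed below.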
This is where things start to look better for Buffalo. More than three quarters of the coaches saw their teams improve in their third season, while fewer than a quarter either declined or stayed the same. Take a look at that last row though. More than a third of the teams saw significant improvement (three or more games) in their third season. Buffalo fans need only look to their own conference last season to see some examples of this phenomenon.

Chris Creighton began his tenure at perennial punching bag Eastern Michigan with 2-10 and 1-11 records, but the Eagles surged to 7-5 in his third season. Similarly, Chuck Martin began his rebuild at Miami of Ohio with 2-10 and 3-9 campaigns. His RedHawks began 2016 0-6, but won their last six games and nearly stole the division. Elsewhere in college football, former North Dakota State head coach Craig Bohl was just 6-18 in two seasons at Wyoming. His Cowboys improved by six games and won their division in his third season. Jeff Monken led Army to their first win over Navy since 2001 and seven regular season wins after winning just six games in his first two seasons at West Point. At Wake Forest, Dave Clawson led the Demon Deacons to their first bowl since 2011 after winning just six games in his first two seasons.

By his third season, a coach has at least two full classes of his own recruits and the holdovers from the previous administration have had ample time to learn his system. If the first two seasons have featured more losses than wins, he has probably also been successful in holding on to his coordinators and other assistants. Thus, the third season presents an excellent opportunity for a leap forward. I wouldn’t go so far as to guarantee that Buffalo will return to the postseason in 2017, but significant improvement would not surprise me.