The NFL is a league with a small sample size. During the regular season, teams play only 16 games. A crucial fumble or a tipped interception can make the difference between finishing 8-8 or 10-6 and playing significant football in January. Some teams have terrible luck on one side of the ball, and it can become a recurring theme throughout a whole season. Just take a look at Matt Ryan in 2017 – he threw a stunning six tipped interceptions. Take away the one against Miami and the Falcons win their division. Atlanta ranked 3rd in net yards per pass but only 12th in passer rating. Questionable play-calling by Steve Sarkisian – especially in the red zone – presumably led to a scoring efficiency much worse than their yardage efficiency would suggest. Some teams can also be highly efficient on a per-play basis but struggle to finish drives.
We have already found out that the run game has almost no impact on scoring. If a team wants to score a lot of points, it had better be good at passing. Pass efficiency metrics like net yards per pass attempt (NYPPA) or pass DVOA by Football Outsiders explain 64% to 75% of the variance in offensive scoring since 2011. From that, we can fit linear functions and calculate the expected value (y) for a given input (x). For instance: when a team averages 6.5 net yards per pass, it is expected to score 22.1 offensive points per game, based on the data going back to 2011. I created linear functions for four different pass efficiency metrics: NYPPA, pass DVOA, Adjusted Net Yards per Play (ANY/p) and Passer Rating. I calculated how many offensive points per game each team should have scored based on each of those four metrics. Averaging those four PPG numbers yields the final Expected Points. Then I compared it to actual points per game to calculate the differential for each team. The same goes for the defense and for scoring differentials.
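The pipeline described above can be sketched in a few lines. Note that every number in this snippet is an illustrative placeholder, not the actual 2011-onward league data, and `fit_line` / `expected_ppg` are hypothetical helper names, not the author's code:

```python
# Sketch of the Expected Points method: fit a line from a pass efficiency
# metric to offensive PPG, repeat per metric, then average the estimates.
# All numbers below are made-up placeholders, NOT the real league data.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def expected_ppg(metric_value, a, b):
    return a * metric_value + b

# Hypothetical historical team-seasons: (NYPPA, offensive points per game).
# In the article, one such fit exists per metric (NYPPA, pass DVOA, ANY/p,
# passer rating), each trained on data going back to 2011.
history_nyppa = [(5.0, 17.0), (5.8, 19.5), (6.5, 22.0), (7.2, 25.0)]
a, b = fit_line(*zip(*history_nyppa))
print(round(expected_ppg(6.5, a, b), 1))  # expected PPG at 6.5 net yards/pass

# Final Expected Points = average of the four metric-based estimates
# (placeholder values again), compared to actual PPG.
estimates = [22.1, 21.5, 23.0, 22.4]
expected_points = sum(estimates) / len(estimates)
actual_points = 20.8
differential = actual_points - expected_points  # negative = underperformed
```

The averaging step is the key design choice: no single metric is trusted on its own, so one estimate per metric is produced and the four are blended into one Expected Points number.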
Some might wonder why I use metrics like passer rating or ANY/p, which already incorporate touchdowns and interceptions and thus reflect scoring efficiency. Well, we come back to the small sample size and certain plays deciding games. It makes a difference whether you score a touchdown to win a game or in garbage time when you are up by 21 points. It makes a difference whether you throw a tipped interception early in the game, when you can still recover, or on your final drive. To me it's always interesting to look at both yards-based and scoring-based efficiency metrics. An offense can march downfield with three long passes but run it into the end zone from the one-yard line: yards per play looks good, while passer rating suffers because there is no passing touchdown. To cut a long story short, here are Expected Points versus Actual Points for NFL offenses in 2017:
It reads like this: Based on pass efficiency metrics, the Falcons were expected to score 24.1 offensive points per game but actually scored only 20.8. The difference of -3.3 PPG ranked 29th, or 4th-worst. Regression, tipped interceptions and Steve Sarkisian. The Chargers were the most unfortunate offense. They should have scored 5.1 PPG more than they actually did: terrible red zone play and the worst kicking game in the league. They hit just 20 of 30 field goals, the worst percentage among all 32 teams. The Ravens were the most fortunate offense, scoring 3.6 PPG more than they should have. Their defense created 34 turnovers against an easy schedule full of backup quarterbacks, which led to short scoring drives.
Now let’s take a look at the defenses:
The Patriots were an enigma. Their defense was one of the worst on a per-play basis, but it was highly efficient in terms of yards per point. Remember the long Bills drive that ended with a tipped Tyrod Taylor interception at the goal line? The Redskins defense was also highly interesting. Based on pass efficiency, they should have been the 10th-best scoring defense, but they conceded the 9th-most points. I haven't dug into that, but I remember them giving up tons of big plays through the air that outweighed their consistently good plays.
With expected points per game for both the offense and defense, we can now calculate the expected scoring differential and compare it to the actual differential:
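The combination step can be sketched like this. The Redskins' +3.0 expected and -1.4 actual differentials are taken from the text below; the offense/defense splits used here are illustrative assumptions, since the article only states the differentials:

```python
# Expected scoring differential = expected offensive PPG minus expected
# defensive PPG (points allowed), compared against the actual differential.

def differential(off_ppg, def_ppg):
    # Scoring differential per game: points scored minus points allowed.
    return off_ppg - def_ppg

# Redskins 2017: the splits are hypothetical, chosen only to reproduce the
# +3.0 expected and -1.4 actual differentials stated in the article.
expected = differential(off_ppg=24.0, def_ppg=21.0)  # +3.0
actual = differential(off_ppg=22.1, def_ppg=23.5)    # -1.4
luck = round(actual - expected, 1)                   # -4.4 on the year
print(luck)
```

A negative value means the team's record-relevant scoring fell short of what its underlying pass efficiency suggested, which is exactly how the Redskins land at the bottom of the table.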
It turns out the Patriots were the most fortunate team last year, whereas the Redskins were at the bottom. Washington should have had a scoring differential of +3.0 but actually posted -1.4, a difference of -4.4 on the year. The Super Bowl champion Philadelphia Eagles finished fourth.
Looking at expected versus actual points is very interesting. It can give you a clue about which teams under- or overperformed over the small sample of 16 games. No matter how efficient the Redskins are in 2018, they will probably not underperform by 2.8 points per game on defense again. They might give up 22.1 points per game on defense again, but their underlying efficiency will probably point towards 22.1 PPG or more, not well below it. That's how regression to the mean works.