The whole thing is a worthwhile read, but the one line that stood out to me was this:
The point is that [Roman Polak and Andreas Borgman] got time against the opposition’s top line in circumstances in which that line didn’t really pose an attacking threat.
The overall idea that Tyler presented is pretty intuitive but also incredibly important: if you come onto the ice when your opponent is finishing their shift, they’re unlikely to be mounting much of an offensive push, and most of their effort will likely be focused on advancing the puck out of their zone and getting off the ice.
The implication of this is that traditional measures of quality of competition might be understating the actual differences in QoC between players. Ten seconds of ice-time against Connor McDavid when he’s trying to get off the ice isn’t the same as 10 seconds of ice time against Connor McDavid when he’s trying to score a goal.
The key question, however, is whether there is an actual drop in shot rates at the end of a shift, and how long that drop lasts. We know from past work by Micah McCurdy and Gabe Desjardins how shot rates change depending on how a shift starts, but how they change at the end of a shift is a relatively unexplored area.
The plot below shows how shot rates change relative to the time remaining in a player’s shift (the right side of the chart is the end of a player’s shift). I’ve limited the analysis to shifts that ended with an on-the-fly change, since shot rates at the end of a shift that results in a stoppage are extremely high (these are mostly shots right before the puck is frozen) and face-off starts are obviously a whole different ballgame.
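The binning behind this kind of plot is straightforward; here’s a minimal sketch, assuming you’ve already extracted the length of each on-the-fly-ending shift and, for each shot attempt, the seconds remaining in the shift of the player being evaluated. The input structures are invented for illustration.

```python
# Minimal sketch of the shift-end binning. Inputs are invented for
# illustration: lengths (in seconds) of shifts that ended on the fly, and
# for each shot attempt, the seconds remaining in the shift at that moment.
def rates_by_seconds_remaining(shift_lengths, shot_times_remaining, window=40):
    """Shot attempts per 60 minutes at each 'seconds left in shift' value."""
    exposure = [0] * window  # total seconds observed at each seconds-remaining value
    for length in shift_lengths:
        for s in range(min(int(length), window)):
            exposure[s] += 1
    counts = [0] * window
    for s in shot_times_remaining:
        if s < window:
            counts[int(s)] += 1
    # 3600 seconds of exposure = one hour, so rate = 3600 * shots / seconds
    return [3600 * c / e if e else 0.0 for c, e in zip(counts, exposure)]
```

Running it separately for attempts for and attempts against (and splitting by position) gives the kinds of comparisons discussed here.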
As you can see, the data seems to be fairly in line with what Tyler was arguing – shot rates against go up at the end of a player’s shift while shot rates for drop slightly. We see a slight rise in both rates for and against in the last 1-2 seconds, which are likely just shots that are occurring as a player is leaving the ice and are unlikely to be related to their individual efforts.
When we look at the data broken down by position, however, we see a more nuanced situation which gives us a better view of how different roles impact a player’s results.
While the trend we saw above seems to hold for defencemen, the progression of a shift for forwards is much different. Defencemen tend to outshoot their opponents in the early part of their shift, but mostly get outshot from the last 20 seconds of their shift onward.
Forwards, on the other hand, have a fairly even shot attempt ratio through most of their shift, with the overall pace of play dropping off towards the end of their shift. The key thing to note here is that if you’re coming onto the ice for the last 10 or so seconds of an opposing forward’s shift, you’re probably not going to be seeing the best effort from them.
These trends even hold when we break things down by quality of player. In the chart below I binned (yes, I know) players by average TOI rank to create four lines for forwards and three pairs of defencemen (this is an imperfect method, but should give us a reasonable view of what’s going on).
The data shows pretty much what you’d expect – each line and pairing shows the same trend, with the major difference between the top and bottom 6 being that top 6 players tend to outshoot their opponents early on, while bottom 6 lines get outshot throughout their shifts. The key again though is that even first and second line players see drops in their output towards the end of their shifts.
All of these results have clear implications for many of the regression-based models that are being developed today. These models treat each interval (a consecutive period with the same 10 skaters on the ice) as a row in their input, and have dummy variables for each player on the ice and to account for the impact of how the interval started (e.g. defensive zone face-off or on-the-fly). The regression then attempts to find the coefficients for each player and starting state that best explain the observed results (generally Expected Goals or Corsi, for or against).
But what this analysis suggests is that we might need to know additional information about how long a player has been on the ice in order to adjust for a player’s “intent” over the interval. Players who have just started their shift likely intend to generate chances, while players who have been on the ice for more than 30 seconds may only intend to get the puck out of their zone and get off the ice. If I’m a player who consistently plays against top line players only at the end of their shift I may be getting credit for shutting down these players (who otherwise have good offensive results) when in reality their suppressed offensive output may simply be a structural product of how the flow of the game works.
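As a rough sketch of what adding that information to such a regression might look like – with a tiny, entirely invented set of intervals and a hypothetical “shift age” feature alongside the usual player and start-state dummies:

```python
import numpy as np

# Entirely invented toy data: four intervals, each with the skaters on the
# ice (one team's, for simplicity), how the interval started, the average
# seconds those players had already been on the ice (the proposed
# "shift age" adjustment), and the observed Corsi For per 60.
players = ["A", "B", "C", "D"]
starts = ["otf", "dz_faceoff"]
intervals = [
    (["A", "B"], "otf",         5.0, 60.0),
    (["A", "C"], "dz_faceoff", 25.0, 40.0),
    (["B", "D"], "otf",        35.0, 45.0),
    (["C", "D"], "otf",        10.0, 55.0),
]

pcol = {p: i for i, p in enumerate(players)}
scol = {s: len(players) + i for i, s in enumerate(starts)}
X = np.zeros((len(intervals), len(players) + len(starts) + 1))
y = np.zeros(len(intervals))
for r, (on_ice, start, shift_age, cf60) in enumerate(intervals):
    for p in on_ice:
        X[r, pcol[p]] = 1.0   # player dummies
    X[r, scol[start]] = 1.0   # start-state dummy
    X[r, -1] = shift_age      # the additional shift-age feature
    y[r] = cf60

# Real models use regularized (ridge) regression on large sparse matrices;
# plain least squares is enough here to show the shape of the problem.
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
```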
The big question, of course, is whether the differences in which players face better or worse opposition at the end of their shift are meaningful (spoiler alert: I don’t know). Tyler’s data certainly suggests that they are, but there’s likely more digging to be done to determine the extent that this phenomenon impacts the metrics that we use to evaluate players. Nevertheless, these results clearly show that there’s more about the structure and flow of the game that we can mine from the existing data, and that quality of competition may be more complex than previous attempts to measure it have suggested.
While the whole panel was endlessly informative and extremely entertaining, one thing Daryl Morey said about the Rockets’ approach to implementing strategies really stuck with me:
“One thing I think all of us have done is more take the lessons that are sort of obvious that everyone has agreed to and taken them to the logical conclusion. Which is, for example us, it’s better to make 3 than 2 on a shot…genius, right…taking it to its logical conclusion, which is shoot 50 of them a night.”
Basically what Morey was saying is that if all the work you’ve done has shown that a strategy is beneficial, don’t hedge your bets and only go half-in on it. It’s this idea that’s been floating around in my head for the last 2 days since I wrote about optimizing contract structure – if GMs could theoretically save cap space by setting up contracts to pay more money to players upfront, how much total cap room could they create simply by structuring their contracts in the most efficient way possible?
The answer, of course, depends on the assumptions you make (since no one has been foolish enough to let me run my crazy experiments on an actual NHL team). First and foremost is the impact of the discount rate – higher discount rates make this a more effective cap optimization strategy, while lower discount rates yield less total benefit. But this is easy enough for us to test out – we can simply run our analysis using various discount rates and observe the range of impacts that each discount rate gives us.
There’s also the question of what the optimal contract structure is. While the ideal contract from a player’s point of view would be something like 99.9% of the money in the first year with the remaining cash spread out over the last N-1 years of the deal, the CBA has certain rules to prevent this kind of cap circumvention. Specifically, there are 2 major criteria that all contracts have to meet with regards to when payments occur:
We can add 2 other criteria that are necessary to ensure that the NPV is maximized while minimizing the actual dollars spent:
While these rules give us a general sense of what the best structured contract looks like, they don’t give us an exact answer as to what the optimal contract structure is. When the length of the contract is 1 or 2 years, the optimal structure is easy enough to define. For a 1 year deal, the AAV is the total salary in the first year, so we don’t have anything to do – there’s no way to actually optimize it. For a 2-year deal, the optimal structure is to pay P in year 1, and 0.65P in year 2 – that’s the biggest drop you can get, and we want to move as much money forward as possible.
But for contracts that are 3 years or longer, there are actually many ways to structure a contract that meet the rules we established above, but that aren’t necessarily optimal. For example, if we call the salary in the first year of a contract P, we could simply decrease the salary of contract in a straight line until we hit 50% at the end of the contract (and then solve for the value of P that makes the NPV equal to the NPV of the actual contract signed). While this seems like a logical solution in theory, in practice it’s actually not aggressive enough in front-loading contracts, and many current deals are actually better structured as they exist already.
One method of structuring that’s mostly optimal[1] goes like this:
This won’t always give us the most optimal contract structure, but it will generally be a non-trivial improvement over how contracts are currently structured. We can solve for P in the same way we described above: simply find the value of P that makes the NPV of our optimal contract the same as the NPV of the actual contract. We can then find the AAV as:
AAV (Optimal) = 1/N * [P * floor(N/2) + 0.65 * P + (N – floor(N/2) – 1) * 0.5 * P]
As a simple example, let’s look at P.K. Subban’s most recent contract that he signed with the Montreal Canadiens. That deal has an AAV of $9M per season, but is structured in far from an optimal manner: the bulk of the payments occur in the middle of the deal. In theory, the Habs could have offered to pay him more in the first 4 years in order to knock down the total cost (and with it the AAV). But what would the optimal way to structure it be?
If we assume a discount rate of 5%, the NPV of Subban’s deal was $60.9 million when he signed it. Using the structure we described above, he could get the same value if he signed a deal that paid roughly $11.3M in years 1-4, $7.3M in year 5, and $5.65M in years 6-8. The AAV of that deal would have been ~$8.7M, saving the Habs (now the Preds) $300k per year – not a bad chunk of change just for rearranging some payments.
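To make this concrete, here’s a small sketch of that structure and the solve for P. It assumes payments arrive at the start of each contract year (a simplification – exact values shift slightly under other timing conventions):

```python
def npv(payments, rate):
    """Present value, assuming each year's payment arrives at the start of that year."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(payments))

def optimal_structure(n_years, target_npv, rate):
    """The structure described above: P for the first floor(N/2) years, 0.65P
    the next year, 0.5P for the rest; P is chosen so the NPV matches the target."""
    half = n_years // 2
    unit = [1.0] * half + [0.65] + [0.5] * (n_years - half - 1)
    p = target_npv / npv(unit, rate)           # NPV is linear in P
    payments = [u * p for u in unit]
    return payments, sum(payments) / n_years   # the schedule and its AAV

# Subban's deal: 8 years, NPV of ~$60.9M at a 5% discount rate
payments, aav = optimal_structure(8, 60.9, 0.05)
```

This closely reproduces the figures above: roughly $11.3M in years 1-4, $7.3M in year 5, $5.65M in years 6-8, for an AAV of about $8.7M.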
If we repeat this exercise for each player who has signed since July 1, 2014 and total up the savings by team, we can get a reasonable estimate of what optimizing each contract’s structure could be worth to a GM.[2]
| Discount Rate | Min Cap Hit Saved Per Team | Average Cap Hit Saved Per Team | Max Cap Hit Saved Per Team |
|---|---|---|---|
| 2.5% | $0.06M | $0.38M | $0.83M |
| 5.0% | $0.12M | $0.75M | $1.62M |
| 7.5% | $0.18M | $1.11M | $2.37M |
| 10.0% | $0.24M | $1.45M | $3.07M |
As we noted above, the impact is highly dependent on the assumptions you make about the time value of money, but it seems to me that there could be some value to this strategy, particularly if you’ve got an owner with deep pockets. The average team would save nearly $400k under the most conservative of assumptions, an amount which could be the difference between adding that marquee player at the deadline and sticking with your current roster.
Now there are obviously a few caveats to this analysis that could reduce the potential impact. First, as we mentioned above, you’d need to have a very nice owner to give you the financial flexibility to structure deals like this. This may be a challenge in cash-strapped markets, but this strategy could actually be a good way for teams whose potential spending is being restricted by the salary cap to flex their financial muscle a bit.
Second, as I noted in my last piece, the savings from front-loading need to be weighed against the potential additional costs if you need to buyout a contract or if a player retires. If we exclude players who are 30 or older when they signed, the expected cap savings tends to drop by $50k-200k, depending on the discount rate. That’s not enough to remove all merit from the strategy, but it does knock away some of the benefit that we noted above.
Third, it’s not necessarily clear that players would be willing to accept a lower cap hit, even if it was in their best interest financially. Players may be more concerned with their cap hit than the actual financial details of their contract, and may be reluctant to accept a deal that makes them look worse than one of their peers.
Nevertheless, it does look like there could be some cap benefit to a team with an open-minded owner who’s willing to take a risk. While correctly evaluating a player’s future performance will always be more important than these kinds of accounting tricks, finding new ways to squeeze a bit of extra value out of your limited cap space may give teams just enough room to add that piece that pushes them over the edge.
[1] There are only 3 contracts that are “more optimal” in their current state than this one, so I feel pretty safe saying this is pretty close to being optimal.
[2] Excludes contracts under $1MM and the Vegas Golden Knights.
For players, there’s a strong incentive to get as much money as soon as you can. Not only does this protect you against a buyout to a certain degree, but because of the time value of money the actual net value of the contract is greater to you the more you front-load your contract. If a player has the choice between a 3-year deal where they’re paid $4M each year, and a 3-year deal where they earn $5M in the first year and $3M in the last year, they should opt for the latter deal, since an extra dollar now is worth more than an extra dollar in the future. The AAV on both deals is the same, but the player comes away with more value if they get their money sooner.
For owners, obviously, the incentive runs the other way – you want to back-load your players’ contracts as much as you can, since you can take any amount you save in year 1, invest it until year N, and then pay the player using the principal while pocketing any interest you’ve earned. But if you insist on pushing salary later in a deal, players may ask for more money in total to compensate, inflating the overall AAV on the contract.
Because of this, it’s not really fair to compare contracts that have the same AAV but different payment structures. If two players have a $6.5M cap hit, but one player is getting more of their money earlier, it likely means that their team believes the player getting paid sooner is more valuable.
While it’s easy enough to compare two deals that have the same cap hit but different payment structures, it gets a bit more difficult when the cap hits, or even the contract lengths, are different. One way we can make a fair comparison, however, is by calculating the net present value (NPV) of a contract, and then using the NPV to figure out what the equivalent cap hit would be, if all the payments were the same amount over the life of the contract.
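A quick sketch of this calculation, using the 3-year $4M-AAV example from earlier and an (arbitrary) 5% discount rate with start-of-year payments:

```python
def npv(payments, rate=0.05):
    # assume each season's salary is paid at the start of the year
    return sum(p / (1 + rate) ** t for t, p in enumerate(payments))

def equivalent_cap_hit(payments, rate=0.05):
    """The constant annual payment with the same NPV as the given schedule."""
    flat_unit = npv([1.0] * len(payments), rate)
    return npv(payments, rate) / flat_unit

even = equivalent_cap_hit([4, 4, 4])   # exactly 4.0, by construction
front = equivalent_cap_hit([5, 4, 3])  # ~4.03: same AAV, but worth more
```

Both schedules carry a $4M AAV, but the front-loaded one is equivalent to a flat deal paying about $4.03M per year; flipped around, a team can buy the same present value at a lower AAV by paying early.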
While this may seem a bit complicated (and perhaps even unnecessary), a simple example shows how this can impact a team’s cap position. In his most recent deal with the Lightning, Steven Stamkos signed for an average annual value (cap hit) of $8.5M per year, earning $68M total over the course of the 8 years of the deal. But because of the way it was structured, Stamkos will earn nearly 75% of that over the first 5 years, taking home just $20.5M in the last 3 seasons. What cap hit would he have commanded if the Lightning had insisted on equal payments over the life of the deal?
If we assume a 5% discount rate[1], the total payments on Stamkos’ deal had a net present value of just over $58.5M when he signed the deal. To create an 8-year deal with the same present value and with constant payments each season, you’d need to pay an average of $8.625M per year. Not a huge difference, but that means that the Lightning saved roughly $125K on their cap just by structuring the deal in a manner more friendly to the player. If you’re able to do that for five long term deals, you’ve saved yourself the cost of a whole ELC.
Which players have seen their cap hit reduced the most by structuring? The table below has the 10 deals with the greatest savings from structuring since July 1st, 2015.
| Player | AAV | Equal Payment AAV | Savings From Structuring |
|---|---|---|---|
| Carey Price | $10,500,000 | $10,804,266 | $304,266 |
| Jamie Benn | $9,500,000 | $9,799,430 | $299,430 |
| Anze Kopitar | $10,000,000 | $10,287,619 | $287,619 |
| Brent Burns | $8,000,000 | $8,219,435 | $219,435 |
| Connor McDavid | $12,500,000 | $12,693,446 | $193,446 |
| Brent Seabrook | $6,875,000 | $7,066,703 | $191,703 |
| Ryan O’Reilly | $7,500,000 | $7,666,430 | $166,430 |
| Brad Marchand | $6,125,000 | $6,274,331 | $149,331 |
| Andrew Ladd | $5,500,000 | $5,647,751 | $147,751 |
| Milan Lucic | $6,000,000 | $6,146,556 | $146,556 |
Unsurprisingly, big value deals for longer terms have saved teams the most cap space, but the list of names here also raises a slight problem with the “pay them early” strategy, and that’s the possible increased buyout cost or cap recapture penalties. While ideally a GM’s player evaluation skills are so good that they never run into a scenario where they no longer want a player that they signed to a long term deal, a quick look at the table above shows quite a few names that may be candidates for buyouts in the future.
As such, this buyout/retirement risk needs to be considered alongside any initial cap benefit that you might get from front-loading a deal, and giving front-loaded deals to players past their prime should almost always be avoided as the savings may not be worth the eventual cost.
We can also look at which deals have theoretically left teams with less cap space than they otherwise would have had on a balanced deal.
| Player | AAV | Equal Payment AAV | Cost From Structuring |
|---|---|---|---|
| Michael Matheson | $4,875,000 | $4,741,956 | -$133,044 |
| Adam Larsson | $4,166,667 | $4,086,521 | -$80,146 |
| Damon Severson | $4,166,667 | $4,087,939 | -$78,727 |
| Brandon Saad | $6,000,000 | $5,933,451 | -$66,549 |
| Oscar Klefbom | $4,167,000 | $4,103,884 | -$63,116 |
| Aleksander Barkov | $5,900,000 | $5,837,267 | -$62,733 |
| Nathan MacKinnon | $6,300,000 | $6,249,147 | -$50,853 |
| Alexander Wennberg | $4,900,000 | $4,850,978 | -$49,022 |
| Hampus Lindholm | $5,250,000 | $5,206,106 | -$43,894 |
| Vincent Trocheck | $4,750,000 | $4,711,394 | -$38,606 |
There are two things to note here: first, the list of names here is almost exclusively from clubs that rank towards the bottom of the league in attendance, meaning that these structuring decisions may have been made by ownership rather than the front office.
Second, the magnitude of the “costs” here are much smaller than the savings from front-loading deals. This may reflect the fact that while owners would obviously prefer to push back payments, the people negotiating these deals (GMs and players) often see the same benefit from front-loading them, and so there are simply fewer deals that significantly push money back into later years.
The last thing we can look at is which teams have saved or spent the most cap space on structuring over the last few seasons.
We see a bit of an expected trend here, as most teams have seen a net cap benefit from pushing payments forward, while some of those with weaker finances have actually theoretically paid more to delay some of the cash payments until later. For most teams it doesn’t amount to a significant savings, but some clubs (Dallas, Montreal) have seen more than half-a-million in theoretical savings (although whether they’re really savings is debatable when you consider that they’re locked in with Ben Bishop and Carey Price in net for the 2022-23 season).
While there clearly may be benefits to teams from ensuring they structure their deals properly, how big those benefits are is heavily dependent on the assumptions you make about the discount rate. We’ve used a 5% rate here, but if it’s higher (e.g. in strong investment markets) the benefits jump, while in lower growth environments the benefits of structuring will be muted.
We’ve also significantly over-simplified our analysis here with our handling of signing bonuses. Since signing bonuses are paid out on July 1st of every year, while salaries are paid over the course of the season, they’re actually even more valuable than regular salary to a player. While owners may be hesitant to give out signing bonuses (there’s a risk you pay a significant cost to a player without having them actually play for your team if you trade them before the season starts) they’re a significant way in which teams can offer more value to a player without seeing a corresponding jump in cap hit.
[1] Entirely and completely arbitrary, but probably somewhat reasonable.
While defencemen are often taught to take the pass and leave the shooter to the goalie, Gardiner’s execution left a lot to be desired. As Justin Bourne noted, when you give the shooter that much space you’re basically turning a 2-on-1 into a 1-on-0.
Beyond the poor execution though, there’s a more general question about the wisdom of playing the pass and not the shot. Taking the pass and trusting your goalie to stop the shot is an idea that’s drilled into defencemen’s minds from a young age, and on its face it makes a lot of sense – we know that shots off passes go in more often than unassisted shots, so if you’re looking to minimize goals against (a pretty good idea for a defenceman) your best bet is to take away the higher percentage play.
The problem with this thinking though is that if you play the pass every time it will start to affect your opponent’s behaviour. Shooters will become more aggressive and work towards better shooting locations, which will start to eat away at some of the advantage from defending the pass. There’s a balance to be struck between preventing the more dangerous play and becoming too predictable. Playing the pass more often will probably always be the right play, the question is how often defenders should change things up and play the shot.
On the flip side of things, attackers face a similar decision when they choose whether to shoot or pass. A player with the puck on a 2-on-1 will obviously prefer to set up the higher percentage opportunity (or at least they should prefer to pass – I’ve certainly played with players who would shoot 100% of the time), but if they choose to pass every time they become easy to defend. We know that how each player acts will impact the other’s optimal decision; the question is whether we can predict how often each player will (or should) choose each action based on their incentives (expected goals).
This kind of problem is a perfect application for some game theory. The decision of how often to pass/shoot or defend the pass/defend the shot is just a matter of finding the Nash equilibrium in a mixed strategy game. For each player, we want to find the percentage of the time they should pass (or defend the pass) so that their opponent is indifferent between defending the pass (passing) and defending the shot (shooting). In other words, for an attacker we want to find P such that:
P * xG(Pass | Defend Pass) + (1 – P) * xG(Shot | Defend Pass) =
P * xG(Pass | Defend Shot) + (1-P) * xG(Shot | Defend Shot)
To run the math though, we need a few numbers that will help us calculate the cost-benefit for both the attacker and the defender. First, we need to know how often teams score when they shoot or pass on a 2-on-1. While the NHL doesn’t track this kind of stat, we can use data from Ryan Stimson’s passing project to estimate the shooting percentage when a team passes or shoots on a 2-on-1. Within Stimson’s data, teams that had their last pass before a shot in the offensive zone scored on 26.8% of their shots, while teams that had their last pass before a shot on a 2-on-1 outside of the offensive zone scored on just 14.2% of those shots. While these are broad estimates (there were only 374 2-on-1 shots recorded in his dataset), they should be good enough for our purposes.
Next, we need to know how often teams are successful when they choose to pass on a 2-on-1. Unfortunately, no one has tracked this data so we’ll have to pick numbers that seem reasonable. We also need to provide estimates by the defender’s actions, so we’ll assume that the attacker is successful in making a pass 50% of the time when the defender chooses to play the pass, and 80% of the time when the defender chooses to play the shot.
Lastly, we need to know how often a defender will block a shot when they choose to play the shot rather than the pass. Once again, we don’t have the actual data to calculate this number, but in this case we know that defenders block around 25% of the shots that are taken during all situations in a game. We can assume that the block rate on a 2-on-1 is much lower, so let’s put it at 12.5% – again it’s a guess, but it’s a fairly low number and it should be good enough to give us a general estimate.
With all of our assumptions out of the way, we can look at the expected goals for each scenario (attacker shoot or pass, and defender play the shot or pass).
|  | Attacker Shoot[3] | Attacker Pass |
|---|---|---|
| Defender Play Shot | (1 – 0.125) * 0.142 = 0.124 | 0.8 * 0.268 = 0.214 |
| Defender Play Pass | 0.142 | 0.5 * 0.268 = 0.134 |
Obviously if you’re a defender you want to minimize your expected goals against, so your best case scenario is that the attacker shoots and you play the shot. But if you’re the attacker and you know that the defender is likely to play the shot to get their best case, you’ll probably pass, since your expected goals are higher if you pass when the defender plays the shot. But if you’re a defender and you know the attacker is going to pass, you’ll play the pass. And then if you’re the attacker…well you see how you could go on for a while, right?
But, if we use the equation we had above, we can figure out the equilibrium for this problem, that is how often the attacker should shoot so that the defender doesn’t care whether they play the shot or the pass (and similarly, how often the defender should play the pass so the attacker doesn’t care whether they shoot or pass).
And as it turns out, the conventional wisdom of “always play the pass” is *almost* right – the equilibrium for this game (based on the assumptions we noted above) is that defenders should play the pass roughly 92% of the time and defend the shooter just 8% of the time. On the other hand, shooters should take the shot 82% of the time, while trying the pass just 18% of the time. And while these numbers are heavily dependent on our assumptions, they do make a lot of sense – often the conventional wisdom exists because it’s right, and if you’re a forward who knows the defender is probably playing the pass, you’re likely going to opt for the shot most of the time, while occasionally taking a risk for the higher percentage tap-in.
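For anyone who wants to check the math, the equilibrium falls out of the two indifference conditions directly; a minimal sketch using the payoff numbers assumed above:

```python
# Expected goals per scenario, from the 2x2 table above
xg = {
    ("shot", "shoot"): (1 - 0.125) * 0.142,  # defender plays shot, attacker shoots
    ("shot", "pass"):  0.80 * 0.268,         # defender plays shot, attacker passes
    ("pass", "shoot"): 0.142,                # defender plays pass, attacker shoots
    ("pass", "pass"):  0.50 * 0.268,         # defender plays pass, attacker passes
}

# Attacker shoots with probability p chosen so the defender is indifferent
# between playing the shot and playing the pass:
p = (xg["shot", "pass"] - xg["pass", "pass"]) / (
    (xg["shot", "pass"] - xg["pass", "pass"])
    + (xg["pass", "shoot"] - xg["shot", "shoot"])
)

# Defender plays the shot with probability q chosen so the attacker is
# indifferent between shooting and passing:
q = (xg["pass", "shoot"] - xg["pass", "pass"]) / (
    (xg["pass", "shoot"] - xg["pass", "pass"])
    + (xg["shot", "pass"] - xg["shot", "shoot"])
)
```

This gives p ≈ 0.82 (shoot 82% of the time) and q ≈ 0.08 (play the shot 8% of the time, i.e. play the pass 92%).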
But even though the results make sense, what if our assumptions are actually wrong? The blocked shot number might not matter that much and is probably within a reasonable range given that we actually had some data to base it off of, but the pass completion rates were kind of drawn out of thin air. If these numbers are different in reality, our view of the optimal strategy for both forwards and defenders would change as well.
We can see the impact of our assumptions by looking at how the equilibrium for the attacker and defender change as we vary the pass success rates depending on defender choice. We’ll assume that the probability of a pass being successful is always higher if the defender chooses to play the shot than the pass, which is why you won’t see any data in the top left half of the graphs below.
First, let’s look at defenders – each point on the graph below represents how often the defender should play the shot, depending on the pass completion percentage when playing the shot (x-axis) and the pass completion percentage when playing the pass (y-axis).
There are 3 things that stand out in this graph:
We can also look at what happens to the attacking player’s decision making when we change our assumptions.
Attackers don’t show the same discrete decision regions as defenders – there are no rates that would cause an attacker to always shoot or always pass – but we do see the same wide range of results that we saw for defencemen. While we had originally estimated that forwards should shoot 82% of the time, if we had instead assumed that attackers were successful completing a pass just 70% of the time that the defender played the shot, that equilibrium number would drop to 75%.
While it’s difficult to know where the true equilibrium lies without access to better data on player positioning and pass success rates, it’s likely that there’s no “one-size-fits-all” approach for defending an odd-man rush[4]. Even when one strategy seems to be clearly preferable to another, becoming too predictable can give your opponent an advantage and will certainly make their decision making simpler. Not having strict rules but rather broad guidelines about how to play in a given situation will ultimately lead to better results, and at the very least will help defenders avoid looking like Jake Gardiner did last night.
[1] It’s unlikely that Gardiner knew that Daley is one of the league’s most lethal defencemen on the penalty kill, sitting third amongst blueliners in shorthanded goals since 2014-15 with 2.
[2] Again, not really defying the odds since Trevor Daley may be the last player a goalie wants to face one-on-one on the power play, but how could Jake Gardiner possibly have known that.
[3] These numbers are definitely wrong, since what we observe (the 14.2%) is a blend of player’s shooting when the defender is playing the pass and when the defender is playing the shot, but without knowing how often they’re actually doing each we can’t really break it up any better.
[4] Unless Trevor Daley has the puck while shorthanded, in which case you always cover Trevor Daley.
Those spurious challenges are one reason why the NHL modified the rules around coach’s challenges yesterday. Starting next season, instead of a failed challenge simply resulting in the loss of a team’s timeout, clubs will now face a 2 minute penalty for losing an offside challenge. Upon hearing of this change many fans were apoplectic, complaining that this rule change could bury teams who were already reeling from giving up a goal against, and would severely limit the willingness of coaches to challenge even legitimate missed offside calls.
Fan reaction notwithstanding, however, the question coaches should be asking is whether they should be changing their approach in response to the new rules. The threat of killing off a penalty for a failed challenge may seem like a big deal, but it’s important to note that teams only score on roughly 20% of their power play opportunities. Fans will surely remember when a failed challenge leads to a power play goal against, but there will certainly be occasions when the potential gain from overturning your opponent’s goal outweighs the risk.
The question for any coach then is how sure you have to be of your video coach’s recommendation in order to call for a video review. If you think there’s only a 5% chance of success it’s unlikely that the 1 in 20 odds of taking a goal away outweigh the cost of the power play you’re going to give up 95% of the time. But that decision won’t always be so easy: what if you’re 33% sure you’ll be successful, but you’re already down 1 with just 10 minutes left to play? Is it worth the risk of going down two goals with half a period left for a 1 in 3 shot of being tied?
To answer this question we can look to the always excellent insight of Micah Blake McCurdy of Hockeyviz.com. Last year, Micah wrote about the concept of leverage, which he defined as the cost (benefit) in expected standings points for a team allowing (scoring) a goal given the current score state and time left in the game.
Offensive leverage is the increase in expected points from scoring a goal, while defensive leverage is the cost in expected points from conceding a goal. Leverage allows us to estimate how big the cost of a failed challenge will be by looking at how a team’s expected points will change based on the success or failure of their challenge.
Because a team that’s deciding whether or not to challenge has already conceded a goal, a coach who decides to challenge faces two possible outcomes:
We can use this information to model how a team’s expected points will change based on the probability of success of a challenge.[i]
Change in Expected Points = P(Success) * Offensive Leverage − P(Fail) * 0.2 * Defensive Leverage
As long as a team’s Change in Expected Points is positive, a coach should feel confident that challenging a goal is the right choice. We can then calculate the Break Even Certainty, which is the confidence that a coach needs to have in order to ensure that the Change in Expected Points is positive.
Break Even Certainty = (0.2 * Defensive Leverage) / (0.2 * Defensive Leverage + Offensive Leverage)
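The break-even point is simply where the expected change in points crosses zero. A minimal sketch of the two formulas (the leverage values passed in here are hypothetical; real ones would come from Micah’s model):

```python
def change_in_expected_points(p_success, off_leverage, def_leverage):
    """Expected-points impact of challenging a goal against.
    Success recovers the offensive leverage of the overturned goal; failure
    concedes a power play that converts roughly 20% of the time."""
    return p_success * off_leverage - (1 - p_success) * 0.2 * def_leverage

def break_even_certainty(off_leverage, def_leverage):
    """Minimum success probability at which a challenge is worth it."""
    return 0.2 * def_leverage / (0.2 * def_leverage + off_leverage)

# With hypothetical leverages of 0.50 (offensive) and 0.40 (defensive),
# a coach only needs to be ~14% sure to break even
p_star = break_even_certainty(0.50, 0.40)   # ~0.138
```

By construction, plugging the break-even certainty back into the expected-points formula returns zero; any higher confidence makes challenging positive expected value.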
With this formula and Micah’s leverage data, we can plot the Break Even Certainty for a team playing at home given the current score (after the goal that’s being challenged) and time remaining. So, when should teams challenge under the NHL’s new rule?
For a home team that’s tied before the challenge (i.e. they were up 1 and gave up a goal that they think may have been offside, the green line in the plot above), the break-even rate is pretty consistent around the 18% mark until roughly the 35th minute of the game, at which point the break-even rate starts to rise steadily throughout the rest of the game. This makes a lot of sense – when you get closer to the end of the game, the cost of giving up a goal on the penalty kill rises, as you have less time to score a tying goal afterwards (and you lose the OT point). As such, you need to be more sure you’re going to get it right before you call for a replay review.
The other curves appear mostly where we’d expect[ii] – a team that’s leading needs to have a much higher certainty than a team that’s trailing, which isn’t exactly surprising. What’s interesting to note is how low a team’s certainty needs to be in order to make challenging worthwhile. For a team that’s trailing, less than a 15% chance of success is enough to see a positive expected value at basically any point of the game. And even a team that’s ahead by 2 goals can still challenge with less than 50% certainty for most of the game.
All of which is to say that while these rule changes may get rid of a few longshot challenges, challenging will still be worthwhile in many instances where a team’s odds are far below even. While killing off a 2 minute penalty may seem like a substantial deterrent, the benefits of potentially saving a goal are simply too large to ignore in many instances. As long as the offside challenge remains an option, finding ways to discourage its abuse will be difficult for the league.
The flip side of this, however, is that teams should continue to challenge even when they’re far from certain of success. The value of a timeout is significantly reduced now that teams can’t call one after an icing, and as we’ve seen above, the required success rates are generally so low that coaches can afford to take a flyer every now and then in the hopes of saving a goal.
[i] We’ll ignore the lost 5-on-5 time for simplicity here. Practically speaking, this would be a further net negative for a trailing team, but it should be small enough that it doesn’t significantly impact our conclusions.
[ii] There are a few weird areas where small samples and/or smoothing may make things look odd, but in general the modelled results are pretty intuitive.
This post originally appeared on Hockey Graphs.
One of the weird things about sports that I find fascinating is how often coaches and players seem to go out of their way to avoid having a negative impact on the game, even at the expense of potential positive impacts. People often seem to prefer to “not lose” rather than to win, which can result in sub-optimal decision making, even in the presence of evidence to show that the correct decision is not being made.
There are many examples of this across sports, but the biggest two in hockey are pulling the goalie and playing with 3 forwards on the power play. Analysts have been arguing for many years now about why teams should pull their goalies earlier, but it’s only been in recent seasons that teams have become more aggressive in getting their netminders out earlier.
Similarly, a lot has been written about how much better 4 forward units are on the power play, but adoption of this approach is still far from universal, likely due to coaches fearing being blamed when their teams allow a shorthanded goal.
With that in mind, one of the ideas I’ve been pushing for a while now is that teams should be playing with 4 forwards at 5-on-5 when they’re down late in the game. The thinking behind this tactic is pretty simple: when a team is down late they have little to lose from allowing a goal, and a lot to gain from scoring one.
Micah McCurdy has written previously about this imbalance, presenting the idea of the “leverage” of a given situation. Leverage encompasses two things – first, the change in expected standings points of a goal scored at a given point in time with a given score (or the offensive leverage), and second, the change in expected standings point of a goal allowed at a given time with a given score (or the defensive leverage).
The chart below has the average leverage values for a team down 1 in the 3rd period (I’ve folded together different score states and home/away for simplicity). We can see that as you get closer to the end of the game, the value of scoring a goal increases exponentially (since there’s little time left to score after that, or to be scored on).
Similarly, as we approach the end of the game, the cost of allowing a goal also goes down towards zero – it doesn’t matter if you lose 1-0 or 2-0, so giving up an additional goal has little effect near the end of a game.
All of which is to say that the incentives for a team down late in a game aren’t balanced – the benefit to scoring vastly outweighs the cost of allowing a goal. Given this imbalance why would teams not try playing with 4 forwards when they’re trailing late in the game?
Coaches certainly seem to realize that there’s a point where it’s worth it to throw caution to the wind, as they’ve been pulling the goalie for an extra attacker as far back as 1931. Switching to 4 forwards could be used as an intermediate step, to ratchet up pressure without completely giving up on playing defence.
Unfortunately, this strategy hasn’t exactly caught on with teams, and so we don’t really have a good dataset to work with[1]. While teams have played with 4 forwards at 5-on-5 for over 700 minutes since 2009-2010, most of that time was immediately following the end of a power play, which doesn’t exactly make for an unbiased sample.
Given our limited data then, how can we evaluate whether this strategy might work, without having to convince a handful of coaches to try it long enough to get a sufficient sample?
One way to do it would be to guess the impact of using 4 forwards on scoring rates, and then evaluate the impact on a team’s expected points due to the change in scoring rates. If we can come up with a reasonable estimate of how much scoring would increase for a team playing with 4 forwards (and for the team playing against only 1 defenceman), we can use Micah’s leverage numbers to find the optimal time (if there is one) to switch to 4 forwards.
The question then becomes what a reasonable scoring impact would be. A fair starting assumption would be that using 4 forwards will increase both the rate of goals scored and allowed at 5-on-5 – you’ll get a bonus offensively, but at a cost of weaker defence. But we still need to know how big that increase is for each team to run the numbers.
We could take an initial estimate from the data we have on the power play, where a team’s Goals For Per 60 is roughly 1.26 times higher with 4 forwards than with 3 forwards. Similarly, teams using only 1 defenceman on the man advantage allow goals 1.48 times as often as those with 2. If we assume a constant GF60 and GA60 across the third period, we can use the following scoring rates:
| Scoring Rates When Down 1 in 3rd | 3F-2D | 4F-1D Estimate |
| --- | --- | --- |
| Goals For Per 60 | 2.33 | 2.94 |
| Goals Allowed Per 60 | 2.08 | 3.08 |
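The 4F-1D estimates are just the observed 3F-2D rates scaled by the power-play multipliers quoted above:

```python
# Estimated 4F-1D scoring rates when down 1 in the 3rd, derived from the
# observed 3F-2D rates and the power-play multipliers (1.26x for, 1.48x against)
gf60_3f2d, ga60_3f2d = 2.33, 2.08
gf60_4f1d = round(gf60_3f2d * 1.26, 2)  # 2.94
ga60_4f1d = round(ga60_3f2d * 1.48, 2)  # 3.08
```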
One thing that’s interesting to see is that you will likely end up being a negative goal differential team if you use 4 forwards. While this might seem to kill off the idea of using 4 forwards (generally you don’t want to be outscored in hockey, or so I’ve been told), because the value of a goal scored is much greater than the cost of a goal allowed when you’re down 1, it makes sense to increase the overall rate of scoring for both teams for a short period to try to tie the game.
Now that we have an initial estimate for scoring rates we can take a look at the net estimated benefit of adopting a 4 forward approach when down 1. To simplify the situation, I’ve made two important assumptions about how teams will approach this strategy:
First, once a team switches to using 4 forwards, they’ll continue to use 4 forwards until they score to tie the game, or are scored on to go down 2 (the latter point is to simplify the calculation). Second, under both strategies, teams will pull their goalie with 1.5 minutes to play in the 3rd (and playing with 4 forwards prior to pulling the goalie will have no impact on how teams perform with an empty-net).
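We can also sanity-check the direction of this result with a toy model. The sketch below is not the leverage-based calculation used here: it runs a crude backward induction in whole-minute steps, ignores the goalie pull, treats a regulation tie as worth 1.5 points (a rough stand-in for OT/shootout), and uses the scoring-rate estimates from the table above.

```python
BASE   = (2.33 / 60, 2.08 / 60)  # per-minute P(goal for), P(goal against)
FOUR_F = (2.94 / 60, 3.08 / 60)  # estimated 4F-1D rates from the table above

def expected_points(switch_at, minutes_left=20, start_diff=-1):
    """Backward induction over whole minutes. The trailing team plays 4F-1D
    once `switch_at` minutes or fewer remain; score differential is clipped
    at +/-3 to keep the state space small."""
    diffs = range(-3, 4)
    # terminal values: win = 2, tie = 1.5 (OT stand-in), loss = 0
    value = {d: 2.0 if d > 0 else (1.5 if d == 0 else 0.0) for d in diffs}
    for t in range(1, minutes_left + 1):          # t = minutes remaining
        nxt = {}
        for d in diffs:
            pf, pa = FOUR_F if (d < 0 and t <= switch_at) else BASE
            nxt[d] = (pf * value[min(d + 1, 3)]
                      + pa * value[max(d - 1, -3)]
                      + (1 - pf - pa) * value[d])
        value = nxt
    return value[start_diff]
```

Even under these simplifications, switching late beats never switching (`expected_points(switch_at=5.5) > expected_points(switch_at=0)`), which is the qualitative claim being made here.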
With all the pesky details ironed out, we can (finally) look at whether this strategy might work. The graph below shows the predicted change in expected points versus the amount of time remaining when the team switches over to 4 forwards.
There are three things to note here: first, for a good portion of the 3rd period, switching to 4 forwards would be a net negative. This makes sense given the net negative long term goal differential for a 4F team.
Second, the 4 forward approach is likely to result in a positive change in expected points if adopted any time after the 10 minute mark in the 3rd period.
Finally, the ideal time to swap a defenceman for an additional forward, assuming our goal scoring estimates are right, is around the 5.5 minute mark in the third period. Any time before that and you’re likely allowing your opponents too many opportunities to score; any time after that and you’re probably not giving yourself enough of an opportunity to score.
Although this initial approach gives us some indication that a 4 forward approach holds promise, we don’t have enough data to know whether our assumption of a 1.26x increase in goal scoring is valid.
To get around that, we can test out various other offensive scoring impacts and see how they change our estimate of when (or if) we should switch to 4 forwards. For the purposes of this exercise, we’ll hold the goals against impact constant at 1.48, as that seems like a safe upper bound to me.
While the size of the impact might vary, there’s still some indication that switching to 4F-1D late makes sense. In fact, even if you only see a 5% increase in your goals for rate, it’s still a net positive to switch after the 2:30 mark.
Given that the evidence seems to point towards an offense-oriented 4F approach providing benefits when down late, how large would those benefits be for an average team? If we go back to the original graph and assume that teams change over at the optimal point (around 5.5 minutes for a 1.26x offensive impact), every time a team uses that approach it would be worth about 0.022 points.
Since 2009-10, 35% of games have had one team leading by a goal with 5.5 minutes left, meaning an average team would play in roughly 28.7 of those games over the course of a season. If we assume that they’re leading in half of those 28.7 and trailing in the other half, that works out to a total benefit of 0.32 points over the course of a season. It’s not enormous, but at a rough cost of $1.5 million per standings point that’s a pretty valuable tweak for an NHL team[2].
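The back-of-the-envelope version of that arithmetic:

```python
# Seasonal value of switching at the optimal (~5.5 minute) mark, assuming
# the estimated 0.022-point benefit per use from the graph above
one_goal_games = 0.35 * 82            # ~28.7 games per season with a one-goal margin late
trailing_games = one_goal_games / 2   # assume the team trails in half of them
season_benefit = trailing_games * 0.022
print(round(season_benefit, 2))       # ~0.32 points per season
```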
Even though switching to 4 forwards late likely won’t be the difference between making or missing the playoffs, it is one area where teams can give themselves a marginal boost in the standings without having to go out and change their roster at all. Given how difficult it can be to improve even slightly in today’s NHL, it does seem as if it may be a worthwhile tactic for a team looking to get an additional edge in the season to come.
[1] I remember the Sens using this strategy once, but I can’t think of any other times a team has done it intentionally.
[2] NHL teams, please make cheques payable to “Matt Cane”. Cash is also accepted.
Using 4 forwards on the power play is generally a good strategy. Four forward units take more shots, score more often on those shots, and post a better goal differential than 3 forward groups do.
It’s also a strategy that has become more popular over the last few years. 4 forward units have accounted for roughly 56% of the 5-on-4 ice-time this season, up 4% from last year and more than 15% from 5 years ago.[1]
While usage of the 4 forward strategy is up, its adoption has been far from universal. Many clubs will use 4 forwards on their first unit, but stick to a more traditional 3 forward, 2 defencemen setup on their second unit. Since the start of the 2015-16 season (the first year in which 4 forward usage was greater than 3 forward usage), teams were 1.2 times as likely to use a 3 forward unit on shifts that did not start a power play.[2]
This decision is somewhat confusing because a team’s second power play unit often has a much more difficult task than their first unit does. Second units tend to start their shifts on-the-fly more often, requiring them to carry the puck down the ice and enter their opponent’s zone before they’re able to generate any offensive opportunities.
These zone entry attempts are critical to success on the power play. And not all zone entries are created equal: controlled entries are far more effective than dump-ins at generating offense on the man advantage.
In theory, forwards are better puck handlers than defensemen, and should be better suited to generating controlled entries. Forwards tend to get more opportunities to attempt zone entries at 5-on-5, and we’d expect this experience to produce players who are stronger at carrying the puck in.
And when we look at the data, we see exactly that. Using Corey Sznajder’s zone entry data from the 2013-14 season, we can see that 4 forward units were more likely to execute a controlled entry at 5-on-4 than 3 forward units.
| Entry Type[3] | 3F-2D | 4F-1D |
| --- | --- | --- |
| Controlled | 52.4% | 60.4% |
| Dump-In | 37.7% | 29.6% |
| Failed | 9.4% | 9.3% |
How much value is there in these extra zone entries? The average team attempted roughly 732 5-on-4 zone entries in the 2013-14 season. The average second unit has received roughly 60% of the on-the-fly shift starts since 2010, so we can assume that they also attempted 60% of those entries, or about 439 per season.
If a team that had been using 3 forwards on their second unit exclusively were to switch to a 4 forward setup, on average they’d earn an additional 35 entries. Given the differences in scoring rates between 4 forward carry-in entries and 3 forward dump-in entries, that works out to 1.1 goals over the course of a full season.
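The arithmetic behind the 35-entry estimate, using the entry share and controlled-entry rates quoted above:

```python
# Extra controlled entries gained by moving a second unit from 3F-2D to 4F-1D
team_entries = 732                             # average 5-on-4 entries, 2013-14
second_unit_entries = 0.60 * team_entries      # ~439 entries for the second unit
# moving from 52.4% to 60.4% controlled entries
extra_controlled = second_unit_entries * (0.604 - 0.524)
print(round(extra_controlled))                 # ~35 extra controlled entries
```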
That’s not a huge amount, but it’s not nothing either. At around 6 goals per win, that’s 0.37 extra points per year, just by switching the setup of your second unit.[4]
To put it another way, Michael Schuckers, Tom Pasquali, and Jim Curro estimated that a faceoff win on the power play was worth about 0.028 goals. At that rate, you’d need 39.6 extra faceoff wins on the power play to have the same impact as switching your second unit to 4F-1D. Given the amount that teams tend to obsess over ensuring they have a faceoff specialist on the PK, this seems like a much simpler way to give your team a small boost.[5]
So if there’s a clear case for playing 4 forwards on your second unit, why are so many teams hesitant to change over? One reason may be a desire to finish an unsuccessful power play with 2 defencemen on the ice, but that fact alone is likely not enough to produce a difference this large.
Another factor may be that using two units with 4 forwards requires teams to have 8 forwards who are more talented than their third most offensively gifted defenceman. While for most clubs this probably isn’t an issue, teams who lack offensive depth or who have their bottom 2 lines set up as checking lines may lack the personnel to use 4 forwards on their second unit.
In either case, the argument for using 4 forwards on your second power play unit remains fairly strong. With teams fighting hard for even the most marginal advantages, going all-in on the 4 forward setup is one way teams can give themselves an extra boost when scoring rates are already high.
[1] Somewhat frustratingly, 5 forward usage actually seems to be decreasing.
[2] Note that because we’re measuring only when the shift starts and not which units are on the ice, we may be understating the difference in 4 forward usage between 1st and 2nd units.
[3] Excludes faceoff entries.
[4] This also ignores the benefit of playing with 4 forwards off of a faceoff, so it likely represents the lower end of the estimated advantage.
[5] Claude Giroux had the most defensive zone faceoff wins above average on the penalty kill last year (25), to give you a sense of how rare it is for a player to have an impact that large (data via puckbase.com).
This post originally appeared on Hockey Graphs.
Edit 2017-02-15: An earlier version of this piece had a small error in the regression coefficient for PP Structure Index. While the article previously indicated the coefficient was -0.19, it should in fact be -0.30. The text both above and below has now been corrected.
The importance of structure in a team’s power play is something that’s really easy to see. We’ve all watched a power play executing at the top of its game: the puck flies from player to player, leaving defenders pivoting in place to try to keep up. Each shot looks exactly like it was diagramed by the coach, with attackers working to set up a specific shot from a specific player in a specific location.
A solid structure doesn’t just look good; it actually produces better results. Arik Parnass has written extensively on the importance of structure to power play success, showing that teams who get set up in a dangerous formation score more goals than those who don’t.
Perhaps the best example of the importance of structure is the Columbus Blue Jackets. In late December, Hockey Graphs alum and Visualisation Visionary Micah Blake McCurdy tweeted out these images showing the year-to-year change in shot locations for the Jackets power play:
On the left we see a power play which appears to have few well-defined roles; one where Jack Johnson roamed the entire right side of the ice and David Savard seemed to hang out at both the left point and far right corner. That group was the first unit of a power play that finished 19th in goals for per 60 last year.
On the right, however, we see a much more structured group, one where each player appears to have a specific location they’re aiming to shoot from. It shouldn’t be surprising to see that this improved structure has helped Columbus, as they currently sit first in the league with 8.8 goals for per 60 on the power play, despite being one of the lowest ranking teams in terms of shot attempt generation.
While these cases are of course anecdotal, they may offer some clues about what makes a good power play. Teams who are well-structured on the man advantage will see their players frequently creating opportunities from the same part of the ice, and will have shot location charts that look more like Columbus from this year than the Jackets from the previous campaign.
The question then is how we go from shot location maps, which are easy to read on their own but difficult to compare across teams, to a statistic that allows us to easily summarize a team’s ability in one number.
One way to do this is to measure the size of each of those shapes on the shot location diagrams. We can do that by simply measuring how far each player on a team’s power play shoots from their average shot location. Players on well-structured teams will shoot closer to the same location most of the time, while players on poorly structured power plays will take more shots from locations that are further from their central point.
We’ll start by looking at the average distance of each player’s shots from their average shot location.
Player Structure = (Σ Distance of Shot To Player’s Average Shot Location) / (# of Shots For Player)
We then define a club’s power play Structure Index* by finding the average of these average distances, weighting by the number of shots each player contributes to a team’s power play.
Team Structure Index = (Σ # of Shots for Player * Player Structure ) / (Σ # of Shots for Each Player)
Lower values for a team’s Structure Index are good because they represent a stronger structure, meaning each player’s shots are closer to their average shot location, while higher values signal a weaker structure, meaning each player’s shots are more spread out over the ice.
A simple example illustrates how straightforward the metric is to calculate. First, let’s choose a random player (Nicklas Backstrom), and we’ll assume he took 3 shots on the power play from the following locations:
The average X location is 61.3, while the average Y location is 11. His 3 shots are 1.67, 1.94, and 0.33 feet from the average shot location, respectively. Therefore, Nicklas Backstrom’s average distance from his average location (what we’ve called Player Structure above) can be calculated as follows:
(1.67 + 1.94 + 0.33) / 3 = 1.31
The table below has (fake) data for the Caps power play, including the number of shots each player took and the average distance each player was from their average shot location for those shots.
| Player | Average Distance from Average Shot Location | # of Shots |
| --- | --- | --- |
| Alex Ovechkin | 5.0 | 4 |
| Nicklas Backstrom | 1.3 | 3 |
| John Carlson | 3.0 | 3 |
| TJ Oshie | 3.0 | 2 |
| Marcus Johansson | 4.0 | 2 |
Totalling up all this data, Washington’s Structure Index becomes:
Washington SI = (5.0*4 + 1.3*3 + 3.0*3 + 3.0*2 + 4.0*2)/(4+3+3+2+2) = 3.35
While this fabricated data doesn’t accurately represent a reasonable structure index (3.35 would be the best number ever recorded by far, and an actual team will have many more than 5 players contributing to their structure index), it should give you a sense of what goes into each team’s total.
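The whole calculation is only a few lines of code. A sketch using the fabricated Capitals numbers above:

```python
def player_structure(distances):
    """Average distance of a player's shots from their mean shot location."""
    return sum(distances) / len(distances)

def team_structure_index(players):
    """Shot-weighted average of Player Structure across a team's shooters.
    `players` maps name -> (player_structure, shot_count)."""
    total_shots = sum(shots for _, shots in players.values())
    return sum(ps * shots for ps, shots in players.values()) / total_shots

# Backstrom's three shot distances from his average location
print(round(player_structure([1.67, 1.94, 0.33]), 2))   # 1.31

# The fabricated Capitals data from the table above
caps = {
    "Ovechkin": (5.0, 4), "Backstrom": (1.3, 3), "Carlson": (3.0, 3),
    "Oshie": (3.0, 2), "Johansson": (4.0, 2),
}
print(round(team_structure_index(caps), 2))             # 3.35
```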
So which teams are well structured according to our metric?
The Washington Capitals have been the recent kings of our new stat. The Caps have topped the league in each of the last 4 years, and currently sit second this year, behind – you guessed it – the Columbus Blue Jackets. Columbus is actually more structured this season than any other club in the past 7 years, with the exception of the 2012-13 Capitals.
Interestingly enough, the Caps, along with the Philadelphia Flyers (currently 3rd in structure this season), are two clubs that Arik flagged as being able to consistently establish dangerous formations last year, providing a nice bit of anecdotal support for our new metric.
At the bottom end of the rankings this year are the Florida Panthers, LA Kings, and San Jose Sharks, sitting 30th, 29th and 28th in Structure Index respectively. All three clubs have also struggled to score on the power play, with each of them sitting in the bottom half of the league in Goals For per 60 despite being above average in terms of shot generation.
Full data from the past 7 years, including data up to this weekend’s games, are available here.
With any new statistic, we always want to check both its repeatability (is this statistic more a product of skill or luck) as well as its predictive power (how useful is that skill, if it exists). We measure the former by dividing the data into odd and even games and looking at how well our metric in one half predicts the same metric in the other half, which we call the split-half correlation. The table below lists the split-half correlation for our new power play structure index, as well as 5-on-4 Corsi For Per 60, and 5-on-4 Goals For Per 60, based on data from 2010-11 to 2015-16.
| Metric | Split-Half Correlation |
| --- | --- |
| PP Structure Index | 0.64 |
| 5-on-4 CF60 | 0.70 |
| 5-on-4 GF60 | 0.09 |
While PP Structure Index is less repeatable than shot attempt generation, a split-half correlation of 0.64 is still quite high, especially given the smaller sample sizes involved in special teams play, indicating that there appears to be a repeatable skill in generating opportunities from a consistent location. It is also significantly more repeatable than raw goal scoring.
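The split-half test itself is simple to sketch. The data below is synthetic (a persistent per-team “skill” plus game-to-game noise), purely to show the mechanics:

```python
import numpy as np

def split_half_correlation(game_values):
    """Correlation between each team's average metric value in its
    odd-numbered games and its average in its even-numbered games."""
    odd = [np.mean(v[::2]) for v in game_values.values()]
    even = [np.mean(v[1::2]) for v in game_values.values()]
    return float(np.corrcoef(odd, even)[0, 1])

# Synthetic league: 30 teams, 82 games, a stable skill term plus noise
rng = np.random.default_rng(0)
teams = {}
for i in range(30):
    skill = rng.normal(50, 10)
    teams[i] = skill + rng.normal(0, 2, size=82)
```

A metric dominated by persistent skill (as constructed here) produces a split-half correlation near 1; replacing the skill term with pure noise would drive it toward 0.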
What about predictive power? We can perform a similar test, except instead of looking at how well our metric in one half predicts itself in the other half, we’ll look at how well it predicts goal scoring in the other half.
| Metric | Correlation with GF60 in Other Half |
| --- | --- |
| PP Structure Index | -0.17 |
| 5-on-4 CF60 | 0.20 |
| 5-on-4 GF60 | 0.09 |
There are two important things to take away from this table. First, PP Structure Index is negatively correlated with goal scoring, which we would expect given that a low Structure Index is good (and obviously a high goal scoring rate is good). The second is that PP Structure Index is nearly as good at predicting goal scoring as shot attempts, and much better than past goal scoring, confirming our original hypothesis about the importance of structure in creating a dangerous power play.
Perhaps even more interesting, however, is that the correlation between PP Structure Index and 5-on-4 CF60 is only 0.07, indicating that shot attempt generation and PP structure are somewhat independent skills.
We can then test the relative worth of each of these skills by building a simple model to predict goals in one half using both PP Structure and CF60 in the other half. We first normalize each of our variables to ensure that the coefficients produced by our model are comparable. When we run our model we see that shot generation is somewhat more important than PP structure in predicting out of sample goals (coefficient of 0.34 for CF60 vs -0.30 for Structure Index).
| Variable | Coefficient | P-Value |
| --- | --- | --- |
| Intercept | 6.17 | < 2e-16 |
| PP CF60 (Z) | 0.34 | 0.000039 |
| PP Structure Index (Z) | -0.30 | 0.0002 |
What this is essentially saying is that a 10.75 increase in a team’s CF60 (basically going from league average to the top third of the league) would result in a 0.34 increase in their GF60. Similarly, a 0.77 improvement in a team’s Structure Index (once again going from league average to the top third of the league) would result in a 0.30 increase in their GF60. While shot generation is more important when both metrics are considered together, power play structure is still an important driver of on-ice results.
While our new statistic shows promising results, it is based on crude estimates of a team’s power play structure, and thus has several limitations and weaknesses that should be considered.
First, our model makes the implicit assumption that each team is intentionally trying to have every player shoot from a given location, which is obviously untrue. Our model therefore may underestimate the effectiveness of power plays where one or more players are meant to move freely throughout the zone.
In addition, rush shots may introduce additional noise into our estimates, and they likely skew our view of how teams get setup in formation. Excluding them going forward may improve both the consistency and predictive power of our metric.
Nevertheless, the predictive power of even our crude metric shows that power play structure is a key driver of team success, and that shot location data may provide important clues about how well teams are able to establish dangerous structures in their power play.
*It could have technically been called the Weighted Average Average Distance From Average Shot Location, but WAAFDASL seemed like a terrible acronym.
This post originally appeared on Hockey Graphs.
The usefulness of on-ice save percentage (and derivative metrics such as Sv% Rel and Sv% RelTM) has been the source of many, many heated debates in the analytics blogosphere. While many analysts point to the lack of year-over-year repeatability that these metrics tend to show (past performance doesn’t predict future performance very well) as evidence of their limitations, others (primarily David Johnson of HockeyAnalysis.com) have argued that there are structural factors that haven’t been accounted for in past analyses that artificially deflate the year-to-year correlations that we see.
David’s point is a fair one – a lot can change about how a player is used between two samples, and it’s not unreasonable to think that those changes could impact the results a player records. But we don’t just have to speculate about the impact those factors have – we can test it, by building a model that includes measures of how these factors have changed and seeing how it changes our predictions.
In order to assess David’s claim, I asked him which variables he felt had an impact on save percentage metrics that could potentially be reducing the simple correlations that we observe. Here were his responses:
To play things on the safe side, I decided to check whether future results for any of the 3 on-ice save percentage variables (Sv%, Sv% Rel, and Sv% RelTM) could be predicted using past results of any of the 3 on-ice save percentage variables. I also split the data by position to see whether forwards and defencemen had a different impact on whether their goalie stopped the puck.
I ended up running 18 linear regressions (3 future variables to predict * 3 predictor variables * 2 positions). In each regression the dependent variable was a player’s 2012-2016 results for the variable in the “Predicting Column” (one of the 3 Save Percentage metrics listed above from 2012-2016). The predictors I used were a player’s results from 2008-2012 for the variable in the “Using Variable” column (the player’s past Save Percentage metrics from 2008-2012), as well as the change in the following variables between 2012-2016 and 2008-2012:
-TMGA60
-OppGF60
-DZ FO%
These change variables should capture the impact of changes in Quality of Teammates (defensively), Quality of Opponents (offensively), and Usage/Role, respectively.
Data was taken from puckalytics.com. All players with more than 500 minutes in each period were included.
For each regression I’ve presented 3 results:
-The Adjusted R^2 – how well the overall model predicts the variable it’s trying to predict
-The Regression Co-Efficient for the variable in the “Using Variable” column
-The P-Value for the variable in the “Using Variable” column
Models where the P-Value of the variable in the “Using Variable” column is less than 0.05 have been highlighted in green.
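Each of the 18 regressions has the same shape. A numpy-only sketch of one of them on synthetic data (p-values omitted for brevity; the predictor columns are stand-ins for the variables listed above):

```python
import numpy as np

def ols_adjusted_r2(y, X):
    """OLS of y on X with an intercept; returns (coefficients, adjusted R^2)."""
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    return beta, adj_r2

# Synthetic stand-ins for: past Sv% Rel, change in TMGA60, change in OppGF60,
# and change in DZ FO%, predicting future Sv% Rel with a weak true signal
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = 0.1 * X[:, 0] + rng.normal(size=300)
beta, adj_r2 = ols_adjusted_r2(y, X)
```

With a weak true signal like this, the adjusted R² should land near zero – the same pattern the actual models show.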
As you can see, the Adjusted R^2 for all of the models never gets above 0.086, indicating that even in the best case we’re explaining less than 10% of the variability in our metrics using a player’s past results and any contextual changes that occur. What this means is that although there are a few models where past results are a statistically significant predictor, there is a whole lot of randomness even in 4 years of save percentage data. The implication is that we need to be extremely careful when we use save percentage related statistics to describe the defensive play of skaters. We simply can’t say with a high degree of confidence that the players who have posted the best on-ice save percentage (or Sv% RelTM or Sv% Rel) in the past will be the players who post the best numbers in the future.
None of this is to say that players don’t have any impact on the likelihood that a given shot will go in. Personally, I do believe that players can have an impact on save percentage; the problem is that the impact is relatively small compared to the natural randomness in the samples we observe, so the data that we capture isn’t a good reflection of what that ability is. Even if we know exactly what situation we’re going to put a player into, predicting how their save percentage is going to come out is a rather futile exercise – their past save percentage just doesn’t give much information to go off of.