A few days ago, Conor Tompkins of Null Hypothesis Hockey tweeted out an interesting set of graphs showing the correlation between a goalie’s save percentage in each of the War-On-Ice danger zones and their overall success rate. Conor found that (unsurprisingly) a goalie’s performance on high danger shots was most closely correlated with overall success, with medium shots having slightly less influence, and low danger shots showing almost no relationship. While Conor’s model focused on correlations within the same season, Sam Ventura suggested that a useful extension would be to look at how well the danger zone save percentages predicted future overall save percentages. After all, if performance on high danger shots is most critical for a goalie in determining his current season save percentage, it stands to reason that this would also be a key predictor of future success.
One way we can look at this is to run a multiple linear regression between a goalie’s current season save percentage and his past save percentages broken down by danger zone. We’ll focus on 5v5 data only to avoid the issue of varying penalty rates between teams, and look at goalies who played at least 1000 minutes in back-to-back seasons (all data from War On Ice).
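The regression described above can be sketched in a few lines of Python. The goalie-season numbers below are invented placeholders for illustration, not actual War On Ice data; the real version would use one row per goalie with at least 1000 5v5 minutes in back-to-back seasons:

```python
import numpy as np

# Hypothetical rows: one per goalie, with the prior season's 5v5 save
# percentages by danger zone and the following season's overall 5v5 save
# percentage. All numbers are made up for illustration.
past = np.array([
    [0.830, 0.915, 0.975],   # [high, medium, low] danger sv%
    [0.810, 0.905, 0.970],
    [0.845, 0.920, 0.978],
    [0.800, 0.900, 0.968],
    [0.825, 0.910, 0.972],
])
current_overall = np.array([0.925, 0.915, 0.930, 0.910, 0.920])

# Add an intercept column and fit ordinary least squares.
X = np.column_stack([np.ones(len(past)), past])
coefs, *_ = np.linalg.lstsq(X, current_overall, rcond=None)

intercept, b_high, b_med, b_low = coefs
print(f"high: {b_high:.3f}, medium: {b_med:.3f}, low: {b_low:.3f}")
```

The relative sizes of the three slope coefficients are what we're after: if high danger save percentage carries forward, its coefficient should dominate.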
The idea of the replacement level player is one of the most important concepts in sports analytics. While it isn't strictly necessary for basic player comparisons, the value of the replacement level lies in providing a baseline below which a professional player should not perform. After all, if a player is performing below replacement level, we should listen to the stats and do exactly what they tell us to: replace him with almost any other player.
In hockey, though, defining replacement level can be a difficult task. Part of that stems from the fact that we currently don't have exact methods of rating players' individual contributions. We can say which players generally perform well when they're on the ice, and we can estimate how a player's team performs with and without him, but distilling all the information we have down to an opinion about a player's value is currently more art than science. Hockey is a complex game with many moving parts, and because of that, aggregating all the data we gather into a single rating is a difficult task.
Over at Hockey Prospectus I’ve got an article up on calculating Weighted Shots (or, more specifically, Score Adjusted Weighted Shots) at the individual player level. Give it a read, here. The article expands on my presentation at the Ottawa Hockey Analytics Conference, which you can find here.
Lastly, if you’re interested in seeing the player level SAwSH data from 2008-2009 through to 2013-2014, it’s available here.
Corsi Rel is a stat that, in theory at least, is meant to address the fact that a good player on a poor team is still likely to post a bad CF%. We don't want to punish superstars who are surrounded by replacement level players, in the same way that we don't want to reward hangers-on playing on Cup winners (*cough* Dave Bolland *cough*). For defencemen in particular, Corsi Rel is often a better way to measure their impact, given that they have much less control over play in general and are driven heavily (at least in terms of raw results) by the talent up front that they're paired with.
The problem with Corsi Rel, however, is that it's too blunt an instrument – it assumes that each player can only affect his team's results by a set amount, regardless of the talent of that team. A good player on a bad team is assumed to be a good player on any team he plays on, which we know is unlikely to be true in practice. A player with a +1% Corsi Rel on a 42% team is unlikely to make a 56% team into a 57% squad, but pure Corsi Rel assumes that this would be the case. So while we know that there's value in the information that Corsi Rel contains, the question is how to maximize that value.
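To make the critique concrete, here is a minimal sketch of how Corsi Rel is computed (on-ice CF% minus off-ice CF%), and of the naive projection the stat implicitly makes. All of the shot attempt counts are invented for illustration:

```python
def cf_pct(cf, ca):
    """Corsi For percentage from shot attempts for and against."""
    return 100.0 * cf / (cf + ca)

# Hypothetical player on a weak possession team.
on_ice = cf_pct(430, 570)     # team's CF% with the player on the ice
off_ice = cf_pct(1700, 2300)  # team's CF% with him on the bench

corsi_rel = on_ice - off_ice
print(f"Corsi Rel: {corsi_rel:+.1f}%")  # +0.5% here

# The assumption criticized above: that this difference transfers
# unchanged to any roster, e.g. nudging a 56% team to 56.5%.
projected = 56.0 + corsi_rel
```

The last line is exactly the additivity assumption in question: the same +0.5% is applied whether the surrounding roster is a 42% team or a 56% one.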
In Part I of our review of zone starts, we looked at how the traditional definition of zone starts varied from what most people would consider a "true" zone start, and found that when we applied the true zone start definition to our data, the spread between players in zone start percentages decreased significantly. One key reason for the difference between methods is the inclusion of on-the-fly starts, which tend to make up around 60% of a player's total shifts, and which drastically decrease the impact of each defensive/offensive/neutral faceoff. Another driver is the fact that often a player's zone start percentage is impacted by their own performance: bad players end up with more defensive zone faceoffs due to their inability to drive possession, which incorrectly inflates their defensive zone start percentages. This also helps to create a false link between zone start percentages and possession numbers, leading people to incorrectly infer that tough zone starts are a key driver behind a player's results.
While it’s useful to know that the true difference in zone starts between players is generally minimal, that doesn’t necessarily mean that we can just ignore them completely. To make a judgement about the overall impact that zone starts have, we first need to figure out what the impact of a single zone start is on possession. To do that, we can simply look at all the 5v5 shifts taken since 2008 in aggregate, and calculate the overall Corsi For Percentage broken down by starting location.
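The aggregation described above can be sketched as follows. The shift records here are invented placeholders (the real version would pool every 5v5 shift since 2008), with each shift tagged by its starting location and its shot attempts for and against:

```python
from collections import defaultdict

# Hypothetical shift records: starting location (offensive, defensive,
# neutral zone, or on-the-fly) plus Corsi events for/against on the shift.
shifts = [
    {"start": "OZ",  "cf": 3, "ca": 1},
    {"start": "DZ",  "cf": 1, "ca": 3},
    {"start": "NZ",  "cf": 2, "ca": 2},
    {"start": "OTF", "cf": 2, "ca": 2},
    {"start": "OZ",  "cf": 2, "ca": 2},
]

# Pool shot attempts by starting location, then compute CF% per zone.
totals = defaultdict(lambda: {"cf": 0, "ca": 0})
for s in shifts:
    totals[s["start"]]["cf"] += s["cf"]
    totals[s["start"]]["ca"] += s["ca"]

cf_pct_by_zone = {
    zone: 100.0 * t["cf"] / (t["cf"] + t["ca"])
    for zone, t in totals.items()
}
print(cf_pct_by_zone)
```

With real data, the gap between the OZ and DZ figures is what tells us how much a single zone start is worth in possession terms.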
In 2013-2014, Boyd Gordon and Manny Malhotra were two of the worst players in raw CF% across the league at 42.3% and 41.6% respectively. Most people would argue that their results were not all that surprising given that they faced the toughest zone starts of any players in the league, with over 59% of their shifts starting in the defensive zone according to stats.hockeyanalysis.com, almost 10% higher than anyone else in the NHL. The problem with this argument, however, is that neither player actually started 59% of his shifts in his own end. While both players did see 59% of the faceoffs they were on the ice for come at their own end of the rink, if we look at where each shift actually started and ignore faceoffs that occurred mid-shift, we see a much different story. While both players still faced some of the toughest zone starts of any player in the league, the actual percentage of Boyd Gordon's shifts that started in front of his own goaltender was only about 32%, almost half of what's traditionally reported. Malhotra, on the other hand, showed an even larger gap: only 25% of his shifts actually started in the defensive zone, nearly 35% lower than his faceoff-based metric.
It's not just Malhotra, Gordon, and the others at the extreme ends of the spectrum who are grossly misrepresented by traditional zone start percentages either. Every player across the NHL has their usage numbers skewed by the fact that most sites use faceoffs to measure zone starts rather than looking at the actual shift data (I should point out that most of the main stats sites do make it very clear that they use faceoffs, and that Hockey Analysis actually refers to the metrics as OZFO%/DZFO%/NZFO% now). Part of the reason for the differences is that the traditional measurements don't take into account shifts that start on-the-fly as opposed to at a stoppage in play. And while this explains some of the difference we see, it's not the bulk of the problem. The main issue with the current approach to measuring zone starts is that the measurement is often skewed (sometimes heavily) by the performance and talent of the player in question. Bad players tend to end up with more defensive zone faceoffs because their opponents tend to get more shot attempts against them, which leads to more opportunities for their goalie to freeze the puck and more defensive zone faceoffs. The same idea is true in reverse for good players, and it all adds up to a false correlation between the traditional zone start measure and possession numbers.
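The gap between the two definitions is easy to demonstrate with a toy example. The shifts below are invented, but they reproduce the Malhotra-style pattern: a player who takes lots of mid-shift defensive zone faceoffs looks buried by the faceoff-based metric while actually starting few shifts in his own end:

```python
# Each record: (how the shift started, zones of every faceoff the player
# was on the ice for during that shift). "OTF" = on-the-fly change.
# All data invented for illustration.
shifts = [
    ("DZ",  ["DZ", "DZ"]),
    ("OTF", ["DZ"]),
    ("OZ",  ["OZ"]),
    ("OTF", []),
    ("DZ",  ["DZ", "NZ"]),
    ("OTF", ["DZ"]),
    ("NZ",  ["NZ"]),
    ("OTF", []),
]

# Traditional faceoff-based metric: share of all on-ice faceoffs in the DZ.
faceoffs = [zone for _, zones in shifts for zone in zones]
dzfo_pct = 100.0 * faceoffs.count("DZ") / len(faceoffs)

# "True" zone start metric: share of shifts that actually began in the DZ.
true_dz_pct = 100.0 * sum(start == "DZ" for start, _ in shifts) / len(shifts)

print(f"DZFO%: {dzfo_pct:.1f}, true DZ start%: {true_dz_pct:.1f}")
```

Here the faceoff-based number is 62.5% while only 25% of shifts truly started in the defensive zone, because mid-shift defensive faceoffs and on-the-fly starts both inflate the traditional figure.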
Today on Hockey Prospectus I’ve got an article up looking at how each team’s special teams units have performed against expectations this year. Take a look at it here.