This article originally appeared on Hockey Graphs.
Zone starts are not that great of a metric. Although certain players do tend to be put out almost exclusively for offensive or defensive purposes, the reality is that for most players, zone starts have a relatively small effect on performance. And yet, many hockey writers still frequently qualify a player’s performance with observations like “they played sheltered minutes” or “they take the tough draws in the defensive zone”. Part of the problem is that we’ve never really developed a good way of quantifying a player’s deployment. With many current metrics, such as both traditional and true zone starts, it’s difficult to express their effect except in a relative sense (i.e. by comparing zone starts between players). So when a pundit says that a player had 48% of his on-ice faceoffs in the offensive zone, it’s difficult to communicate to most people what that really means.
Going beyond that, even if we know that 48% would make a player one of the most sheltered skaters in the league, the question that we should ask is so what? Simply knowing that a player played tough minutes doesn’t give us any information that’s useful to adjust a player’s observed results, which is really the reason that we care about zone starts. We know that if you start your shifts predominantly in the defensive zone, you’ll likely see worse results, but zone start percentages don’t tell us how much worse they should be. Traditional deployment metrics are too blunt of a tool – they provide a measurement, but not one that gives any context to the performance numbers that we really care about.
To create a better zone start metric, one that we can use to adjust our observed results, we have to go back to basics. While most analyses to date have looked at the effect of zone starts on a player’s Corsi Percentages, these studies implicitly assume that, on average, shooting and save percentages are constant over time. This assumption is problematic because in reality, we’d expect the game to look different in the first few seconds following a faceoff – because the winning team generally needs to take time to get organized (or take the puck to the other end of the ice) we shouldn’t expect the offensive opportunities in the first few seconds following a faceoff to be as plentiful, or as dangerous. If teams aren’t shooting in the seconds immediately following a faceoff, or they’re taking rushed, low-percentage shots instead of waiting for better chances, using Corsi as a measuring stick doesn’t make sense.
We can simplify this picture by focusing solely on plain old goal differential. By looking at goals we’re able to capture both the change in shooting and save percentages, as well as in shot generation rates. Because we’re going to be looking at multiple years’ worth of data (2009-10 through 2014-15) across all the teams in the league, we don’t need to worry about the sample size issues that usually push us towards using Corsi.
We’ll start by looking at the expected goal differential, by second, for a faceoff in a given zone. We’ll also break it down by which team won the faceoff, as we’d obviously expect the goal differential to be much different following a defensive zone loss than a defensive zone win. The results for the 3 winning faceoff situations are given in the graph below (the results for the losing team are simply the winning team’s results multiplied by -1).
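To make the construction concrete, a curve like this could be built from play-by-play data roughly as follows. This is a minimal sketch, not the original study’s code: the event fields (`time`, `zone`, `winner`, `team`) are hypothetical, times are assumed to be whole seconds of 5v5 play, and for simplicity the window isn’t truncated at the next faceoff or period break, as a full implementation would need to do.

```python
def goal_diff_curve(faceoffs, goals, max_seconds=60):
    """Expected net goal differential, by second, for the team that won
    a faceoff in each zone. `faceoffs` is a list of dicts with the
    hypothetical keys 'time', 'zone' ('OZ'/'NZ'/'DZ') and 'winner';
    `goals` is a list of dicts with 'time' and 'team'. Times are whole
    seconds. Losing-team curves are these values multiplied by -1."""
    zones = ('OZ', 'NZ', 'DZ')
    net = {z: [0.0] * (max_seconds + 1) for z in zones}
    counts = {z: 0 for z in zones}
    for fo in faceoffs:
        counts[fo['zone']] += 1
        for g in goals:
            dt = g['time'] - fo['time']
            if 0 < dt <= max_seconds:
                # a goal for the winning team counts +1, against it -1
                sign = 1 if g['team'] == fo['winner'] else -1
                # a goal at dt affects every cumulative total from dt on
                for s in range(dt, max_seconds + 1):
                    net[fo['zone']][s] += sign
    # average over the number of faceoffs observed in each zone
    return {z: [v / counts[z] if counts[z] else 0.0 for v in net[z]]
            for z in zones}
```

A real implementation would also need to cut each window off at the next faceoff, so that a single goal isn’t credited to two different draws.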
The first observation worth noting is how small the effect is – even after winning an offensive zone draw and continuing play for 45 seconds, the net effect on goal differential is still less than 0.02 goals. This is in line with what others have previously found, although it’s slightly lower because we don’t assume a constant shooting percentage across the whole interval. It also emphasizes how many additional faceoffs a player would need to win to have a significant effect on a team’s results – as it turns out, faceoffs aren’t as big a deal as we might have guessed.
The other key thing to note here is that the amount of time before we see the full effect of a zone start (i.e. where the graph begins to level off) varies by where the faceoff was won. A player who is on the ice for an OZ faceoff win won’t see the full effect of that win for nearly 60 seconds, while for NZ and DZ wins the effect stabilizes in much less time (approximately 14 and 8 seconds, respectively). This is kind of a big deal, because it means that a player who leaves the ice 15 seconds after an OZ win is really only experiencing about 75% of the “benefit” that a player who played a full 45-second shift following an OZ win would experience.
To illustrate this more clearly, look at the difference between the following 2 situations:
- A player loses a defensive zone faceoff and plays continuously for 30 seconds before leaving the ice on an on-the-fly change.
- A player loses a defensive zone faceoff, the other team controls the puck for 3 seconds, takes a shot on goal and it’s frozen. The player stays out for the faceoff and loses it, and then plays for 27 seconds before leaving the ice.
It should be obvious from the graph above that these 2 cases don’t represent the same level of danger – in the 2nd situation, where the play lasted only 3 seconds before another faceoff occurred, the risk the player faced was less, since the shot was rushed and unlikely to be the result of his opponents setting up a high quality chance. The faceoff provides a reset, and the seconds immediately following the faceoff are less dangerous than those that occur after a team has had sufficient time to formulate an attack.
All of which is to say that knowing both the result of the faceoff and the amount of time a player played following a faceoff is really important if we want to accurately quantify a player’s usage. A 60% center who leaves the ice immediately following each defensive zone win should have a much smaller zone start adjustment than a 40% center who plays a full 40 second shift, regardless of the faceoff result.
So if we know that we need to change the way we’re measuring and adjusting for zone starts, what’s the best way to do it, considering all that we’ve learned so far? The simplest method is to use a player’s on-ice faceoff results, his shift times, and the graph above to find the aggregate effect on goal differential we’d expect an average player deployed in exactly the same situations to see.
To create this individual level adjustment, we’ll perform the following steps:
1. Find each 5v5 faceoff for which a player was on the ice.
2. Classify that faceoff by zone (Off. Zone, Neu. Zone or Def. Zone) and result (win or loss).
3. Calculate the number of seconds until either:
   - Another faceoff occurred with the player on the ice; or
   - The player left the ice.
4. Using 1), 2) and 3), look up the expected goal differential in the graph above. For example, if a player was on the ice for a neutral zone win and left the ice 12 seconds following it, we’d credit him with +0.0016 goals, while if he was on for a defensive zone loss and a 20 second shift afterwards, that shift would be worth -0.0153 goals to him.
5. Sum the expected goal differentials for each faceoff to come up with an Expected Faceoff Goal Differential, or xFOGD.
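The steps above boil down to a table lookup and a sum. A minimal sketch, assuming the per-second curves from the graph have been extracted into lists (`expected_gd` is a stand-in for that data; the illustrative values in the test come from the neutral-zone and defensive-zone examples above):

```python
def xfogd(shifts, expected_gd):
    """Sum expected goal differentials over a player's on-ice faceoffs.

    `shifts` is a list of (zone, won, seconds) tuples: the faceoff zone
    ('OZ'/'NZ'/'DZ'), whether the player's team won the draw, and the
    seconds until the next on-ice faceoff or until the player left the
    ice. `expected_gd[zone]` is the winning team's cumulative expected
    goal differential, indexed by second."""
    total = 0.0
    for zone, won, seconds in shifts:
        # cap at the end of the curve, where the effect has leveled off
        s = min(seconds, len(expected_gd[zone]) - 1)
        value = expected_gd[zone][s]
        # the losing team's value is the winner's multiplied by -1
        total += value if won else -value
    return total
```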
Expected Faceoff Goal Differential (xFOGD)
xFOGD represents our best estimate, using six years of historical data, of how much a player’s goal differential was impacted by his zone starts and on-ice faceoff results. The table below shows the top and bottom 10 players in total expected faceoff goal differential (xFOGD) from 2009-2010 to 2014-2015.
| Season | Player (Bottom 10) | xFOGD | Season | Player (Top 10) | xFOGD |
|--------|--------------------|-------|--------|-----------------|-------|
| 2011-12 | MALHOTRA, MANNY | -3.08 | 2010-11 | EHRHOFF, CHRISTIAN | 2.18 |
| 2013-14 | GORDON, BOYD | -2.98 | 2013-14 | SHARP, PATRICK | 2.20 |
| 2014-15 | GAUSTAD, PAUL | -2.84 | 2013-14 | TOEWS, JONATHAN | 2.28 |
| 2013-14 | KRUGER, MARCUS | -2.35 | 2010-11 | BURROWS, ALEX | 2.45 |
| 2014-15 | GORDON, BOYD | -2.26 | 2014-15 | TAVARES, JOHN | 2.49 |
| 2013-14 | MCCLEMENT, JAY | -2.22 | 2011-12 | BURROWS, ALEX | 2.88 |
| 2013-14 | BOLLIG, BRANDON | -2.19 | 2011-12 | SEDIN, DANIEL | 2.94 |
| 2014-15 | RISTOLAINEN, RASMUS | -2.14 | 2010-11 | SEDIN, HENRIK | 2.99 |
| 2014-15 | KRUGER, MARCUS | -2.07 | 2010-11 | SEDIN, DANIEL | 3.17 |
| 2011-12 | BREWER, ERIC | -2.06 | 2011-12 | SEDIN, HENRIK | 3.37 |
Here we can see that for certain players, the effects can be rather large, with the Sedins seeing benefits worth about 3 additional goals in both 2010-11 and 2011-12, while their teammate Manny Malhotra’s stats decreased by roughly the same amount in the latter season. While it may seem odd to express our usage stat in goals, the easiest way to think of it is that xFOGD represents the amount we’d need to adjust a player’s plus-minus by, if plus-minus were a perfect measure of player performance.
While the raw stats above are useful in adjusting aggregate performance, it’s also helpful to look at rate statistics (i.e. xFOGD/60) to get a sense as to which players faced the toughest usage on a per minute basis. This metric should align more closely with our traditional zone start numbers, but note that it will still be heavily influenced by a player’s on-ice faceoff results.
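The rate version is just the season total scaled by 5v5 ice time; a minimal sketch (the figures in the test are hypothetical, chosen only to show the arithmetic):

```python
def xfogd_per_60(total_xfogd, toi_minutes):
    """Convert a season total xFOGD into a per-60-minutes rate,
    given the player's 5v5 time on ice in minutes."""
    return total_xfogd * 60.0 / toi_minutes
```

For example, a hypothetical player with -2.0 xFOGD over 600 5v5 minutes comes out at -0.20 per 60, which is the scale the table below uses.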
| Season | Player (Bottom 10) | xFOGD/60 | Season | Player (Top 10) | xFOGD/60 |
|--------|--------------------|----------|--------|-----------------|----------|
| 2011-12 | MALHOTRA, MANNY | -0.25 | 2013-14 | TOEWS, JONATHAN | 0.12 |
| 2014-15 | GAUSTAD, PAUL | -0.24 | 2014-15 | KANE, PATRICK | 0.13 |
| 2013-14 | GORDON, BOYD | -0.22 | 2014-15 | RICHARDS, BRAD | 0.13 |
| 2014-15 | GORDON, BOYD | -0.20 | 2011-12 | TOEWS, JONATHAN | 0.13 |
| 2014-15 | NYSTROM, ERIC | -0.18 | 2010-11 | SEDIN, HENRIK | 0.15 |
| 2012-13 | MCCLEMENT, JAY | -0.18 | 2010-11 | BURROWS, ALEX | 0.16 |
| 2013-14 | BOLLIG, BRANDON | -0.16 | 2011-12 | BURROWS, ALEX | 0.16 |
| 2013-14 | KRUGER, MARCUS | -0.16 | 2010-11 | SEDIN, DANIEL | 0.17 |
| 2014-15 | HENDRICKS, MATT | -0.16 | 2011-12 | SEDIN, DANIEL | 0.17 |
| 2013-14 | MCCLEMENT, JAY | -0.15 | 2011-12 | SEDIN, HENRIK | 0.18 |
Both lists are pretty consistent, which shouldn’t be surprising given that we’re looking at the extreme ends of the spectrum, but it also shows that players do shift around, and that those with highly specialized roles are still identified by our new metric. Full results, including both seasonal xFOGD and xFOGD/60 numbers can be found here.
The most interesting thing to look at is which players’ “usage” change the most when we compare metrics – that is, players whose usage differs significantly when measured by xFOGD/60 versus a more traditional metric such as OZFO%. Since the numbers won’t be on the same scale, we’ll normalize each stat by calculating each player’s Z-Score for both xFOGD/60 and OZFO%, and then find the absolute difference in Z-Scores between the two metrics. The players with the 10 highest absolute deltas are presented in the table below.
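The normalization described above can be sketched as follows, using population standard deviations from Python’s `statistics` module; the parallel-list input format is an assumption of the sketch:

```python
from statistics import mean, pstdev

def z_scores(values):
    """Standardize a list of values to z-scores."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

def usage_deltas(xfogd60, ozfo_pct):
    """Absolute difference in z-scores between xFOGD/60 and OZFO%
    for each player; the two lists are parallel, one entry per
    player-season."""
    zx = z_scores(xfogd60)
    zo = z_scores(ozfo_pct)
    return [abs(a - b) for a, b in zip(zx, zo)]
```

The players surfaced by this comparison are the ones whose deployment story changes most when faceoff results and shift lengths are accounted for.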
| Season | Player | z-xFOGD/60 | z-OZFO% | Z-Score Delta |
|--------|--------|------------|---------|---------------|
Here we can see that there are some pretty significant differences in how the two metrics evaluate a given player’s usage. 5 of the 10 players see the sign of their usage change (i.e. they went from having favourable to unfavourable usage, or vice versa), and most of the changes have players going from relatively average usage numbers to much more extreme numbers. The list is mostly populated with players who posted exceptionally good faceoff numbers, which is what we’d expect – their own abilities on the draw tend to remove a lot of the risk they’d seen from their raw deployment numbers.
While some might argue against making such large adjustments to a center’s usage based on their own faceoff results, it’s actually one of the main benefits of using an approach like this. By separating out the faceoff result from the zone in which it takes place, we can isolate the value of the win from the value (or cost) of the deployment. And because 80% of players on the ice aren’t the ones taking the draw, it’s also a better measure of how they’re specifically deployed – after all, if you’re a defenceman put out for 10 faceoffs a night with Patrice Bergeron you’ll likely see significantly better results than if your most common centerman is Kevin Hayes. Classical zone start measures don’t capture that, but xFOGD does.
Furthermore, the approach presented here is intuitively more accurate – we’re using a more granular view of what actually happened to each player when they were on the ice, so our adjustments should be more reflective of the actual risk or opportunity each player saw from their deployment. Although for most players the overall effect is going to be small, the examples above clearly show that for some players it’s quite significant. It’s those players that justify the additional effort required to calculate xFOGD, as our assessment of a player’s deployment can go from “extremely tough” to “easier than average” simply by digging a bit deeper into their shifts.