This article originally appeared on Hockey Graphs.
Zone starts are not that great of a metric. Although certain players do tend to be put out almost exclusively for offensive or defensive purposes, the reality is that for most players, zone starts have a relatively small effect on performance. And yet, many hockey writers still frequently qualify a player’s performance based on observations like “they played sheltered minutes” or “they take the tough draws in the defensive zone”. Part of the problem is that we’ve never really developed a good way of quantifying a player’s deployment. With many current metrics, such as both traditional and true zone starts, it’s difficult to express their effect except in a relative sense (i.e. by comparing zone starts between players). So when a pundit says that a player had 48% of his on-ice faceoffs in the offensive zone, it’s difficult to communicate to most people what that really means.
Going beyond that, even if we know that 48% would make a player one of the most sheltered skaters in the league, the question we should ask is: so what? Simply knowing that a player played tough minutes doesn’t give us any information that’s useful to adjust a player’s observed results, which is really the reason that we care about zone starts. We know that if you start your shifts predominantly in the defensive zone, you’ll likely see worse results, but zone start percentages don’t tell us how much worse they should be. Traditional deployment metrics are too blunt a tool – they provide a measurement, but not one that gives any context to the performance numbers that we really care about.
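To make the metric under discussion concrete, here’s a minimal sketch of how an offensive zone start percentage like the 48% figure above is typically computed. The faceoff counts are invented for illustration; the function name is my own, not from any particular stats site.

```python
# Hypothetical example: computing offensive zone start percentage (OZS%).
# Neutral-zone faceoffs are conventionally excluded from the denominator.
def zone_start_pct(off_zone_starts, def_zone_starts):
    """OZS% = OZ starts / (OZ starts + DZ starts)."""
    return off_zone_starts / (off_zone_starts + def_zone_starts)

# A sheltered player: far more offensive than defensive zone faceoffs
print(round(zone_start_pct(320, 180), 2))  # 0.64, i.e. a 64% OZS player
```

Note that this number, on its own, is exactly the kind of relative-only measurement criticized above: it tells you a player was sheltered, but not what adjustment his results deserve.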
Read more ›
A few days ago, James Mirtle of the Globe and Mail brought up one of the first significant shifts in tactics under the Mike Babcock regime in Toronto.
While the change may be surprising to some fans, particularly given the lack of depth in the Leafs forward corps, it shouldn’t be altogether unexpected. Read more ›
Over the last few years, hockey analytics pioneer Rob Vollman has been putting out a book of scoring projections using historically comparable players to calculate a player’s best case/worst case/average scoring totals. His method is really interesting and yields a lot of great fodder for discussion (for example, is David Clarkson really a similar player to Donald Brashear?), as well as a useful baseline on which to build a fantasy team.
This year, he was kind enough to invite me to contribute to the project, as I put together a similar set of comparable players and projections using some of the modern “enhanced” stats that we now have available. The full guide, with complete projections on over 700 NHL players, is available for $4.99 in the Dobber Store here, or is also available for free (free!!!) with your purchase of the complete 2015-2016 Dobber Fantasy Hockey Guide.
As a sneak peek of what’s inside, we’re going to look at one player from each team and analyze what his comparables suggest is in store for the year to come. The whole series will run on Dobber Hockey over the next few weeks. To date, we’ve got articles up on:
I’ll keep this list updated as we go through the series, so you can always check back here to see what’s new.
Score effects – they’re real, and they’re spectacular. And while the idea and impact of score effects are generally understood by most in the hockey analytics community, they can often be a difficult subject to introduce to newcomers to the field. A solid knowledge of score effects is critical to understanding why teams that outshoot their opponents in a given game tend to lose more often than not, or why a player or team’s unadjusted statistics may be misleading you.
Furthermore, while many people have hypothesized about what causes score effects, there’s currently little documented analysis examining the actual drivers of score effects. So consider this article, and its coming sequel, as first steps at drilling deeper into the factors that lead us to observe score effects. Hopefully these pieces will serve as both a reference point to provide a basic understanding of score effects, and as evidence that helps justify any adjustments we do make when devising new statistics.
Read more ›
With the NHL draft behind us, attention across the hockey world has turned to July 1st, when unrestricted free agents will be free to sign with the highest bidder, and fans of all franchises will pray for a superstar to vault them into contender status. And while every deal signed will be scrutinized from a million different angles to determine whether or not a team paid fair money, most of this discussion will end up conflating the ideas of value (will Martin St. Louis still be putting up 20+ goals at age 42?) and price (should the Bruins, er, Flames, really pay Dougie Hamilton $5.5MM per year?).
This, unfortunately, is a rather large mistake, because what teams pay for a player and what that player is actually worth to a team are two critical yet extremely different questions. Worth (or value) is what a player adds to a team on the ice (in goals or wins), and must be measured based on a player’s total contribution (see, for example, WAR on Ice’s Wins Above Replacement, Hockey Reference’s Point Shares, or Hockey Prospectus’ Goals Versus Threshold). Price, on the other hand, is simply what a team is willing to pay for a player, in contract dollars or cap hit. While ideally these numbers would align, the reality is that teams often value observed historical results more than they should. Teams tend to pay for basic counting stats while ignoring other potentially useful indicators of future success (such as general shot generation or prevention) or contextual factors (such as quality of teammates).
This market inefficiency presents an opportunity for teams, as GMs who can identify which players are over or underpaid relative to their actual contribution should have a long-term advantage in a cap-restricted world. The key to taking advantage of these opportunities, however, is being able to predict what your opponents are going to do. If a general manager knows what the rest of the league will pay a player, this information can be used to help identify potential targets in free agency before the negotiating period begins. Given that marquee free agents can often sign within hours of hitting the open market, this knowledge can help teams avoid chasing after players they know will be out of their budget. On the other hand, it can also help GMs know when they can play hardball with their own players and refuse to submit to offers that are above what the market is likely to pay. All of which is to say that there’s a lot of value in being able to guess what the other 29 clubs in the league will be willing to pay a free agent, which brings us to the main question we’ll look at in this article: can we build a model to predict how much a free agent will end up signing for, based on his historical stats?
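As a toy illustration of the modeling idea (not the article’s actual model), here’s a one-feature ordinary least squares fit predicting cap hit from last season’s point total. Both the data and the single-predictor setup are hypothetical stand-ins; a real model would use many more features and seasons.

```python
# Minimal OLS sketch: predict cap hit ($MM) from points, with made-up data.
def fit_ols(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical training data: (points last season, signed cap hit in $MM)
points = [20, 35, 50, 65, 80]
cap_hits = [1.5, 2.5, 4.0, 5.5, 7.0]
slope, intercept = fit_ols(points, cap_hits)

# Predicted market price for a hypothetical 55-point free agent
print(round(slope * 55 + intercept, 2))  # 4.57
```

The point of even a crude model like this is the comparison it enables: if the market is likely to pay a 55-point player around the predicted figure, a GM can decide in advance whether that price fits the budget.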
Read more ›
Calculating the value of a draft pick is not an easy task – there’s a great amount of uncertainty involved in any given pick, and the number of “misses” significantly exceeds the number of “hits”, particularly in later rounds. Compounding the issue is the lack of a clear definition for success – while most methods to date have defined a successful pick as a player who reached the 200 NHL game mark, this approach is flawed in that it ignores differences in contribution and is biased against higher round picks. In this article, we’ll look at how to address these issues with a few simple changes to the standard approach, and use a novel evaluation approach to build a model that should more accurately estimate the value of an NHL draft pick.
Read more ›
With round 2 of the playoffs wrapped up and a few days of rest penciled into the schedule, there’s no better time to check in on how the entrants in the Puck++ Playoff Prediction Challenge are doing. As a reminder on the format, each entrant provided the probability that each team would win a given series, with each entry scored using the Brier Score. Entrants who provided picks for round 1 but missed the round 2 deadline were given a default guess of 0.5 in each series to keep things interesting and to allow people to jump back in for round 3. I’ve also included a Naïve set of predictions, which were generated using the regular season results as a measure of a team’s “true talent”.
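For anyone unfamiliar with the scoring method, here’s a quick sketch of the Brier score as applied to series picks: the mean squared difference between the predicted win probabilities and what actually happened (1 if the predicted team won, 0 if it lost), where lower is better. The probabilities below are illustrative, not any entrant’s actual picks.

```python
# Brier score over a set of series predictions; lower is better.
def brier_score(probs, outcomes):
    """probs: predicted win probabilities; outcomes: 1 = win, 0 = loss."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Four hypothetical series: mostly-right confident picks, one confident miss
predictions = [0.75, 0.60, 0.50, 0.90]
results = [1, 0, 1, 1]
print(round(brier_score(predictions, results), 4))  # 0.1706
```

This also shows why the 0.5 default guess keeps lapsed entrants in the running: a coin-flip pick scores exactly 0.25 per series regardless of the outcome, worse than a good prediction but far better than a confident miss.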
With all that said, let’s take a look at who’s sitting in the driver’s seat heading into the conference finals:
Read more ›