What are we missing when it comes to measuring the various ways a player helps their team succeed? That was the question hanging over me on a flight to Vancouver in August 2021. And although I wouldn’t loop back to fully tackle that brainstorm until almost exactly one year later, that was the moment LB-Hockey was born. It wasn’t built with any specific tools or fancy visualizations in mind, but rather on the foundation of an idea: quantifying a player’s worth on the ice using more than just expected goal shares. And that’s what we’ll be diving into today: how we built a Standings Points Above Replacement (SPAR) model that both captures NHLers’ contributions to their team and highlights their playing styles to a degree beyond what is available elsewhere in the public hockey analytics sphere.
The Current State of Hockey Analytics Models & Charts
The concept of SPAR isn’t new for player evaluation, given that it’s just a different side of the WAR (Wins Above Replacement) coin. WAR is prevalent throughout hockey analysis and is the big number on the cards over at AdvancedHockeyStats, while GAR (Goals Above Replacement) and SPAR are used throughout Evolving Hockey’s work. While you need to score goals to win games and move up in the NHL, the standings are ultimately dictated by points, so points are what we identified as the optimal final output for our model.
On those sites, however, these metrics are constructed from a skater’s play-driving ability as determined by on-ice expected goals (along with finishing and penalties). This is a great way to estimate player value: when accounting for factors such as zone deployment, quality of teammates, quality of competition, game score, and more, it stabilizes over large samples and allows us to isolate impacts on the offensive and defensive sides of the ice. Evolving Hockey’s RAPM charts visualize this well, and you can read more about them here.

Unfortunately, outside of pure offensive and defensive impacts, plus finishing and penalty inclinations, this doesn’t shed a ton of light on skaters’ playing styles. Take the graph above, for example. It’s easy to interpret and tells us that Jeff Skinner is an offensively strong but defensively porous player with a roughly average impact on the power play. However, that’s already the extent of the analysis possible through this chart, and expanding on it is one of the two main areas we’re trying to tackle with our new model.
So how do we keep a good level of interpretability, maintain SPAR as the final output, and include play-driving ability like this while expanding on possible takeaways and capturing more types of on-ice contributions?
Maximizing Available Data
With this exercise, we are bounded by what is publicly available. MoneyPuck (MP) is one of the most prominent websites in the space, as it offers cleaned NHL data ready to be downloaded at the skater, goal, team, and shot levels, along with its own expected goals model tacked on. Most of this still confines us to on-ice xG analysis, however. And unfortunately, the NHL hasn’t made its raw EDGE data accessible to the public (yet, anyway; hopefully that day comes eventually). That infrared tracking would inject a wealth of data points, permitting the automated tracking of important events such as zone entries, puck recoveries, passing plays, and more.

But luckily, Corey Sznajder’s AllThreeZones Project (A3Z) gives the public and data-starved analysts like me access to these microstats anyway for around 400 games of each NHL season. As much as we would love to have this depth of statistics over a fully encompassing sample, Corey’s diligent manual tracking allows us to continue our mission of expanding on the concept of SPAR.
Now that we know all of this exists, why haven’t we seen more projects or models including it all? The answer lies in the hurdles of data merging, as there isn’t an obvious key to bring both datasets together. Ideally, we would have access to the NHL-provided “playerID” in both sources, which would let us link player games easily. Unfortunately, A3Z’s primary identifier is the player name, which varies too much over time and between sources to be relied upon as a long-term solution. Luckily for us, however, A3Z’s game sheets still hold each player’s team and jersey number, which is enough for us to build a pipeline that runs through all of their game sheets, calls the NHL API for the rosters of those individual games, and attaches playerIDs to everyone.
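Here’s a rough sketch of that matching step. The endpoint and JSON field names are placeholders for whichever NHL rosters feed you actually use, and the A3Z column names are hypothetical:

```python
import pandas as pd
import requests

# NOTE: the endpoint and JSON field names below are assumptions for illustration;
# point these at whichever NHL rosters feed you actually rely on.
ROSTER_URL = "https://api-web.nhle.com/v1/gamecenter/{game_id}/boxscore"

def game_roster(game_id: int) -> pd.DataFrame:
    """Return a (team, jersey number, playerId) table for one game."""
    data = requests.get(ROSTER_URL.format(game_id=game_id), timeout=10).json()
    rows = []
    for side in ("homeTeam", "awayTeam"):                        # assumed response keys
        team = data[side]["abbrev"]
        for position_group in data["playerByGameStats"][side].values():
            for p in position_group:
                rows.append({"team": team,
                             "number": p["sweaterNumber"],
                             "playerId": p["playerId"]})
    return pd.DataFrame(rows)

def attach_player_ids(a3z_sheet: pd.DataFrame, game_id: int) -> pd.DataFrame:
    """Join an A3Z game sheet to NHL playerIDs on team + jersey number,
    which is unique within a single game."""
    return a3z_sheet.merge(game_roster(game_id), on=["team", "number"], how="left")
```

The key insight is simply that team plus jersey number is unique within a single game, so it works as a join key even when name spellings disagree between sources.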
Although I won’t dive into it here, a slightly more complicated but similar process is undertaken to link both sources’ shot data. We wrote an entire article on that process and the insights it can provide, linked below.
Resulting from all this work are two important pieces: A3Z’s manually tracked game sheets with added player and game identifiers, and our linked MP-A3Z shot dataset that blends expected goals with passing information plus various contextual indicators (rush vs. forecheck vs. cycle, screen presence, crossing the royal road, etc.) for every tracked shot.
Now that we’ve got all the important data architecture out of the way, we can move on to the fun stuff: blending the stats & eye test.
Identifying Quantifiable Skills
As mentioned at the top of this article, the main goal of this model is to enable deeper evaluation regarding how each skater plays. We can accomplish this through a skills-centric framework. With this approach, every metric that will be fed into the model looks to capture a specific skill within a player’s toolkit. And the calculations will be devised to appropriately measure that particular ability, rather than the other way around (where the formulas would be built first, then assigned to skills that could explain their scores).
First things first, let’s divide player toolkits into categories. I have always wanted to separate success by zone, so we group plays that begin in the offensive, defensive, and neutral zones into the Zone Offence, Zone Defence, and Transition categories. But what’s beautiful about hockey is the constant flow & chaos between a player’s linemates and competition, so we add two more categories: Checking and Teamplay. These include plays all over the ice that capture how a skater affects their opposition and teammates, respectively. In the end, we’ve settled on five skill metrics within each category, for a total of 25.
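In skeleton form, the framework is just five buckets of five. The skill names below are placeholders; the real 25 are described in the video and glossary that follow:

```python
# Skeleton of the skills framework: five categories, five skills each (25 total).
# Skill names here are placeholders, not the model's actual metric names.
SKILL_CATEGORIES: dict[str, list[str]] = {
    category: [f"{category.lower().replace(' ', '_')}_skill_{i}" for i in range(1, 6)]
    for category in ["Zone Offence", "Zone Defence", "Transition", "Checking", "Teamplay"]
}
```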
Because it would take an unnecessarily hefty amount of time to type (in an already long article), I’ve compiled examples and descriptions of each skill in the video below. Or you could read the quick bullet points in the multi-year cards glossary instead if you prefer.
The final 25 aren’t exactly the same as the initial brainstorm; they’re not even the same as when LB-Hockey launched! Some metrics lacked the necessary data (Shot Pressures), were dropped to make room for another (Passing Entries), were turned into badges (Compatibility), or were removed entirely due to insignificant results (Smart Changing). But after maintaining this data for over three years now, the current iteration feels like a group that captures almost everything I was hoping for.
Which Skills Are More Important to Team Success?
Each skill metric is transformed using the following method (a minimal code sketch follows the list):
1) Splitting up by position (forwards & defensemen)
2) Dividing by the positional standard deviation
3) Subtracting the positional replacement level (so players with a score at the replacement level will be zero)
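In code, that scaling looks roughly like the sketch below, assuming a long-format table with one row per player season. Column names are hypothetical, and how the replacement-level threshold itself is chosen is a separate exercise:

```python
import pandas as pd

def scale_skill(players: pd.DataFrame, skill: str,
                replacement_level: dict[str, float]) -> pd.Series:
    """Apply the three-step scaling to one skill metric:
    1) split by position, 2) divide by the positional standard deviation,
    3) subtract the positional replacement level (expressed in those same
    scaled units), so replacement-level players land at exactly zero."""
    scaled = players.groupby("position")[skill].transform(lambda s: s / s.std())
    return scaled - players["position"].map(replacement_level)
```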
This is similar to a z-score for those familiar with statistics, but centred on the replacement level rather than the mean, which follows from our goal of producing a final SPAR-style output as established earlier. Now that we have all of our scaled data points, how do we combine them? Not all skills are created equal; some are used more frequently, have bigger individual impacts, require more luck, and so on. To determine the weight we assign to each skill, we calculate three different measures: impact, repeatability, and multicollinearity.
When we talk about impact here, it’s about quantifying how important these skills are to team success. To calculate this, we take all team seasons from 2021-22 to 2024-25 and isolate their actual SPAR from forwards and defencemen (total team standings points minus the replacement-level team threshold and goalie SPAR). We also sum each roster’s score for every skill and position, weighted by ice time, which tells us how much of a specific skill each team averaged on the ice at any given time.

Now, we can simply conduct a linear regression correlating each team’s weighted-average forward and defence talent in a specific area against the team’s actual output attributed to skaters. Thankfully, all slope coefficients were positive, indicating that every skill we’ve outlined helps teams win games (whether it is held by an F or a D).
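Concretely, the per-skill regression can be sketched like this, assuming a team-season table with hypothetical column names (one ice-time-weighted column per skill plus the actual skater SPAR):

```python
import pandas as pd
from scipy.stats import linregress

def skill_impact(team_seasons: pd.DataFrame, skill_cols: list[str]) -> pd.Series:
    """Estimate each skill's impact as the slope from regressing a team's actual
    skater SPAR on its ice-time-weighted average of that skill."""
    slopes = {skill: linregress(team_seasons[skill],
                                team_seasons["actual_skater_spar"]).slope
              for skill in skill_cols}
    return pd.Series(slopes).sort_values(ascending=False)
```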
As we can see above, our “Consistency” metric ends up as the most important skill for team success. This makes plenty of sense, since we can easily think of consistency as an overall multiplier that dictates how often a player applies their entire toolkit to its full potential. We have a whole article and tool dedicated to how we measure consistency, which you can find below if interested (don’t worry, it’s nowhere near as long as this piece).
Moving to repeatability: this is where we look to establish not just who can be a good player, but who can be a sustainably strong one. Much has been said about the importance of finishing chances; however, reproducing it over a large sample is highly dependent on luck. It is a much more difficult skill to repeat year over year, especially for defencemen, where it ranks last of the 25 metrics in repeatability. The calculation is fairly simple here: we correlate players’ year-one values with their year-two values and average that across season pairs. Overall, adding this sustainability factor to our equation should help us isolate the sparks from the embers.
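Here’s roughly what that looks like, with hypothetical column names and the consecutive-season bookkeeping simplified:

```python
import pandas as pd

def skill_repeatability(player_seasons: pd.DataFrame, skill: str) -> float:
    """Average year-over-year correlation of one skill: line up each player's
    value in season N with their value in the following tracked season,
    correlate within each season pair, then average those correlations."""
    df = player_seasons.sort_values(["player_id", "season"]).copy()
    df["next_value"] = df.groupby("player_id")[skill].shift(-1)
    paired = df.dropna(subset=["next_value"])      # keep only back-to-back seasons
    return (paired.groupby("season")
                  .apply(lambda g: g[skill].corr(g["next_value"]))
                  .mean())
```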
And last, we have multicollinearity, which estimates how much each skill’s coverage overlaps with the other measured skills. With this, we can minimize how often we double-count the way a player contributes. For example, Chance and Sustained O-Zone Pressure place in the bottom three for both Fs & Ds because so many of the other offensive skills we measure would drive up those stats; naturally, we would expect a player to be on the ice for more shots and chances if they were a strong playmaker, puck-carrier, individual creator, etc. On the flip side, skills like finishing, d-zone retrievals, and net-front play grade out as relatively unique because they occupy more independent slots within a skater’s toolkit.
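As a rough illustration (not necessarily the exact formula used in the model), one way to proxy that overlap is each skill’s average absolute correlation with the rest of the set:

```python
import pandas as pd

def skill_overlap(player_seasons: pd.DataFrame, skill_cols: list[str]) -> pd.Series:
    """Simple stand-in for the multicollinearity check: each skill's average
    absolute correlation with the other measured skills (higher = more overlap)."""
    corr = player_seasons[skill_cols].corr().abs()
    # Drop the diagonal (every skill correlates perfectly with itself).
    return (corr.sum(axis=1) - 1.0) / (len(skill_cols) - 1)
```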

Once we combine all three of these dimensions, we get our final model weights. Fortunately, the final rankings make a lot of sense! Overall, passing ability is the most recurring theme regardless of position, as Importance to Teammate Offence and Playmaking are the only aptitudes that appear in both positions’ top four.
Looking at the defencemen, Zone Defence is the most popular category within the top 10, to no one’s surprise. The favoured playing style seems to be D-men who can drive sustained pressure at both ends of the ice while serving as reliable breakout options who know how to use their teammates. As for forwards, the optimal profile is an offensive catalyst who can create chances for themselves or for others and generate zone entries with both frequency and efficiency (and maybe draw a penalty in the process).
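For illustration only, folding the three dimensions into a single weight per skill could look something like the sketch below; treat it as a conceptual example rather than the model’s actual recipe:

```python
import pandas as pd

def combine_into_weights(impact: pd.Series, repeatability: pd.Series,
                         overlap: pd.Series) -> pd.Series:
    """One plausible combination: reward impact and repeatability, discount
    overlapping coverage, then normalize so the weights sum to one.
    Purely illustrative; the model's exact combination is not detailed here."""
    raw = impact * repeatability * (1.0 - overlap)
    return raw / raw.sum()
```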

Our main cards bring all of these metrics together to paint a picture of player styles. Looking at Brady Tkachuk’s card here, we can see that his unique way of playing is captured pretty well: he’s a power forward who battles hard on the forecheck and creates chances in front of the net, while deferring in transition and struggling to maintain good finishing & penalty ratios. These 25 skills give us a nice canvas for capturing the stylistic aspects of a player’s game, so let’s move on to scaling our final SPAR output.
Converting to SPAR
By applying the impact, repeatability, & multicollinearity weights to every skill and summing them, we have finally combined everything into one number that captures a skater’s toolkit. However, it is currently formatted only as a Skills-Weighted Average (SWAV for short), so the final step is turning it into an interpretable measure of player contributions.
While the math and code behind this section are a little more complicated than the last few steps, the approach remains simple. Similarly to what we did in the “Impact” calculation for the skills, we grab each team’s actual F+D SPAR from 2021 to 2025 and the average SWAV they deploy on the ice at any given time. Finding the correct way to scale SWAV into SPAR may seem a little tricky over a full season, but the key is tackling it on a game-by-game basis.
If we look at it from a team points percentage (PTS%) standpoint, per-game results are bounded between zero and two points, with the majority of actual values landing in the middle and very few near the extremes.
This creates a sort of S-shape, which can be modelled with a logistic function. We can then scale up to a full season (82 games for now) to get season SPAR.
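In code, that logistic shape is only a couple of lines. The steepness and midpoint parameters, and the rollup to player-level SPAR shown here, are illustrative placeholders rather than the fitted values from the model:

```python
import numpy as np

def swav_to_pts_per_game(swav: np.ndarray, steepness: float, midpoint: float) -> np.ndarray:
    """Logistic mapping from average on-ice SWAV to expected standings points
    per game, bounded between 0 and 2 (the S-shape described above)."""
    return 2.0 / (1.0 + np.exp(-steepness * (swav - midpoint)))

def season_spar(player_swav: float, replacement_swav: float,
                steepness: float, midpoint: float, games: int = 82) -> float:
    """One assumed framing of the rollup: the per-game points lift a player's
    SWAV provides over the replacement baseline, scaled to a full schedule."""
    lift = (swav_to_pts_per_game(np.array([player_swav]), steepness, midpoint)
            - swav_to_pts_per_game(np.array([replacement_swav]), steepness, midpoint))
    return float(lift[0]) * games
```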

With our shape established, all we need to do now is fit the model so it transforms our player SWAVs into PTS% added in a way that maximizes goodness of fit with the teams’ actual standings results. We then take a three-year weighted average (4-2-1 from most recent to oldest) to get our final player SPAR values, since three-season samples have proven far more reliable than single-season ones. Now that we’ve completed this gruelling journey, how did the model fare at measuring individual player talent on a team’s roster?
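We’ll get to that in a second; first, for completeness, the 4-2-1 blend itself is as simple as it sounds (for players with fewer than three seasons, the sketch below just renormalizes over whatever exists):

```python
def blended_spar(spar_by_season: list[float]) -> float:
    """Blend up to three seasons of SPAR with 4-2-1 weights, most recent first.
    Renormalizing for shorter track records is an assumption of this sketch."""
    weights = [4, 2, 1][: len(spar_by_season)]
    return sum(w * s for w, s in zip(weights, spar_by_season)) / sum(weights)

# e.g. blended_spar([6.0, 4.5, 3.0]) -> (4*6.0 + 2*4.5 + 1*3.0) / 7 ≈ 5.14
```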
Our Results

The conversion worked out even better than I had imagined, with an R² of 0.78. This means that almost 80% of the variation in actual forward and defenceman contributions within a team can be attributed to our model’s estimations of those skaters. Above, you can see each team season’s model-estimated skater SPAR versus its actual value since 2021. Even when diving into the outliers this method misses most, there are circumstances that justify them.
For example, the 2021-22 New Jersey Devils are the furthest off their projection (44 est, 18 actual). They were one of the biggest underachieving teams in the sample set and immediately course-corrected the following season, moving up to the top three in the league. Directly to its left on the plot are the 2021-22 Seattle Kraken (38 est, 18 actual). It makes sense for them to have underperformed since it was their first season as a franchise: while the strength of the roster may have looked fine on paper, we would expect them to start in a disjointed manner and not play together as cohesively as other teams.

Even the biggest positive outlier, the 2024-25 Washington Capitals (45 est, 60 actual), were potentially the biggest surprise team of the last five years. Few expected them to make the playoffs, having barely squeezed in the year prior, let alone top the Eastern Conference standings. So it’s not at all far-fetched for this roster’s actual output to have surpassed its estimated strength to this degree.
All around, I am elated with the results. But how do they look at the player level?
Using this method, these are the projected top 10 players by position for the 2025-26 season. These projections take into account all of our established skill metrics, using data going back to 2022-23, with some age and progression factors to account for developing or declining trends.
And these line up pretty well with the public consensus of fans across the NHL, with the usual suspects topping each list. In terms of trophies, the last 9 Art Rosses, 9 Ted Lindsays, 7 Harts, 5 Rockets, 2 Norrises, and 2 Vezinas were all won by players in the overall top 10. The positional dispersion seems appropriate too, with at least one player from each position appearing here. We are currently in a superstar-centre-driven league, and a top four consisting entirely of centres makes that apparent.
The SPAR Distribution Visualizer
You didn’t think I’d go through all of this without making a new free tool for you all, did you? With the SPAR Distribution Visualizer, everyone now has a way to parse through this model’s results openly by seeing where selected player seasons rank relative to specific groups.

For example, above you can see the most dominant offensive zone seasons of the era by isolating for only skaters’ Zone Offence SPAR. Auston Matthews’ 69-goal season, Nathan MacKinnon’s ongoing juggernaut campaign, Nikita Kucherov’s 100-assist year, and Connor McDavid’s 153-point landmark all shine here.
Or below, we can picture how exceptional Matthew Schaefer’s rookie season has been thus far. When comparing to all U21 defencemen seasons in the 2020s, he surpasses even the best and brightest by a large margin (almost 2 extra standings points contributed) despite having just turned 18 in September.
The SPAR Distribution Visualizer is free for everyone here and will hold a spot on our Free Tools page for easier access as well.

No model is perfect, and there are definitely still areas I’d like to improve on with this iteration (one-note goalscorers being undervalued for example), but for now I’m very comfortable with letting it breathe and continuing to evaluate player styles & contributions with it.
The entire LB-Hockey arsenal of analytics tools is built with this model at its core, and you can subscribe here for just $6.99 to access it all. That allows us to continue writing articles like this one, building new tools like the Mini Cards we unveiled earlier this week, and most importantly, to keep innovating in the hockey analytics space.