By The Numbers: Hockey Analytics... the Final Frontier. Explore strange new worlds, to seek out new algorithms, to boldly go where no one has gone before.

Quote:

IMO when you get too far into the math of it you can't see the forest for the trees.

Are adjusted stats perfect? No. Yeah, there are some small scoring distribution issues in there.

But when you look at the gap between raw numbers and actual relative performance, adjusted stats close the vast majority of that gap and are by far the best tool for comparing post-1967 NHL players.

As long as you have perspective, it's fine. If one player's 1984-85 season comes in at 68 adjusted points and another player's 1998-99 season comes in at 71, you'd obviously be an idiot if you're claiming player B's season is better - in that situation, all you can say is that the two seasons are very similar.

The problems I have with Adjusted Stats are twofold.

The first, and by faaaar the biggest issue, is how they are used far too often by far too many.
Too many people believe in and/or use them as a straight-up replacement for raw stats.

Second, they are an averaging/normalizing equation and the further you get away from the median, the more inaccurate their projections will get.
In most averaging situations, this is not that big of a deal because you can eliminate low and high numbers to keep outlier numbers from skewing the results.
The problem though is that 99% of the time it's those outliers or top 1% that we're talking about around here.

As Devil touched on early in this thread, the reduction in scoring has in no way affected every tier of player equally. So applying a mathematical formula that treats everyone equally is obviously going to have major flaws.

Quote:

IMO when you get too far into the math of it you can't see the forest for the trees.

Are adjusted stats perfect? No. Yeah, there are some small scoring distribution issues in there.

The distribution issues are anything but small. To take one example, compare the adjusted scoring leaders from 1985-86 to 2002-03.

In 1985-86:
5th place Mike Bossy is credited with 97 adjusted points
10th place Dale Hawerchuk/Neil Broten is credited with 82 adjusted points
20th place Dino Ciccarelli is credited with 69 adjusted points.

In 2002-03:
5th place Todd Bertuzzi is credited with 108 adjusted points
10th place Mike Modano/Zigmund Palffy is credited with 95 adjusted points
20th place Jaromir Jagr/Alexei Kovalev is credited with 86 adjusted points.

Does anyone really believe that 20th place in 2002-03 was slightly better than 10th place in 1985-86?

Quote:

But when you look at the gap between raw numbers and actual relative performance, adjusted stats close the vast majority of that gap and are by far the best tool for comparing post-1967 NHL players.

I disagree that adjusted stats are the best tool for comparing post-1967 players. If we are comparing high scoring players (the ones we usually talk about in a historical perspective), I'd rather compare them to only other high scoring players either through rankings or percentages, not to all players in the league.

Quote:

As long as you have perspective, it's fine. If one player's 1984-85 season comes in at 68 adjusted points and another player's 1998-99 season comes in at 71, you'd obviously be an idiot if you're claiming player B's season is better - in that situation, all you can say is that the two seasons are very similar.

68 adjusted points would be the 29th best scorer in 1984-85
71 adjusted points would be the 33rd best scorer in 1998-99

Well, that comparison works out fine.

It seems there are certain years where top scorers score an unusually high percentage of league points (2002-03 is one, 1992-93 is definitely one, 1998-99 seems like it is not one) and this creates major distortions in what stats based off league averages say.

Quote:

Originally Posted by TheDevilMadeMe

The distribution issues are anything but small. To take one example, compare the adjusted scoring leaders from 1985-86 to 2002-03.

In 1985-86:
5th place Mike Bossy is credited with 97 adjusted points
10th place Dale Hawerchuk/Neil Broten is credited with 82 adjusted points
20th place Dino Ciccarelli is credited with 69 adjusted points.

In 2002-03:
5th place Todd Bertuzzi is credited with 108 adjusted points
10th place Mike Modano/Zigmund Palffy is credited with 95 adjusted points
20th place Jaromir Jagr/Alexei Kovalev is credited with 86 adjusted points.

Does anyone really believe that 20th place in 2002-03 was slightly better than 10th place in 1985-86?

Actually, that wouldn't surprise me much. Since the early-mid 90s, overseas/US players have roughly doubled the number of top end scoring forwards, so a 20th place finisher during that time might be expected to be roughly equivalent to a 10th place finisher from the WHA merger to the early 90s (of course it will vary by the actual seasons being compared).

Quote:

Originally Posted by TheDevilMadeMe

I disagree that adjusted stats are the best tool for comparing post-1967 players. If we are comparing high scoring players (the ones we usually talk about in a historical perspective), I'd rather compare them to only other high scoring players either through rankings or percentages, not to all players in the league.

One needs to make an adjustment for the change in competition though. Between expansion and the WHA merger, there is a significantly smaller population to draw from, very little overseas/US presence in the NHL, and talent siphoned off by the WHA. The last two decades see a much larger representation by overseas/US players, esp. in the top tiers. Neither method is flawless, but I certainly wouldn't use ranking amongst peers over the best adjusted numbers available.

Quote:

Originally Posted by TheDevilMadeMe

68 adjusted points would be the 29th best scorer in 1984-85
71 adjusted points would be the 33rd best scorer in 1998-99

Well, that comparison works out fine.

It seems there are certain years where top scorers score an unusually high percentage of league points (2002-03 is one, 1992-93 is definitely one, 1998-99 seems like it is not one) and this creates major distortions in what stats based off league averages say.

Some of this can be caused by new talent that is disproportionate (for instance, overseas talent being composed of top forwards and offensive d-men in much higher proportion than checking forwards, defensive d-men and goalies). Other factors like large changes in PP opportunities are important as well.

The '01-04 period is a rather strange one that I can't exactly explain with much confidence. I'm really not sure what was going on during that time, besides a lot of the superstar forwards from the '90s having aged and become injured (and one would expect the opposite effect of that being observed).

Quote:

Originally Posted by Rhiessan71

Second, they are an averaging/normalizing equation and the further you get away from the median, the more inaccurate their projections will get.

If there's anything that anyone should take away from this thread, it's that adjusted stats do not normalize. They do not draw players toward the mean; all players in a particular year are adjusted either upward or downward, not toward the mean.

Quote:

Originally Posted by Rhiessan71

So applying a mathematical formula that treats everyone equally is obviously going to have major flaws.

HR's Adjusted Scoring, at least, does not actually treat everyone equally.

Quote:

If there's anything that anyone should take away from this thread, it's that adjusted stats do not normalize. They do not draw players toward the mean; all players in a particular year are adjusted either upward or downward, not toward the mean.

HR's Adjusted Scoring, at least, does not actually treat everyone equally.

Oh really?
So you're saying that the difference between 1984 and 2011 is not broken down to an average % that is then applied equally to every player whether they are #1 or #500?

A % that implies that it is equally as hard for Chris Nilan to produce points as it is for Wayne Gretzky in a lower scoring league.
Suuuuuure it is

Sorry but that is NOT what has happened over the last 30 years. What has happened is that lower tier player production has been reduced by a much greater degree than middle tier player production and middle tier player production has been reduced by a greater degree than top tier player production.
That is actually fact.

So please explain to me how a flat league average is going to address the greatly differing levels of production loss through the different tiers of players.

Last edited by Rhiessan71: 10-30-2012 at 09:36 PM.

Quote:

Originally Posted by Rhiessan71

Oh really?
So you're saying that the difference between 1984 and 2011 is not broken down to an average % that is then applied equally to every player whether they are #1 or #500?

Indeed I am, because if you look at HR's method, they remove the individual's scoring totals when calculating each player's adjustment. This means that players at different scoring levels have somewhat different adjustments, though it's within a pretty narrow range.
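Just to make the "narrow range" point concrete, here's a rough sketch of that remove-the-player step. The league totals and goal counts below are made up for illustration (they are NOT real 1985-86 figures), and HR's roster-size and assist adjustments are ignored:

```python
# Hypothetical league totals, roughly 1980s-sized (illustrative only).
TOTAL_GOALS = 6600   # all goals scored league-wide
GAMES = 840          # 21 teams x 80 games / 2

def adjustment_factor(player_goals, target_gpg=6.0):
    """Multiplier applied to a player's raw points: the standard gpg
    divided by the league average gpg with the player's own goals removed."""
    league_gpg = (TOTAL_GOALS - player_goals) / GAMES
    return target_gpg / league_gpg

star = adjustment_factor(player_goals=52)     # a Gretzky-like scorer
grinder = adjustment_factor(player_goals=16)  # a Nilan-like scorer
print(f"{star:.4f} vs {grinder:.4f}")  # multipliers differ by well under 1%
```

Removing more goals lowers the league average, which raises the multiplier slightly, so the high scorer comes out marginally ahead of the grinder; but in a 21-team league the gap between the two factors is tiny.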

Quote:

Originally Posted by Czech Your Math

Indeed I am, because if you look at HR's method, they remove the individual's scoring totals when calculating each player's adjustment. This means that players at different scoring levels have somewhat different adjustments, though it's within a pretty narrow range.

Pretty narrow range?
If that isn't the understatement of understatements lol

Chris Nilan 1986, points reduced to 79.4%
Wayne Gretzky 1986, points reduced to 79.1%

Sorry but how I said they do it is almost EXACTLY how they do do it.

Quote:

Originally Posted by Rhiessan71

Pretty narrow range?
If that isn't the understatement of understatements lol

Chris Nilan 1986, points reduced to 79.4%
Wayne Gretzky 1986, points reduced to 79.1%

Yeah, in a league that is larger than 6 teams, removing the player himself from the average makes so small a difference, it might as well not be there at all.

Quote:

Originally Posted by Czech Your Math

Yeah, in a league that is larger than 6 teams, removing the player himself from the average makes so small a difference, it might as well not be there at all.

Not to mention that Gretzky is actually getting punished more than anyone else, because the more he scores, the lower the average gets when his points are removed.
Meanwhile Nilan is below the average, so when his points are removed the average actually goes up and he benefits from it.

Quote:

Originally Posted by Rhiessan71

Oh really?
So you're saying that the difference between 1984 and 2011 is not broken down to an average % that is then applied equally to every player whether they are #1 or #500?

A % that implies that it is equally as hard for Chris Nilan to produce points as it is for Wayne Gretzky in a lower scoring league.
Suuuuuure it is

Sorry but that is NOT what has happened over the last 30 years. What has happened is that lower tier player production has been reduced by a much greater degree than middle tier player production and middle tier player production has been reduced by a greater degree than top tier player production.
That is actually fact.

So please explain to me how a flat league average is going to address the greatly differing levels of production loss through the different tiers of players.

Applying the average across the board is not a perfect solution. One can look at how different tiers performed in different eras, but what are the reasons? For instance, from '64 to '87, the avg. % of total goals scored on special teams was ~25.3%, while from '88 to '12, the avg. % of goals on ST was 30.6%. So if one adjusted for the fact that top tier players were scoring more points than lower tier players, it would not be fair to adjust equally amongst all players. Generally, players like Jagr, Lindros, LeClair and Sundin were not helped as much by the extra PPs as players like Lemieux, Sakic, Kariya and Thornton were.

Another possible reason for the top tier scoring proportionately more than lower tiers is that the influx of talent from overseas/US tended to be disproportionately composed of higher scoring forwards and d-men compared to their lower scoring counterparts. This bolstered the top tiers of players more than the lower tiers, skewing the distribution of points. If a class has 25% each of A, B, C and D students and suddenly a bunch of new students arrive, if those new students are disproportionately more A/B students, the upper tiers are going to appear stronger than before in comparison to the lower tiers. That doesn't mean it suddenly became easier for the top tiers to excel, only that there were suddenly higher quality students in the class, which bettered the avg. for both the top tier(s) and the class as a whole.

Quote:

Originally Posted by Rhiessan71

Not to mention that Gretzky is actually getting punished more than anyone else, because the more he scores, the lower the average gets when his points are removed.
Meanwhile Nilan is below the average, so when his points are removed the average actually goes up and he benefits from it.

That's hilarious

That's not my understanding of the results of HR's method (with which I don't necessarily agree). The more points a player scored, the lower the league average gets when the points are removed... which helps that player, because it increases the ratio of standard gpg used to league avg. gpg that season, which is what is being multiplied by the player's actual points:

Adj. Pts = Raw Pts * (6.00 gpg) / League avg. gpg

Quote:

Originally Posted by Czech Your Math

Applying the average across the board is not a perfect solution. One can look at how different tiers performed in different eras, but what are the reasons? For instance, from '64 to '87, the avg. % of total goals scored on special teams was ~25.3%, while from '88 to '12, the avg. % of goals on ST was 30.6%. So if one adjusted for the fact that top tier players were scoring more points than lower tier players, it would not be fair to adjust equally amongst all players. Generally, players like Jagr, Lindros, LeClair and Sundin were not helped as much by the extra PPs as players like Lemieux, Sakic, Kariya and Thornton were.

I hope you're not trying to suggest that an increase of about 5% in ST goals - which only account for about 20-25% of total goals scored in a season, so that 5% represents only a 1-1.5% increase overall - actually accounts for the scoring gap between top tier and lower tier players, which has increased by much more than even the raw 5%?

Quote:

Another possible reason for the top tier scoring proportionately more than lower tiers is that the influx of talent from overseas/US tended to be disproportionately composed of higher scoring forwards and d-men compared to their lower scoring counterparts. This bolstered the top tiers of players more than the lower tiers, skewing the distribution of points. If a class has 25% each of A, B, C and D students and suddenly a bunch of new students arrive, if those new students are disproportionately more A/B students, the upper tiers are going to appear stronger than before in comparison to the lower tiers. That doesn't mean it suddenly became easier for the top tiers to excel, only that there were suddenly higher quality students in the class, which bettered the avg. for both the top tier(s) and the class as a whole.

If as many, including yourself I believe, have stated that today's league as a whole is so much more talented and that the lower tier players are closer in skill to the top tier players today than they were in the 80's AND if it is so much harder to stand out in today's NHL...how in the holy hell is what you're saying even possible???
I mean if all that were true, the allocation of goals between the tiers should be smoother and more equal, just like it was in the 80's.

No, I'm sorry but you're wrong.
Occam's razor couldn't be more appropriate here.
The simplest and by far the best answer here is that when goaltending and defenses got more stingy, it choked off the lower tiered/lesser skilled players' offensive contributions, while the top tier/higher skilled players were not choked off by anywhere close to the same level.

Quote:

Originally Posted by Czech Your Math

That's not my understanding of the results of HR's method (with which I don't necessarily agree). The more points a player scored, the lower the league average gets when the points are removed... which helps that player, because it increases the ratio of standard gpg used to league avg. gpg that season, which is what is being multiplied by the player's actual points:

Adj. Pts = Raw Pts * (6.00 gpg) / League avg. gpg

Please, for the love of god, show me how it helps Gretzky when his points are being multiplied by a lower number than Nilan's are?
Seriously!

And honestly, do you not realise that everything you just said here is all about figuring out and making excuses for why it's a FACT that scoring between tiers has changed so much.
But does absolutely nothing to change the FACT that it did and is happening and the FACT that Adjusted Stats, in its current form, does absolutely nothing to adjust for it!!!

Which is the whole point that I (and Devil earlier) are making!

You know why I always find math fun? Because when you are right and someone else is wrong, it's so easy to prove them wrong. You simply say hey look and there's no debating or degrees of being right.
So just to show you how truly inaccurate Adjusted Stats are, all that has to be done is go back to say 1986, go through the raw stats, separate the players into their respective tiers (for simplicity you use 5 tiers, 1 tier/20%) and figure out how much each tier contributes to overall scoring.
Now plug the adjusted stats in for 1986, then separate the players into the previously established tiers and see if the contribution of each tier towards league scoring remains the same.
I guaran-damn-tee you they will change drastically! The higher tiers' piece of the pie will decrease, while the lower tiers' piece of the pie will increase and that is NOT in line with the FACTS of what is actually happening today and has happened over the last 30 years. It's actually completely and utterly the opposite!

The pie %'s will change because you are attempting to pull everything towards the average and you know what, there actually is a term used to describe this phenomenon...it's called NORMALIZATION!

Last edited by Rhiessan71: 10-31-2012 at 03:35 AM.

Quote:

Originally Posted by Rhiessan71

I hope you're not trying to suggest that an increase of about 5% in ST goals - which only account for about 20-25% of total goals scored in a season, so that 5% represents only a 1-1.5% increase overall - actually accounts for the scoring gap between top tier and lower tier players, which has increased by much more than even the raw 5%?

Perhaps I was unclear. Of the total goals scored in the season, from '64-87, an avg. of 25.3% were ST goals. From '88-'12, an avg. of 30.6% of the total goals were ST goals. That's an increase of 5.3% of the total goals. Another way to look at it is that in the first 24 year period, there was ~3:1 ratio of ES:ST goals, while in the most recent 24 season period the ratio was < 2.3 to 1.
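The ratio arithmetic there is simple enough to check; a quick sketch, with the shares taken straight from the percentages above:

```python
def es_to_st_ratio(st_share):
    """Even-strength-to-special-teams goal ratio, given ST's share of all goals."""
    return (1 - st_share) / st_share

print(round(es_to_st_ratio(0.253), 2))  # '64-87: about 2.95, i.e. roughly 3:1
print(round(es_to_st_ratio(0.306), 2))  # '88-12: about 2.27, i.e. under 2.3:1
```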

Quote:

Originally Posted by Rhiessan71

If as many, including yourself I believe, have stated that today's league as a whole is so much more talented and that the lower tier players are closer in skill to the top tier players today than they were in the 80's AND if it is so much harder to stand out in today's NHL...how in the holy hell is what you're saying even possible???
I mean if all that were true, the allocation of goals between the tiers should be smoother and more equal, just like it was in the 80's.

No, I'm sorry but you're wrong.
Occam's razor couldn't be more appropriate here.
The simplest and by far the best answer here is that when goaltending and defenses got more stingy, it choked off the lower tiered/lesser skilled players' offensive contributions, while the top tier/higher skilled players were not choked off by anywhere close to the same level.

I'm going to try to keep it as simple as possible, using the classroom analogy. We start with this distribution, representing the NHL before Euro/Russian players started arriving en masse:

2 A students
4 B students
8 C students
2 D students

So the GPA of each of the 25% tiers is:

1st tier = 2 A + 2 B = 3.5
2nd tier = 2 B + 2 C = 2.5
3rd tier = 4 C = 2.0
4th tier = 2 C + 2 D = 1.5

Now let's add 8 new students of a higher quality on average to the class: 4 A , 2 B, and 2 C students. The new 25% tiers would be:

1st tier = 6 A = 4.0
2nd tier = 6 B = 3.0
3rd tier = 6 C = 2.0
4th tier = 4 C + 2 D = 1.67

The top 2 tiers are now stronger relative to the bottom two tiers than they were before. This is the type of effect I'm talking about.
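The tier GPAs above can be reproduced mechanically; a small sketch, with A=4, B=3, C=2, D=1 and tiers taken as equal quarters of the sorted class:

```python
def tier_averages(grades, n_tiers=4):
    """Sort grade points descending, split into equal tiers, return each tier's mean."""
    ordered = sorted(grades, reverse=True)
    size = len(ordered) // n_tiers
    return [sum(ordered[i * size:(i + 1) * size]) / size for i in range(n_tiers)]

before = [4] * 2 + [3] * 4 + [2] * 8 + [1] * 2  # 2 A, 4 B, 8 C, 2 D
after = before + [4] * 4 + [3] * 2 + [2] * 2    # add the stronger newcomers

print(tier_averages(before))                        # [3.5, 2.5, 2.0, 1.5]
print([round(g, 2) for g in tier_averages(after)])  # [4.0, 3.0, 2.0, 1.67]
```

The top two tiers improve while the third stays put, matching the figures in the analogy.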

Quote:

Originally Posted by Rhiessan71

Please, for the love of god, show me how it helps Gretzky when his points are being multiplied by a lower number than Nilan's are?
Seriously!

As I tried to show previously, Gretzky's points would not be multiplied by a lower number (the calculated league avg. gpg that season), but multiplied by a constant (6.00 gpg is the most frequently used standard) and divided by the lower number (league avg. gpg). Dividing by a lower number yields a higher result, hence why I said that such a calculation would help the higher scoring player, such as Gretzky.

Quote:

Originally Posted by Rhiessan71

And honestly, do you not realise that everything you just said here is all about figuring out and making excuses for why it's a FACT that scoring between tiers has changed so much.
But does absolutely nothing to change the FACT that it did and is happening and the FACT that Adjusted Stats, in its current form, does absolutely nothing to adjust for it!!!

Which is the whole point that I (and Devil earlier) are making!

You know why I always find math fun? Because when you are right and someone else is wrong, it's so easy to prove them wrong. You simply say hey look and there's no debating or degrees of being right.

I guess one thing we can agree on is a reason we both find math fun. That's not really the exact, main reason I enjoy math though.

What I was trying to explain was that if you further adjust the simple adjusted numbers for the fact that certain tiers score different proportions of points in different eras, without knowing the reasons for the change in relative scoring between tiers, then some players are likely to be favored/disfavored by such an additional adjustment. In the case of the change in talent distribution causing the change, one might find that additional adjustment is unnecessary/unfair. In the case of the change in PP opportunities causing the change, such an additional adjustment (if applied evenly across all players) would not adjust fully for some players and over-adjust for other players, and so not be equitable in that sense.

Quote:

Originally Posted by Rhiessan71

So to show you how Adjusted Stats are wrong, all that has to be done is go back to say 1986, go through the raw stats, separate the players into their respective tiers (for simplicity you use 5 tiers, 1 tier/20%) and figure out how much each tier contributes to overall scoring.
Now plug the adjusted stats in for 1986, then separate the players into the previously established tiers and see if the contribution of each tier towards league scoring remains the same.
I guaran-damn-tee you they will change drastically! The higher tiers' piece of the pie will decrease, while the lower tiers' piece of the pie will increase and that is NOT in line with the FACTS of what is actually happening today and has happened over the last 30 years. It's actually completely and utterly the opposite!
The pie %'s will change because you are attempting to pull everything towards the average and you know what, there actually is a term used to describe this phenomenon...it's called NORMALIZATION!

I've given two valid reasons why scoring between tiers may have changed, so the fact that they have changed does not negate the value of simple adjusted stats. Until the reasons for the change are explored further and agreed upon, additional adjustments may cause as much or more harm than good. This is especially true, because of an important principle of simple adjusted stats: they maintain a fixed proportion to league gpg, which equates to a fixed value for each goal/points. IOW, 50 points in a 5 gpg league has roughly the same value as 70 points in a 7 gpg league. Any further adjustment that is made distorts the value of the new adjusted goals/points, so to introduce such a distortion into the data, one had better be very sure that the extra adjustment is A) necessary, B) applied properly and fairly (based on valid reasons that should have substantial evidence and quantification) and C) realize that even if A & B are deemed to be true, it still distorts the actual value of the adjusted goals/points (since it now deviates from their actual value which is fixed in proportion to the avg. gpg). To blindly declare that "there was a change between tiers, who cares why, so now we must adjust further" is really dangerous from a statistical perspective IMO. Remember, there are (at least) two aspects to adjusted data: calculating relative value (simple adjusted data will always be fixed and correct in this regard) and examining players' data in other ways (projecting to other eras, estimating the difficulty of achieving such stats, etc.). The latter is influenced by a multitude of competing, simultaneous factors (changes in league rules, systems, styles... changes in composition and distribution of talent as a whole, per-team, between teams, by position, etc. ... changes in opportunity and a host of other factors).
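The fixed-proportion principle is easy to demonstrate; a minimal sketch of the simple adjustment, with illustrative numbers only:

```python
def adjust(points, league_gpg, target_gpg=6.0):
    """Simple adjustment: scale raw points by target gpg / league avg. gpg."""
    return points * target_gpg / league_gpg

# 50 points in a 5 gpg league and 70 points in a 7 gpg league
# come out identical once adjusted to the 6.00 gpg standard:
print(adjust(50, 5.0), adjust(70, 7.0))  # 60.0 60.0

# And because every player in a season is scaled by the same factor,
# the ratios between players (the shape of the distribution) are preserved:
raw = [215, 141, 68, 21]                 # illustrative raw point totals
adj = [adjust(p, 7.9) for p in raw]
assert all(abs(a / adj[0] - r / raw[0]) < 1e-12 for a, r in zip(adj, raw))
```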

BTW, another example of a change that can cause a shift in scoring between tiers: an increased roster size will change the scoring between tiers due to talent dilution and a new lower tier of players being created which will not have similar PP opportunity as the higher tiers (among other possible reasons).

Also, as Iain has pointed out in response to Dalton's incessant use of power distributions and such, normalization has to do with the shape of the distribution. If a process (such as simple adjusted stats) affects all players roughly equally, then it won't change the shape of the distribution, it will only change the individual magnitudes for each player (but they will maintain the same proportional value to one another).

Last edited by Czech Your Math: 10-31-2012 at 04:19 AM.

Quote:

Originally Posted by Czech Your Math

Perhaps I was unclear. Of the total goals scored in the season, from '64-87, an avg. of 25.3% were ST goals. From '88-'12, an avg. of 30.6% of the total goals were ST goals. That's an increase of 5.3% of the total goals. Another way to look at it is that in the first 24 year period, there was ~3:1 ratio of ES:ST goals, while in the most recent 24 season period the ratio was < 2.3 to 1.

I'm going to try to keep it as simple as possible, using the classroom analogy. We start with this distribution, representing the NHL before Euro/Russian players started arriving en masse:

2 A students
4 B students
8 C students
2 D students

So the GPA of each of the 25% tiers is:

1st tier = 2 A + 2 B = 3.5
2nd tier = 2 B + 2 C = 2.5
3rd tier = 4 C = 2.0
4th tier = 2 C + 2 D = 1.5

Now let's add 8 new students of a higher quality on average to the class: 4 A , 2 B, and 2 C students. The new 25% tiers would be:

1st tier = 6 A = 4.0
2nd tier = 6 B = 3.0
3rd tier = 6 C = 2.0
4th tier = 4 C + 2 D = 1.67

The top 2 tiers are now stronger relative to the bottom two tiers than they were before. This is the type of effect I'm talking about.

As I tried to show previously, Gretzky's points would not be multiplied by a lower number (the calculated league avg. gpg that season), but multiplied by a constant (6.00 gpg is the most frequently used standard) and divided by the lower number (league avg. gpg). Dividing by a lower number yields a higher result, hence why I said that such a calculation would help the higher scoring player, such as Gretzky.

I guess one thing we can agree on is a reason we both find math fun. That's not really the exact, main reason I enjoy math though.

What I was trying to explain was that if you further adjust the simple adjusted numbers for the fact that certain tiers score different proportions of points in different eras, without knowing the reasons for the change in relative scoring between tiers, then some players are likely to be favored/disfavored by such an additional adjustment. In the case of the change in talent distribution causing the change, one might find that additional adjustment is unnecessary/unfair. In the case of the change in PP opportunities causing the change, such an additional adjustment (if applied evenly across all players) would not adjust fully for some players and over-adjust for other players, and so not be equitable in that sense.

I've given two valid reasons why scoring between tiers may have changed, so the fact that they have changed does not negate the value of simple adjusted stats. Until the reasons for the change are explored further and agreed upon, additional adjustments may cause as much or more harm than good. This is especially true, because of an important principle of simple adjusted stats: they maintain a fixed proportion to league gpg, which equates to a fixed value for each goal/points. IOW, 50 points in a 5 gpg league has roughly the same value as 70 points in a 7 gpg league. Any further adjustment that is made distorts the value of the new adjusted goals/points, so to introduce such a distortion into the data, one had better be very sure that the extra adjustment is A) necessary, B) applied properly and fairly (based on valid reasons that should have substantial evidence and quantification) and C) realize that even if A & B are deemed to be true, it still distorts the actual value of the adjusted goals/points (since it now deviates from their actual value which is fixed in proportion to the avg. gpg). To blindly declare that "there was a change between tiers, who cares why, so now we must adjust further" is really dangerous from a statistical perspective IMO. Remember, there are (at least) two aspects to adjusted data: calculating relative value (simple adjusted data will always be fixed and correct in this regard) and examining players' data in other ways (projecting to other eras, estimating the difficulty of achieving such stats, etc.). The latter is influenced by a multitude of competing, simultaneous factors (changes in league rules, systems, styles... changes in composition and distribution of talent as a whole, per-team, between teams, by position, etc. ... changes in opportunity and a host of other factors).

BTW, another example of a change that can cause a shift in scoring between tiers: an increased roster size will change the scoring between tiers due to talent dilution and a new lower tier of players being created which will not have similar PP opportunity as the higher tiers (among other possible reasons).

Also, as Iain has pointed out in response to Dalton's incessant use of power distributions and such, normalization has to do with the shape of the distribution. If a process (such as simple adjusted stats) affects all players roughly equally, then it won't change the shape of the distribution, it will only change the individual magnitudes for each player (but they will maintain the same proportional value to one another).

Look, before we go any further in this (and believe me, I have so many things to say about your base premises in the above post, it's not even funny), lets remember something.
The point of this thread is not to come up with valid reasons why and/or solutions to the flaw in the current version of Adjusted Stats.

The point of this thread is to determine the value of the current version of Adjusted Stats.

Now on that front, it has obviously been proven that there is a flaw or we wouldn't be talking about cause and solutions in the first place.
My point that the further away from the median you get, the more inaccurate Adjusted Stats get IS valid (btw in my example in my previous post I made a mistake. You don't separate the tiers by a %, you separate them by point levels: 80+/Tier 1, 61-80/Tier 2, 41-60/Tier 3, etc, so you don't end up with an equal number of players in each tier, because in real life the tiers are not equal in the number of players each contains).

Since adjusted stats get worse the further you get from the median and 99% of the players we are comparing are as far away from the median as you can get...

The only conclusion that can be made, and the answer to the actual question being asked, is that the current version of Adjusted Stats is NOT even close to being as valuable, for the majority of what they are used for, as they are usually represented (most often quite stupidly at face value) to be!

That said, I do understand that trying to figure out exactly what is causing the flaw will help determine value but at the same time it's pretty clear that the flaw is big enough that any determination of cause is only going to change the severity of that "bigness" not whether it is big in the first place.

Again, as I have said a million times now: use them to help determine the difference between what player A did in 1985 compared to what player B did in 2010, but do NOT use them at face value.

Adjusted Stats are a helpful guide and are part of the equation, they are NOT a god damned final answer!!!

And don't tell me you don't do that Czech because not that long ago, you actually tried to use them at face value to compare points by Gretzky in the 90's to points by Jagr in the 90's and I just about lost my mind on you! http://hfboards.hockeysfuture.com/sh...=#post53910551

Last edited by Rhiessan71: 10-31-2012 at 11:33 AM.

The distribution issues are anything but small. To take one example, compare the adjusted scoring leaders from 1985-86 to 2002-03.

In 1985-86:
5th place Mike Bossy is credited with 97 adjusted points
10th place Dale Hawerchuk/Neil Broten is credited with 82 adjusted points
20th place Dino Ciccarelli is credited with 69 adjusted points.

In 2002-03:
5th place Todd Bertuzzi is credited with 108 adjusted points
10th place Mike Modano/Zigmund Palffy is credited with 95 adjusted points
20th place Jaromir Jagr/Alexei Kovalev is credited with 86 adjusted points.

Does anyone really believe that 20th place in 2002-03 was slightly better than 10th place in 1985-86?

It's possible. Also, with 9 more teams in '03 you have 9 more sets of top PP duty and top-line scoring. Not sure if that's enough to make up the difference, but it would account for some of it, right?

Quote:

I disagree that adjusted stats are the best tool for comparing post-1967 players. If we are comparing high scoring players (the ones we usually talk about in a historical perspective), I'd rather compare them to only other high scoring players either through rankings or percentages, not to all players in the league.

It probably doesn't matter if it is the best tool or not.

Like in all comparisons the more information and different angles and perspectives we use to evaluate and compare players the more complete the conclusion or guess will be in the comp.

68 adjusted points would be the 29th best scorer in 1984-85.
71 adjusted points would be the 33rd best scorer in 1998-99.

Well, that comparison works out fine.

It seems there are certain years where top scorers score an unusually high percentage of league points (2002-03 is one, 1992-93 is definitely one, 1998-99 seems like it is not one) and this creates major distortions in what stats based off league averages say.

Prove yourself right first. You're making a specific claim about adjusted scoring, it's up to you to prove that it's right.

All NHL games are decided by invisible ice gnomes who magically control where the puck goes. Prove me wrong.

Damn I knew it, Montreal had the technology for the longest time too right?

We all know that Vancouver's version wasn't as good as Boston's.

We could post this on the other boards and people would be all over it.

Back to the value of adjusted stats, it's pretty simple to see that they are quite valuable when comparing vastly different scoring eras, much more so than simply counting stats.

Quote:

Back to the value of adjusted stats, it's pretty simple to see that they are quite valuable when comparing vastly different scoring eras, much more so than simply counting stats.

Half right.
When comparing middle-tier players across eras, they are more valuable than raw stats.
When comparing top-tier players across eras, they most certainly are not more valuable than raw stats.
That does not mean that raw stats don't still hold some value in the first example, nor does it mean that adjusted stats don't still hold some value in the second.

Comparing players from different eras is not about ONLY using one or the other. You use both as well as anything else you can get your hands on.

Adjusting to an 82-game season gives each season equal weight, regardless of actual schedule length. Assuming each game has equal value, scaling every season to the same fixed number of games puts them all on a common footing.

Adjusting to 6 gpg is a way of expressing each goal/point's value in proportion to the league scoring context, using a fixed (although arbitrary) reference. I.e., if the league avg. gpg increases by 50%, a player's output would need to increase by 50% to be of equal value in the new scoring environment.
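The two adjustments described above can be sketched in a few lines (this is a minimal version of the simple adjustment as stated in this thread; real implementations such as hockey-reference's add further refinements, e.g. excluding the player's own team from the league average):

```python
def adjusted_points(points, schedule_length, league_gpg):
    """Simple adjusted points: scale to an 82-game schedule and a
    6.00 gpg league scoring environment, per the description above."""
    return points * (82.0 / schedule_length) * (6.0 / league_gpg)

# The equal-value claim from earlier in the thread:
# 50 points in a 5 gpg league ~ 70 points in a 7 gpg league.
print(adjusted_points(50, 82, 5.0))   # ~60.0
print(adjusted_points(70, 82, 7.0))   # ~60.0
```

Both seasons come out to the same adjusted total, which is exactly the fixed-proportional-value property the simple adjustment is built on.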

-----------------

I've read some of your other recent posts and the numbers don't even make sense to me. I'm not even sure what you're trying to say, to be honest. I think you should better explain what the numbers mean (in as simple terms as possible), if you expect much meaningful feedback.

One thing that seems to be ignored is that the talent pool has changed dramatically over time. Howe was not facing the same top end competition as Gretzky, and Gretzky in his prime wasn't facing the same top end competition as has been present for most of the past two decades.

Using top X% of players is reliant on size of the league... as league size increases, the pool of top X% players increases proportionately. Using top Y players is reliant on the composition of the talent pool... as the talent pool increases, the quality of the top Y players increases.

It's important to remember that simple adjustment of data (for schedule, league scoring avg., etc.) is justified based on equivalent value. Once the data is properly adjusted, 40 adjusted goals has the same value in any season. This must be understood before moving on to alternative questions, such as how difficult/impressive is 40 adjusted goals in one year compared to another, because that is a much different question. The latter relies on factors such as the quality of players in the league, the concentration/dilution of talent in the league, the disparity of talent between teams, how different types of players are utilized in different eras, etc. If one does not or will not understand the basis of the former, it will only make it that much more difficult to understand how to approach and resolve the latter IMO.

Each season isn't of equal value. Seasons can be outliers too. Forcing them to the same value is ignoring outliers.

What you're not seeing, perhaps, is that whether I take players, teams or seasons, there are outliers. The moment one averages, the 'bell curve' thinking occurs. It just doesn't matter how I present it. And we haven't even discussed defence and unbalanced schedules.

40 adjusted goals does not have the same value every season. It moves up and down according to the average number of goals scored per season without taking into account where that change comes from. A season with poor defence and average offence could be equal to a season with strong offence and average defence. Outliers are denied.

Quote:

It's possible. Also, with 9 more teams in '03 you have 9 more sets of top PP duty and top-line scoring. Not sure if that's enough to make up the difference, but it would account for some of it, right?

That probably does account for some of it. And is nothing but a distortion when comparing offensive value between two different seasons.

Top players should be compared to top players, not the league average.

The '50s and '60s had more quality players to choose from than any other era. The minor league teams had better players than some of the NHL clubs, but those players never got the opportunity. Every block had kids playing hockey; now that has diminished greatly.

Jean Béliveau was playing in the Quebec senior league and making over 20 thousand a year. The owners of the Habs had to buy the league to get Jean to play for Montreal. There are dozens of examples of players like Jean. The Vancouver team in the sixties might have made the playoffs in the NHL; they were that good.

Quote:

The distribution issues are anything but small. To take one example, compare the adjusted scoring leaders from 1985-86 to 2002-03.

In 1985-86:
5th place Mike Bossy is credited with 97 adjusted points
10th place Dale Hawerchuk/Neil Broten is credited with 82 adjusted points
20th place Dino Ciccarelli is credited with 69 adjusted points.

In 2002-03:
5th place Todd Bertuzzi is credited with 108 adjusted points
10th place Mike Modano/Zigmund Palffy is credited with 95 adjusted points
20th place Jaromir Jagr/Alexei Kovalev is credited with 86 adjusted points.

Does anyone really believe that 20th place in 2002-03 was slightly better than 10th place in 1985-86?

Perhaps not, but it's worth noting that they'd be comparable in terms of percentile rank.

EDIT: For clarity, I'm not suggesting that percentile rank is perfect. The 90th percentile in 1966-67 is more impressive than the 90th percentile in 1967-68. But league size is still a relevant factor when assessing the impressiveness of the nth place scoring finish in one year versus the nth place scoring finish in another.
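As a rough sketch of the percentile-rank idea (the skater counts below are ballpark assumptions, roughly 20 skaters per team for the 21-team 1985-86 league and the 30-team 2002-03 league, not exact figures):

```python
def scoring_percentile(finish, num_skaters):
    """Percent of the league's skaters a scoring-race finish places
    ahead of (finish = 1 for the scoring leader)."""
    return 100.0 * (num_skaters - finish) / num_skaters

# Assumed rough pool sizes: 21 teams x ~20 skaters in 1985-86,
# 30 teams x ~20 skaters in 2002-03.
print(round(scoring_percentile(10, 420), 1))   # 97.6
print(round(scoring_percentile(20, 600), 1))   # 96.7
```

Under those assumptions, 10th place in 1985-86 and 20th place in 2002-03 land within about a percentile of each other, which is the sense in which the finishes are comparable.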

Last edited by Master_Of_Districts: 10-31-2012 at 03:10 PM.

Quote:

Originally Posted by Rhiessan71

Look, before we go any further in this (and believe me, I have so many things to say about your base premises in the above post, it's not even funny), let's remember something.
The point of this thread is not to come up with valid reasons why and/or solutions to the flaw in the current version of Adjusted Stats.

The point of this thread is to determine the value of the current version of Adjusted Stats.

It would seem difficult to assess the value of adjusted stats without first trying to determine what flaws exist and their magnitude (and the reasons for those flaws). This would seem to pertain to whatever other system is being compared to adjusted stats to determine relative value as well. To me, it's all inter-related, but I am confident that in the vast majority of cases adjusted stats are much more accurate than raw stats. Since simple adjusted stats tell us the comparative value of each goal (but possibly not exactly how difficult it was to attain such value, etc.), that makes it much more useful than raw data IMO.

Quote:

Originally Posted by Rhiessan71

Now on that front, it has obviously been proven that there is a flaw or we wouldn't be talking about cause and solutions in the first place.

It would be difficult to find any system of attributing value to or comparing players that did not have some significant flaw(s). However, the mere fact that some are discussing potential flaws does not prove that such flaw(s) exist. Also, the mathematical basis of simple adjusted stats is very sound, because it tells us the approximate value of the goals/points based on the league scoring context. Again, that's different than telling us how difficult it was to attain that level of value, given the many changing conditions and factors in the league, but it's a rather important piece of information, much more than raw data usually is.

Quote:

Originally Posted by Rhiessan71

My point that the further away from the median you get, the more inaccurate Adjusted Stats get IS valid. (BTW, in my example in my previous post I made a mistake: you don't separate the tiers by a %, you separate them by point levels: 80+/Tier 1, 61-80/Tier 2, 41-60/Tier 3, etc. That way you don't end up with an equal number of players in each tier, because in real life the tiers do not contain equal numbers of players.)

I (and others, such as Overpass) use % or percentile type tiers that are more fixed in proportion to the number of players in the league. I don't know what purpose using tiers based on arbitrary levels of production serves. E.g., Overpass has done some studies of scoring changes between tiers, and uses 1st liner, 2nd liner, etc. to group forwards into tiers, which sort of corresponds to the 25% tiers I used in my example. I've looked at the 1st N, 2nd N, etc. # of players (where N = # of teams). I've also looked at fixed numbers of players (e.g. #1-6, 7-12, etc.), which is the basis of comparing players to their peers (e.g. 2nd place or avg. of top 10). However, as previously stated, this method has its own (much larger IMO) pitfalls, since A) it has no basis of fixed value in proportion to the league scoring context, B) it is using a very small sample for comparison purposes, and C) it usually ignores the vast changes in the talent towards the top end of the spectrum of players.

Quote:

Originally Posted by Rhiessan71

Since adjusted stats get worse the further you get from the median and 99% of the players we are comparing are as far away from the median as you can get...

The group of players in the NHL is already at the very far left of the spectrum of hockey players as a whole, but I understand your point. However, just because we are most often examining players at the very far left of the NHL spectrum does not automatically mean that adjusted data yields flawed results for those players. Yes, it's very possible that it can, but it's still a vast improvement on raw data, with a foundation in actual value based on quantified measurements. Any further adjustment should be thoroughly justified based on quantified and reasoned evidence as to the reasons and magnitude of distortions that occur from using the simple adjusted process.

Quote:

Originally Posted by Rhiessan71

The only conclusion that can be made, and the answer to the actual question being asked, is that the current version of Adjusted Stats is NOT even close to being as valuable, for the majority of what they are used for, as they are usually represented (most often quite stupidly at face value) to be!

Maybe adjusted stats are not as valuable as some claim them to be, due to potential inaccuracies when trying to equate the difficulty of attaining certain levels of adjusted production in different seasons. However, it is again important to remember that they are based on a foundation of value in proportion to the league scoring environment. I would also point out that whatever flaws or distortions simple adjusted data may have, it is likely no greater and probably less than that for most other systems: raw data, comparing players rankings amongst their peers or to a very small subset of their peers, using the results of awards/AS voting, quotes from writers/managers/coaches/players/fans, etc.

Quote:

Originally Posted by Rhiessan71

That said, I do understand that trying to figure out exactly what is causing the flaw will help determine value but at the same time it's pretty clear that the flaw is big enough that any determination of cause is only going to change the severity of that "bigness" not whether it is big in the first place.

First, studying potential causes for distortion in isolation may result in each potential source to appear to have a larger effect than it actually does. Such distortions may often negate each other to a large degree. Second, no matter the size of the alleged "flaw", without knowing the reason for such a flaw, further adjustment may only cause further distortion. I previously gave the example of the influx of overseas players being composed of a disproportionately higher group of scoring forwards/d-men. When measuring scoring, this is like adding a bunch of high quality students to the classroom. It makes it look like it's suddenly substantially easier to get an A, when actually the student population became much higher quality on average (and particularly at the top). If one used a curve to further "adjust" those students' grades downward, it would unfairly penalize their achievements for the sake of making the distribution of grades look more "normal."

Quote:

Originally Posted by Rhiessan71

Again, as I have said a million times now: use them to help determine the difference between what player A did in 1985 compared to what player B did in 2010, but do NOT use them at face value.

Adjusted Stats are a helpful guide and are part of the equation, they are NOT a god damned final answer!!!

And don't tell me you don't do that Czech because not that long ago, you actually tried to use them at face value to compare points by Gretzky in the 90's to points by Jagr in the 90's and I just about lost my mind on you! http://hfboards.hockeysfuture.com/sh...=#post53910551

I wouldn't say adjusted stats should be taken at absolute face value and are the final answer, but they are still a heck of a lot better than most alternatives (raw data, peer rankings, award voting, etc.). I've actually studied and presented the results of a lot of relevant topics: scoring of a fixed group of high quality players over 60+ seasons... scoring of various % tiers over time... estimating the effective NHL talent pool over time. I also have read studies of others on various relevant topics. Actually, I was one of the first people that I know of to create and use adjusted stats (along with others like HockeyOutsider), long before HR.com existed. So to imply or state that I don't understand or know how to use adjusted stats is going a bit off the deep end, don't ya think?

It's hard for me to explain why adjusted stats are useful even when comparing players across the same range of seasons, but basically, as league scoring goes down, it becomes much more difficult to separate from the pack in raw point (not %) terms. So if one player goes from 50% above avg. to 20% above avg., while the other goes from 20% above avg. to 50% above avg., changes in the league scoring context will distort that in raw point terms:

Year 1
--------
league avg. 50
player A 75 (50% above)
player B 60 (20% above)

Year 2
---------
league avg. 100
player A 120 (20% above)
player B 150 (50% above)

Each player was once 20% above and once 50% above league avg., yet their totals are: Player A 195, Player B 210. Because Player B was better at a time when the league avg. was much higher, he appears to be significantly better than Player A based on a sum of raw point totals over the same seasons, when that wasn't the case.
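The worked example above in a few lines of code: each player is once 20% and once 50% above league average, yet their raw totals differ purely because the league average changed between the seasons.

```python
# Hypothetical two-season example from the post above.
league_avg = {1: 50, 2: 100}
player_a = {1: 75, 2: 120}   # +50%, then +20%
player_b = {1: 60, 2: 150}   # +20%, then +50%

raw_a = sum(player_a.values())   # 195
raw_b = sum(player_b.values())   # 210

# Expressed relative to the league average, the two careers are identical:
rel_a = sorted(player_a[y] / league_avg[y] for y in (1, 2))
rel_b = sorted(player_b[y] / league_avg[y] for y in (1, 2))
assert rel_a == rel_b            # both [1.2, 1.5]
```

The raw sums (195 vs. 210) suggest a meaningful gap; the relative-to-average view shows none, which is the distortion the adjustment is meant to remove.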