11-01-2012, 07:41 AM
Dalton
Originally Posted by Czech Your Math:
It's hard for me to explain why adjusted stats are useful even when comparing players across the same range of seasons. Basically, as league scoring goes down, it becomes much more difficult to separate from the pack in raw point (not %) terms. So, if one player goes from 50% above average to 20% above average, while the other goes from 20% above average to 50% above average, changes in the league scoring context will distort that in raw point terms:

Year 1
league avg. 50
player A 75 (50% above)
player B 60 (20% above)

Year 2
league avg. 100
player A 120 (20% above)
player B 150 (50% above)

Each player was once 20% above and once 50% above league average, yet their totals are: Player A 195, Player B 210. Because Player B was better at a time when the league average was much higher, he appears to be significantly better than Player A based on a sum of raw point totals over the same seasons, when that wasn't the case.
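The quoted example can be checked with a few lines of arithmetic. This is just a sketch using the hypothetical numbers from the post (league averages of 50 and 100); it shows that the raw totals differ while the percent-above-average rates are identical:

```python
# Two players, each 50% above league average in one season and 20% above
# in the other, compared in raw totals vs. relative (rate) terms.
# Numbers are the hypothetical ones from the quoted post.

seasons = [
    # (league_avg, player_a, player_b)
    (50, 75, 60),     # Year 1: A is 50% above average, B is 20% above
    (100, 120, 150),  # Year 2: A is 20% above average, B is 50% above
]

raw_a = sum(a for _, a, _ in seasons)
raw_b = sum(b for _, _, b in seasons)
rate_a = [a / avg for avg, a, _ in seasons]
rate_b = [b / avg for avg, _, b in seasons]

print(raw_a, raw_b)              # raw totals: 195 vs 210
print(sum(rate_a), sum(rate_b))  # rate sums: 2.7 vs 2.7, identical
```

In raw terms B looks clearly ahead (210 vs 195); in rate terms the two seasons are mirror images and the players come out equal.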
A possible explanation for this is that adjusted stats don't do what people think they do. It seems to take a lot more work to defend adjusted stats than to critique them. Of course, I'm referring to posters who actually make the effort, as opposed to those going for the 'you're too stupid to understand' response.

I disagree with your interpretation of the example you gave.

I would think that the players' production was precisely equal: each produced at a rate 50% better than the league average in one season and 20% better in the other.

As for their absolute goals scored, the reality is that the player who scores the most will receive more credit for doing so.

They are different standards of measurement with different goals, I think.

Scoring 49 goals in 70 games compared to 92 in 80: obviously 92 goals is the greater achievement, but if we want to compare the players' production relative to their peers in each season, we see that 49 in 70 was a little bit better. At worst they are comparable.

I would also point out that this is not purely a math question. We are talking about evaluating and predicting production. In that sense we could be talking about any measurable human activity as easily as goals or points.

Is 20% of $100,000 in sales 10 years ago better or worse than 20% of $1,000,000 in sales last year? The actual numbers are different but the productivity is identical. If a salesperson achieved exactly 20% every year, then predicting would appear pretty straightforward.

We have a study that shows us that outliers play a large role in determining averages. Goals scored among left wingers were used in the HR study. It's like a fractal: outliers influence the averages regardless of the sample size. Teams, divisions, seasons, all the same. Using averages results in errors orders of magnitude higher than not using them.

In some seasons the sum of production of all the players is itself an outlier. Averaging seasons would result in errors orders of magnitude above not using means or normalization; averaging seasons is not a loophole. This is why I proposed comparing production across eras as a means of comparing production.

There is no average worker, in a sense. There are outliers, and then the rest. The rest are always below the average because of the outliers' influence on it, and summing the production shouldn't change this. The mere fact of using an average to calculate production lowers the outliers' performance at the high end and raises everyone else's, by definition. The 'average' worker's production is actually below the average production of all workers because of the effect of the outliers on the average. The outliers' production is not only above the average but is sometimes way above it. This difference is unpredictable and can get unpredictably large. Just look at seasons with very few players.
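The outlier point above is easy to illustrate. This toy example uses made-up goal totals, not real data; the idea is only that one large value drags the mean up, so most of the group ends up "below average":

```python
# Made-up goal totals for ten players; the last entry is the outlier.
goals = [10, 12, 11, 9, 13, 10, 12, 11, 10, 60]

mean = sum(goals) / len(goals)
below_avg = sum(1 for g in goals if g < mean)

print(mean)       # 15.8
print(below_avg)  # 9 of the 10 players sit below the mean
```

Nine of the ten players score between 9 and 13, yet every one of them is "below average" because the single 60-goal outlier pushes the mean to 15.8.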

Selanne scored 52 in 97/98
Howe scored 49 in 52/53

I'll post my numbers in case I made an error.

First I need the average goals per game in each season-

Now I multiply each player's goals by 6 and divide by the league goals per game (GPG) in his respective season.


Did I do that correctly?
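The adjustment step described above can be sketched as a one-line formula: scale each goal total to a nominal 6-goals-per-game league. The GPG values below are placeholders I made up for illustration, not looked-up league figures:

```python
def adjust(goals, league_gpg, baseline=6.0):
    """Scale a raw goal total to a league scoring `baseline` goals per game."""
    return goals * baseline / league_gpg

# Hypothetical league GPG values, for illustration only.
gpg_5253 = 4.8  # placeholder GPG for 1952-53
gpg_9798 = 5.3  # placeholder GPG for 1997-98

howe = adjust(49, gpg_5253)     # Howe's 49 goals in 52/53
selanne = adjust(52, gpg_9798)  # Selanne's 52 goals in 97/98

print(f"Howe adjusted: {howe:.1f}, Selanne adjusted: {selanne:.1f}")
```

With real GPG numbers plugged in, this is exactly the "multiply by 6, divide by GPG" operation: a lower-scoring league inflates a goal total more than a higher-scoring one.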

But if we compare their actual productivity as a percentage of the work done or results achieved by all the workers, we get dramatically different results.

Howe's production in his season is equivalent to 626 goals among the top 5% in 97/98.
Selanne's production in his season is equivalent to 16 goals among the top 5% in 52/53.

Of course these are not expected results in the real world, but this does a much better job of comparing the players' real productivity.

I think people started this adjusted thing to actually predict what each player might score in the other's seasons, as a means of comparing them. Criticism has moved everyone away from this original line of thinking. That's unfortunate, because I believe it is the source of the debate. Adjusted scoring was meant to predict. It doesn't, so supporters are trying to salvage it as something else that they are having trouble explaining, instead of moving on.

I think you have to look at another route to achieve prediction. Not math or logic but reasoning. They are different.

Howe outscored his next best producer by a bit more than 50%, so in Selanne's season he probably gets about 79 or 80 goals.

Selanne matched his best competitor, which suggests he gets 49 too in Howe's season. But on the other hand, Selanne is just grouped with a bunch of scorers in the 50-goal range. I think it's more likely Selanne gets 30+ goals. Maybe 32.

Howe just spanked his competition but Selanne didn't.

Of course we can get down and dirty, utilizing available stats such as special-teams goals, to enhance the debate.

Honestly I think this is the best we can do.

I don't believe adjusted stats have any role anymore, now that it seems to be accepted by all that they don't do what they were originally intended to do: predict.

Last edited by Dalton: 11-01-2012 at 08:42 AM.