Quote:
Originally Posted by Iain Fyffe
No it doesn't. At the zero end it follows very closely, because the adjustment essentially applies a coefficient to the raw data (which is, of course, not a normalization), and the lower the raw value, the less effect the coefficient has.
The adjusted curve actually fits the worst at the upper extreme, because at higher raw values the coefficient will have a greater effect.
That is, 1 times 1.1 is 1.1, a change of 0.1 (which will show as no change in adjusted scoring, which displays results in integers), while 50 times 1.1 is 55, a change of 5.0.
And if it happened to be a season that generally decreased raw values in the adjustment (such as the early 80s), you'd see the adjusted data take on larger values, then smaller ones. If same-smaller-larger-smaller is a bell curve, then what is larger-smaller? If the former is the result of the adjustment applying a bell curve (it's not), what is the latter a result of?
You have, of course, left off the final instance where the adjusted data is higher than the raw. Meaning that even from this bizarre perspective, it's lower, then higher, then lower, then higher. How is that a bell curve?
If it's so fricken obvious man, you should be able to give us a simple function to demonstrate that it's true. Something, anything other than an assertion.
If you respond to nothing else, please finally answer this question: how does adjusted scoring normalize scoring numbers? You have still not shown this to be true.
No, it doesn't. The adjustment doesn't care about distribution. It applies a coefficient to a player's raw totals to get an adjusted one. The coefficient varies a small amount from player to player, but it is not calculated to move players closer to the mean, which is what normalization means.
There are times that normalization is used in hockey analysis (regressing single-season scoring percentages to the mean, for example). But adjusted stats are not one of those times.
The distribution of raw stats is irrelevant to adjusted stats. So this is a moot point.
If adjusted scoring actually normalized the data, then players with low goal totals would be adjusted upward, and players with high goal totals would be adjusted downward, both toward the mean. This does not happen.
In general terms, in high-scoring-environment seasons, all players have their totals adjusted downward (not toward the mean, just down regardless of where the mean is), and in low-scoring-environment seasons, all players have their totals adjusted upward (not toward the mean, just up regardless of where the mean is).
This is not normalization.
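For concreteness, the two operations being contrasted above can be sketched numerically. This is only an illustration: the goal totals, the 0.8 coefficient, and the 0.5 shrinkage factor are made-up values, not anyone's actual formula.

```python
# Minimal numeric sketch contrasting a flat scoring adjustment with
# normalization. All numbers below are illustrative, not real data.

raw_goals = [5, 20, 35, 50, 76]
mean = sum(raw_goals) / len(raw_goals)  # 37.2

# Adjusted scoring: one coefficient applied to every raw total.
# Everyone moves in the SAME direction (down here, as in a high-scoring
# season), regardless of where the mean sits.
coefficient = 0.8
adjusted = [g * coefficient for g in raw_goals]

# Normalization (regression toward the mean): totals below the mean move
# up, totals above it move down. Shown here by shrinking each deviation
# from the mean by half (the 0.5 factor is arbitrary).
normalized = [mean + 0.5 * (g - mean) for g in raw_goals]

for g, a, n in zip(raw_goals, adjusted, normalized):
    print(f"raw={g:3d}  adjusted={a:5.1f}  normalized={n:5.1f}")
```

Note that the adjustment changes the low total by 1.0 goal and the high total by 15.2 goals, while normalization pulls the low total up and the high total down.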

I have answered these questions.
This is normalization:
"Adjusted Statistics
In order to account for different schedule lengths, roster sizes, and scoring environments, some statistics have been adjusted.
All statistics have been adjusted to an 82game schedule with a maximum roster size of 18 skaters and league averages of 6 goals per game and 1.67 assists per goal."
A bell curve. Everyone's stats are adjusted to suit a 60% average, so to speak. Apply this to a power curve and you get a blip in the middle of the data, where players' results are increased at a greater rate than those on either side of the median.
The blip is clearly visible.
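For reference, here is one simplified reading of the quoted definition as code. This is a sketch under assumptions: the roster-size adjustment is omitted, and the 70-game, 8-goals-per-game season is hypothetical, not a real one.

```python
# Simplified sketch of the quoted adjusted-statistics definition as a
# per-season scaling factor. The roster-size piece of the definition is
# omitted, and the example season numbers are invented for illustration.

TARGET_SCHEDULE = 82   # games in the target schedule
TARGET_GPG = 6.0       # target league-average goals per game

def adjusted_goals(raw_goals, season_games, season_gpg):
    """Scale a raw goal total to the 82-game, 6-goals-per-game baseline."""
    factor = (TARGET_SCHEDULE / season_games) * (TARGET_GPG / season_gpg)
    return raw_goals * factor

# A hypothetical 70-game season averaging 8 goals per game: every total
# in that season is scaled by the same (82/70) * (6/8) ~ 0.879.
print(adjusted_goals(50, 70, 8.0))
print(adjusted_goals(10, 70, 8.0))
```

Whether that operation amounts to a bell curve or a flat coefficient is exactly the disagreement in this thread; the sketch only pins down what the quoted definition computes.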