By The Numbers: Hockey Analytics

Improving Adjusted Scoring and Comparing Scoring of Top Tier Players Across Eras

I've done a study of a large, but fixed, group of players over several decades.

GOAL: To determine which seasons were most and least difficult for top line forwards and top offensive defensemen to score points.

METHODOLOGY: The concept of the study is simple. It's essentially similar to the method others have used in developing "league equivalencies", which has been done for the NHL vs. the WHA, minor leagues, and foreign leagues over various periods. So it treats "Year X NHL" and "Year X+1 NHL" as essentially different leagues, then examines how players who participated in both performed. The study does not have to be restricted to a group meeting a minimum quality threshold, but some possible reasons for restricting it to higher-scoring forwards and defensemen include:

- Such players may be expected to have longer careers, so each player included contributes more total "player-years" to the study (decreasing the statistical error).

- Such players produce at a higher level, which is less influenced by random error (variation).

- Such players often have less variation in opportunity (ice time, PP time, etc.), so their results should be less influenced by this.

- Such players are most frequently the subject of comparison. Since the results are derived from such players, the results should be especially applicable to such players, and so particularly useful.

I have continued to add players to the study. The number of players in each season pair (usually fewer than the number of players in the fixed group who were active at the time):

- From the '47-8 pair of seasons to the '67-8 pair of seasons, it's 32-45 players and an average of ~37 players per pair of seasons. That's an average of about 6 players per team.

- For the '68-9 seasons to the '71-2 seasons, it's 54-62 players and an average of ~53 players or ~4.5 players per team.

- For the '72-3 to '81-2 seasons, it's 72-82 players and an average of ~78 players per pair of seasons, or ~4.2 players per team.

- For the '82-3 to '03-4 seasons, it's 79-91 players and an average of ~84 players per pair of seasons, or ~3.4 players per team.

Each pair of consecutive seasons was examined separately, with the results sorted by % change in adjusted PPG. I looked at several ways to measure the effect, and the results differ significantly depending on which metric is used. I settled on the median half of players in terms of % change in PPG, since it includes a full half of the players participating in each season pair while still removing many outliers in both directions that can be caused by irrelevant factors or random error. I believe using the median (middle) half of players is a good method for obtaining reliable results: it discards the top 1/4 and bottom 1/4 of players in terms of % change in PPG, so that factors such as injury, change in opportunity, change in team or linemates, and improvement or decline due to age are prevented from substantially distorting the results.
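The median-half calculation described above can be sketched in a few lines of Python (the function name and the sample PPG pairs below are mine, invented purely for illustration):

```python
def median_half_average(pct_changes):
    """Average the middle half of a list of % changes (as decimals).

    Sort the values, discard the top quarter and bottom quarter,
    and average what remains -- the "median half".
    """
    ordered = sorted(pct_changes)
    k = len(ordered) // 4          # players trimmed from each end
    middle = ordered[k:len(ordered) - k]
    return sum(middle) / len(middle)

# Each entry: a player's adjusted PPG in year X and year X+1.
pairs = [(1.00, 1.05), (0.80, 1.10), (1.20, 1.10), (0.90, 0.45),
         (1.10, 1.21), (0.70, 0.77), (1.30, 1.24), (0.60, 0.72)]
changes = [(later / earlier) - 1.0 for earlier, later in pairs]
print(round(median_half_average(changes), 3))
```

With these made-up numbers, the extreme +37.5% and -50% seasons are trimmed away before averaging, which is the whole point of using the median half.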

The method of linking each "year over year" to produce numbers which may be used to compare across longer spans of time is relatively simple math.

Here's what the effective league GPG was for top tiers of players, based on the results of this study:

Last edited by Czech Your Math: 08-03-2012 at 09:53 PM.

Example of how results were calculated: 1966-7 & 1967-8 seasons

Before I show the results for a pair of consecutive seasons, I want to mention a couple of things about the data I used. I calculated adjusted PPG differently than other sources (such as HR.com) may: there is no adjustment for roster sizes, nor are players' individual stats deducted from league totals beforehand. I used each season's gpg and assists-per-goal ratio in the calculations. I normalized to a fixed gpg (the number chosen is irrelevant, but it was 8.00) and used an apg ratio of 5/3, or 1.667 (this affects goal-scorers and playmakers slightly differently). If someone were to repeat a study similar to this, it's probably easier to use "raw" PPG, but this shouldn't change the results significantly.
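Since the exact formula isn't spelled out, here is one plausible reading of the adjustment in Python (the function name, parameter names, and sample season are all mine): goals are scaled so the league averages 8.00 goals per game, and assists are scaled so the league hands out 5/3 assists per goal.

```python
def adjusted_ppg(goals, assists, games, league_gpg, league_apg_ratio):
    """Adjusted PPG normalized to 8.00 league gpg and a 5/3 apg ratio."""
    goal_factor = 8.00 / league_gpg
    # Assists get an extra nudge so the league-wide assists-per-goal
    # ratio comes out to 5/3 (1.667) after adjustment.
    assist_factor = goal_factor * ((5.0 / 3.0) / league_apg_ratio)
    return (goals * goal_factor + assists * assist_factor) / games

# Hypothetical 70-game season in a league averaging 6.0 goals per game
# with 1.60 assists awarded per goal.
print(round(adjusted_ppg(30, 40, 70, 6.0, 1.60), 2))
```

As the post says, the apg choice affects goal-scorers and playmakers slightly differently: in this example goals are scaled by about 1.33 while assists are scaled by about 1.39.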

So let's look at one hypothetical player: Joe has an adjusted PPG of 1.00 in season Y and 1.05 in season Y+1. So his adjusted PPG increased by 5.0% from season Y to season Y+1. This +5.0% (or +.05) is the basis of all further calculation in the study, once it was determined that an average of % change in PPG of the median half of players (in terms of % change in PPG for the pair of seasons) appeared to be the most reliable metric I had found thus far.

Also, I did delete a season or two for many players. I did this as sparingly as possible, since deleting one season resulted in losing data for two pairs of seasons (e.g. if we deleted Joe's 1957 season, we lose his data for '56 vs. '57 as well as '57 vs. '58). In most instances, the seasons deleted were at the very beginning and/or very end of the player's career, when it seemed very obvious that:

- the player was not yet near his prime and/or not yet getting full playing time
- the player was well past his prime and no longer getting full playing time, or often injured
- the player was injured in the middle of his career and this very dramatically affected his PPG

Most players had all of their full seasons included, while some had one or two omitted from the beginning and/or end of their careers.

Last edited by Czech Your Math: 07-30-2012 at 08:54 PM.

Here are the 44 players for the season pair 1966-7 & 1967-8, sorted by % change in adjusted PPG, showing only adjusted PPG for each season and the % change:

 #   Player        1967   1968   %Chg
 1   McKenzie      0.71   1.30    84%
 2   Cournoyer     0.78   1.36    74%
 3   Provost       0.51   0.88    74%
 4   Beliveau      0.97   1.68    72%
 5   Bathgate      0.70   1.16    66%
 6   Duff          0.61   1.01    66%
 7   Hodge         0.69   1.10    60%
 8   Gilbert       0.97   1.54    59%
 9   Ingarfield    0.69   1.08    57%
10   TremblayG     0.70   1.05    50%
11   Hadfield      0.65   0.96    48%
12   Esposito      1.20   1.65    38%
13   Prentice      0.90   1.17    30%
14   Delvecchio    1.07   1.38    30%
15   Nevin         0.89   1.14    28%
16   Bucyk         1.10   1.40    27%
17   HoweG         1.28   1.61    26%
18   Armstrong     0.64   0.80    25%
19   MahovlichF    0.99   1.20    21%
20   Goldsworthy   0.60   0.71    17%
21   BackstromR    0.81   0.94    16%
22   Marshall      0.89   1.02    15%
23   Goyette       1.18   1.30    10%
24   WilliamsTo    0.98   1.07     9%
25   Wharram       1.26   1.36     8%
26   Pulford       0.91   0.99     8%
27   Orr           0.91   0.98     8%
28   Ullman        1.40   1.48     6%
29   Pappin        0.67   0.70     4%
30   Ellis         0.91   0.94     4%
31   Westfall      0.70   0.72     3%
32   Rousseau      1.26   1.28     2%
33   Nesterenko    0.74   0.74     1%
34   Oliver        0.73   0.73     0%
35   MartinP       0.81   0.81     0%
36   Keon          1.07   1.05    -2%
37   HullBo        1.64   1.53    -6%
38   Mikita        1.88   1.76    -6%
39   Mohns         1.33   1.19   -11%
40   HendersonP    1.18   1.02   -13%
41   HullD         0.81   0.65   -20%
42   Pilote        1.01   0.74   -27%
43   RichardH      1.15   0.76   -34%
44   Larose        0.69   0.38   -44%


Calculation of % change in adjusted PPG for the median half of players was done as follows:

The median half in this case are players ranked 12 through 33 in terms of % change in adjusted PPG for this pair of seasons. In the table above, these are the players starting with Esposito (+38%) and ending with Nesterenko (+1%). Once the median half was selected, the calculation is rather simple: add the 22 percentages (or their decimal forms, +.38 down to +.01), then divide by 22 to get an average.

In this case, the result is .146 or +14.6%.

What does this result suggest? It suggests that if a first-liner or top-producing d-man scored at 1.00 adjusted PPG in '67, a decent approximation of his expected adjusted PPG in '68 would be 1.146.

The number 1.146 would then be used for the year 1968 (although it's actually a pair of seasons, '67 & '68) in comparison to 1967.
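The same arithmetic can be checked against the rounded %Chg column from the table above. Note that the rounded display values give roughly +15%; the unrounded data behind the table is what produces the +14.6% used in the study.

```python
# %Chg column for all 44 players, in table order (rounded display values).
pct_chg = [84, 74, 74, 72, 66, 66, 60, 59, 57, 50, 48,      # ranks 1-11
           38, 30, 30, 28, 27, 26, 25, 21, 17, 16, 15,      # ranks 12-22
           10, 9, 8, 8, 8, 6, 4, 4, 3, 2, 1,                # ranks 23-33
           0, 0, -2, -6, -6, -11, -13, -20, -27, -34, -44]  # ranks 34-44

k = len(pct_chg) // 4                               # 11 trimmed from each end
middle = sorted(pct_chg, reverse=True)[k:len(pct_chg) - k]  # ranks 12-33
print(round(sum(middle) / len(middle), 1))          # ~15.3 with rounded inputs
```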

So, using this method, a % change number for each pair of seasons from '46 & '47 until '06 & '07 was produced. Here is the list of the average % change in adjusted PPG from one season to the next (season listed is the second in the pair):

Year    %Chg
1947    -3.7%
1948    -8.4%
1949     7.4%
1950    -2.3%
1951     1.2%
1952    -1.0%
1953     7.5%
1954    -1.6%
1955    -0.8%
1956    -3.6%
1957    -2.9%
1958    -1.6%
1959     1.6%
1960    -2.4%
1961    -1.2%
1962     2.7%
1963    -4.2%
1964    -6.3%
1965    -1.7%
1966     0.6%
1967    -3.1%
1968    14.6%
1969     1.1%
1970    -0.7%
1971     0.4%
1972     0.7%
1973     4.2%
1974     1.7%
1975     4.1%
1976     0.6%
1977    -3.6%
1978    -0.8%
1979    -6.2%
1980    -3.9%
1981    -5.7%
1982    -0.4%
1983    -3.1%
1984    -2.1%
1985    -1.1%
1986     0.2%
1987    -0.1%
1988     6.9%
1989    -1.0%
1990     1.9%
1991    -4.0%
1992     4.0%
1993     6.6%
1994     0.7%
1995    -3.1%
1996     3.2%
1997    -4.2%
1998     0.2%
1999     3.6%
2000    -0.4%
2001     6.7%
2002    -6.7%
2003     0.8%
2004    -0.5%


As you can see, 1968 shows 14.6%, meaning a +14.6% expected change in adjusted PPG from '67 to '68. So we have some approximation of how the adjusted PPG of top players can be expected to change from one season to the next, but what about seasons that are further apart? This requires further calculation.

First, we need to convert 14.6% into a more useful number: take its decimal form, .146, then add 1.000 to yield 1.146.

So how do we compare seasons that aren't consecutive? Let's start from the beginning to make it simpler. Our first season in the study is 1946. Since there is no season before this in the study, for now it's our baseline season and is assigned an index number 1.000.

To get an index number for 1947:

1947's % change number is -3.7% (or -.037 in decimal)

add 1.000 to -.037 to get 0.963
multiply the 1946 index number (1.000) by 0.963, which gives 0.963

Now we have index numbers of 1.000 for '46 and 0.963 for '47.

To get an index number for 1948:

1948's % change number is -8.4% (-.084)

add 1.000 to -.084 to get .916
multiply the 1947 index number of .963 by .916, which gives ~.882

So what does this suggest? It suggests a top line player with an adjusted PPG of 1.00 in 1946 could be expected to score at approximately a 0.88 PPG pace in 1948.
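The chaining is just a cumulative product, e.g. in Python (variable names are mine):

```python
# First two chained years of the study: 1946 is the 1.000 baseline.
pct_change = {1947: -0.037, 1948: -0.084}

index = {1946: 1.000}
for year in sorted(pct_change):
    index[year] = index[year - 1] * (1.0 + pct_change[year])

print(round(index[1947], 3))  # 0.963
print(round(index[1948], 3))  # 0.882 -> listed as 0.88
```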

This process is repeated until an index number for each year is calculated. These numbers are as follows:

Year    Index#
1946    1.00
1947    0.96
1948    0.88
1949    0.95
1950    0.92
1951    0.94
1952    0.93
1953    1.00
1954    0.98
1955    0.97
1956    0.94
1957    0.91
1958    0.90
1959    0.91
1960    0.89
1961    0.88
1962    0.90
1963    0.86
1964    0.81
1965    0.79
1966    0.80
1967    0.77
1968    0.89
1969    0.90
1970    0.89
1971    0.89
1972    0.90
1973    0.94
1974    0.95
1975    0.99
1976    1.00
1977    0.96
1978    0.95
1979    0.90
1980    0.86
1981    0.81
1982    0.81
1983    0.78
1984    0.77
1985    0.76
1986    0.76
1987    0.76
1988    0.81
1989    0.80
1990    0.82
1991    0.79
1992    0.82
1993    0.87
1994    0.88
1995    0.85
1996    0.88
1997    0.84
1998    0.84
1999    0.87
2000    0.87
2001    0.93
2002    0.87
2003    0.87
2004    0.87
2006    0.86
2007    0.86


The index numbers from the previous post can be used in the form presented, but I wanted to offer an alternative format. I averaged the 61 index numbers, then divided each one by that average, so that the average revised index number for the period studied is exactly 1.00. Therefore, a revised index number above 1.00 means it was easier than an average year in the study to produce adjusted points, while a revised index number below 1.00 means it was tougher than average.
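The renormalization can be sketched as follows (the four index values here are a short excerpt used as a stand-in, so the printed numbers won't match the full 61-year list):

```python
index = {1946: 1.00, 1947: 0.96, 1948: 0.88, 1949: 0.95}

# Divide every index number by the mean of all of them, so the
# revised numbers average exactly 1.00.
mean = sum(index.values()) / len(index)
revised = {year: value / mean for year, value in index.items()}

print(round(revised[1946], 2))                         # 1.06 for this excerpt
print(round(sum(revised.values()) / len(revised), 2))  # averages to 1.0
```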

Here are the revised index numbers for the period '46 to '07:

Year    Index#
1946    1.14
1947    1.10
1948    1.00
1949    1.08
1950    1.05
1951    1.07
1952    1.05
1953    1.13
1954    1.12
1955    1.11
1956    1.07
1957    1.04
1958    1.02
1959    1.04
1960    1.01
1961    1.00
1962    1.03
1963    0.98
1964    0.92
1965    0.90
1966    0.91
1967    0.88
1968    1.01
1969    1.02
1970    1.01
1971    1.02
1972    1.02
1973    1.07
1974    1.09
1975    1.13
1976    1.14
1977    1.10
1978    1.09
1979    1.02
1980    0.98
1981    0.92
1982    0.92
1983    0.89
1984    0.87
1985    0.86
1986    0.87
1987    0.87
1988    0.93
1989    0.92
1990    0.93
1991    0.90
1992    0.93
1993    0.99
1994    1.00
1995    0.97
1996    1.00
1997    0.96
1998    0.96
1999    0.99
2000    0.99
2001    1.06
2002    0.99
2003    1.00
2004    0.99
2006    0.98
2007    0.98



thanks, that works too. Next question: with a fluctuating assist per goal ratio by season, how can there just be a factor that one can multiply points by? It sounds like there needs to be two components to this.

There's no real solution to dealing with fluctuating apg ratios that will be equally fair to both/all players being compared. It only becomes a significant issue when you go back to the mid-50's or further, and/or if you are comparing more extreme examples, like Kovalchuk vs. Oates or something of that nature.

AFAIK, for the past 55 years, the apg has been between 1.62 and 1.75. I use 1.667 (5/3) apg ratio as a rough average and to be consistent (it's easy to remember).

It's another reason why adjusted numbers of any kind are not nearly exact or some sort of gospel... only the best approximation available.

Quote:

Originally Posted by Czech Your Math

AFAIK, for the past 55 years, the apg has been between 1.62 and 1.75. I use 1.667 (5/3) apg ratio as a rough average and to be consistent (it's easy to remember).

ok, fair enough.

Quote:

Originally Posted by Czech Your Math

I would only claim that it's a better starting point than either raw data or simple adjusted data.

You are definitely right that it's a better starting point than raw data. You are probably right that it's better than "simple adjusted" data. I appreciate the work.

I like it and I think it provides a better base to start from.

The thing that I agree with the most and why I like what you're doing here is that after you have done your calculations, you are using common sense and real player comparisons to "back check" your results.

The absence of common sense is by far the biggest issue with any adjusted stats system, not the systems themselves.

Last edited by Czech Your Math: 08-07-2012 at 04:15 PM.
Reason: quoted post deleted

Quote:

The absence of common sense is by far the biggest issue with any adjusted stats system, not the systems themselves.

Generalize that to stats and I agree.
I don't understand why some (correctly) point out all these flaws in adjusted stats while not noting that many (most?) of the criticisms apply to stats in general...

Quote:

I like it and I think it provides a better base to start from.

The thing that I agree with the most, and why I like what you're doing here, is that after you have done your calculations, you are using common sense and real player comparisons to "back check" your results.

The absence of common sense is by far the biggest issue with any adjusted stats system, not the systems themselves.

Actually, it was 70'sLord that did the back-testing of data... and he wasn't exactly thrilled with the results, although there wasn't really much change from what simple adjusted data showed the difference to be. The only back test I did was the spontaneous example of '89 Yzerman vs. '71-74 Esposito. Those numbers I just posted are simple adjusted data (only using gpg and apg), not the adjusted adjusted data using the index numbers that I provided ITT.

I'm confident enough in the methodology to believe it's a definite improvement over simple adjusted stats.

Thanks though, I will post another alternative or two, and anyone is free to back-test, comment, question or attempt to replicate similar results.


If someone wishes to do a study with similar goals and methodology, then I would suggest the following possible improvements:

A) Using a strict definition for including and excluding players in the study. This could include some combination of various standards, such as seasonal rankings (ranking at least N times in the top X players for a season, for a period of Y years, or during a career) and absolute performance (scoring at least Z points in at least N years, or at least Q points over a career). One could use various standards such as points, PPG, games, etc. It's not an easy task, given the changes in league size, league talent pool, league scoring, and other dynamics, all of which may affect the size and composition of the group being studied.

Also, as can be seen in the thread for this study, there is the question as to whether the number of players should be held roughly constant, or increase in rough proportion to the size of the league. At the point when I stopped the study, it was more of a compromise between the two. I can see why, as was later suggested, one would want to keep it roughly proportional to league size (and therefore to opportunity), but this also likely significantly changes the composition of the group in terms of absolute (average/median) quality. It seems it might be best to use criteria that would increase the number of players as league size increases (but not necessarily in exact proportion), and that would also keep the (median) quality of players relatively static. Considering that opportunity for and absolute quality of players are both affected by such factors as expansions, mergers, competing leagues, newly available or expanding non-Canadian talent pools, population growth, etc., this is not a simple task. Perhaps more than one study is needed: one keeping opportunity constant, and one attempting to keep the median quality of players constant.

B) Using a strict definition for including or excluding each player's individual seasons in the study. One could use every season, and I frequently did this, but I also often eliminated seasons from the study without an exact definition. A minimum number of games might be best, possibly a minimum ranking or performance level, or some combination. One must realize that including every season means including seasons with minimal games played, seasons when opportunity was obviously limited, seasons when a player is extremely young/old by hockey standards, seasons when the player was obviously injured, etc. OTOH, one should realize that excluding player seasons may also unintentionally bias the sample in some way, and so should not be done hastily (I generally included seasons when in doubt). As previously stated, one advantage of using the median half for the relevant calculations was that larger fluctuations in performance (evenly divided by direction) were filtered out as outliers.

C) I used adjusted PPG as my metric (and then the % changes in such), which was already adjusted for schedule length, league GPG and the assist/goal ratio. However, it would probably be better to use actual PPG, although the effect should be minimal (it basically comes out in the wash, since there is a large group of players being studied).

D) I used median half of players in terms of % change, after initially favoring the (probably too narrow) median third. One could use a different arbitrary median (such as 2/3, 3/4, 60% or whatever), but I don't really know what the most reliable and proper number would be. I do think using some such median is crucial in eliminating outliers arising from mostly irrelevant factors, as well as simply random error.

Finally, while an improved "duplicate" study of sorts is certainly encouraged, I also proposed an alternative way to study this using multivariable (linear?) regression analysis. One could use PPG as the dependent variable and independent binary variables such as Player A, Player B, Player C, ..., Year X, Year X+1, Year X+2, ..., Age N, Age N+1, Age N+2, ..., etc. The 1.00 value for age could also be split between two consecutive variables (i.e. if a player is age 25 years, 6 months, 0 days on the standard date used, use 0.5 for age 25 and 0.5 for age 26).
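A minimal sketch of that regression idea, on entirely synthetic data (the players, effects, and years are invented, and using numpy's least squares is my choice; a real run would have thousands of player-season rows and age dummies as well):

```python
import numpy as np

# Synthetic ground truth: each observation is one player-season, with
# PPG = player effect + year effect (1946 is the baseline year).
players = ["A", "B", "C", "D"]
years = [1946, 1947, 1948]
true_player = {"A": 1.10, "B": 0.90, "C": 1.00, "D": 0.80}
true_year = {1946: 0.00, 1947: -0.04, 1948: -0.12}

rows, y = [], []
for p in players:
    for yr in years:
        player_dummies = [1.0 if p == q else 0.0 for q in players]
        year_dummies = [1.0 if yr == t else 0.0 for t in years[1:]]
        rows.append(player_dummies + year_dummies)
        y.append(true_player[p] + true_year[yr])

# Least squares recovers the per-year scoring-environment effects.
coef, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
year_effects = coef[len(players):]  # estimated 1947 and 1948 effects
print(np.round(year_effects, 3))    # ~ [-0.04, -0.12]
```

The advantage over chained season pairs is that player quality and year difficulty are estimated simultaneously, rather than year-by-year.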


Very much to digest, I may give it a try later tonight.

Always been interested in Mario Lemieux's comeback season of '00-01.

He recorded 35 goals in just 43 games with 76 points.

Bure lead the league with 59 goals, and Sakic had 54.

Meanwhile the Ross winner was Jagr with 121.

And Lemieux was 35 years old at the time.

It is Mario you're talking about though, and he had a lot less hockey wear and tear at age 35 than most players, given that he had 4.5 seasons off, plus all the games he missed due to injuries. Plus he had a prime Jagr on his line when he returned (the only other time they really played on the same line was '97). I think it's also a case of "half season syndrome", where it's unlikely he would have kept up that pace over a full season, especially considering that he had played over 70 games only once since '89 and never would again.

You're not the only one who thinks there was something funny about the 2001 season though. This study suggests it was easier to score adjusted points in 2001 than in 2000, so it really seems to capture the effect.

Here are the biggest increases in expected adjusted PPG from the previous season:

No surprise there. The three most recent increases are due at least in part to increased power plays. It may be a bit surprising that 2006 isn't on there, but the league gpg increased substantially, so that was already reflected in the adjustment. Also, the first couple and last couple of seasons in the study are probably less reliable due to the slightly smaller number of players.

Here are the largest decreases in expected adjusted PPG from the previous season since expansion:

There's a hangover effect in 1997 and 2002, as there were large increases the previous seasons, but power plays declined. There was a contraction by one team in 1979 and then the subsequent WHA merger in 1980.


The numbers you and I ran (adjusted adjusted, I called it) in the Turgeon thread led me to conclude that the prime offensive gap between the two was 1-4% in favour of Savard. You were non-committal but said it was probably 1-8%. But this new system of yours is much more favourable to DPE players from the looks of it (or more punishing to '80s players, if you want to look at it that way).

The math it took you to get here is not something I'm interested in critiquing. But the end results might not be what we're "looking for", so to speak. I can live with Turgeon being 1-4% behind Savard once properly adjusted. But 6% ahead doesn't quite seem right. I can be convinced with proper examples to get the point across but just using these two players as guinea pigs since I know their numbers well, I'm not sure what to make of this.

If anything the system tends to help the '80s more than any other era. It does punish some of the big PP years like '93, '96 and 2001. What's strange is that you seem to be a proponent of both adjusted data and Turgeon (vs. Savard)... yet you seem to be questioning both based on my results. This is what's confusing to me. I'm not asking you to accept my results blindly, but to realize that simple adjusted data comes to similar conclusions about these players' best 9 years of point production. One says 4-5%, the other 6%, that's hardly a big discrepancy.

The discussion between you and Big Phil about Turgeon and Savard mainly comes down to one simple thing:

Phil sides with his senses, the opinions of others, and using the "curve" (ranking amongst ones peers) when making an evaluation.

You seemed to side with the scientific method (logic, math, statistics, data analysis)

I think you know which approach I favor. That doesn't mean they both don't have some validity and that they each have room for error.

The main problem is that the senses and memory can lie. An unbiased eyewitness can be certain that he/she witnessed something, yet be dead wrong. It happens all the time. Memory only gets worse with time and people often hold on to their opinions more staunchly than ever in face of the facts.

I did this study because it was the best I could manage in attempting to remove all the extraneous factors that affect league scoring and ranking amongst differing peer groups. It takes actual production of a fixed group of top tier players across time and quantifies it. That is why it's an improvement upon existing adjusted data.

There are things the data can never capture, but the goal is to approach the limit of what the data can tell us and use it as an objective starting point for further discussion.

I know there's at least a handful of posters on this forum who actually can understand both the methodology and the implications of this study, because they have done some fantastic quantitative studies themselves. I hope some of them take the time to read the study and give some feedback as to the methodology and results. Better yet, someone with the proper database and code-writing skills could replicate a similar study in a relatively short time to either affirm or disprove my results. In the absence of such, I stand by the results as a significant improvement over existing adjusted data.


I think this work and others similar is very important for fans with interest in using math to study hockey. I am very happy that the OP outlined his methods.

But we still need more context to truly capture this. I feel we are in a Ptolemaic age, just using numbers and a flawed belief (that past seasons must be adjusted downward). What would these calculations look like if the belief was that the '90s were a golden age and everything revolved around them? Of course it would be unacceptable to most fans of the day to see Ovie and Crosby adjusted downwards.

What's missing is true mathematical modelling. Like what's done for the weather. Maybe we need to take a step similar to the Drake equation. Try to state or isolate what goes into a players season and whether it's directly or indirectly proportional.

Talent * injury * team talent * coach * competition * many factors concerning talent pool (players per population, drafting). So many factors, and I'm sure more could be added or removed to achieve simplification with high accuracy.

I think something similar to a Drake equation and then people like the OP filling in the details with their analysis.

Until then this type of data manipulation just serves the fan base that thinks the heroes of today are better than the heroes of yore. Giroux is better than Gretzky type arguments. This stuff needs an asterisk and a safe place to keep it until the modelling reflects that the Earth goes around the sun. Talent not numbers.

We do not know which era was the most talented, which era should be the center that all other numbers revolve around. Today it's just a moving target: the belief that talent is more prevalent per player than in any other era except next year's. Eventually Gretzky reduces to a 30-goal season.

Maybe we just need a talent quotient. A numerical estimation based simply on how much better a player was than his peers. A player or profile that is 100. Maybe we need a set of numbers according to different talents.

Coffey was a great skater; was he the Newton of skaters? Was he a 196 Skating IQ? Has anyone been better compared to their peers? Just an example. I can't say whether Coffey, relative to his peers, was a better skater than any other qualified player in history. Orr was probably higher.

Stats are just a reflection of the player's talent in a context not analyzed with similar methods. Many of these adjustments just don't pass the eyeball test.

Just MHO. I'm sure others could state it much better. There's a lot of IQ on these boards.

Very interesting. I think the year-to-year results are particularly good. You can see the short-term changes in scoring conditions, especially in seasons where more power plays were awarded.

I'm not sold on the usefulness of this metric over longer time periods. If there's anything it's missing, at all, the error will build up over time.

First, this analysis only considers offensive production. Suppose that NHL players peak offensively earlier in their career than they peak defensively (where defensive play includes non-scoring factors such as strength of opposition and zone starts as well as actual defensive play.) Suppose also that NHL players receive playing time based on the sum of their offensive and defensive contributions. If these suppositions were fact, you would see NHL scorers generally tending to score fewer points than the previous season, but it would be a result of the natural aging curve of NHL scoring talent. It would not necessarily mean that it was becoming more difficult to score over time.

I also wonder how much your subjective choices affected the results. Ideally I would rather use an objective metric like estimated ice time (post-expansion only) to choose whether to include seasons or not. I realize that could be more difficult, depending on the data you have available. But generally speaking I think looking at usage rather than results is a good way to avoid "cherry-picking" successful results.

It's extremely difficult to separate the aging curve from the change in league talent level in this type of study. I don't think your study is flawed so much as I doubt whether one can ever put a lot of confidence in the results of a study that chains year-to-year scoring changes over decades, due to the difficulty of removing the aging curve. It might be worth running some numbers to test whether NHL players actually do have their offensive and defensive peaks at different ages on average. Something like an aging curve for PP time vs SH time, or for points vs qualcomp + zone start. While I haven't run the numbers on this, I believe goal scoring tends to peak earlier than playmaking among NHL players, so I think it's very possible that defensive contribution peaks at a different age as well.

Quote:

Originally Posted by overpass

I'm not sold on the usefulness of this metric over longer time periods. If there's anything it's missing, at all, the error will build up over time.

You are correct that error will build up over time. I don't see any way around this, unfortunately.

I do think it has much usefulness over longer periods of time, because the effects are generally much larger over longer periods. If the effects were very minimal, then the uncertainty error inherent in any such study would overwhelm the possible usefulness of the data. I don't believe this is the case here, although I am unable to quantify this myself.

Quote:

Originally Posted by overpass

First, this analysis only considers offensive production. Suppose that NHL players peak offensively earlier in their career than they peak defensively (where defensive play includes non-scoring factors such as strength of opposition and zone starts as well as actual defensive play.) Suppose also that NHL players receive playing time based on the sum of their offensive and defensive contributions. If these suppositions were fact, you would see NHL scorers generally tending to score fewer points than the previous season, but it would be a result of the natural aging curve of NHL scoring talent. It would not necessarily mean that it was becoming more difficult to score over time.

While the primary goal was to include as many top tier offensive players as possible, I also tried to balance the players by year of birth. IOW, I would try to include at least 3-5 players born in each year, although there were some years when there just weren't enough top tier players to effectively balance it in this manner. In such cases, I tried even harder to include a sufficient number of players born in the year prior and/or after the deficient year.

Otherwise, I think the age effect should be minimal, since each pair of seasons are examined separately.

Quote:

Originally Posted by overpass

I also wonder how much your subjective choices affected the results. Ideally I would rather use an objective metric like estimated ice time (post-expansion only) to choose whether to include seasons or not. I realize that could be more difficult, depending on the data you have available. But generally speaking I think looking at usage rather than results is a good way to avoid "cherry-picking" successful results.

Ice time data is not available until more recently in NHL history, so this did not seem a real option. However, I know there could be improvements to this study, which is one of the main reasons for presenting it. It's hoped someone can "take the puck and skate with it."

I don't believe I "cherry-picked" results, although I may be misunderstanding your use of this phrase. I used players' entire careers, but eliminated some seasons for some players, when it seemed clear that:

- the player was not yet receiving close to full ice time or perhaps not even close to his prime level (e.g. ppg's from start of career are .50 .60 .90 1.00 .95.... eliminated first two as not reliable)

- the player seemed to have fallen off dramatically due to injury and/or age and was far past his prime (e.g. ppg's at end of career are 1.10 1.05 .95 .55 .60 .40... eliminated last three as not reliable)

- the player appeared to have a major injury in the middle of his career (e.g. ppg's are ... 1.10 1.15 1.05 .60 1.00 1.10 1.05... eliminated .60 as unreliable)

When in doubt, I left the data in, since any outliers would be filtered out by only using the median half or third of players, so large % changes in PPG would not affect the results much.

Quote:

Originally Posted by overpass

It's extremely difficult to separate the aging curve from the change in league talent level in this type of study. I don't think your study is flawed so much as I doubt whether one can ever put a lot of confidence in the results of a study that chains year-to-year scoring changes over decades, due to the difficulty of removing the aging curve. It might be worth running some numbers to test whether NHL players actually do have their offensive and defensive peaks at different ages on average. Something like an aging curve for PP time vs SH time, or for points vs qualcomp + zone start. While I haven't run the numbers on this, I believe goal scoring tends to peak earlier than playmaking among NHL players, so I think it's very possible that defensive contribution peaks at a different age as well.

I think studies of production by age would be worthwhile and I believe some have been done. However, I don't see this issue as severely hampering this study. First, trying to somewhat balance the fixed group in terms of birth year was part of my methodology. Of course, talent is not evenly distributed, so this was not mandatory, but definitely a consideration across the entire period of study. Second, seasons that appeared more due to age/injury than talent/environment were discarded.

Perhaps I was not rigorous enough in balancing the age component, but believe me that this issue was given heavy consideration while doing this study.

Last edited by Czech Your Math: 04-27-2012 at 10:41 AM.

I think this work and others like it are very important for fans with an interest in using math to study hockey. I am very happy that the OP outlined his methods.

But we still need more context to truly capture this. I feel we are in a Ptolemaic age, just using numbers and a flawed belief (that past seasons must be adjusted downward). What would these calculations look like if the belief were that the '90s were a golden age and everything revolved around them? Of course it would be unacceptable to most fans of the day to see Ovie and Crosby adjusted downward.

What's missing is true mathematical modelling, like what's done for the weather. Maybe we need to take a step similar to the Drake equation: try to state or isolate what goes into a player's season and whether each factor is directly or inversely proportional.

I think something similar to a Drake equation and then people like the OP filling in the details with their analysis.

Until then this type of data manipulation just serves the fan base that thinks the heroes of today are better than the heroes of yore. "Giroux is better than Gretzky" type arguments. This stuff needs an asterisk and a safe place to keep it until the modelling reflects that the Earth goes around the sun. Talent, not numbers.

We do not know which era was the most talented, which era should be the center that all other numbers revolve around. Today it's just a moving target: the belief that talent is more prevalent per player than in any other era except next year's. Eventually Gretzky reduces to a 30-goal season.

Maybe we just need a talent quotient. A numerical estimation based simply on how much better a player was than his peers. A player or profile that is 100. Maybe we need a set of numbers according to different talents.

Actually, the '90s was when I watched the most hockey, and if I have a bias, it is probably more towards that era. The results are actually not favorable to the '90s, at least in comparison to the '80s. I didn't know what the results of this study would be, which was part of the fun of completing it.

Also, I don't see how anyone can deny that there is ever-increasing talent per player over longer periods of time. However, this was a study of top tier players, not just the average.

I actually thought there may be an even fairer way to do this, but I couldn't get it to work properly in Excel. This also relates to Overpass's concerns about age influencing the results.

If you use multi-variable linear regression and use independent variables such as player, age, season, etc., with points being the dependent variable, it seems to me like the results might be even more reliable, although I'm not certain of this.
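As a sketch of that regression idea (entirely hypothetical data, not the study's actual inputs), one could fit points per game on player dummies, season dummies, and an age term via least squares. One wrinkle worth noting: with player and season dummies both in the model, a linear age term is perfectly collinear (age = birth year + season), so the age effect below is modeled as a quadratic around an assumed peak of 27:

```python
import numpy as np

# Hypothetical toy data (my own construction, not the study's inputs):
# (player_id, season_index, age, points_per_game).
rows = [
    (0, 0, 24, 1.00), (0, 1, 25, 1.10), (0, 2, 26, 1.05),
    (1, 0, 27, 0.90), (1, 1, 28, 0.95), (1, 2, 29, 0.85),
    (2, 0, 21, 0.70), (2, 1, 22, 0.80), (2, 2, 23, 0.95),
]
n_players, n_seasons = 3, 3

X, y = [], []
for pid, season, age, ppg in rows:
    row = [1.0]                                                        # intercept
    row += [1.0 if pid == p else 0.0 for p in range(1, n_players)]     # player dummies
    row += [1.0 if season == s else 0.0 for s in range(1, n_seasons)]  # season dummies
    row.append(float((age - 27) ** 2))                                 # quadratic aging term
    X.append(row)
    y.append(ppg)

coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
# Season effects, relative to season 0 (the dropped baseline dummy).
season_effects = coef[n_players:n_players + n_seasons - 1]
print("season effects vs. season 0:", np.round(season_effects, 3))
```

The coefficients on the season dummies (relative to the dropped baseline season) would then play the role of the season difficulty adjustments, estimated jointly rather than chained pairwise.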

Last edited by Czech Your Math: 04-27-2012 at 11:04 AM.

It seems that it becomes tougher for top tier players to score adjusted points when there is a compression of talent in the league, particularly toward the top.

It's easier immediately after WWII, then becomes a bit tougher as the depleted talent is replaced or returns. As population increases (and perhaps hockey becomes more popular?), it becomes increasingly difficult in the mid-'60s. Then expansion dilutes talent (esp. due to lack of parity) and it becomes much easier. The WHA siphons off talent and, along with continued expansion, makes the '70s a much easier era in this regard. The WHA merger and the lack of expansion for many years make the '80s a very tough era for adjusted points. Then the re-emergence of expansion makes it not quite as tough, but this effect is mitigated by the addition of top tier talent from overseas during the '90s. Since there hasn't been any expansion for many years, it's become tougher again.

I noted before how it became easier when there was a dramatic increase in PP opportunities. The exception seems to be 2006. One reason for this may be the lockout. Many players near the end of their careers may have chosen retirement at this time (Messier, Hull, Francis type players), while younger players were deprived of a season which might have helped them develop further. This may have negated the expected increase in adjusted PPG for the first line players.

Quote:

Originally Posted by Czech Your Math

I don't believe I "cherry-picked" results, although I may be misunderstanding your use of this phrase. I used players' entire careers, but eliminated some seasons for some players, when it seemed clear that:

- the player was not yet receiving close to full ice time or perhaps not even close to his prime level (e.g. ppg's from start of career are .50 .60 .90 1.00 .95.... eliminated first two as not reliable)

- the player seemed to have fallen off dramatically due to injury and/or age and was far past his prime (e.g. ppg's at end of career are 1.10 1.05 .95 .55 .60 .40... eliminated last three as not reliable)

- the player appeared to have a major injury in the middle of his career (e.g. ppg's are ... 1.10 1.15 1.05 .60 1.00 1.10 1.05... eliminated .60 as unreliable)

When in doubt, I left the data in, since any outliers would be filtered out by only using the median half or third of players, so large % changes in PPG would not affect the results much.

I guess my concern is that you eliminated player-seasons, but did so in a subjective way based on results (points per game). I'd rather do so based on factors other than results, such as ice time or reported injury, because if you remove seasons based on the results it is more likely that you will unintentionally introduce a bias in the seasons selected. But that might require a lot more work, so I could understand why you would choose not to do so.

Quote:

Originally Posted by Czech Your Math

I think studies of production by age would be worthwhile and I believe some have been done. However, I don't see this issue as severely hampering this study. First, trying to somewhat balance the fixed group in terms of birth year was part of my methodology. Of course, talent is not evenly distributed, so this was not mandatory, but definitely a consideration across the entire period of study. Second, seasons that appeared more due to age/injury than talent/environment were discarded.

Perhaps I was not rigorous enough in balancing the age component, but believe me that this issue was given heavy consideration while doing this study.

Maybe I didn't explain this well enough. I'm sure you have an appropriate selection of ages throughout your data. My concern here is that NHL players may age in such a way that they peak offensively before they peak as overall players, so they may have more seasons in their careers where their points-per-game declines instead of increases.

For each age, I looked up the number of player-seasons by forwards from 1998-99 through 2011-12 with 1400+ minutes played. Then I repeated this using 4.5+ Offensive Point Shares as the benchmark. Here are the results.

Age | 1400+ TOI | 4.5+ OffPS
19  |    12     |      8
20  |    15     |     21
21  |    24     |     31
22  |    31     |     40
23  |    54     |     63
24  |    62     |     57
25  |    83     |     74
26  |    80     |     74
27  |    92     |     81
28  |    89     |     74
29  |    85     |     71
30  |    82     |     64
31  |    68     |     55
32  |    55     |     48
33  |    50     |     39
34  |    36     |     32
35  |    20     |     22
36  |    20     |     17
37  |    11     |      9
38  |     9     |      7

The aging curves are similar, but offensive production trends ahead of ice time at ages 20-24, and ice time trends ahead of offensive production starting in the late 20s. So either coaches should be giving young players more playing time (probably what a lot of people on this site would say), or time on ice is a good proxy for value and offensive production peaks earlier in a player's career than overall value.
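The crossover overpass describes can be made explicit by taking, at each age, the ratio of qualifying offensive seasons (4.5+ OffPS) to qualifying ice-time seasons (1400+ TOI), using the counts from the table above (a quick sketch of mine, not overpass's own calculation):

```python
# Counts by age from the table above: seasons with 1400+ minutes of ice time
# vs. seasons with 4.5+ Offensive Point Shares (forwards, 1998-99 to 2011-12).
toi = {19: 12, 20: 15, 21: 24, 22: 31, 23: 54, 24: 62, 25: 83, 26: 80,
       27: 92, 28: 89, 29: 85, 30: 82, 31: 68, 32: 55, 33: 50, 34: 36,
       35: 20, 36: 20, 37: 11, 38: 9}
offps = {19: 8, 20: 21, 21: 31, 22: 40, 23: 63, 24: 57, 25: 74, 26: 74,
         27: 81, 28: 74, 29: 71, 30: 64, 31: 55, 32: 48, 33: 39, 34: 32,
         35: 22, 36: 17, 37: 9, 38: 7}

for age in sorted(toi):
    ratio = offps[age] / toi[age]
    marker = "offense leads" if ratio > 1 else "ice time leads"
    print(f"age {age}: OffPS/TOI = {ratio:.2f}  ({marker})")
```

The ratio sits above 1 at ages 20-23 and falls below 1 from age 24 onward, which is the pattern described above.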

Anyway, it strikes me that this may not be the main cause for bias. When looking at the long-term changes, the net effect of each player is simply the difference between his first season and his final season, correct? So it is actually very important which seasons you choose to start and end with for each player when looking at the long term trend, although not so much for individual seasons. Maybe you are satisfied that you have made good choices for each player about which seasons to select, but it's hard to evaluate the result when it's entirely built on subjective choices.

Quote:

Originally Posted by overpass

I guess my concern is that you eliminated player-seasons, but did so in a subjective way based on results (points per game). I'd rather do so based on factors other than results, such as ice time or reported injury, because if you remove seasons based on the results it is more likely that you will unintentionally introduce a bias in the seasons selected. But that might require a lot more work, so I could understand why you would choose not to do so.

Your concern is valid, but ice time data was not consistently available over the majority of the period studied, while injury reports are also subjective to some degree. How bad was the injury? Which sources are valid? etc.

I don't think this significantly harmed the results, although it would always be better to have an objective standard.

Quote:

Originally Posted by overpass

Maybe I didn't explain this well enough. I'm sure you have an appropriate selection of ages throughout your data. My concern here is that NHL players may age in such a way that they peak offensively before they peak as overall players, so they may have more seasons in their careers where their points-per-game declines instead of increases.

The aging curves are similar, but offensive production trends ahead of ice time at ages 20-24, and ice time trends ahead of offensive production starting in the late 20s. So either coaches should be giving young players more playing time (probably what a lot of people on this site would say), or time on ice is a good proxy for value and offensive production peaks earlier in a player's career than overall value.

Anyway, it strikes me that this may not be the main cause for bias. When looking at the long-term changes, the net effect of each player is simply the difference between his first season and his final season, correct? So it is actually very important which seasons you choose to start and end with for each player when looking at the long term trend, although not so much for individual seasons. Maybe you are satisfied that you have made good choices for each player about which seasons to select, but it's hard to evaluate the result when it's entirely built on subjective choices.

You touch on something important about production vs. age. It's quite possible that the majority of players peak before the midpoint of their careers, and therefore there are more year-to-year decreases in PPG than increases (though the decreases are generally smaller and the increases correspondingly larger). I think this is one reason that it's so important to try to balance the fixed group of players by birth year, to avoid an unbalanced age demographic influencing the results purely due to the age factor.

The bolded part about the net effect being the difference between the first and last seasons for each player included in the study is not correct. All that matters is the % change from one season to the next. For almost all players, not all of their included seasons directly influence the calculations; that is why I used metrics such as the middle third and middle half in terms of % change. If a player's season N is excluded, it only affects the results for N-1 vs. N and N vs. N+1, and only if the player played the seasons before and after season N. If the change for that player from season N-1 to N or from N to N+1 was atypical compared to other players, it would have been filtered out as an outlier by taking the median third/half of players. If it wasn't atypical, then the results wouldn't have changed much with its inclusion.

Perhaps a very simplified example will illustrate this:

Bob 1.00, 1.10, .99, .99, 1.19
Dave 1.00, .90, .99, 1.18, 1.18
Joe 1.00, .80, .80, 1.00, 1.10
Steve 1.00, .90, .81, .81, .89
Tom 1.00, 1.00, 1.10, .99, .99

If you look at their changes from first to last:

Bob +19%
Dave +18%
Joe +10%
Steve -11%
Tom -1%

So an average change from year1 to year5 is +7%

That is not how the results were calculated however. An analog to the methodology is to take the median 60% in this case (I used median 1/3 and median 1/2), IOW the middle 3 players in terms of % change in PPG.

Their season-to-season % PPG changes were:

Bob +10, -10, 0, +20
Dave -10, +10, +20, 0
Joe -20, 0, +25, +10
Steve -10, -10, 0, +10
Tom 0, +10, -10, 0

So for seasons 1-2 the % changes are: +10, 0, -10, -10, -20
The middle 3 are 0, -10, -10. An average of these is -6.67%

For seasons 2-3 the % changes are: +10, +10, 0, -10, -10
The middle 3 are +10, 0, -10 for an average of 0.

For seasons 3-4 the % changes are: +25, +20, 0, 0, -10
The middle 3 are +20, 0, 0 for an average of +6.67%

For seasons 4-5 the % changes are: +20, +10, +10, 0, 0
The middle 3 are +10, +10, 0 for an average of 6.67%

So season 1 is our baseline of 100
Season 2 is (1-.0667)*100 = 93.33
Season 3 is (1-0)*93.33 = 93.33
Season 4 is (1+.0667)*93.33 = 99.56
Season 5 is (1+.0667)*99.56 = 106.19

So while the +7% average change from season 1 to 5 is highly correlated with the result, it is not the same as the +6.2% figure that was calculated using a method analogous to mine.
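The worked example above can be run directly. This sketch uses the raw PPG ratios rather than the rounded whole-percent changes quoted in the text, so the final index comes out slightly different (about 105.9), but the mechanics are the same: for each adjacent pair of seasons, sort the five per-player % changes, keep the middle three, average them, and chain the averages into an index with season 1 = 100:

```python
# Toy data from the example: five players' PPG over five seasons.
players = {
    "Bob":   [1.00, 1.10, 0.99, 0.99, 1.19],
    "Dave":  [1.00, 0.90, 0.99, 1.18, 1.18],
    "Joe":   [1.00, 0.80, 0.80, 1.00, 1.10],
    "Steve": [1.00, 0.90, 0.81, 0.81, 0.89],
    "Tom":   [1.00, 1.00, 1.10, 0.99, 0.99],
}

n_seasons = 5
index = [100.0]
for s in range(n_seasons - 1):
    # Per-player % change from season s to season s+1, sorted low to high.
    changes = sorted(ppg[s + 1] / ppg[s] - 1 for ppg in players.values())
    middle = changes[1:-1]              # keep the median 3 of 5 players
    avg = sum(middle) / len(middle)
    index.append(index[-1] * (1 + avg))

print([round(v, 2) for v in index])
```

With the study's real data the trim would keep the middle third or half of the 30-75 players in each season pair, but the chaining step is identical.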

When using the middle third or half in terms of % change, among the top 30-75 players which played in both seasons, most or all of the outliers are removed, but there is still a substantial sub-group left in the middle, whose change in production is used as the best approximation for that pair of seasons. There won't be such a high correlation between the result and a simple average of all the participating players' % changes from season X to season X+?.

I'm certain improvements could be made in different parts of the methodology, such as the means of selecting which players and which of their seasons to include. Realize also that any assumptions are going to influence the results. If you use a standard such as "any player who finished in the top X in at least Y seasons", then you'll probably end up with more weak players from eras where the competition was less intense, while excluding some stronger players who didn't fit the criteria due to stiffer competition. Basically, it's a lot more difficult than it sounds to come up with strictly objective criteria free of bias.

Last edited by Czech Your Math: 04-27-2012 at 01:34 PM.