08-09-2012, 02:29 PM
#30
Registered User

Join Date: Jan 2006
Location: bohemia
Country:
Posts: 4,845
vCash: 500
Quote:
 Originally Posted by mindmasher IPP is individual point percentage, is a calculation of the number of times an individual player gets a point (either a goal or an assist) compared to the number of total goals scored while he's on the ice. Very high values of this measure tend to regress back to league averages.
Thanks for the definition. An extreme value for just about any metric would be expected to regress to the mean.

Quote:
 Originally Posted by mindmasher Arguing over the usage of 'advanced' is useless. It is pretty simple, but it is certainly advanced enough for the vast majority of fans that I don't see issue with the use of the word in regards to PDO. Either way, who cares, how we categorize it complexity wise means nothing to the overall debate of usability.
Agreed, but when people start giving metrics fancy names and calling them advanced, some start believing they are more useful than they actually are.

Quote:
 Originally Posted by mindmasher PDO of course is on-ice team shooting percentage and team save percentage, and this is much different than individual shooting percentage, which you keep on bringing up for some reason.
Somehow I missed that it was on-ice team S%; sorry for the mistake. However, team shooting percentages don't necessarily regress to the mean, especially for certain lines. Do you think the team S% while Gretzky, Kurri, and Coffey were on the ice regressed to any sort of mean (league average)?
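For anyone following along, here's a minimal sketch of how on-ice PDO is usually computed (the counts below are hypothetical, not real Oilers data):

```python
# Minimal sketch of on-ice PDO (hypothetical numbers, not real data).
# PDO = on-ice team shooting % + on-ice team save %, typically scaled
# so that league average sits near 1000.

def pdo(goals_for, shots_for, goals_against, shots_against):
    """On-ice PDO from 5v5 counts while a given player or line is on the ice."""
    shooting_pct = goals_for / shots_for
    save_pct = 1 - goals_against / shots_against
    return round(1000 * (shooting_pct + save_pct))

# Hypothetical elite line: shoots 14% with .920 goaltending behind it.
print(pdo(goals_for=70, shots_for=500, goals_against=40, shots_against=500))  # 1060
```

The whole debate is over whether a number like that 1060 must drift back toward 1000, or whether a truly elite line can sustain it.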

Quote:
 Originally Posted by mindmasher Sigh. You take a team measurement and then use it to logically conclude that Gretzky is lucky? Do you see the logical fallacy there?
The Oilers, Gretzky, it doesn't really matter: they weren't lucky, they were good. Where's the regression to the mean? Average teams are going to regress to the mean; consistently good and bad ones aren't. How do we know which is which simply by looking at this metric?

Quote:
 Originally Posted by mindmasher The teams you picked have some of the best shot differentials over the past 5 seasons - a much better indicator of success - and also feature two of the most extreme examples of shooting percentage and save percentage outliers respectively: the Sedins and Tim Thomas. Why don't you take a look at all 30 teams at once, like a true statistical analysis would. Maybe you would be surprised to find that on average teams regress to 1000.
I did look at all 30 teams over the last 4 years in terms of overall SV% + S%, but I don't even know where to find 5v5 team S% and SV%. I've already given several examples of teams that stayed above or below the mean in each of multiple seasons. How does that square with what is supposed to happen? So the handful of consistently good and the handful of consistently bad teams are "outliers", but the fact that a bunch of near-average teams tend to have near-average data over larger samples is some sort of revelation?

The graph in the OP's link shows it regressing to the mean and then regressing back away from the mean. Is that what one would expect? Why do I need to disprove what isn't even proven by the metric's proponents?

I know conceptually that this metric is highly flawed. If others find it to be useful, that's fine.

However, if it's truly driven by randomness, wouldn't one expect the results to be unpredictable from one season to another for all teams, not just the average ones?
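To make that point concrete, here's a quick Monte Carlo sketch of the luck-only null hypothesis: every shot, for and against, goes in at the same fixed league rate (shot counts and rate are my assumptions, purely illustrative). Under that model, one season's PDO tells you essentially nothing about the next season's, for every team:

```python
import random

random.seed(42)

def random_pdo(shots=1800, shooting_rate=0.08):
    """One season of PDO under pure luck: every shot for and against
    goes in with the same fixed probability."""
    gf = sum(random.random() < shooting_rate for _ in range(shots))
    ga = sum(random.random() < shooting_rate for _ in range(shots))
    return 1000 * (gf / shots + 1 - ga / shots)

def corr(xs, ys):
    """Pearson correlation, from scratch to keep this self-contained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Two simulated "seasons" for a 30-team league: under the luck-only
# model, the season-to-season correlation hovers near zero.
season1 = [random_pdo() for _ in range(30)]
season2 = [random_pdo() for _ in range(30)]
print(round(corr(season1, season2), 3))
```

If real teams (not just the extreme ones) show persistent year-over-year PDO, that's evidence the metric isn't capturing pure luck, which is exactly the objection here.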

I see I missed an example on the linked site: the 2011 Sharks. It says the Sharks were unlucky through Jan. 13 and posts graphs to show how that was so. Interesting that Niittymaki (.896 overall SV%) played in 22 of 45 games through Jan. 13 and 2 of 37 after that, while Niemi (.920 overall SV%) took over in full from there. Whether Niittymaki was injured or the Sharks simply decided he was "unlucky", apparently this has no bearing on the matter? Good grief, if I can refute the best examples without even trying hard, how is one supposed to take this seriously?