09-05-2012, 07:47 AM
#113
Dalton
Quote:
Originally Posted by Czech Your Math
To me, it's basically the random chance of any 40-50 point scorer doing something like that. What sticks out as most unusual to me (aside from it being a relatively mediocre player) is that he scored points on all 8 of his team's goals and it was a close game until late. If it was an 11-3 blowout or something, it wouldn't be as surprising to me. Still, it's more like a mediocre pitcher throwing a no-hitter. Great single game feat, takes some talent for sure, but not an indication of true greatness on an all-time level or anything. I'd say the same about Sittler's 10 point game, even though he was a much better player.
Sittler also scored 5 goals in a playoff game and set a couple team records on an O6 team. Guy was pretty lucky eh?

To the OP's question: after reading and skimming this thread, I think the KISS rule needs to be applied. Desperately. I don't think we can quantify human behaviour, and I don't think we can safely run analytics on databases built from observers' judgement calls. But we can look at raw data. The question should be: can we accurately analyze the data we already have, before we ponder the value of collecting more?

I think there is too much over-analysis, and it just paralyzes the debate. Start simple: what question are you trying to answer?

Recently I've been considering how to compare players across eras. I came across a study in human resources showing that the top 20% or so of performers are responsible for about 80% of the output, and that within that elite group the same rule applies. This elite group influences the mean disproportionately. IOW, if the top 20% of year A outperform the top 20% of year B, then the mean of year A will be higher than the mean of year B. The same is true of the bottom 20%.

Comparing the raw means of two seasons answers nothing on its own. One must look at the top and bottom 20% to see what their effect is. A year in which the worst defensive teams were worse than usual would mean more goals scored by everyone, not just the top 20%. So the top 20% would not have scored a higher proportion of goals relative to the rest of the players than in other seasons, but the league as a whole would have scored more. Meanwhile the bottom 20% of goalies would have allowed a higher percentage of goals than the remaining goalies, while the other goalies would look comparable to other seasons.
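To make that concrete, here's a quick Python sketch with made-up numbers (not real NHL data): a league-wide inflation, like weak bottom-end goaltending helping every shooter alike, raises the mean without changing the top 20%'s share of the points.

```python
import random

random.seed(1)

# Made-up point totals for 200 skaters in a "normal" season.
base = [max(0.0, random.gauss(40, 20)) for _ in range(200)]

# A season where the worst goalies are worse than usual: bad
# goaltending helps every shooter alike, so inflate everyone by 10%.
inflated = [p * 1.10 for p in base]

def top_share(points, frac=0.20):
    """Fraction of all points scored by the top `frac` of players."""
    pts = sorted(points, reverse=True)
    k = int(len(pts) * frac)
    return sum(pts[:k]) / sum(pts)

mean_base = sum(base) / len(base)
mean_inflated = sum(inflated) / len(inflated)

# The mean jumps 10%, but the top 20%'s share is identical, so
# share-based comparisons survive this kind of league-wide change.
print(f"means: {mean_base:.1f} vs {mean_inflated:.1f}")
print(f"top-20% share: {top_share(base):.3f} vs {top_share(inflated):.3f}")
```

That's the whole point of using proportions instead of means: the share is immune to changes that lift everybody equally.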

The same argument could be true of scorers as well. This needs to be tested and I need data to do so.

My point here is that this idea came from human resources, not math. Math alone is not the answer here. There are many fields that study human performance which can be mined for ideas to analyze hockey. The math comes into play as the tool for the job, and in this case the necessary math is just plain old fractions.

I downloaded last season's data and found that this rule appeared to hold for points: the top twenty percent had indeed scored a disproportionate share of the points. It was pointed out to me that this was because of the distribution of scorers across the teams, but that doesn't affect its usefulness for comparing seasons. It could in fact support that poster's opinion if scoring is more spread out in one season or era than in another: in that case the top 20% wouldn't have scored a disproportionate share of the points compared to the rest of the league, as they did last season, or at least the proportions would differ in a way that supports the contention.
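For anyone who wants to run the same check, the fraction itself is trivial to compute. A sketch in Python, with a toy set of point totals (made up for illustration, not last season's actual numbers):

```python
def pareto_share(points, frac=0.20):
    """Share of total league points scored by the top `frac` of scorers."""
    pts = sorted(points, reverse=True)
    k = max(1, int(len(pts) * frac))
    return sum(pts[:k]) / sum(pts)

# Toy league of 20 skaters with made-up point totals:
toy = [109, 97, 84, 78, 66, 50, 44, 41, 38, 35,
       30, 27, 24, 20, 17, 14, 11, 8, 5, 2]

# An even spread would give the top 20% exactly 20% of the points;
# anything well above that is the "disproportionate" share in question.
print(f"top 20% scored {pareto_share(toy):.0%} of league points")  # → 46%
```

Swap in a real season's scoring table and the same one-liner tells you whether the rule held that year.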

Just for kicks I added a typical Gretzky season of more assists than the next guy's points and 55 goals, IIRC. There was a pretty clear lead for Gretzky among the top 20%. It would be interesting to use this to compare other top performers and perhaps see who scored the highest proportion of points or goals among their peers in given seasons, eras or careers. I think it's a useful tool that could contribute to many debates and even displace some of the dubious formulae that litter them.

In a small league the number of players in the top 20% would be smaller as well, which could produce some apparently bizarre results, such as a single player, or a very few, being responsible for all or most of the goals scored in that league. In that case (or maybe all cases) one should use an arbitrary baseline instead, such as the percentage of league scoring that a point-per-game pace represents. A baseline to compare seasons and performances. Every ruler needs a 0.
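A sketch of that baseline idea, with a function and league totals of my own invention purely to illustrate (real use would plug in actual schedule lengths and summed player points):

```python
def ppg_baseline(schedule_length, total_league_points):
    """Fraction of all league points that a point-per-game pace represents.

    A player at exactly a point per game records `schedule_length` points;
    dividing by the league's total points puts every season on one ruler.
    """
    return schedule_length / total_league_points

# Hypothetical seasons: an 80-game schedule where skaters combined for
# 40,000 points vs. a 70-game, lower-scoring league with 20,000 points.
print(f"{ppg_baseline(80, 40_000):.4f}")  # → 0.0020
print(f"{ppg_baseline(70, 20_000):.4f}")  # → 0.0035
```

A point-per-game pace is a bigger slice of a short, low-scoring league, which is exactly the kind of era difference this ruler is meant to expose.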

Without data it's just MHO. But OTOH: KISS. HR study. Raw data. Simple fractions. Higher-order math need not apply.

The measurement question is simply a matter of carefully defining events and counting them. I think we need to analyze our current data better first. That will lead to questions, which should lead to more or better data collection for analysis. Just applying math to madly collected data is not the best analysis.