07-13-2009, 01:05 PM
  #39
dubya
Registered User
 
Join Date: Sep 2005
Location: Edmonton
Posts: 931
vCash: 500
Quote:
Originally Posted by Giant Moo
Staples' blog post has an absolute doozy:

"I'm still going over Desjardins' post, but it would seem to me that the Time On Ice-based system of measuring quality of competition got it right"

This is classic bad reasoning. If you're really trying to be scientific, you don't find data to support the conclusion you want and disregard the other data that contradicts what you want. That should clue Staples in immediately that the calculation has a tremendous amount of noise in it. Sadly, it does not.

It really is a pseudo-science.
I don't think he's finding data to support what he wants...in fact, Staples doesn't put much stock in stats based on plus/minus. I think he's saying that the TOI system matches what his eyes have told him, whereas the other contradicts what he's seen. If you're building a numerical model, you need something to compare it against in order to assess its validity. In this case, you compare it against what you've observed...not perfect, since observation is inherently biased, but it's the best benchmark available.

I'd agree...the last two QoC measures seem way out of whack compared to what I observed. No way Strudwick and Cogs were matched against the Thorntons and Iginlas of the world. That was Souray and Horcoff more often than not. I think the TOI numbers do a good job, though I can only judge that against my own observations.
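For what it's worth, here's a minimal sketch of what a TOI-weighted QoC number looks like under the hood: average your opponents' ratings, weighted by how many minutes you actually spent against each of them. The function and data below are hypothetical (the thread doesn't spell out Desjardins' exact formula), and the "rating" here is assumed to be something like an opponent's share of his team's ice time.

```python
# Hypothetical sketch of a TOI-weighted quality-of-competition (QoC) metric.
# Assumption (not from this thread): each opponent has a single rating,
# e.g. his share of team ice time, and we know head-to-head minutes.

def qoc(head_to_head, ratings):
    """Average opponent rating, weighted by shared ice time.

    head_to_head: dict opponent -> minutes played against that opponent
    ratings: dict opponent -> rating (e.g. TOI share)
    """
    total = sum(head_to_head.values())
    if total == 0:
        return 0.0
    return sum(minutes * ratings[opp]
               for opp, minutes in head_to_head.items()) / total

# Made-up numbers: a player fed tough top-line minutes should score
# higher than a sheltered player facing mostly depth lines.
ratings = {"top_line_C": 0.35, "second_line_W": 0.28, "fourth_line_C": 0.15}

tough_minutes = {"top_line_C": 120.0, "second_line_W": 40.0, "fourth_line_C": 10.0}
sheltered = {"top_line_C": 10.0, "second_line_W": 40.0, "fourth_line_C": 120.0}

print(round(qoc(tough_minutes, ratings), 3))  # 0.322
print(round(qoc(sheltered, ratings), 3))      # 0.192
```

The point of the sketch is just that the weighting by shared ice time is what separates "who was on the ice against you at all" from "who you were actually matched against shift after shift".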

Also, of course this is pseudo-science. What would you expect? True experimental control of all independent variables? It's a sport, not a physics lab.
