Quote:
Originally Posted by Iain Fyffe
This is a joke, right? It has to be. Please tell me it is.
Look at the red line in my graph. That's the actual number of goals scored. And yet it also has this "bell curve" that you see. Notice that in this case, the adjusted stats actually reduce the apparent bell curve, which is, of course, not a bell curve at all, but a figment created by a relatively small sample size.
Please, please, please explain how the adjusted scoring method used by HR "adds a bell curve to a power curve". Demonstrate how it does that. You keep saying it, without showing how. Tell us. Stop asserting and start proving.
If you see a bell curve, it's apparently because you want to see a bell curve. You accuse my analysis of being biased, but your bias is showing in great big neon letters.
If you drop a very large number of observations at the bottom and a few at the top, then yes, you probably could, as it happens, for this set of data. But of course, the same thing would happen if you did it to the raw data. As such, it has nothing to do with the adjusted scoring method, and would merely be a reflection of intentionally misleading data manipulation.
If you define "outlier" broadly enough, you can turn any power curve into a bell curve. But that's disingenuous. It's not what "outlier" means. If you think that you can remove a few outliers and transform my graph into a normal distribution, I fear you don't know what outlier means. If you consider the mode of a population (the most frequent observation) to be an outlier, you're going to get wonky results.
A definition of an outlier provided by a statistician is "An outlying observation... is one that appears to deviate markedly from other members of the sample in which it occurs." (Emphasis added). It does not mean a "tail value". That is, you can't just remove the zero values in my graph as outliers, because they clearly do not deviate markedly from other members of the sample; they are in fact the most common member of the sample. The bottom and top values are not automatically outliers.
As such, your assertion has no merit.
I think we're done here.
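(Aside: the quoted definition can be made concrete with a quick sketch. The goal totals below are made up to mimic a power-curve shape, not taken from real data, and I'm using the common 1.5×IQR fence rule as one way of operationalizing "deviates markedly from other members of the sample".)

```python
# A minimal sketch of the quoted outlier definition, using the common
# 1.5*IQR rule.  Hypothetical goal totals shaped like a power curve:
# many players at zero, a long thin tail of high scorers.
import statistics

def iqr_outliers(sample):
    """Return values lying more than 1.5 IQRs outside the quartiles."""
    q1, _, q3 = statistics.quantiles(sample, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in sample if x < lo or x > hi]

goals = [0]*40 + [1]*20 + [2]*12 + [5]*6 + [10]*3 + [20, 35, 60]

print(iqr_outliers(goals))  # [10, 10, 10, 20, 35, 60]
```

Under this rule the zeros can never be flagged: they are the mode, so they sit inside the fences by construction, while only the extreme high totals fall out.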

Look closely at your graph. It follows the raw data at the extremes, of course. First it clearly decreases compared to the raw data, meaning your data takes on smaller values. Next it clearly increases compared to the raw data, meaning your data takes on bigger values. Then it decreases compared to the raw data again, meaning smaller values. A bell curve: lower values, then higher values, then lower values.
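That lower-higher-lower reading can be turned into a checkable computation: subtract the raw bin counts from the adjusted bin counts and count the sign flips in the difference. The counts below are invented for illustration, not read off the actual graph.

```python
# Turn the "lower, then higher, then lower" reading of two plotted
# curves into arithmetic: diff the bin counts and count sign flips.
# A single below-above-below pattern shows up as exactly 2 flips.

def sign_changes(diffs):
    """Count sign flips in a sequence, ignoring zero differences."""
    signs = [1 if d > 0 else -1 for d in diffs if d != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

raw      = [50, 30, 18, 10, 6, 3, 2, 1]   # hypothetical goal-bin counts
adjusted = [46, 27, 20, 13, 7, 2, 1, 1]   # hypothetical adjusted counts

diffs = [a - r for a, r in zip(adjusted, raw)]
print(diffs)                 # [-4, -3, 2, 3, 1, -1, -1, 0]
print(sign_changes(diffs))   # 2
```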
It's fricken obvious, man. This is the kind of behaviour one would expect when applying a function based on a bell curve (normalization) to another curve. Your equation is based on average scoring, as if the distribution of goals amongst the players were uniform. I've already shown that it isn't. I think most of us, including you, knew that already without math.
QED. Prove me wrong. Observing the behaviour of graphs is math.
I know the verbal argument. Use math.
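Here's one piece of math to start from, under my own simplifying assumption (not necessarily HR's exact formula) that the era adjustment multiplies every player's total by a single per-season factor. A constant positive factor rescales the axis but can't change the shape of the distribution, because skewness is scale-invariant; so any real change of shape would have to come from the parts of the formula that go beyond uniform scaling.

```python
# Sketch: a single multiplicative factor cannot reshape a distribution.
# The sample below is synthetic (Pareto draws), standing in for a
# power-curve goal distribution; 1.23 is an invented adjustment factor.
import random
import statistics

random.seed(1)
# Power-curve-like totals: most players near zero, a long right tail.
raw = [int(random.paretovariate(1.5)) - 1 for _ in range(1000)]

factor = 1.23                       # hypothetical era-adjustment factor
adjusted = [factor * g for g in raw]

def skewness(xs):
    """Population skewness: third standardized moment."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# Skewness is unchanged by a positive scale factor, so the power-curve
# shape survives the adjustment intact.
print(round(skewness(raw), 6) == round(skewness(adjusted), 6))
```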
Edit: A single season could show any behaviour.