11-15-2012, 08:41 AM   #390
Dalton
Join Date: Aug 2009 | Location: Ho Chi Minh City, Vietnam | Posts: 2,096
Quote:
Originally Posted by Iain Fyffe
Take your own advice. The study to which you continue to refer makes two essential points:

1. That outliers should not automatically be dismissed when analyzing human performance.

2. That human performance generally does not follow a bell curve, but instead a power law curve.

Neither of these points is relevant to adjusted scoring, because adjusted scoring neither ignores outliers nor assumes a bell curve. But of course, I've said this over and over and over again. Perhaps this time you'll take note of it?

If you want to show that this study is relevant to the discussion, you have to demonstrate that adjusted scoring ignores outliers (it doesn't, players are adjusted based on their actual stats regardless of whether they might appear to be outliers or not) and that it assumes a bell curve (it doesn't, since the same essentially flat multiplier is applied to all players in a season). Until you demonstrate these two things (which you cannot, since they are not true), this study is utterly irrelevant to this discussion, and you should stop trying to use it to support your position.
It's not about a literal bell curve. It's about a thinking pattern that assumes one. When you average seasons, you are assuming normality and ignoring the impact of outliers. Why do you think averaging seasons is exempt?

Your argument suggests that you haven't read the study in question, didn't understand it, or read it selectively. Here are the answers you seek (in part):

"Regarding performance measurement and management, the current zeitgeist is that the median worker should be at the mean level of performance and thus should be placed in the middle of the performance appraisal instrument. If most of those rated are in the lowest category, then the rater, measurement instrument, or both are seen as biased (i.e., affected by severity bias; Cascio & Aguinis, 2011 chapter 5). Performance appraisal instruments that place most employees in the lowest category are seen as psychometrically unsound. These basic tenets have spawned decades of research related to performance appraisal that might “improve” the measurement of performance because such measurement would result in normally distributed scores given that a deviation from a normal distribution is supposedly indicative of rater bias (cf. Landy & Farr, 1980; Smither & London, 2009a). Our results suggest that the distribution of individual performance is such that most performers are in the lowest category. Based on Study 1, we discovered that nearly two thirds (65.8%) of researchers fall below the mean number of publications. Based on the Emmy-nominated entertainers in Study 2, 83.3% fall below the mean in terms of number of nominations. Based on Study 3, for U.S. representatives, 67.9% fall below the mean in terms of times elected. Based on Study 4, for NBA players, 71.1% are below the mean in terms of points scored. Based on Study 5, for MLB players, 66.3% of performers are below the mean in terms of career errors. Moving from a Gaussian to a Paretian perspective, future research regarding performance measurement would benefit from the development of measurement instruments that, contrary to past efforts, allow for the identification of those top performers who account for the majority of results. Moreover, such improved measurement instruments should not focus on distinguishing between slight performance differences of non-elite workers. 
Instead, more effort should be placed on creating performance measurement instruments that are able to identify the small cohort of top performers."
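The quoted finding, that roughly two-thirds of performers sit below the mean, is easy to reproduce. Here is a minimal sketch in Python (my own toy simulation, not the study's data), comparing a normal distribution with a power-law (Pareto) one:

```python
import random
import statistics

random.seed(42)
N = 100_000

# Gaussian "performance": roughly half the population sits below the mean.
normal = [random.gauss(100, 15) for _ in range(N)]
mean_n = statistics.fmean(normal)
frac_normal = sum(x < mean_n for x in normal) / N

# Power-law (Pareto, shape alpha=3) "performance": the elite tail drags
# the mean upward, so far more than half fall below it (~70% in theory).
pareto = [random.paretovariate(3) for _ in range(N)]
mean_p = statistics.fmean(pareto)
frac_pareto = sum(x < mean_p for x in pareto) / N

print(f"below the mean, normal: {frac_normal:.2f}")
print(f"below the mean, pareto: {frac_pareto:.2f}")
```

The shape parameter and sample size are arbitrary choices on my part; the point is only that "most performers below the mean" is the signature of a power law, not of rater bias.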

You would also benefit from reading this passage, which explicitly addresses analyzing whole industries with respect to individual performance (i.e., averaging seasons):

"There are important differences between Gaussian and Paretian distributions. First, Gaussian distributions underpredict the likelihood of extreme events. For instance, when stock market performance is predicted using the normal curve, a single-day 10% drop in the financial markets should occur once every 500 years (Buchanan, 2004). In reality, it occurs about once every 5 years (Mandelbrot, Hudson, & Grunwald, 2005). Second, Gaussian distributions assume that the mean and standard deviation, so central to tests of statistical significance and computation of effect sizes, are stable. However, if the underlying distribution is Paretian instead of normal, means and standard deviations are not stable and Gaussian-based point estimates as well as confidence intervals are biased (Andriani & McKelvey, 2009). Third, a key difference between normal and Paretian distributions is scale invariance. In OBHRM, scale invariance usually refers to the extent to which a measurement instrument generalizes across different cultures or populations. A less common operationalization of the concept of scale invariance refers to isomorphism in the shape of score distributions regardless of whether one is examining an individual, a small work group, a department, an organization, or all organizations (Fiol, O’Connor, & Aguinis, 2001). Scale invariance also refers to the distribution remaining constant whether one is looking at the whole distribution or only the top performers. For example, the shape of the wealth distribution is the same whether examining the entire population or just the top 10% of wealthy individuals (Gabaix, 1999). Related to the issue of scale invariance, Gabaix, Gopikrishnan, Plerou, and Stanley (2003) investigated financial market fluctuations across multiple time points and markets and found that data conformed to a power law distribution. The same distribution shape was found in both United States (U.S.) and French markets, and the power law correctly predicted both the crashes of 1929 and 1987.

Germane to OBHRM in particular is that if performance operates under power laws, then the distribution should be the same regardless of the level of analysis. That is, the distribution of individual performance should closely mimic the distribution of firm performance. Researchers who study performance at the firm level of analysis do not necessarily assume that the underlying distribution is normal (e.g., Stanley et al., 1995). However, as noted earlier, researchers who study performance at the individual level of analysis do follow the norm of normality in their theoretical development, research design, and choices regarding data analysis. These conflicting views, which may be indicative of a micro–macro divide in OBHRM and related fields (e.g., Aguinis, Boyd, Pierce, & Short, 2011), could be reconciled if individual performance is found to also follow a power law distribution, as it is the case for firm performance (Bonardi, 2004; Powell, 2003; …)."


It's like a fractal. Averaging seasons is not a free pass or a loophole. It's the same error, and this error is fundamental to AS. If you don't average seasons, you don't have AS as it is now constructed.
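The fractal point, scale invariance, can also be checked directly. In this toy Python sketch (again my own simulated data, not the study's), the elite top decile of a power-law sample has the same shape as the whole sample, so the same share sits below the mean at either scale:

```python
import random

random.seed(7)
N = 100_000

def frac_below_mean(data):
    m = sum(data) / len(data)
    return sum(x < m for x in data) / len(data)

# Power-law sample, shape alpha=3 (arbitrary toy parameters).
xs = sorted(random.paretovariate(3) for _ in range(N))
top_decile = xs[int(0.9 * N):]   # keep only the elite 10%

# Scale invariance: the elite slice is itself power-law with the same
# shape, so the same ~70% sit below the (new, higher) mean within it.
print(frac_below_mean(xs))
print(frac_below_mean(top_decile))
```

A Gaussian sample would not behave this way: its top decile is a thin, bounded slice with a completely different shape from the whole.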

There is a section on recommended mathematical techniques. Linear regression is not among them; Bayesian methods are suggested as the most promising.

This doesn't even take into account the impact of outliers on the rest, as I explained. Any student of the history of anything can clearly see the impact of outliers on their peers. Without calculus there is no explosion of differential equations leading to the questions that led to the study of complex numbers. Things build and grow on the past. You can't remove Gretzky or Kepler from history, and you can't erase the impact of their achievements for the purpose of answering how they'd fare today. You can only look at what they did and marvel at what they might do today.

What about eras without extreme outliers? How do you compare them to seasons or eras that have them? AS simply averages the extreme outliers downwards. We end up comparing Gretzky, Orr and Howe to whoever happened to be best at the job at the moment. That's an obvious flaw that doesn't need math to see. I raised this issue, yet it was ignored. The study shows that this thinking, and this methodology, is flawed.
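To make the "averaged downwards" point concrete, here is a toy Python sketch. The numbers and the flat league-average multiplier are my own illustrative simplification, not any published adjusted-scoring formula:

```python
# Simplified flat-multiplier adjustment (hypothetical):
#   adjusted points = raw points * (baseline average / season average)

non_elites = [60, 55, 50, 45, 40]     # the same supporting cast both seasons
season_a = non_elites                 # a season with no outlier
season_b = non_elites + [200]         # the same season plus one extreme outlier

baseline = 50.0                       # hypothetical target scoring level

def adjust(season):
    mult = baseline / (sum(season) / len(season))
    return [round(p * mult, 1) for p in season]

print(adjust(season_a))  # multiplier is 1.0, totals unchanged
print(adjust(season_b))  # the outlier inflates the season mean, so every
                         # player, elite and non-elite alike, is scaled down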

"In addition to the study of leadership, our results also affect research on work teams (e.g., group empowerment, shared efficacy, team diversity). Once again, our current understanding of the team and how groups influence performance is grounded in an assumption of normality. The common belief is that teamwork improves performance through increased creativity, synergies, and a variety of other processes (Mathieu, Maynard, Rapp, & Gilson, 2008). If performance follows a Paretian distribution, then these existing theories are insufficient because they fail to address how the presence of an elite worker influences group productivity. We may expect the group productivity to increase in the presence of an elite worker, but is the increase in group output negated by the loss of individual output of the elite worker being slowed by non-elites? It may also be that elites only develop in interactive, dynamic environments, and the isolation of elite workers or grouping multiple elites together could hamper their abnormal productivity. Once again, the finding of a Paretian distribution of performance requires new theory and research to address the elite nested within the group. Specifically, human performance research should adopt a new view regarding what human performance looks like at the tails. Researchers should address the social networks of superstars within groups in terms of identifying how the superstar emerges, communicates with others, interacts with other groups, and what role non-elites play in the facilitating of overall performance."

As things stand, I have suggested that using fractions of the raw data does a better job than AS. Others may have better suggestions.
