11-12-2012, 12:53 PM   #5
barneyg
Quote:
Originally Posted by Czech Your Math
No, Xn actually appeared significant to me (Bn was almost 4x SEn). The least significant appeared to be Xp with Bp ~1.5x SEp. What's strange is that the individual correlations were:

Xn = 7%, Xp = 47%, Xe = 27%, and Xg = (-10%)

I thought Xn was coincidentally capturing a lot of the other variables, so I wanted to see what the coefficients looked like without Xn as one of the variables.
What are the correlations between those variables themselves, not just with Y? I'd assume Xp is strongly correlated with the others if it becomes the least significant variable once they are included. That said, the t-statistic is just the coefficient divided by its standard error, so Bp ~1.5x SEp means a t-stat of about 1.5. With 30 data points you generally need a t-stat above ~2 for significance, so Xp does look marginal.
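If it helps, here's roughly how I'd check that in Python. This is just a sketch with made-up numbers, since I don't have your data; the Xn/Xp/Xe/Xg names are placeholders.

Code:
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Made-up stand-ins for the four predictors and Y, not the real series.
rng = np.random.default_rng(0)
n = 30
X = pd.DataFrame({'Xn': rng.normal(size=n), 'Xp': rng.normal(size=n),
                  'Xe': rng.normal(size=n), 'Xg': rng.normal(size=n)})
Y = X['Xp'] + rng.normal(size=n)

# Correlations among the predictors themselves; big off-diagonal
# entries are the multicollinearity that can sink Xp's t-stat.
print(X.corr().round(2))

# OLS with an intercept; each t-stat is coefficient / standard error.
fit = sm.OLS(Y, sm.add_constant(X)).fit()
print(fit.tvalues.round(2))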


Quote:
Originally Posted by Czech Your Math
What's the best way to judge models with a relatively stable Y? Just look at the significance of each individual coefficient or is there a better way of judging/comparing models in such cases?
I'm not an econometrician (I don't play one on TV either), but I know that R-square doesn't mean squat when no intercept is included in the model: without a constant the R-square is computed around zero instead of around the mean of Y, so it can look absurdly high. Was there one in yours?

I have seen a measure such as 'incremental R-square', i.e. the gain R2(new) - R2(old), where 'new' is the R2 from the full model and 'old' is from a baseline model, e.g. one with only an intercept.
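Here's a toy example of both points in Python (fabricated numbers, and statsmodels is just one way to do it). Without a constant, the reported R-square is the 'uncentered' kind, which rewards merely matching Y's overall level:

Code:
import numpy as np
import statsmodels.api as sm

# Fabricated data where the predictor and Y both sit well above zero.
rng = np.random.default_rng(1)
x = 10 + rng.normal(size=30)
y = 5 + 0.5 * x + rng.normal(size=30)

with_const = sm.OLS(y, sm.add_constant(x)).fit()
without_const = sm.OLS(y, x).fit()  # no intercept
# The second R-square is computed around zero, not the mean of y,
# so it comes out absurdly high and is not comparable to the first.
print(round(with_const.rsquared, 3), round(without_const.rsquared, 3))

# Incremental R-square: gain over an intercept-only baseline.
baseline = sm.OLS(y, np.ones_like(y)).fit()
print(round(with_const.rsquared - baseline.rsquared, 3))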

In models with stable time series, people often take first differences, i.e. Y'(1985) = Y(1985) - Y(1984), and run the model on those changes instead of the levels. It changes the narrative because 'last year' becomes the benchmark for each observation, but since your question is usually 'what makes Y change', that works. There's a deeper econometric reason that justifies first differences as well: regressing one trending (non-stationary) series on another invites spurious correlations, and differencing usually makes the series stationary. There's a quick sketch of this below.
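A minimal sketch of the differencing step, again with fabricated yearly numbers:

Code:
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Fabricated yearly series standing in for a slow-moving Y and one X.
rng = np.random.default_rng(2)
years = pd.RangeIndex(1980, 2013)
x = pd.Series(rng.normal(size=len(years)), index=years).cumsum()
y = 0.3 * x + pd.Series(rng.normal(size=len(years)), index=years)

# First differences: Y'(t) = Y(t) - Y(t-1). diff() leaves a NaN in
# the first year, so drop it before fitting.
dy = y.diff().dropna()
dx = x.diff().dropna()

# Regress the year-over-year changes instead of the levels.
fit = sm.OLS(dy, sm.add_constant(dx)).fit()
print(fit.params.round(3))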

I'll try to find time to look at the other thread too.
