In multiple linear regression, I can understand that the correlations between the residuals and the predictor variables are zero. But what is the expected correlation between the residuals and the criterion variable? Should it be zero, or should they be highly correlated? And what does that mean?
Answers:
In the regression model:
$$y_i = x_i'\beta + u_i$$
the usual assumption is that $(y_i, x_i, u_i)$, $i = 1, \ldots, n$, is an iid sample. Under the assumptions that $E(x_i u_i) = 0$ and $E(x_i x_i')$ has full rank, the ordinary least squares estimator
$$\hat\beta = \left(\sum_{i=1}^n x_i x_i'\right)^{-1}\sum_{i=1}^n x_i y_i$$
is consistent and asymptotically normal. The expected covariance between a residual and the response variable is then
$$E(y_i u_i) = E\big((x_i'\beta + u_i)u_i\big) = E(u_i^2).$$
If we further assume that $E(u_i \mid x_1, \ldots, x_n) = 0$ and $E(u_i^2 \mid x_1, \ldots, x_n) = \sigma^2$, we can calculate the expected covariance between $y_i$ and its regression residual $\hat u_i = y_i - x_i'\hat\beta$:
$$E(y_i \hat u_i \mid X) = \sigma^2(1 - h_{ii}).$$
Now to get the correlation we need to calculate $\mathrm{Var}(y_i \mid X)$ and $\mathrm{Var}(\hat u_i \mid X)$. It turns out that
$$\mathrm{Var}(y_i \mid X) = \sigma^2 \quad \text{and} \quad \mathrm{Var}(\hat u_i \mid X) = \sigma^2(1 - h_{ii}),$$
hence
$$\mathrm{Corr}(y_i, \hat u_i \mid X) = \frac{\sigma^2(1 - h_{ii})}{\sqrt{\sigma^2 \cdot \sigma^2(1 - h_{ii})}} = \sqrt{1 - h_{ii}}.$$
Now the term $h_{ii}$ comes from the diagonal of the hat matrix $H = X(X'X)^{-1}X'$, where $X = [x_1, \ldots, x_n]'$. The matrix $H$ is idempotent, hence it satisfies the following property
$$\mathrm{trace}(H) = \sum_{i=1}^n h_{ii} = \mathrm{rank}(H),$$
where $h_{ii}$ is the $i$-th diagonal term of $H$. The $\mathrm{rank}(H)$ is the number of linearly independent columns of $X$, which is usually the number of variables. Let us call it $p$. The number of the $h_{ii}$ is the sample size $n$. So we have $n$ nonnegative terms which should sum up to $p$. Usually $n$ is much bigger than $p$, hence a lot of the $h_{ii}$ will be close to zero, meaning that the correlation between the residual and the response variable will be close to 1 for the bigger part of the observations.
The term $h_{ii}$ is also used in various regression diagnostics for determining influential observations.
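Here is a minimal Monte Carlo sketch of this result (the setup of $n = 30$, $p = 3$ slope variables plus an intercept, and $\sigma = 1$ is my own illustration, not from the original answer): holding the design $X$ fixed and redrawing the errors many times, the per-observation correlation between $y_i$ and $\hat u_i$ should match $\sqrt{1 - h_{ii}}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma, reps = 30, 3, 1.0, 20000

X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # fixed design
beta = rng.normal(size=p + 1)
H = X @ np.linalg.inv(X.T @ X) @ X.T                        # hat matrix
h = np.diag(H)

# Redraw the errors `reps` times with X held fixed; rows are samples.
Y = X @ beta + sigma * rng.normal(size=(reps, n))
U = Y @ (np.eye(n) - H)                                     # residuals (I - H) y

# Per-observation correlation of y_i with its own residual across replications
emp = np.array([np.corrcoef(Y[:, i], U[:, i])[0, 1] for i in range(n)])
print(np.abs(emp - np.sqrt(1 - h)).max())                   # small, e.g. < 0.02
```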
The correlation depends on the $R^2$. If $R^2$ is high, it means that much of the variation in your dependent variable can be attributed to variation in your independent variables, and NOT to your error term.
However, if $R^2$ is low, then it means that much of the variation in your dependent variable is unrelated to variation in your independent variables, and thus must be related to the error term.
Consider the following model:
$$Y = X\beta + \varepsilon, \quad \text{where } Y \text{ and } X \text{ are uncorrelated (i.e., the true } \beta = 0).$$
Assuming sufficient regularity conditions for the CLT to hold, $\hat\beta$ converges in probability to $0$, and so the fitted values $\hat Y = X\hat\beta$ converge in probability to $0$ as well. The residuals $\hat\varepsilon = Y - \hat Y$ therefore converge to $Y$ itself: $Y$ and $\hat\varepsilon$ are perfectly correlated!!!
Keeping this in mind, a low $R^2$ (and hence high correlation between error and dependent variable) may be due to model misspecification.
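Here is a small sketch of that point (the toy data are my own illustration; any regression routine would do): when $X$ carries no information about $Y$, the residuals are essentially $Y$ itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
X = rng.normal(size=n)
Y = rng.normal(size=n)                  # true beta = 0: X and Y uncorrelated

Xmat = np.column_stack([np.ones(n), X])
beta_hat = np.linalg.lstsq(Xmat, Y, rcond=None)[0]
resid = Y - Xmat @ beta_hat

print(np.corrcoef(Y, resid)[0, 1])      # ~ 1.0, up to sampling noise
```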
I find this topic quite interesting, and the current answers are unfortunately incomplete or partly misleading, despite the relevance and high popularity of this question.
By definition of the classical OLS framework there should be no relationship between $\hat y$ and $\hat u$, since the residuals obtained are by construction uncorrelated with $\hat y$ when deriving the OLS estimator. The variance-minimizing property under homoskedasticity ensures that the residual error is randomly spread around the fitted values. This can be formally shown by:
$$\mathrm{Cov}(\hat y, \hat u \mid X) = \mathrm{Cov}(Py, My \mid X) = P\,\mathrm{Cov}(y, y \mid X)\,M' = \sigma^2 P M = 0,$$
where $M$ and $P$ are idempotent matrices defined as $P = X(X'X)^{-1}X'$ and $M = I - P$.
This result is based on strict exogeneity and homoskedasticity, and practically holds in large samples. The intuition for their uncorrelatedness is the following: the fitted values conditional on $X$ are centered around $\hat u$, which is thought of as independently and identically distributed. However, any deviation from the strict exogeneity and homoskedasticity assumptions could cause the explanatory variables to be endogenous and spur a latent correlation between $\hat u$ and $\hat y$.
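As a quick numerical check of this orthogonality (simulated data, my own illustrative setup), the sample correlation between $\hat y$ and $\hat u$ is zero up to machine precision:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = X @ rng.normal(size=p + 1) + rng.normal(size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T     # projection matrix P
y_hat = P @ y                            # fitted values
u_hat = y - y_hat                        # residuals M y, with M = I - P

print(np.corrcoef(y_hat, u_hat)[0, 1])   # ~ 0 up to machine precision
```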
Now the correlation between the residuals $\hat u$ and the "original" $y$ is a completely different story:
$$\mathrm{Cov}(y, \hat u \mid X) = \mathrm{Cov}(Py + My, My \mid X) = \sigma^2 M.$$
Some checking of the theory tells us that this covariance matrix is identical to the covariance matrix of the residual $\hat u$ itself (proof omitted). We have:
$$\mathrm{Var}(\hat u \mid X) = \sigma^2 M = \mathrm{Cov}(y, \hat u \mid X).$$
If we would like to calculate the (scalar) covariance between $y$ and $\hat u$ as requested by the OP, we obtain:
$$\mathrm{Cov}_{\text{scalar}}(y, \hat u \mid X) = \frac{1}{N}\,\mathrm{trace}(\sigma^2 M) = \frac{\sigma^2 (N - K)}{N}$$
(obtained by summing up the diagonal entries of the covariance matrix and dividing by $N$, where $K$ is the number of regressors).
The above formula indicates an interesting point. If we test the relationship by regressing $y$ on the residuals $\hat u$ (plus a constant), the slope coefficient is exactly $1$, which can be easily derived when we divide the above expression by $\mathrm{Var}(\hat u \mid X) = \sigma^2 (N - K)/N$.
On the other hand, the correlation is the covariance standardized by the respective standard deviations. Now, the variance matrix of the residuals is $\sigma^2 M$, while the (unconditional) variance of $y$ is $\sigma_y^2$. The correlation therefore becomes:
$$\mathrm{Corr}(y, \hat u) = \frac{\mathrm{Var}(\hat u)}{\sqrt{\mathrm{Var}(y)\,\mathrm{Var}(\hat u)}} = \sqrt{\frac{\mathrm{Var}(\hat u)}{\mathrm{Var}(y)}}.$$
This is the core result which ought to hold in a linear regression. The intuition is that $\mathrm{Corr}(y, \hat u)$ expresses the error between the true variance of the error term and a proxy for that variance based on the residuals. Notice that the variance of $y$ is equal to the variance of $\hat y$ plus the variance of the residuals $\hat u$, since $\hat y$ and $\hat u$ are uncorrelated. So the result can be more intuitively rewritten as:
$$\mathrm{Corr}(y, \hat u) = \sqrt{\frac{\mathrm{Var}(\hat u)}{\mathrm{Var}(\hat y) + \mathrm{Var}(\hat u)}},$$
which is $\sqrt{1 - R^2}$ when an intercept is included.
There are two forces here at work. If we have a great fit of the regression line, the correlation is expected to be low, because $\mathrm{Var}(\hat u) \approx 0$. On the other hand, $\mathrm{Var}(\hat y)$ is a bit of a fudge to bring in, as it is unconditional and a line in parameter space. Comparing unconditional and conditional variances within a ratio may not be an appropriate indicator after all. Perhaps that's why it is rarely done in practice.
An attempt to conclude the question: the correlation between $y$ and $\hat u$ is positive and relates to the ratio of the variance of the residuals to the variance of the true error term, proxied by the unconditional variance in $y$. Hence, it is a bit of a misleading indicator.
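A sketch verifying both claims above on simulated data (the data-generating values are my own illustration): the sample correlation between $y$ and $\hat u$ equals $\sqrt{\mathrm{Var}(\hat u)/\mathrm{Var}(y)} = \sqrt{1 - R^2}$, and regressing $y$ on $\hat u$ gives a slope of exactly 1.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
u_hat = y - X @ beta_hat
r2 = 1 - u_hat.var() / y.var()            # R^2 (model includes an intercept)

print(np.corrcoef(y, u_hat)[0, 1])        # Corr(y, u_hat)
print(np.sqrt(1 - r2))                    # same value: sqrt(1 - R^2)
print(np.polyfit(u_hat, y, 1)[0])         # slope of y on u_hat: exactly 1.0
```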
Even though this exercise may give us some intuition about the workings and the inherent theoretical assumptions of an OLS regression, we rarely evaluate the correlation between $y$ and $\hat u$. There are certainly more established tests for checking properties of the true error term. Secondly, keep in mind that the residuals are not the error term, and tests on residuals that make predictions about the characteristics of the true error term are limited; their validity needs to be handled with utmost care.
For example, I would like to point out a statement made by a previous poster here. It is said that,
"If your residuals are correlated with your independent variables, then your model is heteroskedastic..."
I think that statement may not be entirely valid in this context. Believe it or not, the OLS residuals $\hat u$ are by construction made to be uncorrelated with the independent variables $X$. To see this, consider:
$$X'\hat u = X'(y - X\hat\beta) = X'y - X'X(X'X)^{-1}X'y = X'y - X'y = 0.$$
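The same thing in a few lines of simulation (hypothetical data again): $X'\hat u$ is zero up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 3))])
y = X @ rng.normal(size=4) + rng.normal(size=200)
u_hat = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

print(np.abs(X.T @ u_hat).max())   # ~ 1e-12: X' u_hat = 0 by construction
```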
However, you may have heard claims that an explanatory variable is correlated with the error term. Notice that such claims are based on assumptions about the whole population with a true underlying regression model that we do not observe first hand. Consequently, checking the correlation between $X$ and $\hat u$ seems pointless in a linear OLS framework. However, when testing for heteroskedasticity, we take into account the second conditional moment: for example, we regress the squared residuals on $X$ or a function of $X$, as is often the case with FGLS estimators. This is different from evaluating the plain correlation. I hope this helps to make matters more clear.
Adam's answer is wrong. Even with a model that fits the data perfectly, you can still get a high correlation between the residuals and the dependent variable. That's the reason no regression book asks you to check this correlation. You can find the answer in Draper's book Applied Regression Analysis.
So, the residuals are your unexplained variance, the difference between your model's predictions and the actual outcome you're modeling. In practice, few models produced through linear regression will have all residuals close to zero unless linear regression is being used to analyze a mechanical or fixed process.
Ideally, the residuals from your model should be random, meaning they should not be correlated with either your independent or dependent variables (what you term the criterion variable). In linear regression, your error term is normally distributed, so your residuals should also be normally distributed. If you have significant outliers, or if your residuals are correlated with either your dependent variable or your independent variables, then you have a problem with your model.
If you have significant outliers and a non-normal distribution of your residuals, then the outliers may be skewing your weights (Betas), and I would suggest calculating DFBETAS to check the influence of your observations on your weights. If your residuals are correlated with your dependent variable, then there is a significantly large amount of unexplained variance that you are not accounting for. You may also see this if you're analyzing repeated observations of the same thing, due to autocorrelation; this can be checked for by seeing whether your residuals are correlated with your time or index variable. If your residuals are correlated with your independent variables, then your model is heteroskedastic (see: http://en.wikipedia.org/wiki/Heteroscedasticity). You should check (if you haven't already) whether your input variables are normally distributed, and if not, you should consider scaling or transforming your data (the most common transforms are log and square root) in order to make it closer to normal.
In the case of both your residuals and your independent variables, you should examine a Q-Q plot, as well as perform a Kolmogorov-Smirnov test (this particular implementation is sometimes referred to as the Lilliefors test), to make sure that your values fit a normal distribution.
Three things that are quick and may be helpful in dealing with this problem: examine the median of your residuals, which should be as close to zero as possible (the mean will almost always be zero as a result of how the error term is fitted in linear regression); run a Durbin-Watson test for autocorrelation in your residuals (especially, as I mentioned before, if you are looking at multiple observations of the same things); and make a partial residual plot to look for heteroscedasticity and outliers.
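A hedged sketch of those quick checks in Python (assuming scipy and statsmodels are available; `resid` here is a stand-in for the residual vector from your fitted model):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(5)
resid = rng.normal(size=300)       # stand-in residuals, for illustration only

print(np.median(resid))            # should be near 0
print(durbin_watson(resid))        # ~ 2 suggests no autocorrelation

# Kolmogorov-Smirnov test against a normal with estimated parameters
# (the Lilliefors variant adjusts the p-value for this estimation step).
print(stats.kstest(resid, 'norm', args=(resid.mean(), resid.std(ddof=1))))

# Q-Q plot data against the normal distribution (pass plot=plt to draw it):
osm, osr = stats.probplot(resid, dist='norm')[0]
```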