Diagnostic Checking
In specifying a model, one may commit one or more of the following specification errors:
1. Omission of a relevant variable;
2. Inclusion of an unnecessary variable;
3. Adopting the wrong functional form;
4. Errors of measurement;
5. Incorrect specification of the stochastic error term.
There is a distinction between tests of specification and tests of mis-specification or diagnostic tests.
In a test of specification there is a clearly defined alternative hypothesis, i.e. we have the true model in mind but for some reason do not estimate it correctly.
In contrast, in a test of mis-specification or diagnostic test we have no clearly defined alternative hypothesis, i.e. we do not know what the true model is to begin with.
Due to any one of the above specification errors, our parameter estimates will be biased and inconsistent, and the usual confidence intervals and hypothesis tests are likely to be unreliable and give misleading conclusions. Here we discuss some of the tests for specification errors.
Diagnostic tests are meant to check whether an estimated model is adequate.
Tests for Omitted Variables

Consider the true linear regression model

Y_i = β_1 + β_2 X_2i + β_3 X_3i + u_i

But for some reason we omit the variable X_3i and estimate instead

Y_i = α_1 + α_2 X_2i + v_i

Due to omitting the variable X_3i, our parameter estimates will be biased as well as inconsistent, the disturbance variance will be estimated incorrectly, and the usual confidence intervals and hypothesis tests are likely to give unreliable and misleading conclusions.

To test for the omitted variable we test the hypothesis

H_0: β_3 = 0 against H_1: β_3 ≠ 0.

In case data on X_3i are available, we estimate both the restricted model (with X_3 omitted) and the unrestricted model (with X_3 included) and compare their residual sums of squares:

F = [(RSS_R − RSS_UR)/m] / [RSS_UR/(n − k)]   (6)

where m is the number of omitted regressors and k is the number of parameters in the unrestricted model.
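This incremental F test can be sketched as follows, using NumPy on hypothetical synthetic data in which the true model really does include X3 (all names and coefficient values here are illustrative assumptions, not from the text):

```python
import numpy as np

# Hypothetical synthetic data: the true model includes X3 (beta3 = 3),
# so omitting X3 should produce a large incremental F statistic.
rng = np.random.default_rng(0)
n = 100
x2 = rng.normal(size=n)
x3 = rng.normal(size=n)
y = 1.0 + 2.0 * x2 + 3.0 * x3 + rng.normal(size=n)

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

const = np.ones(n)
rss_r = rss(np.column_stack([const, x2]), y)        # restricted: X3 omitted
rss_ur = rss(np.column_stack([const, x2, x3]), y)   # unrestricted: X3 included

m, k = 1, 3        # omitted regressors; parameters in the unrestricted model
F = ((rss_r - rss_ur) / m) / (rss_ur / (n - k))
print(F)  # far above the 5% critical value of F(1, 97), roughly 3.94
```

A large F here leads us to reject H_0: β_3 = 0, i.e. the model that omits X3 is mis-specified.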
We carry out the Wald F test (p. 522, eq. 8.5.18) for the omission of the variables. If the computed F statistic is significant, we conclude that the model is mis-specified.

Overfitting a Model
Let us assume that the true model is

Y_i = β_1 + β_2 X_2i + u_i

But we commit the specification error of including the unnecessary variable X_3i:

Y_i = α_1 + α_2 X_2i + α_3 X_3i + v_i
The consequence of this specification error is that the OLS parameter estimates will be inefficient, though they remain unbiased and consistent.
To test for unnecessary variables in the model, we can apply the usual t-test and F-test, provided the true model is known.
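The t-test for an unnecessary variable can be sketched like this (again on hypothetical synthetic data, where X3 is truly irrelevant, so its t ratio should be insignificant):

```python
import numpy as np

# Hypothetical data in which X3 is irrelevant (true coefficient = 0),
# so its t ratio in the overfitted model should be insignificant.
rng = np.random.default_rng(1)
n = 100
x2 = rng.normal(size=n)
x3 = rng.normal(size=n)                    # unnecessary regressor
y = 1.0 + 2.0 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x2, x3])  # overfitted design matrix
beta = np.linalg.solve(X.T @ X, X.T @ y)   # OLS via normal equations
e = y - X @ beta
k = X.shape[1]
sigma2 = e @ e / (n - k)                   # unbiased error-variance estimate
cov = sigma2 * np.linalg.inv(X.T @ X)
t3 = beta[2] / np.sqrt(cov[2, 2])          # t ratio for the X3 coefficient
print(t3)  # compare |t3| with the 5% critical value (about 1.98 for 97 df)
```

An insignificant t ratio on X3 supports dropping it; note that including X3 does not bias the other estimates, it only inflates their variances.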
Examination of Residuals and Model Fit

Residuals (errors, disturbances) are typically analyzed in linear modelling with the goal of identifying poorly fitting values. If a sufficiently large number of such poorly fitting values is observed, the linear fit is often judged inappropriate for the data.
Examination of the residuals is a good visual diagnostic for detecting autocorrelation or heteroscedasticity. Moreover, if there are specification errors due to dynamic misspecification or an incorrect functional form, the residuals will exhibit noticeable patterns.
In this case, detection of positive autocorrelation by the Durbin–Watson test statistic is a sign of specification error. Significant values of statistics designed to test for heteroskedasticity can also be indicative of mis-specification.
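As a minimal sketch of this diagnostic, the Durbin–Watson statistic d = Σ(e_t − e_{t−1})² / Σe_t² can be computed directly from OLS residuals; here the synthetic errors follow an AR(1) process (an assumed dynamic misspecification), so d should fall well below 2:

```python
import numpy as np

# Hypothetical example: errors follow an AR(1) process (rho = 0.8), a dynamic
# misspecification the Durbin-Watson statistic should flag as positive
# autocorrelation (d well below its no-autocorrelation value of 2).
rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):                 # AR(1) disturbances
    u[t] = 0.8 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta

d = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)   # Durbin-Watson statistic
print(d)  # near 2(1 - rho) = 0.4, signalling positive autocorrelation
```

Values of d near 2 indicate no first-order autocorrelation; values near 0 indicate strong positive autocorrelation, which here points to the omitted dynamics rather than to the regressor itself.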
Other common uses of residuals include: looking for signs of nonlinearity, evaluating the effect of new explanatory variables, creating goodness-of-fit statistics, and evaluating leverage (distance from the mean) and influence (change exerted on the coefficients) for individual data points.
Residuals from generalized linear models are often not normally distributed and reflect the stochastic behaviour of the data, so we should apply the wide range of graphical and inferential tools developed for the linear model with some care when investigating potential outliers and other interesting behaviour.
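A minimal outlier screen on residuals can be sketched as below (synthetic data; the index, shift size, and |z| > 3 cut-off are illustrative assumptions, and for a GLM one would screen deviance or Pearson residuals instead):

```python
import numpy as np

# Hypothetical outlier screen: standardized residuals with |z| > 3 are flagged.
# One observation (index 10) is shifted far off the regression line.
rng = np.random.default_rng(3)
n = 50
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
y[10] += 15.0                                # inject a gross outlier

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta
z = e / e.std(ddof=2)                        # crude standardized residuals
flagged = np.flatnonzero(np.abs(z) > 3)
print(flagged)  # includes index 10, the injected outlier
```

In practice one would refine this with leverage-adjusted (studentized) residuals, since a high-leverage outlier can mask its own residual, but the basic screen above conveys the idea.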