OLS Method and Properties of OLS Estimators
The OLS method gives the straight line that fits the sample of XY observations in the sense that it minimizes the sum of the squared (vertical) deviations of each observed point on the graph from the straight line.
We take vertical deviations because we are trying to explain or predict movements in Y, which is measured along the vertical axis. We cannot simply take the sum of the deviations of each observed point from the OLS line, because deviations that are equal in size but opposite in sign cancel out, so the sum of the deviations equals zero. Taking the sum of the absolute deviations avoids this problem. However, the sum of the squared deviations is preferred because it penalizes larger deviations relatively more than smaller ones.
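As an illustration (not part of the original text), here is a minimal Python sketch of the idea: for the closed-form OLS line, the signed deviations cancel to zero, while the sum of squared deviations is smaller than for any other line. The data and the perturbed slope are made up for the example.

```python
import random

def ols_fit(x, y):
    """Closed-form OLS intercept and slope, minimizing the sum of squared vertical deviations."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    b0 = my - b1 * mx
    return b0, b1

random.seed(0)
x = [float(i) for i in range(10)]
y = [2.0 + 0.5 * xi + random.gauss(0, 1) for xi in x]  # made-up sample

b0, b1 = ols_fit(x, y)
residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

# The signed deviations from the OLS line cancel out (sum is zero up to rounding)...
print(abs(sum(residuals)) < 1e-8)  # True
# ...so the method minimizes the sum of SQUARED deviations instead.
sse = sum(r ** 2 for r in residuals)
# Any other line, e.g. with the slope nudged, gives a larger sum of squares:
sse_other = sum((yi - (b0 + (b1 + 0.1) * xi)) ** 2 for xi, yi in zip(x, y))
print(sse < sse_other)  # True
```

This also makes concrete why summing raw deviations cannot define a unique best line: the sum is zero for the OLS line itself.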
The OLS estimators (Ordinary Least-Squares estimators) are best linear unbiased estimators (BLUE). Lack of bias means that E(b̂) = b, so that E(b̂) − b = 0. Best unbiased or efficient means smallest variance. Thus, OLS estimators are the best among all linear unbiased estimators. This is known as the Gauss-Markov theorem and represents the most important justification for using OLS.
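A simple Monte Carlo sketch (added here as an illustration, with made-up simulation settings) can make the Gauss-Markov claim concrete: compare the OLS slope with another estimator that is also linear and unbiased, such as the slope through the first and last observations. Both average out to the true slope, but OLS has the smaller variance.

```python
import random

def ols_slope(x, y):
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)

def endpoint_slope(x, y):
    # Another LINEAR unbiased estimator: slope through the first and last points.
    return (y[-1] - y[0]) / (x[-1] - x[0])

random.seed(1)
true_b = 0.5  # assumed true slope for the simulation
x = [float(i) for i in range(20)]

ols_draws, end_draws = [], []
for _ in range(2000):  # repeated random samples
    y = [1.0 + true_b * xi + random.gauss(0, 1) for xi in x]
    ols_draws.append(ols_slope(x, y))
    end_draws.append(endpoint_slope(x, y))

def var(v):
    m = sum(v) / len(v)
    return sum((vi - m) ** 2 for vi in v) / len(v)

# Both estimators are unbiased (means near 0.5), but OLS has the smaller variance (BLUE):
print(var(ols_draws) < var(end_draws))  # True
```

The endpoint estimator throws away the information in the interior observations, which is why its sampling variance is larger.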
An estimator is consistent if, as the sample size approaches infinity in the limit, its value approaches the true parameter (i.e., it is asymptotically unbiased) and its distribution collapses on the true parameter. An estimator is unbiased if the mean of its sampling distribution equals the true parameter. The mean of the sampling distribution is the expected value of the estimator. Thus, lack of bias means that E(b̂) = b, where b̂ is the estimator of the true parameter b. Bias is then defined as the difference between the expected value of the estimator and the true parameter; that is, Bias = E(b̂) − b. Note that lack of bias does not mean that b̂ = b, but that in repeated random sampling we get, on average, the correct estimate. The hope is that the sample actually obtained is close to the mean of the sampling distribution of the estimator. However,
non-linear estimators may be superior to OLS estimators (i.e., they might be unbiased and have a smaller variance). Since it is often difficult or impossible to find the variance of unbiased non-linear estimators, the OLS estimators remain by far the most widely used. Being linear, OLS estimators are also easier to use than non-linear estimators.
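The definition of unbiasedness above can be checked by simulation; the sketch below (an added illustration with made-up settings) shows that a single OLS estimate generally differs from the true parameter, while the average over many repeated samples is essentially equal to it, so Bias = E(b̂) − b ≈ 0.

```python
import random

random.seed(2)
true_b = 0.5  # assumed true slope for the simulation
x = [float(i) for i in range(15)]

def slope(y):
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)

estimates = []
for _ in range(5000):  # repeated random sampling from the same model
    y = [1.0 + true_b * xi + random.gauss(0, 1) for xi in x]
    estimates.append(slope(y))

mean_est = sum(estimates) / len(estimates)
bias = mean_est - true_b

# Any single estimate is off the mark, but the mean of the sampling distribution is not:
print(estimates[0] != true_b)  # True
print(abs(bias) < 0.01)        # True: bias is approximately zero
```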
Two conditions are required for an estimator to be consistent:
1) As the sample size increases, the estimator must approach the true parameter more and more closely (this is referred to as asymptotic unbiasedness).
2) As the sample size approaches infinity in the limit, the sampling distribution of the estimator must collapse or become a straight vertical line with height (probability) of 1 above the value of the true parameter.
This large-sample property of consistency is used only in situations when small-sample BLUE or lowest-MSE estimators cannot be found.
Copyright © 2002 Evgenia Vogiatzi