
OLS Method and Properties of OLS Estimators

ORDINARY LEAST-SQUARES METHOD

    The OLS method gives the straight line that fits the sample of XY observations in the sense that it minimizes the sum of the squared (vertical) deviations of each observed point on the graph from the straight line.

i.e.   $\min \sum_{i=1}^{n} e_i^2 = \min \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2$,   where $e_i = Y_i - \hat{Y}_i$ is the vertical deviation (residual) of the $i$-th observed point from the line.

We take vertical deviations because we are trying to explain or predict movements in Y, which is measured along the vertical axis. We cannot take the simple sum of the deviations of each of the observed points from the OLS line, because deviations that are equal in size but opposite in sign cancel out, so the sum of the deviations equals 0. Taking the sum of the absolute deviations avoids the problem of having the sum of the deviations equal 0. However, the sum of the squared deviations is preferred because it penalizes larger deviations relatively more than smaller deviations.
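To make this concrete, here is a minimal NumPy sketch (the data-generating process and all parameter values are invented for illustration, not from the text) that fits a line by the textbook closed-form OLS formulas and verifies that the signed deviations sum to zero while the squared deviations are the quantity actually minimized:

    import numpy as np

    # Simulated sample: true intercept 2.0, true slope 0.5 (made-up values)
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=50)
    Y = 2.0 + 0.5 * X + rng.normal(0, 1, size=50)

    # Closed-form OLS estimates of slope and intercept
    b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
    b0 = Y.mean() - b1 * X.mean()

    # Vertical deviations (residuals) from the fitted line
    e = Y - (b0 + b1 * X)
    print(f"sum of deviations:         {e.sum():+.2e}")       # ~0 by construction
    print(f"sum of squared deviations: {(e ** 2).sum():.4f}")  # the minimized quantity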

 

PROPERTIES OF OLS ESTIMATORS

The OLS (ordinary least-squares) estimators are best linear unbiased estimators (BLUE). Lack of bias means

$E(\hat{b}) = b$

so that

$E(\hat{b}) - b = 0$

"Best unbiased" or "efficient" means smallest variance among unbiased estimators. Thus, OLS estimators are the best among all linear unbiased estimators. This is known as the Gauss-Markov theorem and represents the most important justification for using OLS.

     An estimator is consistent if, as the sample size approaches infinity in the limit, its value approaches the true parameter (i.e., it is asymptotically unbiased) and its distribution collapses on the true parameter. An estimator is unbiased, in turn, if the mean of its sampling distribution equals the true parameter. The mean of the sampling distribution is the expected value of the estimator. Thus, lack of bias means that $E(\hat{b}) = b$, where $\hat{b}$ is the estimator of the true parameter $b$. Bias is then defined as the difference between the expected value of the estimator and the true parameter; that is, $\text{bias} = E(\hat{b}) - b$. Note that lack of bias does not mean that $\hat{b} = b$, but that in repeated random sampling we get, on average, the correct estimate. The hope is that the sample actually obtained yields an estimate close to the mean of the sampling distribution of the estimator.
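A short Monte Carlo sketch can make lack of bias tangible (the population parameters, sample size, and number of replications below are invented for illustration): re-estimating the slope over many random samples, the mean of the sampling distribution lands on the true parameter even though individual estimates do not:

    import numpy as np

    # Repeated sampling from the same made-up population with true slope b = 0.5
    rng = np.random.default_rng(1)
    true_b = 0.5
    estimates = []
    for _ in range(10_000):
        X = rng.uniform(0, 10, size=30)
        Y = 2.0 + true_b * X + rng.normal(0, 1, size=30)
        b_hat = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
        estimates.append(b_hat)

    # Mean of the sampling distribution should be ~true_b; bias should be ~0
    print(f"mean of sampling distribution: {np.mean(estimates):.4f}  (true b = {true_b})")
    print(f"estimated bias: {np.mean(estimates) - true_b:+.4f}")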

        The best unbiased or efficient estimator is the one with the smallest variance among all unbiased estimators. It is the unbiased estimator with the most compact, or least spread out, distribution. This is very important because the researcher can then be more confident that the estimate is close to the true population parameter being estimated. Put another way, an efficient estimator yields the narrowest confidence interval and is more likely to be statistically significant than any other unbiased estimator. Note that minimum variance by itself is not very important unless it is coupled with lack of bias.
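As a rough illustration of efficiency (the comparison estimator below is a deliberately crude, made-up alternative, not anything from the original text): both OLS and the "endpoint" slope $(Y_n - Y_1)/(X_n - X_1)$ are linear and unbiased under the model, but simulation shows OLS has the smaller variance, exactly as the Gauss-Markov theorem guarantees:

    import numpy as np

    # Compare the variance of two linear unbiased slope estimators
    rng = np.random.default_rng(2)
    ols, endpoint = [], []
    for _ in range(10_000):
        X = np.linspace(0, 10, 30)                  # fixed design, made-up values
        Y = 2.0 + 0.5 * X + rng.normal(0, 1, size=30)
        ols.append(np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2))
        endpoint.append((Y[-1] - Y[0]) / (X[-1] - X[0]))

    print(f"variance of OLS slope:      {np.var(ols):.5f}")       # smaller
    print(f"variance of endpoint slope: {np.var(endpoint):.5f}")  # larger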

     Non-linear estimators may sometimes be superior to OLS estimators (i.e., they may be unbiased and have lower variance). However, since it is often difficult or impossible to find the variance of unbiased non-linear estimators, OLS estimators remain by far the most widely used. Being linear, OLS estimators are also easier to use than non-linear estimators.

 

Two conditions are required for an estimator to be consistent: 

1) As the sample size increases, the estimator must approach the true parameter more and more closely (this is referred to as asymptotic unbiasedness), and

2) As the sample size approaches infinity in the limit, the sampling distribution of the estimator must collapse, becoming a vertical line with height (probability) of 1 above the value of the true parameter. A small simulation of this collapsing distribution follows below. This large-sample property of consistency is invoked only in situations where small-sample BLUE or minimum-MSE (mean squared error) estimators cannot be found.
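As promised above, here is a minimal NumPy sketch of the collapsing sampling distribution (the sample sizes, replication count, and data-generating process are made up for this illustration): as n grows, the spread of the OLS slope estimator shrinks toward zero around the true parameter:

    import numpy as np

    # Sampling distribution of the OLS slope at increasing sample sizes
    rng = np.random.default_rng(3)
    true_b = 0.5
    for n in (10, 100, 1_000, 10_000):
        est = []
        for _ in range(1_000):
            X = rng.uniform(0, 10, size=n)
            Y = 2.0 + true_b * X + rng.normal(0, 1, size=n)
            est.append(np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2))
        # Mean stays at true_b while the standard deviation shrinks toward 0
        print(f"n={n:>6}: mean={np.mean(est):.4f}, std={np.std(est):.5f}")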

Copyright © 2002 Evgenia Vogiatzi
