EFFICIENCY OF AN ESTIMATOR

If $\hat{b}_1$ and $\hat{b}_2$ are unbiased estimates of the parameter vector $b$, they can be compared in terms of their error covariance matrices: $E\{(b - \hat{b}_1)(b - \hat{b}_1)^T\} \le E\{(b - \hat{b}_2)(b - \hat{b}_2)^T\}$. The estimator $\hat{b}_1$ is said to be superior to $\hat{b}_2$ if this inequality is satisfied. If it is satisfied with respect to every other unbiased estimator, then $\hat{b}_1$ is the efficient estimator. For unbiased estimators the mean square error and the variance are identical, and such optimal estimators are called minimum variance unbiased estimators.

The efficiency of an estimator can be defined in terms of the Cramer-Rao inequality (Chapter 9), which gives a theoretical limit to the achievable accuracy, irrespective of the estimator used: $E\{[\hat{b}(z) - b][\hat{b}(z) - b]^T\} \ge I_m^{-1}(b)$, where $I_m$ is the Fisher information matrix and its inverse is the theoretical covariance limit. An unbiased estimator for which the equality holds is called an efficient estimator. The Cramer-Rao inequality means that, for an unbiased estimator, the covariance of the parameter estimates cannot be lower than the theoretical bound $I_m^{-1}(b)$. The Cramer-Rao bounds define the uncertainty levels around the estimates obtained by the maximum likelihood/output error method (Chapter 9).
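The ideas above can be illustrated numerically. The sketch below (not from the source; the scalar Gaussian model, sample sizes, and estimator choices are illustrative assumptions) compares two unbiased estimators of the mean $b$ of $N(b, \sigma^2)$ data against the Cramer-Rao bound. For this model the Fisher information from $n$ samples is $n/\sigma^2$, so the theoretical lower bound on the variance of any unbiased estimator is $\sigma^2/n$; the sample mean attains it, while the sample median is unbiased but less efficient.

```python
import random
import statistics

# Illustrative setup (assumed, not from the source): scalar parameter b,
# Gaussian measurements z ~ N(b, sigma^2), n samples per experiment.
random.seed(0)
b_true, sigma, n, trials = 2.0, 1.0, 100, 2000

# Cramer-Rao lower bound for this model: inverse Fisher information = sigma^2 / n
crlb = sigma**2 / n

mean_est, median_est = [], []
for _ in range(trials):
    z = [random.gauss(b_true, sigma) for _ in range(n)]
    mean_est.append(statistics.fmean(z))     # efficient estimator (attains the bound)
    median_est.append(statistics.median(z))  # unbiased, but higher variance

var_mean = statistics.pvariance(mean_est)
var_median = statistics.pvariance(median_est)

print(f"CRLB        : {crlb:.5f}")
print(f"var(mean)   : {var_mean:.5f}")    # close to the bound
print(f"var(median) : {var_median:.5f}")  # larger: median is not efficient here
```

Monte Carlo noise means the estimated variance of the sample mean hovers near, not exactly at, the bound; the median's variance is roughly $\pi/2$ times larger, showing a valid unbiased estimator that is nonetheless inferior in the covariance sense defined above.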