Unbiased Estimator — Definition, Formula & Examples
An unbiased estimator is a statistic that, on average across all possible samples, gives a value exactly equal to the population parameter it estimates. In other words, it neither systematically overestimates nor underestimates the true value.
A statistic $\hat{\theta}$ is an unbiased estimator of a parameter $\theta$ if and only if $E[\hat{\theta}] = \theta$, where the expectation is taken over the sampling distribution of $\hat{\theta}$. The quantity $\operatorname{Bias}(\hat{\theta}) = E[\hat{\theta}] - \theta$ is called the bias; when it equals zero, the estimator is unbiased.
Key Formula
$$E[\hat{\theta}] = \theta$$
Where:
- $\hat{\theta}$ = the estimator (a statistic computed from sample data)
- $\theta$ = the true population parameter being estimated
- $E[\hat{\theta}]$ = the expected value of the estimator over all possible samples
How It Works
To check whether an estimator is unbiased, you compute its expected value across all possible random samples and see if that expected value equals the target parameter. The sample mean is unbiased for the population mean because $E[\bar{X}] = \mu$ regardless of sample size. In contrast, dividing by $n$ instead of $n-1$ when computing sample variance produces a biased estimator, which is why the standard formula uses $n-1$. Being unbiased does not guarantee an estimator is "good" overall — variance and mean squared error also matter.
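A quick Monte Carlo sketch of both claims above. The population parameters ($\mu = 10$, $\sigma = 2$), the sample size, and the trial count are illustrative assumptions, not values from the text:

```python
import random
import statistics

random.seed(0)

population_mu, population_sigma = 10.0, 2.0  # assumed population parameters
n, trials = 5, 100_000

sample_means = []
var_div_n = []          # sample variance dividing by n (biased)
var_div_n_minus_1 = []  # sample variance dividing by n-1 (unbiased)
for _ in range(trials):
    sample = [random.gauss(population_mu, population_sigma) for _ in range(n)]
    m = sum(sample) / n
    sample_means.append(m)
    ss = sum((x - m) ** 2 for x in sample)
    var_div_n.append(ss / n)
    var_div_n_minus_1.append(ss / (n - 1))

print(statistics.mean(sample_means))       # close to mu = 10
print(statistics.mean(var_div_n))          # systematically below sigma^2 = 4
print(statistics.mean(var_div_n_minus_1))  # close to sigma^2 = 4
```

Averaged over many samples, the sample mean lands on $\mu$, while the $n$-divisor variance settles near $\sigma^2 (n-1)/n$ rather than $\sigma^2$ — exactly the systematic underestimate the $n-1$ correction removes.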
Worked Example
Problem: A population has mean $\mu$. You draw all possible samples of size $n = 3$ from this population. Show that the sample mean $\bar{X}$ is an unbiased estimator of $\mu$.
Define the estimator: The sample mean for a sample of size 3 is:
$$\bar{X} = \frac{X_1 + X_2 + X_3}{3}$$
Compute the expected value: Take the expectation using the linearity property. Each $X_i$ is drawn from the population, so $E[X_i] = \mu$:
$$E[\bar{X}] = \frac{1}{3}\left(E[X_1] + E[X_2] + E[X_3]\right) = \frac{1}{3}(3\mu) = \mu$$
Check the bias: Compare the expected value to the parameter:
$$\operatorname{Bias}(\bar{X}) = E[\bar{X}] - \mu = \mu - \mu = 0$$
Answer: Since the bias is 0, the sample mean $\bar{X}$ is an unbiased estimator of $\mu$.
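The derivation can be checked exhaustively on a toy population. The four-element population below is an assumption chosen for illustration; exact `Fraction` arithmetic avoids floating-point round-off, so the average over every possible size-3 sample equals $\mu$ exactly:

```python
from fractions import Fraction
from itertools import product
from statistics import mean

population = [2, 4, 6, 8]  # toy population, assumed for illustration
mu = Fraction(sum(population), len(population))  # population mean = 5

# All 4^3 = 64 equally likely ordered samples of size 3, drawn with replacement
samples = list(product(population, repeat=3))
sample_means = [Fraction(sum(s), 3) for s in samples]

# Expected value of the estimator = average over the full sampling distribution
expected_value = mean(sample_means)
print(expected_value)  # 5, equal to mu exactly
```

Because every value appears in every position equally often, the average of all 64 sample means is exactly the population mean — the finite, enumerable version of $E[\bar{X}] = \mu$.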
Why It Matters
Unbiasedness is a foundational criterion in statistics for choosing between competing estimators. It directly impacts hypothesis testing and confidence interval construction — if you build intervals around a biased estimator, your stated coverage probability will be wrong. Courses in econometrics, biostatistics, and machine learning all revisit this concept when discussing estimator properties like consistency and efficiency.
Common Mistakes
Mistake: Assuming an unbiased estimator is always the best estimator to use.
Correction: Unbiasedness only means the estimator is correct on average. A biased estimator with much lower variance can have a smaller mean squared error and be preferable in practice. Always consider bias and variance together.
