Mathwords

Unbiased Estimator — Definition, Formula & Examples

An unbiased estimator is a statistic that, on average across all possible samples, gives a value exactly equal to the population parameter it estimates. In other words, it neither systematically overestimates nor underestimates the true value.

A statistic $\hat{\theta}$ is an unbiased estimator of a parameter $\theta$ if and only if $E[\hat{\theta}] = \theta$, where the expectation is taken over the sampling distribution of $\hat{\theta}$. The quantity $E[\hat{\theta}] - \theta$ is called the bias; when it equals zero, the estimator is unbiased.

Key Formula

$$\text{Bias}(\hat{\theta}) = E[\hat{\theta}] - \theta = 0$$
Where:
  • $\hat{\theta}$ = The estimator (a statistic computed from sample data)
  • $\theta$ = The true population parameter being estimated
  • $E[\hat{\theta}]$ = The expected value of the estimator over all possible samples

How It Works

To check whether an estimator is unbiased, you compute its expected value across all possible random samples and see whether that expected value equals the target parameter. The sample mean $\bar{X}$ is unbiased for the population mean $\mu$ because $E[\bar{X}] = \mu$ regardless of sample size. In contrast, dividing by $n$ instead of $n-1$ when computing the sample variance produces a biased estimator, which is why the standard formula uses $n-1$. Being unbiased does not guarantee an estimator is "good" overall; variance and mean squared error also matter.
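The $n$ versus $n-1$ point can be checked empirically. Below is a minimal simulation sketch (not from the original article; the uniform population and sample size are illustrative choices): averaging each variance estimator over many samples approximates its expected value.

```python
import random

random.seed(0)

def biased_var(xs):
    """Sample variance dividing by n (systematically underestimates sigma^2)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def unbiased_var(xs):
    """Sample variance dividing by n - 1 (Bessel's correction)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Population: Uniform(0, 10), whose true variance is 10^2 / 12 ≈ 8.33
n, trials = 5, 200_000
b = u = 0.0
for _ in range(trials):
    sample = [random.uniform(0, 10) for _ in range(n)]
    b += biased_var(sample)
    u += unbiased_var(sample)

print(round(b / trials, 2))  # averages near (n-1)/n * 8.33 ≈ 6.67
print(round(u / trials, 2))  # averages near the true 8.33
```

The averaged $/n$ estimator settles well below the true variance, while the $/(n-1)$ version centers on it, matching the claim above.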

Worked Example

Problem: A population has mean $\mu = 50$. You draw all possible samples of size $n = 3$ from this population. Show that the sample mean $\bar{X}$ is an unbiased estimator of $\mu$.
Define the estimator: The sample mean for a sample of size 3 is:
$$\bar{X} = \frac{X_1 + X_2 + X_3}{3}$$
Compute the expected value: Take the expectation using the linearity property. Each $X_i$ is drawn from the population, so $E[X_i] = \mu = 50$.
$$E[\bar{X}] = \frac{E[X_1] + E[X_2] + E[X_3]}{3} = \frac{50 + 50 + 50}{3} = 50$$
Check the bias: Compare the expected value to the parameter.
$$\text{Bias} = E[\bar{X}] - \mu = 50 - 50 = 0$$
Answer: Since the bias is 0, the sample mean $\bar{X}$ is an unbiased estimator of $\mu$.
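The "all possible samples" argument can be mirrored in code by exhaustive enumeration. The sketch below assumes a small hypothetical population (the values 30, 50, 70, chosen only so that $\mu = 50$) and enumerates every ordered sample of size 3 drawn with replacement:

```python
from itertools import product

population = [30, 50, 70]  # hypothetical population with mean mu = 50
mu = sum(population) / len(population)

# All 3^3 = 27 ordered samples of size 3, drawn with replacement
samples = list(product(population, repeat=3))
sample_means = [sum(s) / 3 for s in samples]

# E[X-bar]: average of the sample mean over every possible sample
expected_value = sum(sample_means) / len(sample_means)
bias = expected_value - mu
print(expected_value, bias)  # E[X-bar] matches mu; bias is 0 up to float rounding
```

Any population with mean 50 would give the same result, since the derivation above uses only linearity of expectation.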

Why It Matters

Unbiasedness is a foundational criterion in statistics for choosing between competing estimators. It directly impacts hypothesis testing and confidence interval construction — if you build intervals around a biased estimator, your stated coverage probability will be wrong. Courses in econometrics, biostatistics, and machine learning all revisit this concept when discussing estimator properties like consistency and efficiency.

Common Mistakes

Mistake: Assuming an unbiased estimator is always the best estimator to use.
Correction: Unbiasedness only means the estimator is correct on average. A biased estimator with much lower variance can have a smaller mean squared error and be preferable in practice. Always consider bias and variance together.
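This bias–variance trade-off can be made concrete with a simulation. The sketch below (a standard illustration, not from the original article; normal data and the parameter choices are assumptions) compares the mean squared error of the $/n$ and $/(n-1)$ variance estimators:

```python
import random

random.seed(1)

def mse_of_var_estimators(n=5, trials=200_000, sigma2=1.0):
    """Estimate the MSE of the /n and /(n-1) variance estimators
    for standard normal data with true variance sigma2 = 1."""
    se_biased = se_unbiased = 0.0
    for _ in range(trials):
        xs = [random.gauss(0.0, 1.0) for _ in range(n)]
        m = sum(xs) / n
        ss = sum((x - m) ** 2 for x in xs)  # sum of squared deviations
        se_biased += (ss / n - sigma2) ** 2
        se_unbiased += (ss / (n - 1) - sigma2) ** 2
    return se_biased / trials, se_unbiased / trials

mse_b, mse_u = mse_of_var_estimators()
print(mse_b < mse_u)  # the biased /n estimator attains lower MSE here
```

For normal data the biased $/n$ estimator trades a small bias for a large variance reduction, so its MSE is smaller even though it is "wrong on average".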