Type II Error
A Type II error happens when you fail to reject the null hypothesis even though it is actually false. In other words, there is a real effect or difference, but your test misses it — a "false negative."
A Type II error, denoted by the probability β, occurs when a statistical hypothesis test fails to reject the null hypothesis when the alternative hypothesis is true. The probability of avoiding a Type II error is called the power of the test, defined as 1 − β. A test with low power has a high chance of committing a Type II error, meaning it is unlikely to detect a real effect even when one exists.
Key Formula
β = P(fail to reject H₀ | H₀ is false)
Power = 1 − β
Where:
- β = the probability of making a Type II error
- H₀ = the null hypothesis
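The formula above can be evaluated directly with the standard normal CDF. Here is a minimal Python sketch for the one-tailed (upper) z-test; the function names are illustrative, not from any particular library, and the numbers match the worked example below:

```python
from math import erf, sqrt

def normal_cdf(z):
    # Standard normal CDF, computed from the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

def type_ii_error(mu0, mu_true, sigma, n, z_crit=1.645):
    # beta = P(fail to reject H0 | true mean is mu_true),
    # for a one-tailed (upper) z-test of H0: mu = mu0
    se = sigma / sqrt(n)
    x_crit = mu0 + z_crit * se        # smallest sample mean that rejects H0
    return normal_cdf((x_crit - mu_true) / se)

beta = type_ii_error(mu0=500, mu_true=520, sigma=50, n=25)
power = 1 - beta                      # roughly 0.64 for these numbers
```

Note the structure of the calculation: β depends not only on the test setup (μ₀, α, n) but on the specific alternative value of the mean you assume, here 520.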
Worked Example
Problem: A school claims the average student score on a standardized test is 500. A researcher suspects the true mean is actually 520. She tests a sample of 25 students and gets a sample mean of 510 with a population standard deviation of 50. Using a significance level of α = 0.05 (one-tailed test), does she reject H₀? Could this be a Type II error?
Step 1: State the hypotheses. H₀: μ = 500 versus Hₐ: μ > 500 (one-tailed).
Step 2: Calculate the standard error of the mean. SE = σ/√n = 50/√25 = 10.
Step 3: Compute the test statistic (z-score). z = (x̄ − μ₀)/SE = (510 − 500)/10 = 1.0.
Step 4: Compare to the critical value. For a one-tailed test at α = 0.05, the critical z-value is 1.645. Since z = 1.0 < 1.645, we fail to reject H₀.
Step 5: Assess the error type. If the true mean really is 520 (as the researcher suspects), then H₀ is false and she failed to reject it. This would be a Type II error. The test did not have enough evidence to detect the real difference.
Answer: The researcher fails to reject H₀. If the true mean is indeed 520, this is a Type II error, a false negative.
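The numeric steps above can be checked with a few lines of plain Python, using the example's figures:

```python
from math import sqrt

# Figures from the worked example
mu0, x_bar, sigma, n = 500, 510, 50, 25
z_crit = 1.645                       # one-tailed critical value at alpha = 0.05

se = sigma / sqrt(n)                 # Step 2: 50 / sqrt(25) = 10.0
z = (x_bar - mu0) / se               # Step 3: (510 - 500) / 10 = 1.0
reject = z > z_crit                  # Step 4: 1.0 < 1.645, so fail to reject H0
```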
Why It Matters
Type II errors matter whenever missing a real effect has consequences. In medical testing, a Type II error means telling a patient they are healthy when they actually have a disease. In quality control, it means accepting a defective batch of products. Understanding β helps researchers design studies with enough power, through larger sample sizes or a higher significance level α, to reduce the chance of missing effects that truly exist.
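To see the sample-size lever concretely, here is a short sketch for the article's scenario (the helper `power` is written here for illustration, not a standard API): a larger n shrinks the standard error, so the same 20-point effect becomes easier to detect and β = 1 − power falls.

```python
from math import erf, sqrt

def power(mu0, mu_true, sigma, n, z_crit=1.645):
    # Power of a one-tailed (upper) z-test: P(reject H0 | true mean is mu_true)
    se = sigma / sqrt(n)
    x_crit = mu0 + z_crit * se       # rejection threshold on the sample mean
    return 1 - 0.5 * (1 + erf((x_crit - mu_true) / (se * sqrt(2))))

# Power at true mean 520 for increasing sample sizes
powers = {n: power(500, 520, 50, n) for n in (25, 50, 100)}
```

With these numbers, power climbs from roughly 0.64 at n = 25 to over 0.98 at n = 100.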
Common Mistakes
Mistake: Confusing Type I and Type II errors.
Correction: A Type I error is rejecting a true H₀ (false positive). A Type II error is failing to reject a false H₀ (false negative). One way to remember: Type I is a false alarm; Type II is a missed detection.
Mistake: Thinking that failing to reject H₀ proves H₀ is true.
Correction: Failing to reject H₀ only means you lacked sufficient evidence against it. The null hypothesis could still be false; your test may simply not have had enough power to detect the difference.
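This last point can be made vivid with a small simulation, assuming the article's scenario with a true mean of 520: even though H₀ is false, the test fails to reject it in a large fraction of random samples. A sketch:

```python
import random
from math import sqrt

random.seed(42)
mu0, mu_true, sigma, n = 500, 520, 50, 25
z_crit, trials = 1.645, 10_000

misses = 0
for _ in range(trials):
    # Draw a sample of 25 scores from the TRUE distribution (mean 520)
    x_bar = sum(random.gauss(mu_true, sigma) for _ in range(n)) / n
    z = (x_bar - mu0) / (sigma / sqrt(n))
    if z <= z_crit:              # fail to reject H0, even though H0 is false
        misses += 1

beta_hat = misses / trials       # empirical Type II error rate, near 0.36
```

Each "miss" is a Type II error: a dataset that genuinely came from a population with mean 520 but did not produce enough evidence to reject μ = 500.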
