False Negatives and False Positives — Definition, Formula & Examples
A false positive occurs when a test incorrectly indicates a condition is present when it is not. A false negative occurs when a test incorrectly indicates a condition is absent when it actually is present.
In the context of binary classification or hypothesis testing, a false positive (Type I error) is the event of rejecting a true null hypothesis, while a false negative (Type II error) is the event of failing to reject a false null hypothesis. These errors arise whenever an outcome is classified into one of two categories based on imperfect information.
How It Works
Imagine any yes-or-no test: a medical screening, a spam filter, or a hypothesis test. The test can produce four outcomes arranged in a 2×2 table: true positive (correctly detected), true negative (correctly cleared), false positive (incorrectly flagged), and false negative (incorrectly missed). The false positive rate equals the probability of a positive result given the condition is truly absent. The false negative rate equals the probability of a negative result given the condition is truly present. Reducing one type of error typically increases the other, so test designers must balance the costs of each.
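The four outcomes and two error rates above can be sketched in a few lines of Python. The function name and the boolean-list representation are illustrative choices, not part of any standard API:

```python
def confusion_counts(predictions, truths):
    """Count the four 2x2-table outcomes from parallel lists of booleans,
    where True means 'positive' / 'condition present'."""
    tp = sum(p and t for p, t in zip(predictions, truths))          # true positives
    tn = sum((not p) and (not t) for p, t in zip(predictions, truths))  # true negatives
    fp = sum(p and (not t) for p, t in zip(predictions, truths))    # false positives
    fn = sum((not p) and t for p, t in zip(predictions, truths))    # false negatives
    return tp, tn, fp, fn

def error_rates(tp, tn, fp, fn):
    """False positive rate = P(positive | absent) = FP / (FP + TN).
    False negative rate = P(negative | present) = FN / (FN + TP)."""
    return fp / (fp + tn), fn / (fn + tp)
```

For example, `confusion_counts([True, True, False, False], [True, False, True, False])` returns one of each outcome, giving a false positive rate and false negative rate of 0.5 each.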
Worked Example
Problem: A rapid disease test has a 95% true positive rate (sensitivity) and a 90% true negative rate (specificity). In a group of 1,000 people, 50 actually have the disease. How many false positives and false negatives does the test produce?
Find false negatives: Of the 50 people with the disease, the test correctly detects 95%, so it misses 5%: 0.05 × 50 = 2.5, or roughly 3 people.
Find false positives: Of the 950 people without the disease, the test correctly clears 90%, so it incorrectly flags 10%: 0.10 × 950 = 95 people.
Interpret the results: Even with a seemingly accurate test, the 95 false positives far outnumber the roughly 48 true positives. This means most positive results are actually wrong — a key insight when prevalence is low.
Answer: The test produces approximately 3 false negatives and 95 false positives out of 1,000 people.
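The worked example's arithmetic can be checked directly. The variable names below are just for readability; the numbers all come from the problem statement:

```python
# Problem setup: 1,000 people, 50 with the disease,
# sensitivity 0.95 (true positive rate), specificity 0.90 (true negative rate).
population = 1000
diseased = 50
sensitivity = 0.95
specificity = 0.90

healthy = population - diseased                  # 950 people without the disease
false_negatives = diseased * (1 - sensitivity)   # 0.05 * 50  = about 2.5, i.e. ~3 people
false_positives = healthy * (1 - specificity)    # 0.10 * 950 = 95 people
true_positives = diseased * sensitivity          # 0.95 * 50  = about 47.5 people
```

Note that false positives (95) roughly double the true positives (about 48), exactly the low-prevalence effect described in the interpretation step.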
Why It Matters
In AP Statistics, understanding false positives and false negatives is essential for interpreting hypothesis tests (Type I and Type II errors) and for probability problems involving medical screening or quality control. Professionals in medicine, law, and data science routinely weigh the consequences of each error type when designing tests and making decisions.
Common Mistakes
Mistake: Assuming a highly accurate test means most positive results are correct.
Correction: When the condition being tested for is rare, false positives can vastly outnumber true positives. Always consider the base rate (prevalence) alongside accuracy rates, which is exactly what Bayes' Theorem helps you do.
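The base-rate effect can be made concrete with Bayes' Theorem. The sketch below computes the positive predictive value, P(condition | positive test), using the same sensitivity and specificity as the worked example; the function name `ppv` is a label chosen here, not a standard library call:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value, P(condition | positive test), by Bayes' Theorem:
    P(C | +) = P(+ | C) P(C) / [ P(+ | C) P(C) + P(+ | not C) P(not C) ]."""
    true_pos = prevalence * sensitivity                # P(+ and condition)
    false_pos = (1 - prevalence) * (1 - specificity)   # P(+ and no condition)
    return true_pos / (true_pos + false_pos)
```

With the worked example's 5% prevalence, `ppv(0.05, 0.95, 0.90)` is about 0.33: only a third of positive results are correct. Raise the prevalence to 50% and the same test yields a PPV of about 0.90, showing that the base rate, not just the accuracy rates, drives how trustworthy a positive result is.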
