Z-Score
A z-score is the number of standard deviations a data point is away from the mean of a dataset. A positive z-score means the value is above the mean; a negative z-score means it is below.
The z-score of a data point x in a population with mean μ and standard deviation σ is defined as z = (x − μ) / σ. It standardizes values from different distributions onto a common scale, allowing direct comparison. A z-score of 0 indicates the data point equals the mean, while a z-score of ±1 indicates it is exactly one standard deviation away.
Key Formula
z = (x − μ) / σ
Where:
- z = the z-score
- x = the individual data value
- μ = the population mean
- σ = the population standard deviation
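The formula translates directly into code. A minimal sketch (Python here purely for illustration; the function name is ours):

```python
def z_score(x, mu, sigma):
    """Number of standard deviations x lies from the mean mu."""
    return (x - mu) / sigma

# A value one standard deviation above the mean gives z = 1:
print(z_score(110, 100, 10))  # 1.0
```

A value below the mean gives a negative result, e.g. `z_score(90, 100, 10)` returns -1.0.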
Worked Example
Problem: A class has a mean exam score of 70 and a standard deviation of 10. A student scored 85. Find their z-score.
Step 1: Identify the values from the problem: x = 85, μ = 70, σ = 10.
Step 2: Substitute into the z-score formula: z = (85 − 70) / 10.
Step 3: Simplify the numerator, then divide: z = 15 / 10 = 1.5.
Answer: The student's z-score is 1.5, meaning their score is 1.5 standard deviations above the class mean.
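The three steps above can be checked with a short calculation (Python, using the values from the problem):

```python
x, mu, sigma = 85, 70, 10    # student's score, class mean, standard deviation

numerator = x - mu           # simplify the numerator: 85 - 70 = 15
z = numerator / sigma        # then divide: 15 / 10
print(z)  # 1.5
```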
Why It Matters
Z-scores let you compare values across different datasets that have different means and spreads — for example, comparing a score on a math test to a score on an English test scaled differently. In AP Statistics, z-scores are central to finding probabilities under the normal distribution using a standard normal table. They also underpin hypothesis testing, where you calculate how far a sample result falls from what was expected under a null hypothesis.
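One way to see the link to the standard normal table: the cumulative probability for a z-score can be written with the error function, Φ(z) = ½(1 + erf(z/√2)), which Python's standard math module provides. A sketch (the function name is ours):

```python
import math

def normal_cdf(z):
    """P(Z <= z) for a standard normal variable, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Proportion of scores falling below a z-score of 1.5:
print(round(normal_cdf(1.5), 4))  # 0.9332
```

This matches the table lookup for z = 1.5, so the student in the worked example scored higher than roughly 93% of a normally distributed class.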
Common Mistakes
Mistake: Subtracting x from μ instead of μ from x, getting the sign wrong.
Correction: The formula is (x − μ), not (μ − x). A value above the mean must give a positive z-score, so always subtract the mean from the data point.
Mistake: Dividing by variance instead of standard deviation.
Correction: The formula requires σ (standard deviation), not σ² (variance). Always take the square root of the variance before dividing.
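Both mistakes are easy to demonstrate numerically (Python, reusing the exam example with σ² = 100):

```python
x, mu, variance = 85, 70, 100              # variance = sigma**2

correct = (x - mu) / variance ** 0.5       # (85 - 70) / 10 = 1.5
wrong_sign = (mu - x) / variance ** 0.5    # -1.5: an above-mean score looks below the mean
wrong_spread = (x - mu) / variance         # 0.15: dividing by variance shrinks z tenfold

print(correct, wrong_sign, wrong_spread)  # 1.5 -1.5 0.15
```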
