Significance Level (Alpha)
The significance level (alpha, written α) is the cutoff you choose before running a hypothesis test: if your p-value falls at or below α, you reject the null hypothesis. The most common significance level is 0.05, meaning you're willing to accept a 5% chance of rejecting a true null hypothesis.
The significance level is the maximum probability of committing a Type I error, that is, rejecting the null hypothesis when it is actually true. It is set before data collection and defines the rejection region of the test. A result is called statistically significant when the observed p-value satisfies p ≤ α. Common choices for α include 0.01, 0.05, and 0.10, depending on the context and the consequences of a false rejection.
Key Formula
Reject H₀ if p ≤ α
Where:
- H₀ = the null hypothesis
- p = the probability of observing results at least as extreme as the data, assuming the null hypothesis is true
- α = the significance level (threshold for rejection)
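The rejection rule can be sketched as a small function; a minimal Python sketch, where the p-values passed in are purely illustrative:

```python
def decide(p_value: float, alpha: float = 0.05) -> str:
    """Apply the rejection rule: reject H0 when p <= alpha."""
    return "reject H0" if p_value <= alpha else "fail to reject H0"

print(decide(0.024))              # reject H0 at the default alpha = 0.05
print(decide(0.024, alpha=0.01))  # fail to reject H0 at the stricter alpha = 0.01
```

Note that the same p-value can lead to opposite decisions under different values of α, which is one reason α must be fixed before the test is run.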
Worked Example
Problem: A company claims that 50% of customers prefer their new product. You survey 200 customers and find that 58% prefer it. A one-proportion z-test gives a p-value of 0.024. Using a significance level of α = 0.05, should you reject the null hypothesis?
Step 1: State the significance level chosen before the test: α = 0.05.
Step 2: Identify the p-value from the test: p = 0.024.
Step 3: Compare the p-value to α: 0.024 ≤ 0.05.
Step 4: Since the p-value is less than or equal to α, reject H₀. There is statistically significant evidence that the true proportion of customers who prefer the new product differs from 50%.
Answer: At the α = 0.05 significance level, we reject the null hypothesis because the p-value of 0.024 is below the threshold.
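The worked example can be reproduced numerically; a sketch in Python using only the standard library, with the survey numbers taken from the problem statement (the two-sided p-value uses the identity 2·(1 − Φ(|z|)) = erfc(|z|/√2)):

```python
from math import sqrt, erfc

p0, p_hat, n = 0.50, 0.58, 200  # claimed proportion, observed proportion, sample size
alpha = 0.05                    # significance level fixed before the test

# One-proportion z statistic under H0: true proportion equals p0
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# Two-sided p-value from the standard normal distribution
p_value = erfc(abs(z) / sqrt(2))

print(f"z = {z:.3f}, p = {p_value:.3f}")  # z = 2.263, p = 0.024
print("reject H0" if p_value <= alpha else "fail to reject H0")
```

This recovers the p-value of 0.024 quoted in the problem and confirms the decision to reject H₀ at α = 0.05.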
Why It Matters
Every hypothesis test in statistics requires a significance level, so understanding α is essential for interpreting research results. In medicine, a stricter α like 0.01 might be used because a false conclusion could harm patients, while in exploratory social science research, 0.10 might be acceptable. Choosing α is a deliberate decision about how much risk of a wrong rejection you're willing to tolerate.
Common Mistakes
Mistake: Choosing or changing α after seeing the p-value to get the result you want.
Correction: The significance level must be set before conducting the test. Adjusting it afterward undermines the integrity of the test and is considered bad statistical practice.
Mistake: Interpreting α = 0.05 as a 5% chance that the null hypothesis is true.
Correction: Alpha is the probability of rejecting a true null hypothesis (Type I error rate), not the probability that the null hypothesis itself is true or false. These are different concepts.
