P-Value Formula and Interpretation
Understanding how to calculate and interpret p-values for statistical significance in hypothesis testing.
The Concept
The p-value is the probability of getting results at least as extreme as the observed results, assuming the null hypothesis is true.
For a Z-Test
One-tailed (left): p = P(Z < z)
One-tailed (right): p = P(Z > z)
Two-tailed: p = 2 × P(Z > |z|)
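The three formulas above can be sketched with the standard library alone. This is a minimal illustration (the function names `p_left`, `p_right`, and `p_two` are my own), using the identity P(Z > z) = ½·erfc(z/√2) for the standard normal tail:

```python
import math

def p_left(z):
    """One-tailed (left) p-value: P(Z < z)."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

def p_right(z):
    """One-tailed (right) p-value: P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def p_two(z):
    """Two-tailed p-value: 2 * P(Z > |z|)."""
    return 2 * p_right(abs(z))
```

For example, `p_right(1.96)` is about 0.025 and `p_two(1.96)` is about 0.05, matching the familiar 5% two-tailed critical value.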
Decision Rules
| P-Value | Common Interpretation | Decision (at α = 0.05) |
|---|---|---|
| p < 0.001 | Very strong evidence against H₀ | Reject H₀ |
| p < 0.01 | Strong evidence against H₀ | Reject H₀ |
| p < 0.05 | Moderate evidence against H₀ | Reject H₀ |
| 0.05 ≤ p < 0.10 | Weak evidence against H₀ | Fail to reject H₀ |
| p ≥ 0.10 | Little to no evidence against H₀ | Fail to reject H₀ |
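The decision rules in the table reduce to a simple threshold check. A sketch (the helper names `evidence_label` and `decide` are hypothetical, not from any library):

```python
def evidence_label(p):
    """Map a p-value to the conventional evidence description from the table."""
    if p < 0.001:
        return "very strong evidence against H0"
    if p < 0.01:
        return "strong evidence against H0"
    if p < 0.05:
        return "moderate evidence against H0"
    if p < 0.10:
        return "weak evidence against H0"
    return "little to no evidence against H0"

def decide(p, alpha=0.05):
    """The formal decision only compares p to the chosen significance level."""
    return "reject H0" if p < alpha else "fail to reject H0"
```

Note that only `decide` is the formal hypothesis-test rule; the evidence labels are conventions, and the significance level α should be chosen before seeing the data.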
Common Misconceptions
- A p-value is NOT the probability that the null hypothesis is true
- A p-value is NOT the probability your result is due to chance
- p < 0.05 does not mean the result is practically important
- p > 0.05 does not mean there is no effect — it means you lack evidence
- A very small p-value with a tiny effect size may not be meaningful
Example
A z-test gives z = 2.15. What is the two-tailed p-value?
P(Z > 2.15) = 0.0158 (from z-table or calculator)
Two-tailed p = 2 × 0.0158 = 0.0316
Since 0.0316 < 0.05, this result is statistically significant at the 5% level.
We reject the null hypothesis.
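The worked example can be reproduced numerically. A short check using the erfc tail identity for the standard normal distribution:

```python
import math

z = 2.15
p_one = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > 2.15), approx. 0.0158
p_two = 2 * p_one                          # two-tailed p, approx. 0.0316

# p_two < 0.05, so the result is significant at the 5% level
print(round(p_one, 4), round(p_two, 4), p_two < 0.05)
```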