Effect Size (Cohen's d)
Cohen's d measures the practical significance of a difference between two group means.
Learn to calculate and interpret effect sizes.
The Formula
Cohen's d measures how large the difference between two group means is, relative to the variability in the data. While a p-value tells you whether a difference is statistically significant, effect size tells you whether it is practically meaningful.
A treatment might produce a statistically significant result with a huge sample, but if the effect size is tiny, the real-world impact may be negligible. Cohen's d puts the difference on a standardized scale, making it easy to compare across different studies and measurements.

d = (x̄₁ − x̄₂) / spooled
Pooled Standard Deviation

spooled = √[((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂ − 2)]
Variables
| Symbol | Meaning |
|---|---|
| d | Cohen's d effect size (dimensionless) |
| x̄₁, x̄₂ | Means of groups 1 and 2 |
| spooled | Pooled standard deviation of both groups |
| s₁, s₂ | Standard deviations of groups 1 and 2 |
| n₁, n₂ | Sample sizes of groups 1 and 2 |
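The definitions above translate directly into a short Python helper. This is an illustrative sketch, not a library function; the name `cohens_d` and its parameter order are my own choices.

```python
import math

def cohens_d(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> float:
    """Cohen's d from summary statistics of two independent groups."""
    # Pooled SD: square root of the df-weighted average of the two
    # variances, using n - 1 degrees of freedom per group.
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                         / (n1 + n2 - 2))
    # Standardized mean difference.
    return (mean1 - mean2) / s_pooled
```

For instance, `cohens_d(85, 12, 30, 78, 10, 30)` returns about 0.63.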
Interpreting Cohen's d
- Small effect: d ≈ 0.2
- Medium effect: d ≈ 0.5
- Large effect: d ≈ 0.8 or greater
These benchmarks were proposed by Jacob Cohen in 1988 and are widely used in social science research.
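These benchmarks can be expressed as a small lookup. Note that the exact boundary handling (e.g. treating 0.5 ≤ |d| < 0.8 as "medium") is an assumption on my part: Cohen gave approximate anchors, not strict cutoffs.

```python
def interpret_d(d: float) -> str:
    """Label an effect size using Cohen's (1988) benchmarks.

    Boundary placement is a common convention, not part of
    Cohen's original definition.
    """
    magnitude = abs(d)  # the sign only indicates direction
    if magnitude < 0.2:
        return "negligible"
    if magnitude < 0.5:
        return "small"
    if magnitude < 0.8:
        return "medium"
    return "large"
```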
Example 1
A study compares exam scores between two groups. Group A (n = 30): mean = 78, SD = 10. Group B (n = 30): mean = 85, SD = 12. What is Cohen's d?
Calculate pooled SD: spooled = √[((30−1)(10²) + (30−1)(12²)) / (30 + 30 − 2)]
spooled = √[(29 × 100 + 29 × 144) / 58] = √[(2900 + 4176) / 58]
spooled = √[7076 / 58] = √122.0 ≈ 11.05
d = (85 − 78) / 11.05 = 7 / 11.05
d ≈ 0.63 (a medium-to-large effect — the difference is practically meaningful)
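The arithmetic above can be checked in a few lines of Python:

```python
import math

n1 = n2 = 30
mean_a, sd_a = 78.0, 10.0   # Group A
mean_b, sd_b = 85.0, 12.0   # Group B

# Pooled SD: sqrt of the df-weighted average variance.
s_pooled = math.sqrt(((n1 - 1) * sd_a**2 + (n2 - 1) * sd_b**2)
                     / (n1 + n2 - 2))
d = (mean_b - mean_a) / s_pooled

print(round(s_pooled, 2), round(d, 2))  # 11.05 0.63
```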
Example 2
A new drug reduces blood pressure by an average of 2 mmHg compared to a placebo. The pooled standard deviation is 15 mmHg. Is this a meaningful effect?
d = (x̄₁ − x̄₂) / spooled = 2 / 15
d ≈ 0.13 (a very small effect — even if statistically significant with a large sample, the practical benefit is minimal)
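The gap between statistical and practical significance can be illustrated numerically. The sketch below assumes a hypothetical 5,000 patients per arm and approximates the two-sample t-test p-value with a normal distribution (reasonable at such a large df): the 2 mmHg difference becomes highly significant, yet d stays tiny.

```python
import math
from statistics import NormalDist

n = 5000                      # hypothetical patients per arm
diff, s_pooled = 2.0, 15.0    # mean difference and pooled SD (mmHg)

# Effect size does not depend on sample size.
d = diff / s_pooled

# Two-sample t statistic; normal approximation for the p-value
# (df = 9998, so t and normal distributions nearly coincide).
se = s_pooled * math.sqrt(2 / n)
t = diff / se
p = 2 * (1 - NormalDist().cdf(t))

print(round(d, 2))   # 0.13
print(p < 0.001)     # True -- significant, but practically negligible
```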
When to Use It
Use Cohen's d whenever you want to quantify the practical significance of a difference.
- Reporting research results alongside p-values (most journals now require effect sizes)
- Conducting power analysis to determine needed sample sizes
- Comparing results across studies in meta-analyses
- Deciding whether a treatment difference is large enough to be worth implementing
- Evaluating educational interventions and training programs