Bayes' Theorem Calculator
Calculate conditional probability using Bayes' theorem.
Find posterior probability from prior, likelihood, and evidence.
Includes medical test sensitivity/specificity example.
Bayes’ Theorem
P(A|B) = P(B|A) × P(A) / P(B)
Or equivalently: Posterior = (Likelihood × Prior) / Evidence
Named after Thomas Bayes (England, c. 1763), refined by Pierre-Simon Laplace.
Terms Explained
P(A) = Prior probability: probability of A before observing B.
P(B|A) = Likelihood: probability of observing B given that A is true.
P(A|B) = Posterior: probability of A being true after observing B.
P(B) = Evidence (marginal probability): P(B) = P(B|A)×P(A) + P(B|¬A)×P(¬A)
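The definitions above translate directly into a small function. This is a minimal sketch; the function and parameter names are my own, mapping one-to-one onto the terms P(A), P(B|A), and P(B|¬A).

```python
def posterior(prior, likelihood, likelihood_given_not):
    """Bayes' theorem: return P(A|B).

    prior                -> P(A)
    likelihood           -> P(B|A)
    likelihood_given_not -> P(B|not A)
    """
    # Evidence via the law of total probability:
    # P(B) = P(B|A)*P(A) + P(B|not A)*P(not A)
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence
```

Passing the medical-test numbers below (0.01, 0.95, 0.10) reproduces the positive-predictive-value result.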
Medical Test Example
A test for a disease with:
Prevalence (prior): P(disease) = 1% → P(A) = 0.01
Sensitivity (true positive rate): P(positive | disease) = 95% → P(B|A) = 0.95
Specificity (true negative rate): P(negative | no disease) = 90%, so the false positive rate is P(B|¬A) = 1 − 0.90 = 0.10
Result: P(disease | positive test) = (0.95 × 0.01) / ((0.95 × 0.01) + (0.10 × 0.99)) ≈ 8.8%
This counterintuitive result shows how low prevalence reduces positive predictive value: even with a positive result from an accurate test, the disease is still unlikely.
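The arithmetic of the example can be checked in a few lines. A minimal sketch; the variable names are my own labels for the quantities above.

```python
# Medical test example: plug the numbers into Bayes' theorem.
prevalence = 0.01      # P(disease), the prior
sensitivity = 0.95     # P(positive | disease), the likelihood
false_positive = 0.10  # 1 - specificity = P(positive | no disease)

# Evidence: total probability of a positive test.
evidence = sensitivity * prevalence + false_positive * (1 - prevalence)

# Posterior: positive predictive value.
ppv = sensitivity * prevalence / evidence
print(f"P(disease | positive) = {ppv:.1%}")  # prints "P(disease | positive) = 8.8%"
```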
Bayesian Updating
Each new piece of evidence updates the probability: the posterior from one calculation becomes the prior for the next. This iterative process is Bayesian inference — fundamental to machine learning, spam filters, medical diagnosis, and scientific reasoning.
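The updating loop can be sketched with the medical test: feed the posterior back in as the prior for a second, independent positive test. This is an illustration under my own naming, assuming the two test results are conditionally independent.

```python
def update(prior, likelihood, likelihood_given_not):
    """One Bayesian update step; the result is the next prior."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

p = 0.01  # initial prior: 1% prevalence
for _ in range(2):  # two independent positive test results
    p = update(p, 0.95, 0.10)
print(f"after two positives: {p:.1%}")  # ~47.7%
```

A single positive test only raises the probability to about 8.8%, but a second positive pushes it near 48% — each update compounds the evidence.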
Bayes Factor
BF = P(B|H₁) / P(B|H₀)
The ratio of likelihoods of the evidence under two hypotheses.
BF > 3: moderate evidence for H₁. BF > 10: strong evidence.
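Applied to the medical test, the Bayes factor compares how well "disease" (H₁) and "no disease" (H₀) predict a positive result. A minimal sketch with my own function name:

```python
def bayes_factor(likelihood_h1, likelihood_h0):
    """BF = P(B|H1) / P(B|H0): likelihood ratio for two hypotheses."""
    return likelihood_h1 / likelihood_h0

# Positive test: P(positive|disease)=0.95 vs P(positive|no disease)=0.10
bf = bayes_factor(0.95, 0.10)
print(bf)  # 9.5 -- above 3 (moderate), just under 10 (strong)
```

Note that the Bayes factor ignores the prior: a BF of 9.5 means the positive test is 9.5 times more probable under "disease", yet the posterior is still only about 8.8% because the prior is so low.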