Markov Chain Steady State Calculator
Find the steady-state (stationary) distribution of a 2×2 or 3×3 Markov chain transition matrix.
Calculate the long-run probability of being in each state.
What Is a Markov Chain? A Markov chain is a stochastic process in which the next state depends only on the current state, not on the history before it. It is named after Andrei Markov, the Russian mathematician who introduced it in 1906. A chain is described by a transition matrix P, where P[i][j] is the probability of moving from state i to state j. Each row of P must sum to 1 (a row-stochastic matrix).
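As a minimal sketch of the row-sum rule above, the check below represents a transition matrix as nested lists and verifies that it is row-stochastic (the matrix values are illustrative, not from the text):

```python
def is_stochastic(P, tol=1e-9):
    """Return True if every entry of P is non-negative and every row sums to 1."""
    return all(
        abs(sum(row) - 1.0) < tol and all(x >= 0 for x in row)
        for row in P
    )

# Illustrative 2-state chain: rows are "from" states, columns are "to" states.
P = [[0.9, 0.1],
     [0.5, 0.5]]
```

A matrix failing the check (a row summing to more than 1) simply cannot be a valid transition matrix.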
Steady-State Distribution The steady-state (stationary) distribution π satisfies πP = π, meaning π is unchanged after one step of the chain, together with the normalization Σπ_i = 1. It represents the long-run proportion of time the chain spends in each state.
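The defining property πP = π can be checked directly by multiplying a candidate π by P, one step at a time. A small sketch (the matrix and its stationary vector are illustrative):

```python
def step(pi, P):
    """One step of the chain: row vector pi times transition matrix P."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = [5/6, 1/6]        # stationary for this P: step(pi, P) returns pi again

# Iterating from any start converges toward the same distribution.
dist = [1.0, 0.0]
for _ in range(50):
    dist = step(dist, P)
```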
For a 2-State Chain Given P = [[p, 1-p], [q, 1-q]], the steady state is π₁ = q / (q + (1-p)) and π₂ = (1-p) / (q + (1-p)). Here p is the probability of staying in state 1 and q is the probability of moving from state 2 to state 1.
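The closed form above translates directly into a one-line function (the sample values p = 0.9, q = 0.5 are illustrative):

```python
def steady_state_2(p, q):
    """Steady state of P = [[p, 1-p], [q, 1-q]]:
    pi1 = q / (q + (1-p)), pi2 = (1-p) / (q + (1-p))."""
    denom = q + (1 - p)
    return (q / denom, (1 - p) / denom)

# Example: p = 0.9, q = 0.5 gives pi = (5/6, 1/6).
pi1, pi2 = steady_state_2(0.9, 0.5)
```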
Solving for 3-State Chains Set up the system πP = π with the constraint Σπ_i = 1. The balance equations πP = π are linearly dependent (since each row of P sums to 1, one equation is redundant), so replace one of them with Σπ_i = 1 and solve the resulting linear system.
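The replace-one-equation recipe can be sketched with a linear solver; the 3-state matrix below is illustrative, not from the text:

```python
import numpy as np

def steady_state(P):
    """Solve pi P = pi with sum(pi) = 1 by replacing one redundant
    balance equation with the normalization constraint."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    A = P.T - np.eye(n)    # rows encode the balance equations (pi P - pi)_j = 0
    A[-1, :] = 1.0         # replace the last (redundant) equation with sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Illustrative 3-state chain (e.g. sunny/cloudy/rainy).
P3 = [[0.6, 0.3, 0.1],
      [0.3, 0.4, 0.3],
      [0.2, 0.3, 0.5]]
```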
Convergence Any ergodic (irreducible and aperiodic) chain converges to its unique stationary distribution. Convergence speed is governed by the second-largest eigenvalue modulus |λ₂| of P: the larger the spectral gap 1 − |λ₂|, the faster the convergence. After many steps the starting state no longer matters: every starting distribution reaches the same steady state.
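The eigenvalue claim can be inspected numerically: a stochastic matrix always has eigenvalue 1, and the next-largest modulus controls the decay rate. A sketch (the example matrix is illustrative):

```python
import numpy as np

def second_eigenvalue_modulus(P):
    """Return |lambda_2|, the second-largest eigenvalue modulus of P.
    The error after t steps shrinks roughly like |lambda_2| ** t."""
    moduli = np.sort(np.abs(np.linalg.eigvals(np.asarray(P, dtype=float))))[::-1]
    return moduli[1]   # moduli[0] is always 1 for a stochastic matrix

# For [[p, 1-p], [q, 1-q]] the eigenvalues are 1 and p - q,
# so this chain has |lambda_2| = 0.9 - 0.5 = 0.4.
gap = second_eigenvalue_modulus([[0.9, 0.1], [0.5, 0.5]])
```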
Real-World Examples Web search ranking (Google's PageRank models a random surfer as a Markov chain). Customer churn: loyal/at-risk/churned states. Weather: sunny/cloudy/rainy transitions. Gene regulatory networks, queueing theory, inventory models.