Amdahl's Law Speedup Calculator
Calculate parallel speedup using Amdahl's and Gustafson's laws.
Enter the parallel fraction and processor count to find theoretical maximum speedup.
Amdahl’s Law is the fundamental constraint on parallel speedup, and it is often more pessimistic than engineers expect.
If a fraction p of your program can be parallelized and (1-p) must run serially, the maximum speedup with n processors is:
S = 1 / ((1 - p) + p/n)
As n approaches infinity, speedup approaches 1/(1-p). A program that is 90% parallel caps out at 10x speedup no matter how many cores you throw at it. The 10% serial portion is the ceiling.
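The formula and its limit can be sketched directly (a minimal example; the function name is ours, not part of the calculator):

```python
def amdahl_speedup(p, n):
    """Maximum speedup under Amdahl's Law: parallel fraction p, n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# A 90%-parallel program approaches its 1/(1-p) = 10x ceiling as n grows:
for n in (2, 16, 1024):
    print(f"n = {n:4d}: {amdahl_speedup(0.9, n):.2f}x")
# n =    2: 1.82x
# n =   16: 6.40x
# n = 1024: 9.91x
```

Note how going from 16 to 1024 processors (64x more hardware) buys barely a 1.5x improvement: the serial 10% dominates long before the core count runs out.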
Gustafson’s Law offers a more optimistic view. It argues that as you add processors, you scale up the problem size rather than fixing it. Under this model:
S = n - (1 - p)(n - 1)
Gustafson’s Law applies when you are doing more work in the same time, not the same work faster. High-performance computing workloads (climate simulations, molecular dynamics) often fit this model better than Amdahl’s.
Practical takeaways. Finding and eliminating serial bottlenecks matters far more than adding cores. Profiling to reduce the serial fraction from 10% to 5% doubles your theoretical ceiling. Lock contention, I/O waits, and single-threaded initialization are common hidden serial fractions that don’t show up in the main loop.
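The doubling claim is easy to verify: the Amdahl ceiling is 1/(1-p), so halving the serial fraction halves the denominator.

```python
def speedup_ceiling(serial_fraction):
    """Amdahl's asymptotic ceiling as n -> infinity: 1 / serial_fraction."""
    return 1.0 / serial_fraction

# Profiling the serial fraction down from 10% to 5% doubles the ceiling:
print(speedup_ceiling(0.10))  # ~10x
print(speedup_ceiling(0.05))  # ~20x
```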
Efficiency. Parallel efficiency = speedup / n. At 100% efficiency, n cores give n× speedup. In practice, communication overhead, cache effects, and scheduling reduce this. The chart shows both laws side by side for your inputs.