Amdahl's Law

Calculate the maximum speedup of a program running on parallel processors using the Amdahl's Law formula.

The Formula

S = 1 / ((1 − P) + P / N)

Amdahl's Law, formulated by computer architect Gene Amdahl in 1967, defines the theoretical maximum speedup achievable when improving part of a system. It is one of the most important formulas in parallel computing because it reveals a fundamental limitation: no matter how many processors you add, the serial (non-parallelizable) portion of a program sets an upper bound on overall performance.

The core insight is surprisingly simple. If 90% of a program can run in parallel and 10% must run sequentially, then even with an infinite number of processors, the maximum speedup is only 10×. As the parallel portion's runtime shrinks toward zero, that sequential 10% becomes the bottleneck that dominates total execution time.
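The formula above can be checked directly with a short sketch (the function name here is illustrative):

```python
def amdahl_speedup(p, n):
    """Speedup S = 1 / ((1 - p) + p / n) for parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# 90% parallelizable: even 1000 processors fall just short of the 10x ceiling,
# since the limit as n grows without bound is 1 / (1 - p) = 10
print(round(amdahl_speedup(0.90, 1000), 2))  # ≈ 9.91
```

Even a thousand processors recover barely more than a 9.9× speedup, illustrating why the serial fraction, not the processor count, sets the ceiling.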

This formula has profound implications for hardware design, software engineering, and system architecture. It tells us that simply throwing more cores or processors at a problem yields diminishing returns. To achieve meaningful speedups, engineers must reduce the serial fraction of their code. This is why modern software optimization focuses heavily on minimizing synchronization points, reducing shared-state dependencies, and restructuring algorithms to maximize the parallelizable fraction.

Amdahl's Law applies beyond computing as well. Any process with a sequential bottleneck follows the same principle. A factory assembly line, a project with dependent tasks, or a network pipeline all exhibit Amdahl's Law behavior. The formula helps decision-makers understand where to invest resources for the greatest return.

In practice, the law is sometimes considered pessimistic because it assumes a fixed problem size. Gustafson's Law (1988) offers a complementary perspective by considering that larger systems often tackle larger problems, which can increase the parallelizable fraction proportionally.

Variables

Symbol   Meaning
S        Overall speedup of the program
P        Fraction of the program that can be parallelized (0 to 1)
N        Number of processors (or parallel execution units)
1 − P    Fraction of the program that must remain serial

Example 1

A program is 75% parallelizable. What is the speedup with 4 processors?

Identify values: P = 0.75, N = 4

S = 1 / ((1 − 0.75) + 0.75 / 4)

S = 1 / (0.25 + 0.1875)

S = 1 / 0.4375

S ≈ 2.29× speedup
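The arithmetic above can be reproduced in a few lines:

```python
p, n = 0.75, 4
s = 1 / ((1 - p) + p / n)  # 1 / (0.25 + 0.1875) = 1 / 0.4375
print(round(s, 2))  # 2.29
```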

Example 2

A program is 95% parallelizable. What is the maximum possible speedup (infinite processors)?

With N → ∞, the term P/N approaches 0

S = 1 / ((1 − 0.95) + 0)

S = 1 / 0.05

S = 20× maximum speedup — the 5% serial portion limits everything
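The limiting behavior is easy to see numerically: as N grows, the speedup creeps toward, but never reaches, 1 / (1 − P) = 20:

```python
p = 0.95
for n in (10, 100, 1000, 10**6):
    s = 1 / ((1 - p) + p / n)
    print(n, round(s, 2))
# Speedup approaches 1 / (1 - p) = 20 but never reaches it
```

Note how most of the gain arrives early: going from 1,000 to 1,000,000 processors buys almost nothing, which is the diminishing-returns effect described above.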

When to Use It

Amdahl's Law is essential whenever you need to evaluate the potential benefit of parallelization.

  • Deciding how many CPU cores or GPU threads to allocate to a workload
  • Estimating whether rewriting code for parallel execution is worth the effort
  • Identifying performance bottlenecks in multi-threaded applications
  • Evaluating hardware purchases for high-performance computing clusters
  • Setting realistic expectations for speedup in distributed systems