Amdahl's Law: Understanding Limits of Parallel Computing

Learn about Amdahl's Law and how it defines the theoretical speedup of a program when only part of it can be parallelized. Explore the formula, its implications for parallel processing, and its significance in computer architecture.



Amdahl's Law and Its Proof in Computer Organisation

Amdahl's Law is a principle in computer architecture that predicts the potential speedup of a program when a certain portion of it is parallelized. Proposed by Gene Amdahl in 1967, it states that the speedup of a program is limited by the fraction that cannot be parallelized.

What is Amdahl's Law?

Amdahl's Law defines the maximum speedup achievable when a part of a program is parallelized. Mathematically, it can be expressed as:

Speedup = 1 / (f + (1 - f) / N)

Where:

  • f is the fraction of the program that is sequential.
  • N is the number of processors or computing resources available.

For example, if 20% of a program is sequential and cannot be parallelized, the maximum theoretical speedup would be 5 times:

Speedup = 1 / (0.2 + (0.8 / N))

As N grows, the term 0.8 / N shrinks toward zero, so the speedup approaches 1 / 0.2 = 5. No matter how many processors are added, the non-parallelizable 20% caps the speedup at 5x.
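
The arithmetic is easy to reproduce. Below is a minimal Python sketch (the helper name amdahl_speedup is our own, not from any library) that evaluates the formula for this 20%-sequential example:

    # Theoretical speedup for a sequential fraction f on n processors (Amdahl's Law)
    def amdahl_speedup(f, n):
        return 1.0 / (f + (1.0 - f) / n)

    # 20% of the program is sequential (f = 0.2)
    for n in (1, 2, 4, 16, 1024):
        print(n, round(amdahl_speedup(0.2, n), 2))
    # Prints 1.0, 1.67, 2.5, 4.0, 4.98, approaching the 1 / 0.2 = 5x ceiling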

Amdahl's Law in the Case of Parallelization

Amdahl's Law demonstrates how the parallelizable portion of the program and the number of processors affect the speedup. The non-parallelizable portion bounds the speedup regardless of how many processors are used, and as the processor count grows, this sequential portion accounts for an ever larger share of the total execution time.

For instance, if 80% of a program is parallelizable, the maximum theoretical speedup would be:

Speedup = 1 / ((1 - 0.8) + (0.8 / N))

This formula shows that increasing the number of processors yields diminishing returns: the sequential 20% of the program limits the overall speedup to at most 1 / 0.2 = 5x.
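
To see the diminishing returns concretely, the short Python sketch below (a standalone illustration, not tied to any particular program) tabulates the speedup and the marginal gain from each doubling of the processor count:

    # Diminishing returns for an 80% parallelizable program (sequential fraction f = 0.2)
    f = 0.2
    previous = 1.0  # speedup with a single processor
    for n in (2, 4, 8, 16, 32, 64):
        speedup = 1.0 / (f + (1.0 - f) / n)
        print(f"N={n:3d}  speedup={speedup:.2f}  gain over previous row={speedup - previous:.2f}")
        previous = speedup
    # The gains fade as N grows; the speedup approaches but never exceeds 5x.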

Advantages of Amdahl's Law

  • Performance Estimation: Helps evaluate the impact of parallelization on overall performance.
  • Optimization Focus: Directs attention towards optimizing sequential code for better overall performance.
  • Resource Allocation Guidance: Assists in determining the optimal allocation of computing resources in parallel systems.
  • Decision Making: Provides insights into potential return on investment regarding speedup and performance, aiding in decisions about resource allocation and parallelism.

Disadvantages of Amdahl's Law

  • Simplified Assumptions: Assumes the problem can be divided into parallelizable and sequential portions, which may not hold true in real-world applications.
  • Neglects Other Factors: Ignores factors like memory access patterns, load imbalance, synchronization costs, and communication overhead, which can affect performance.
  • Inaccurate f and N Values: The estimates of the parallelizable fraction and the number of processors (N) may be imprecise in practice, leading to inaccurate speedup predictions.
  • Oversimplifies Program Behavior: Does not account for dynamic changes in program behavior due to input data, runtime conditions, and other factors.

Difference Between Amdahl's Law and Gustafson's Law

Amdahl's Law and Gustafson's Law describe the scalability of parallel computing systems, but they focus on different aspects:

  • Amdahl's Law: Assumes a fixed problem size and gives the speedup from parallelizing a portion of the program; the sequential fraction caps the achievable speedup.
  • Gustafson's Law: Assumes the problem size grows with the number of processors and gives the scaled speedup S = (1 - f) + f * N, where f here denotes the fraction of the work that can be parallelized. A short sketch comparing the two predictions follows this list.
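
As a rough illustration of how differently the two laws behave, the Python sketch below assumes an 80% parallel workload in both models and prints each law's prediction for the same processor counts:

    # Amdahl: fixed problem size, sequential fraction 0.2
    # Gustafson: problem size grows with N, parallel fraction 0.8
    f_seq, f_par = 0.2, 0.8
    for n in (2, 8, 32, 128):
        amdahl = 1.0 / (f_seq + (1.0 - f_seq) / n)
        gustafson = (1.0 - f_par) + f_par * n
        print(f"N={n:4d}  Amdahl={amdahl:5.2f}x  Gustafson={gustafson:7.2f}x")
    # Amdahl's prediction flattens near 5x, while Gustafson's grows almost linearly,
    # because the parallel share of the (ever larger) problem scales with N.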

Derivation of Amdahl's Law

To derive Amdahl's Law, let’s consider a program with a fraction “f” of sequential code and a fraction “1 - f” of code that can be executed in parallel.

Let:

  • Ts be the time for the sequential portion of the program.
  • Tp be the time for the parallelizable portion.
  • N be the number of processors or threads.

The total execution time for the program is:

T = Ts + Tp

After parallelizing the code, the sequential portion remains unchanged, while the parallel portion is divided equally among N processors:

Tp' = Tp / N

The new execution time of the parallelized program becomes:

T' = Ts + Tp' = Ts + Tp / N

The speedup (S) achieved is the ratio of the original execution time (T) to the parallelized execution time (T'):

S = T / T' = (Ts + Tp) / (Ts + Tp / N)

Now divide both the numerator and the denominator by the total original time T = Ts + Tp:

S = 1 / (Ts / (Ts + Tp) + (Tp / (Ts + Tp)) / N)

By definition, the sequential fraction of the program is f = Ts / (Ts + Tp), so the parallelizable fraction is 1 - f = Tp / (Ts + Tp). Substituting these fractions gives Amdahl's Law:

S = 1 / (f + (1 - f) / N)

This formula expresses the maximum speedup that can be achieved by parallelizing a program with a fraction f of sequential code and a fraction 1 - f of parallelizable code, executed on N processors.
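
As a quick sanity check on the derivation, the Python sketch below uses made-up timings (2 s sequential, 8 s parallelizable) and confirms that the direct ratio of execution times matches the closed-form expression:

    # Hypothetical timings: 2 s sequential, 8 s parallelizable, 4 processors
    t_s, t_p, n = 2.0, 8.0, 4
    t_original = t_s + t_p        # 10 s without parallelization
    t_parallel = t_s + t_p / n    # 2 s + 2 s = 4 s with 4 processors
    speedup_direct = t_original / t_parallel

    f = t_s / (t_s + t_p)         # sequential fraction = 0.2
    speedup_formula = 1.0 / (f + (1.0 - f) / n)

    print(speedup_direct, speedup_formula)  # both print 2.5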