What Is a Parallel Computer?

A parallel computer is a collection of processing elements that cooperate to solve problems quickly


Speedup

Introduction

One major motivation for using parallel processing is achieving speedup. For a given problem,

    speedup (P processors) = time (1 processor) / time (P processors)

and the ideal speedup with P processors is P.

Amdahl’s Law

However, we can't achieve ideal speedup in practice, because only part of a program usually benefits. Amdahl's law gives the maximum overall speedup:

    speedup ≤ 1 / ((1 − f) + f / s)

where

  • f: fraction of execution time taken by the portion to be sped up
  • s: speedup factor of that portion
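Amdahl's law is easy to evaluate numerically. A minimal sketch (the function name is mine):

```python
def amdahl_speedup(f, s):
    """Maximum overall speedup when a fraction f of the execution
    time is sped up by a factor of s (Amdahl's law)."""
    return 1.0 / ((1.0 - f) + f / s)

# Speeding up 90% of the program by 10x gives only ~5.26x overall:
# the remaining 10% dominates.
print(amdahl_speedup(0.9, 10))
```

Note how quickly the un-optimized fraction becomes the bottleneck: even an infinite speedup of 90% of the program caps the overall speedup at 10x.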

Observations

Communication

  • Communication limits the maximum achievable speedup
  • Minimizing the cost of communication improves speedup
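A toy model (my own illustration, not from the notes) shows why: add a fixed per-run communication cost on top of the evenly divided work, and speedup falls below the ideal P.

```python
def speedup_with_comm(t1, p, comm):
    """Speedup on p processors when the parallel run pays a fixed
    communication cost 'comm' on top of the divided work t1 / p."""
    return t1 / (t1 / p + comm)

# With no communication cost, speedup is ideal (equal to p).
print(speedup_with_comm(100.0, 10, 0.0))
# A communication cost of 5 time units caps speedup well below 10.
print(speedup_with_comm(100.0, 10, 5.0))
```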

Work Assignment

  • Imbalance in work assignment limits speedup
  • Improving the distribution of work improves speedup
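The effect of imbalance can be seen with a simple model (my own illustration): the parallel run finishes only when the most heavily loaded processor does.

```python
def speedup_with_assignment(work_per_processor):
    """Speedup relative to one processor doing all the work,
    assuming the parallel run is limited by the busiest processor."""
    total = sum(work_per_processor)
    return total / max(work_per_processor)

# Balanced assignment of 40 units across 4 processors: ideal 4x.
print(speedup_with_assignment([10, 10, 10, 10]))
# Imbalanced assignment: the 25-unit processor limits speedup to 1.6x.
print(speedup_with_assignment([25, 5, 5, 5]))
```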

Dynamic Allocation

What is Dynamic Allocation?

Instead of statically assigning each job to a fixed processor, the system decides at run time which job goes to which processor, based on each processor's current workload
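One common way to implement this is a shared work queue: idle workers pull the next job whenever they finish, so faster (or less loaded) workers automatically take on more jobs. A sketch using Python threads (the function name and structure are mine):

```python
import threading
import queue

def run_dynamic(jobs, num_workers):
    """Dynamic allocation sketch: jobs sit in a shared queue, and
    each worker pulls the next job whenever it becomes idle."""
    q = queue.Queue()
    for job in jobs:
        q.put(job)

    results = []
    results_lock = threading.Lock()

    def worker():
        while True:
            try:
                job = q.get_nowait()   # grab the next available job
            except queue.Empty:
                return                 # no work left: this worker exits
            r = job()
            with results_lock:         # protect the shared result list
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Ten small jobs dynamically shared among 3 workers.
out = run_dynamic([lambda i=i: i * i for i in range(10)], 3)
print(sorted(out))
```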

Challenges

Overhead of dynamic decision-making

The time the system spends deciding which job to assign to which processor is itself overhead that takes away from useful work

Potential for Resource Contention

Two processors might contend for the same shared resource

For example, two processors might both try to update the same array; without correct synchronization, their updates can interleave and corrupt the data
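The standard fix is to guard the shared data with a lock. A minimal sketch with Python threads (a shared counter stands in for the shared array):

```python
import threading

counter = 0
counter_lock = threading.Lock()

def increment_many(n):
    """Each increment is a read-modify-write; the lock prevents two
    threads from interleaving it and losing updates."""
    global counter
    for _ in range(n):
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=increment_many, args=(10000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock, all 4 * 10000 updates survive; without it,
# concurrent threads could overwrite each other's increments.
print(counter)
```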


Instruction Level Parallelism (ILP)

Introduction

When two instructions are independent, a single processor can execute them in parallel using multiple execution units
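A rough source-level illustration (real ILP is found by the hardware at the instruction level, not in Python): the first pair of statements below is independent and could be issued in the same cycle, while the second pair forms a dependency chain and cannot.

```python
a, b = 3, 4

# Independent: neither statement reads the other's result,
# so a superscalar processor could execute both simultaneously.
x = a + 1
y = b * 2

# Dependent: the second statement needs z, so it must wait
# for the first to finish.
z = a + b
w = z * 2

print(x, y, w)
```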

Superscalar Execution

Superscalar execution is the mechanism by which a processor dynamically finds independent instructions in the instruction stream and executes them in parallel

Problem

As we add more execution units to a processor, speedup does not grow as much as we would expect, because most programs don't contain enough independent instructions to keep all the units busy


Why Parallelism?

Because the performance of a single processor can no longer be improved at the old rate, designers have turned to building one CPU with multiple processing cores