Parallel Architectures Introduction

Parallel architectures are a subclass of distributed computing in which all of the processes work to solve the same problem.

There are different kinds of parallelism at various levels of computing. For example, even though you might write a program as a sequence of instructions, the compiler or the CPU may transform it at compile time or run time so that some operations execute in parallel or in a different order. This is called implicit parallelism, because it happens without the programmer's involvement.
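As a small illustration (the variable names here are invented for the example), the two assignments below have no data dependence on each other, so a compiler or an out-of-order CPU is free to overlap or reorder them without changing the result:

    x, y, u, v = 1, 2, 3, 4

    # Independent: neither assignment reads what the other writes,
    # so these two operations may run in parallel or in either order.
    a = x + y
    b = u * v

    # Dependent: this line must wait until both a and b are available.
    c = a + b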

Explicit parallelism, the kind we care about in this lesson, occurs when the programmer is aware of the parallelism and designs the processes to operate in parallel.
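By contrast, here is a minimal sketch of explicit parallelism in Python, in which the programmer deliberately spreads work across worker processes (the square function and its inputs are made up for illustration):

    from multiprocessing import Pool

    def square(n):
        # Work the programmer has explicitly chosen to parallelize.
        return n * n

    if __name__ == "__main__":
        # Four worker processes each take a share of the inputs.
        with Pool(processes=4) as pool:
            results = pool.map(square, range(10))
        print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

The pool.map call is also a small preview of the "map" half of MapReduce, which this lesson covers in detail.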

This lesson will explore some of the basic concepts in parallel computing, look at some of the theoretical limitations, and focus on a specific parallel programming model called MapReduce.

Lesson Objectives

After completing this lesson, you should be able to:

  1. Classify parallel computing on the distributed system three-axis diagram.
  2. Explain Amdahl’s Law in simple terms (previewed briefly after this list).
  3. Explain why communication and synchronization costs limit parallel performance.
  4. Describe the advantages of hierarchical architectures.
  5. Explain how MapReduce works.
  6. Use MapReduce to solve a data processing problem.
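As a brief preview of objective 2, Amdahl’s Law bounds the speedup available when only part of a program can be made parallel. If a fraction p of the work is parallelizable and N processors are used, the usual statement of the law (in LaTeX notation) is

    S(N) = \frac{1}{(1 - p) + \frac{p}{N}}

For example, a program that is 90% parallelizable (p = 0.9) can never run more than 10 times faster, no matter how many processors are added.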

Required Reading/Viewing