Imagine you were given a hundred 3-digit numbers to add. How much time would it take you to get the answer? If it takes you 30 seconds to add ten numbers (using a calculator), then it would take you about 300 seconds (or 5 minutes) to add 100 numbers.

Now imagine there are a hundred people and each person has a number. How would a hundred people compute the sum of all their numbers? It seems like a recipe for disaster. However, we can do this:

1. Group the people by two. If there is an extra person with no group, this person can join the nearest group (to make a group of 3 people).

2. Each group will add the numbers that they have to get the sum S.

3. Each group will nominate a representative that will carry this new number S. The remaining members can sit down.

4. Repeat step 1 until there is only one person remaining.

5. The last person remaining will have the sum of the 100 numbers.

At the beginning, we have 100 people in groups of two. It takes 3 seconds for each pair to add their respective numbers. In the next iteration, only 50 people remain, who will then add their numbers. Continuing in this way, the remaining number of people is halved until, in the 7th iteration, we get our answer. So if each iteration executes in 3 seconds, then it will take 21 seconds for a hundred people to compute the sum of 100 numbers!
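The steps above can be sketched in a few lines of JavaScript. This is only a simulation of the bookkeeping (real people would add in parallel); note that here the person left without a partner simply carries their number to the next round, which matches the iteration count claimed above:

```javascript
// Simulate the pairwise summing: in each iteration, adjacent "people"
// pair up and add their numbers; only one representative remains per pair.
function pairwiseSum(numbers) {
  var current = numbers.slice();
  var iterations = 0;
  while (current.length > 1) {
    var next = [];
    for (var i = 0; i + 1 < current.length; i += 2) {
      next.push(current[i] + current[i + 1]);
    }
    if (current.length % 2 === 1) {
      // The odd person out carries their number to the next round.
      next.push(current[current.length - 1]);
    }
    current = next;
    iterations++;
  }
  return { sum: current[0], iterations: iterations };
}

// 100 people, each holding one of the numbers 1..100.
var numbers = [];
for (var n = 1; n <= 100; n++) numbers.push(n);
var result = pairwiseSum(numbers);
console.log(result.sum);        // 5050
console.log(result.iterations); // 7
```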

I mentioned that in the 7th iteration, we are able to get the answer. There is a formula for the number of iterations:

$$\text{iterations} = \lceil \log_2 n \rceil$$

where $n$ is the number of people to start with and the symbol $\lceil \cdot \rceil$ is the ceiling function, which rounds its argument up to the next highest integer. If $n = 100$, the number of iterations is

$$\lceil \log_2 100 \rceil = \lceil 6.64 \rceil = 7.$$
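As a quick sanity check, the formula is a one-liner in JavaScript:

```javascript
// Number of pairwise-summing iterations for n people: ceil(log2(n)).
function iterationsNeeded(n) {
  return Math.ceil(Math.log2(n));
}

console.log(iterationsNeeded(100)); // 7
```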

Having a hundred people compute the sum of 100 numbers might be practically infeasible. We might not have the space to accommodate them. However, if we only have, say, 10 people, we can still get a faster computation.

## Map Reduce

Given a hundred numbers, we can group the numbers by 10 and distribute them to 10 people. Each person will then add the numbers they have, and the partial sums are combined to get the sum of the 100 numbers in roughly 1/10th of the time.

Grouping the numbers into smaller subsets and distributing them to each person is called *mapping*. Combining the sums computed by each person into the total sum is called *reduction*. This two-step pattern is called **map-reduce**, and the example above is one of its simplest instances.
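Here is a minimal JavaScript sketch of this idea, with the 10 "people" simulated sequentially:

```javascript
// Split an array into chunks of the given size.
function chunk(arr, size) {
  var chunks = [];
  for (var i = 0; i < arr.length; i += size) {
    chunks.push(arr.slice(i, i + size));
  }
  return chunks;
}

var numbers = [];
for (var n = 1; n <= 100; n++) numbers.push(n);

// Map: each of the 10 "workers" computes a partial sum of its 10 numbers.
var partialSums = chunk(numbers, 10).map(function (part) {
  return part.reduce(function (a, b) { return a + b; }, 0);
});

// Reduce: combine the 10 partial sums into the total.
var total = partialSums.reduce(function (a, b) { return a + b; }, 0);
console.log(total); // 5050
```

In a real system the 10 partial sums would be computed concurrently on separate workers; here `map` stands in for that distribution.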

## A More Complex Example

Let $A = (a_{ij})$ and $B = (b_{ij})$ be two $n \times n$ matrices. The product of matrices $A$ and $B$ is the matrix $C = AB$

such that

$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$$

where $c_{ij}$ is the entry of the matrix $C$ on row $i$ and column $j$.

Here is a sequential algorithm to compute the matrix C:

```javascript
// a and b are nxn matrices
// c[i][j] is initialized to 0 for all i,j
for (var i = 0; i < n; i++) {
  for (var j = 0; j < n; j++) {
    for (var k = 0; k < n; k++) {
      c[i][j] += a[i][k] * b[k][j];
    }
  }
}
```

There are 3 loops in the above algorithm. The innermost loop executes n times to compute the sum of the element-wise products of row i of A and column j of B. In the diagram below, the inner loop multiplies each element inside the box of matrix A with the corresponding element inside the box of matrix B and takes the sum. Since there are n such products, the number of addition operations is n.

Then you have to do this n times for each column of matrix B and n times for each row of matrix A, as shown below, for a total of $n \times n \times n = n^3$ operations.

So if you want to multiply two matrices with $n = 100$, you will need to execute $100^3$ (or 1 million) operations!

Parallel Computing can help us here.

## Parallel Matrix Multiplication

The good thing about matrix multiplication is that we can multiply by blocks. For the sake of simplicity, suppose we have 2 square matrices of dimension 100. We can divide each matrix into small square sub-matrices of dimension 25:

$$
A = \begin{pmatrix}
A_{00} & A_{01} & A_{02} & A_{03} \\
A_{10} & A_{11} & A_{12} & A_{13} \\
A_{20} & A_{21} & A_{22} & A_{23} \\
A_{30} & A_{31} & A_{32} & A_{33}
\end{pmatrix},
\qquad
B = \begin{pmatrix}
B_{00} & B_{01} & B_{02} & B_{03} \\
B_{10} & B_{11} & B_{12} & B_{13} \\
B_{20} & B_{21} & B_{22} & B_{23} \\
B_{30} & B_{31} & B_{32} & B_{33}
\end{pmatrix}
$$

where each $A_{ij}$ and $B_{ij}$ is a square matrix of dimension 25.

We can then compute the resulting sub-matrix of $C$ using the usual formula:

$$C_{ij} = \sum_{k=0}^{3} A_{ik} B_{kj}$$

where $0 \le i, j \le 3$.

For example, to compute $C_{00}$ we have

$$C_{00} = A_{00}B_{00} + A_{01}B_{10} + A_{02}B_{20} + A_{03}B_{30}.$$
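We can verify the block formula numerically in JavaScript. To keep the example small, this sketch uses $4 \times 4$ matrices split into a $2 \times 2$ grid of $2 \times 2$ blocks instead of dimension 100 with $25 \times 25$ blocks; the principle is identical:

```javascript
// Plain (sequential) matrix multiplication.
function matMul(a, b) {
  var c = [];
  for (var i = 0; i < a.length; i++) {
    c.push([]);
    for (var j = 0; j < b[0].length; j++) {
      var s = 0;
      for (var k = 0; k < b.length; k++) s += a[i][k] * b[k][j];
      c[i].push(s);
    }
  }
  return c;
}

// Element-wise matrix addition.
function matAdd(a, b) {
  return a.map(function (row, i) {
    return row.map(function (x, j) { return x + b[i][j]; });
  });
}

// Extract the 2x2 sub-matrix at block-row r, block-column s.
function block(m, r, s) {
  return [
    [m[2 * r][2 * s],     m[2 * r][2 * s + 1]],
    [m[2 * r + 1][2 * s], m[2 * r + 1][2 * s + 1]]
  ];
}

var A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]];
var B = [[1, 0, 2, 0], [0, 1, 0, 2], [3, 0, 1, 0], [0, 3, 0, 1]];

// C_00 = A_00*B_00 + A_01*B_10 (two block terms for a 2x2 block grid).
var C00 = matAdd(
  matMul(block(A, 0, 0), block(B, 0, 0)),
  matMul(block(A, 0, 1), block(B, 1, 0))
);

// It matches the top-left block of the full product.
console.log(JSON.stringify(C00) ===
            JSON.stringify(block(matMul(A, B), 0, 0))); // true
```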

We then use this mechanism to distribute our sub-matrices to different computers (or in modern parlance “compute nodes”). For this example, we need 4×4=16 compute nodes. Each compute node will contain 2 sub-matrices, one for A and one for B. For easy visualization, we can make a drawing of our compute nodes arranged in a square of side 4 and labelled as shown below:

Now that we have evenly distributed our matrix data to each node, the next question is how do we compute? Notice that no single node contains all the data. Each node has a very limited subset of the data.

### The Trick

We can get a clue by looking at the end result of the computation for Node 00. At the end of the computation, Node 00 should have the following result:

$$C_{00} = A_{00}B_{00} + A_{01}B_{10} + A_{02}B_{20} + A_{03}B_{30}$$

Looking at the above formula, Node 00 only has the following data: $A_{00}$ and $B_{00}$.

The rest of the sub-matrices are not in its memory. Therefore, Node 00 can only compute the product $A_{00}B_{00}$.

We are missing the following products: $A_{01}B_{10}$, $A_{02}B_{20}$, and $A_{03}B_{30}$.

The matrix $A_{01}$ is with Node 01 and the matrix $B_{10}$ is with Node 10. So if Node 01 can send the matrix $A_{01}$ to Node 00 and Node 10 can send the matrix $B_{10}$ to Node 00, we can then get the product $A_{01}B_{10}$. In fact, if we do a slight rearrangement like the one below, we can use the following algorithm to compute the matrix $C$:

1. Each node will send its current $A$ sub-matrix to the node on its left and receive a new $A$ sub-matrix from the node on its right. If there is no node on the left, it will send the sub-matrix to the last node in its row (wrapping around).

2. Each node will send its current $B$ sub-matrix to the node above it and receive a new $B$ sub-matrix from the node below it. If there is no node on top, it will send the sub-matrix to the last node in its column (wrapping around).

3. Multiply the new sub-matrices and add the result to the current value of the sub-matrix of C that it keeps in memory.

4. Repeat steps 1 to 3 until the number of iterations equals $\sqrt{P}$, where $P$ is the number of nodes (in our example, $\sqrt{16} = 4$ iterations).
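The whole dance (this is essentially Cannon's algorithm) can be simulated in JavaScript. To keep the sketch tiny, each "block" here is a single number (a $1 \times 1$ sub-matrix) on a $p \times p$ grid; real deployments would hold $25 \times 25$ blocks per node, but the communication pattern is identical. Note that after the initial rearrangement (skew), this sketch multiplies first and then shifts in each round, which is an equivalent ordering of the steps above:

```javascript
// Cannon-style parallel multiplication, simulated on one machine.
// A and B are p x p grids whose entries play the role of sub-matrices.
function cannonMultiply(A, B) {
  var p = A.length;
  var a = A.map(function (row) { return row.slice(); });
  var b = B.map(function (row) { return row.slice(); });

  // Initial skew: shift row i of A left by i, column j of B up by j.
  for (var i = 0; i < p; i++) {
    for (var s = 0; s < i; s++) a[i].push(a[i].shift());
  }
  for (var j = 0; j < p; j++) {
    for (var s = 0; s < j; s++) {
      var top = b[0][j];
      for (var r = 0; r < p - 1; r++) b[r][j] = b[r + 1][j];
      b[p - 1][j] = top;
    }
  }

  // Each node starts with C = 0 in its memory.
  var c = A.map(function (row) { return row.map(function () { return 0; }); });

  for (var step = 0; step < p; step++) {
    // Every node multiplies the blocks it currently holds and accumulates.
    for (var i = 0; i < p; i++)
      for (var j = 0; j < p; j++)
        c[i][j] += a[i][j] * b[i][j];
    // Shift every row of A left by one and every column of B up by one.
    for (var i = 0; i < p; i++) a[i].push(a[i].shift());
    for (var j = 0; j < p; j++) {
      var top = b[0][j];
      for (var r = 0; r < p - 1; r++) b[r][j] = b[r + 1][j];
      b[p - 1][j] = top;
    }
  }
  return c;
}

var C = cannonMultiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]);
console.log(JSON.stringify(C)); // [[19,22],[43,50]]
```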

The figure below describes this algorithm for Node 00.

Doing this for all nodes, we can visualize the parallel algorithm at work on all nodes as shown in the animation below:

This algorithm is one of my favorite algorithms since it looks like the pumping of blood into the heart.

### How fast is this algorithm?

Given two square matrices of dimension $N$, the number of sequential computations is $N^3$. Using the parallel matrix multiplication above, we divide the matrices into sub-matrices of dimension $N/p$. The resulting block matrix is of dimension $p \times p$. Each node will now compute the product in parallel using $(N/p)^3$ operations per sub-matrix multiplication. Since there are $p$ multiplications per node, the number of operations per node is

$$p \cdot \left(\frac{N}{p}\right)^3 = \frac{N^3}{p^2}.$$

The ratio of these two quantities gives us how fast the parallel algorithm is:

$$\text{speedup} = \frac{N^3}{N^3/p^2} = p^2.$$

For our particular example, the theoretical speedup is $p^2 = 4^2 = 16$; that is, our parallel algorithm can compute the product 16 times faster than the sequential algorithm. If we increase $p$, we can increase the speedup. However, we can only increase $p$ up to a certain point, after which the network becomes the bottleneck and makes things slower than the sequential algorithm.

In the next 3 posts, we will see how to program simple parallel algorithms and parallel matrix multiplication. We will also show how you can use OpenShift to prototype your parallel programs.