Once upon a time, in a faraway village, lay a dying old man. He called his sons to his deathbed and spoke to them one last time. He said, “Sons, see that bundle of sticks? Each one of you try to break it. The one who can break it will inherit all my riches.” Each son, being greedy, wanted all the riches for himself. So each one of them tried to break the bundle of sticks, but none of them succeeded. The old man asked his servant to untie the bundle and said to his sons, “Each one of you now take one stick and break it.” Without any effort, each son was able to break his stick. The old man said, “You see, when you unite, no task will be difficult. The riches that I told you about were a lie. We are broke. When I’m dead, make sure you unite so that you can survive.”

Fast forward to modern times. You can think of the bundle of sticks as a complex problem that is itself composed of smaller problems. The sons are the processors of your computer. When a single processor is given the task of solving the complex problem, it fails to solve it in a reasonable amount of time. When the complex problem is decomposed into smaller problems that are given to each processor, each processor is able to solve its smaller problem quickly, thereby solving the big problem quickly as well.

The process of decomposing a problem into smaller problems and solving them on separate processors is called Parallel Computing. In this article, we will compute how fast a certain algorithm will run when parallelized. The problem we want to investigate is sorting an array of a million ($10^6$) integers.

**Efficient Sorting**

Suppose you have an array $A$ that you want to sort based on pairwise comparison. The sorted array is just one of the many permutations of the array $A$. In fact, if you have $n$ different objects to sort, then there are exactly $n!$ ways to arrange these objects, and one of them is the sorted state. You can imagine the sorting process as a decision tree. Say, for example, we have the array $A = \{a, b, c\}$. To sort this, we first compare $a$ with $b$, and there are 2 ways this can go: either $a < b$ or $a > b$. If $a < b$, we then compare $b$ and $c$. This also gives either $b < c$ or $b > c$. As you can see from the diagram below, this is nothing but a decision tree.

Since the height of this binary tree is $\lg(n!)$, by Stirling's approximation we have

$$\lg(n!) = \Theta(n\lg n),$$

which is the minimum number of comparisons any pairwise-comparison sort needs in the worst case.

There are efficient algorithms that are able to sort with this complexity. For example, merge sort has this complexity. Therefore, if you have an array of $n = 10^6$ elements, then the complexity is

$$10^6 \lg(10^6) \approx 10^6 \times 20 = 2 \times 10^7,$$

that is, it takes about 20 million comparisons to sort an array of 1 million elements. Could we do any better than this? We can either upgrade the CPU of the machine doing the sorting, or use two or more machines to divide the work among them. In this article, we are going to investigate the impact of dividing the work into smaller chunks and farming it out to other processors.
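As a quick sanity check of these numbers, here is a minimal R snippet (using base R's `lfactorial`) that evaluates both the decision-tree lower bound $\lg(n!)$ and the merge-sort cost $n\lg n$ for $n = 10^6$:

```r
n <- 10^6

# Lower bound from the decision tree: lg(n!) comparisons.
# lfactorial(n) is the natural log of n!, so divide by log(2).
lfactorial(n) / log(2)   # about 1.85e7

# Merge sort does about n * lg(n) comparisons.
n * log2(n)              # about 2e7, i.e., 20 million
```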

**Divide and Conquer**

Assume we have an array of $n$ elements that we need to sort, and suppose we have two identical processors we can use. Divide the array into 2 equal-sized subarrays of $n/2$ elements each. Give the first subarray to the first processor and the other half to the second processor. Apply an efficient sorting algorithm to the subarrays to produce a sorted array on each processor. We then combine the results of processor 1 and processor 2 into one big array by merging the two sorted arrays. The diagram below illustrates the process of computation:

This is also known as the **MapReduce** algorithm. *Mapping* is the process of assigning subsets of the input data to processors, where each processor computes a partial result. *Reducing* is the process of aggregating the results of the processors into the final solution of the problem.

The process of merging is straightforward. Given two sorted arrays, begin by comparing the first element of each array. The smaller of the two will occupy the first slot in the big array. The second element of the array from which we took the smaller element now becomes the first element of that array. Repeat the process until all elements of both arrays have occupied slots in the big array. The diagram below illustrates the algorithm of merging.
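To make the two-processor scheme concrete, here is a minimal R sketch. The helper `merge_sorted` is a name of my own implementing the merge just described, and the map step assumes a Unix-like system where `parallel::mclapply` can fork two workers (on Windows, `parLapply` would be needed instead):

```r
# Merge two sorted vectors into one sorted vector,
# following the algorithm described above.
merge_sorted <- function(a, b) {
  result <- numeric(length(a) + length(b))
  i <- 1
  j <- 1
  for (k in seq_along(result)) {
    # Take the head of a while a still has elements and its head is smaller.
    if (i <= length(a) && (j > length(b) || a[i] <= b[j])) {
      result[k] <- a[i]
      i <- i + 1
    } else {
      result[k] <- b[j]
      j <- j + 1
    }
  }
  result
}

library(parallel)

x <- as.numeric(sample(10^6))                          # a million integers, shuffled
halves <- split(x, rep(1:2, length.out = length(x)))   # map: two equal chunks
sorted_halves <- mclapply(halves, sort, mc.cores = 2)  # each core sorts a half
y <- merge_sorted(sorted_halves[[1]], sorted_halves[[2]])  # reduce: merge
all(y == sort(x))                                      # TRUE
```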

If you count the total number of comparisons that you need to merge two sorted arrays, you will find that it takes at most $n - 1$ comparisons, where $n$ is the combined length of the two arrays. Therefore, the complexity of the merging process is $O(n)$.

Since each processor has an $n/2$-sized subarray, the sorting complexity is therefore $\frac{n}{2}\lg\frac{n}{2}$. Furthermore, since the merging process takes $n - 1$ comparisons, the total complexity of the parallel sorting process is therefore

$$\frac{n}{2}\lg\frac{n}{2} + n - 1.$$

In our example, this amounts to $\frac{10^6}{2}\lg\frac{10^6}{2} + 10^6 - 1 \approx 10.5$ million comparisons compared to about 20 million when run sequentially. For large values of $n$, $\frac{n}{2}\lg\frac{n}{2}$ dominates $n$; therefore, the complexity of the parallel algorithm is still $O(n\lg n)$.
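A quick check of that arithmetic in R:

```r
n <- 10^6
(n/2) * log2(n/2) + n - 1   # parallel with 2 processors: about 10.5 million
n * log2(n)                 # sequential: about 20 million
```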

Can we do any better?

In general, if we divide the array evenly among $p$ processors, each processor sorts its chunk of $n/p$ elements in $\frac{n}{p}\lg\frac{n}{p}$ comparisons. For a given value of $n$, what do you think is the value of $p$ that reduces the running time to $O(n)$? If we take $n = 2^{20}$ and plot the complexity $\frac{n}{p}\lg\frac{n}{p}$ against $p$, we get the diagram below.

In this diagram, we also plotted the horizontal line $y = n$. The intersection of this line with the plot of $\frac{n}{p}\lg\frac{n}{p}$ gives us the value of $p$ at which the total number of comparisons is already linear, that is,

$$\frac{n}{p}\lg\frac{n}{p} = n.$$
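A plot along these lines can be reproduced with a short R sketch (the axis labels are my own):

```r
n <- 2^20

# Comparisons done by each processor as a function of p.
curve((n/x) * log2(n/x), from = 1, to = 32,
      xlab = "p (number of processors)", ylab = "comparisons")

# The horizontal line y = n; its intersection with the curve is the p we want.
abline(h = n, lty = 2)
```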

To get the value of $p$ numerically, we have to solve for the root of the equation

$$\frac{n}{p}\lg\frac{n}{p} - n = 0.$$

Simplifying, we divide through by $n$ to get $\lg\frac{n}{p} = p$, that is, $\frac{n}{p} = 2^p$, or

$$p\,2^p - n = 0.$$

Since this is a non-linear equation, we can solve it using Newton's method, which computes roots by successive approximation given an initial value of the solution. Writing the left-hand side as $g(p) = p\,2^p - n$ and starting from a guess solution $p_0$, the root can be approximated using the recursive formula

$$p_{k+1} = p_k - \frac{g(p_k)}{g'(p_k)},$$

where $g'(p)$ is the first derivative of $g(p)$. Applying the rules of derivatives (the product rule, together with $\frac{d}{dp}2^p = 2^p\ln 2$), we get

$$g'(p) = 2^p + p\,2^p\ln 2 = 2^p(1 + p\ln 2).$$

Substituting this into the formula for Newton's method, we get

$$p_{k+1} = p_k - \frac{p_k\,2^{p_k} - n}{2^{p_k}(1 + p_k\ln 2)}.$$

Below is R code using Newton's method to compute the root of the equation $g(p) = p\,2^p - n = 0$.

```r
# g(p) = p * 2^p - n, the function whose root we want
g <- function(n, p) { p * 2^p - n }

# First derivative of g with respect to p: g'(p) = 2^p * (1 + p * log(2))
gprime <- function(n, p) { 2^p + p * 2^p * log(2) }

# Newton's method starting from the initial guess p,
# stopping when successive iterates differ by less than 0.0001
newton <- function(p, n, iter) {
  tmp <- p
  for (i in 1:iter) {
    p <- p - g(n, p) / gprime(n, p)
    if (abs(p - tmp) < 0.0001) {
      break
    }
    print(p)
    tmp <- p
  }
  print(p)
}
```

Running this code with $n = 2^{20}$ and an initial guess of $p_0 = 15$, we get the value of $p = 16$:

```
> n = 2^20
> newton(15, n, 100)
[1] 16.49159
...
[1] 16
```

Ignoring network latency, by distributing the input evenly among 16 processors, we get a running time of $O(n)$ for an array of $n = 2^{20} \approx 10^6$ items. Therefore, instead of doing 20 million comparisons, you only need about 1 million comparisons to sort a million objects.
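We can verify this directly in R: with $p = 16$,

```r
n <- 2^20
p <- 16
(n/p) * log2(n/p)   # 1048576, exactly n: the comparisons each processor does in parallel
```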

In this age of multicore processors, parallel computing is fast becoming the norm rather than the exception. Learning to harness the power of multiple cores is becoming an extremely handy skill to have.