Do It Yourself Supercomputing in Linux Part 1

If you recently purchased a laptop or desktop computer, chances are you have a dual-core system. There is no sign that we are going back to single core unless some technological breakthrough overcomes the power barrier of CPUs. This means that more and more people will have a high-powered computer in their homes without any idea how to harness such power.

What does it mean to have a dual-core processor? Your first impulse is probably to think that it will speed up the execution of your programs, and you would likely perceive a significant difference in response time between a single-core and a dual-core system. But why do programs run faster on dual-processor systems? For one thing, each processor is faster than older single processors. The other reason is symmetric multiprocessing. An analogy for a dual-processor system is a bank with two tellers and a single line of customers: when a teller is done with one customer, the next customer in line is served. You can read about the performance of multiprocessor systems in this article.

There is another way to make use of multiple processors to make your programs run faster: parallel programming. Parallel computing has been around for a long time, but it has usually been confined to university laboratories or supercomputing facilities. It has not caught the interest of ordinary people because they have had no access to such machines. However, the future is multiprocessing, and more and more of these machines will reach the masses. This means that highly talented members of the public get to experiment with parallel computing on a day-to-day basis. We are in for another revolution in computing. Are you ready for it?

Before the market was flooded with multi-core systems, parallel programming was usually done on a cluster of workstations. These systems, called Beowulf clusters, began to be used extensively in the late 1990s. They are called distributed-memory systems because each node has its own CPU and its own memory. Such systems are used extensively today in weather prediction, protein-folding computations, and index creation. Multi-core systems, on the other hand, are called shared-memory systems because many CPUs share the same memory. In this article, we will explore distributed-memory systems; another article will deal with shared-memory systems.

Configuring machines for Parallel Computing
Now let’s get our hands dirty! You will need at least two computers with Linux installed for this experiment. It is highly recommended that you use two identical systems, i.e., both with the same specifications. Preferably, your computers should have WiFi devices to make things simpler. You also need a wireless router, configured to assign static IP addresses to the two machines.

Here is the setup of my system. My router has an IP address of 192.168.1.1. My two other machines have the following entries in /etc/hosts:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.1.100 ernie.extremecomputing.org ernie
192.168.1.101 bobby.extremecomputing.org bobby

There is no reason why you should use the same parameters as I have; you can have your own addressing scheme. However, /etc/hosts should be identical on the two computers in order to avoid problems.

Create an account on both machines. This account will be used to run the parallel programs. In my setup, I created the account “bobby”. Next, we need the two computers to be able to ssh to each other without being asked for a password. Generate the ssh keys using the command:

ssh-keygen -t rsa

You will be asked to enter a passphrase. Let us keep our setup simple by not supplying one. However, in real-world applications you need to build security into your system, so you should specify a passphrase.
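If you prefer to skip the prompts entirely, ssh-keygen can be run non-interactively. This is a sketch equivalent to answering the prompts above with an empty passphrase (again, acceptable only for this lab setup):

```shell
# -N "" sets an empty passphrase, -f chooses the key file.
# Note: if ~/.ssh/id_rsa already exists, ssh-keygen will ask
# before overwriting it.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
```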

After the command finishes, you will notice that it created the hidden directory .ssh in the account’s home directory. In my setup, this is the output of ls:

[bobby@ernie ~]$ ls -l .ssh
total 32
-rw------- 1 bobby bobby 1096 Jan 5 17:55 authorized_keys
-rw------- 1 bobby bobby 883 Jan 5 17:55 id_rsa
-rw-r--r-- 1 bobby bobby 242 Jan 5 17:55 id_rsa.pub
-rw-r--r-- 1 bobby bobby 2179 Jan 7 13:56 known_hosts

As you can see, the ssh-keygen command generated two files, id_rsa and id_rsa.pub. The .pub file is the public key and id_rsa is the private key.

The next thing to do is to copy the file id_rsa.pub to authorized_keys. This will enable you to ssh to localhost without being asked for a password.

cp id_rsa.pub authorized_keys

The permissions of authorized_keys should be 600 in order for ssh to work correctly.

chmod 600 authorized_keys
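On GNU/Linux you can confirm the resulting mode with stat (the -c '%a' format specifier is GNU-specific; BSD stat uses different flags):

```shell
# Print the file mode in octal; expect 600 after the chmod above.
stat -c '%a' ~/.ssh/authorized_keys
```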

Now copy the id_rsa.pub of the other machine and append it to authorized_keys. In my setup, I scp the file from the machine ernie to the machine bobby and append it to authorized_keys:

[bobby@bobby ~]$ scp ernie:.ssh/id_rsa.pub .

[bobby@bobby ~]$ cat id_rsa.pub >> .ssh/authorized_keys

[bobby@bobby ~]$ rm id_rsa.pub

Now copy the file authorized_keys to the machine ernie under the .ssh directory and change its permissions to 600. To do this, log in to the machine ernie, cd to the .ssh directory, and execute the following:

[bobby@ernie .ssh]$ scp bobby:.ssh/authorized_keys .

[bobby@ernie .ssh]$ chmod 600 authorized_keys

Verify that you can now ssh from the machine bobby to the machine ernie without being asked for a password, and vice versa.

Installing MPI

MPI stands for Message Passing Interface; it is a standard for message passing. You can learn more about the MPI standard from the MPI home page. There are many implementations of MPI, among them MPICH and Open MPI. In this article, we are going to use the MPICH implementation. Download the source code of MPICH from this site. The software we are using is actually an implementation of the MPI-1 standard. A newer standard (MPI-2) is already available; however, for starters, let’s use the older standard. You can experiment with the new one on your own.

After downloading the file mpich.tar.gz, unpack it in a temporary directory. This will create a directory named mpich-x.x.x, where x.x.x is the version number.

$ tar xzf mpich.tar.gz

Once unpacked, cd to the mpich-x.x.x directory and run the configure script. By default, MPICH uses “rsh” to log in and run commands on remote machines. We will tell the configure script to use “ssh” instead by exporting the environment variable RSHCOMMAND=ssh.

$ export RSHCOMMAND=ssh
$ cd mpich-x.x.x

$ ./configure --prefix=/usr/local/mpich-x.x.x

$ make

This will build the MPICH binaries, which you can then install using “make install”. You have to be root to do this.

# make install

You can check the directory /usr/local to see that mpich was installed.
Now go to the directory examples/basic and type make. This will build all the example programs. We will be using the program cpi.c to test our installation. Modify the PATH variable to reflect the location of the MPI binaries.

$ export PATH=$PATH:/usr/local/mpich-x.x.x/bin

$ which mpirun

/usr/local/mpich-x.x.x/bin/mpirun
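Note that the export above lasts only for the current shell session. Assuming bash is your login shell, you can append the same line to ~/.bashrc so that every new session picks it up (replace x.x.x with your actual version, as elsewhere in this article):

```shell
# Persist the PATH change across logins. The single quotes keep
# $PATH from being expanded now; it is expanded when .bashrc runs.
echo 'export PATH=$PATH:/usr/local/mpich-x.x.x/bin' >> ~/.bashrc
```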

Testing the installation
Test if you can ssh to localhost without being asked for a password. Assuming the local host is bobby, issue the command:

$ ssh -v bobby

You should be able to get a shell without being asked for a password. If not, then you need to append your public key to authorized_keys file:

$ cd ~/.ssh

$ cat id_rsa.pub >> authorized_keys

Try again.

Now it’s time to test our installation. Go back to the directory mpich-x.x.x/examples/basic and create a file named machinesfile; in it, put the name of the local host. Here are the contents of my machinesfile:

$ cat machinesfile

bobby

bobby

Notice that I put the name of the local host twice. This tells MPI that there are two machines for it to run on; it just turns out that the two “machines” are the same. MPI will run the cpi program and distribute the processes among the machines listed in the machines file. To run the cpi program, issue the command (assuming you are in the mpich-x.x.x/examples/basic directory):

$ mpirun -machinefile machinesfile -np 2 cpi

Process 0 of 2 on bobby
pi is approximately 3.1415926544231318, Error is 0.0000000008333387
wall clock time = 0.000000
Process 1 of 2 on bobby

If you did not get an error, go and do the same setup on the other machine. In my setup, that would be the machine ernie.

Now that the other machine’s setup is complete, it’s time to test whether we can run the cpi program using the two machines. On the machine “bobby”, go to the directory mpich-x.x.x/examples/basic and edit the machinesfile. Add the hostname of the other machine to this file. In my setup, it would be:

$ cat machinesfile

bobby

ernie

Now run the cpi program again using mpirun command. You should get the following:

Process 0 of 2 on bobby
pi is approximately 3.1415926544231318, Error is 0.0000000008333387
wall clock time = 0.003906
Process 1 of 2 on ernie

If you get a result similar to this, then congratulations. You have run your first parallel program.

In the next article we will cover the various parallel programming constructs in MPI.


Published by

Bobby Corpus, an IT Architect.