
Compiling Programs

Handling Multiple Compilers and Libraries

Clusters offer a wide variety of software choices, and there are many possible combinations of compilers and MPI libraries. The Modules package is a convenient way to manage this and other problems; its use is demonstrated below.

Building Sequential Programs Using C and Fortran

The default compilers for Linux systems are the GNU tools. These include (depending on your distribution) gcc, g++, g77, gfortran, etc. Given the files mm.c, clock.c, and fcycmm.c, a program that tests various sizes and methods of matrix multiplication can be built using the following command line:

$ gcc -o mm mm.c clock.c fcycmm.c

The program can be run by entering:

$ ./mm
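
If you do not have the mm.c, clock.c, and fcycmm.c sources at hand, the single-file sketch below (a hypothetical stand-in named mm_sketch.c, not the program referenced above) shows the same idea on a smaller scale: time a naive matrix multiplication with the standard C clock() routine and print the elapsed time. It builds and runs with the same gcc command pattern shown above.

/* mm_sketch.c -- a minimal, hypothetical stand-in for the mm example.
 * Times a naive N x N matrix multiplication using the C library clock().
 * Build: gcc -O2 -o mm_sketch mm_sketch.c
 * Run:   ./mm_sketch
 */
#include <stdio.h>
#include <time.h>

#define N 512

static double a[N][N], b[N][N], c[N][N];

int main(void)
{
    int i, j, k;
    clock_t start;
    double seconds;

    /* Fill the input matrices with simple values. */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            a[i][j] = (double)(i + j);
            b[i][j] = (double)(i - j);
            c[i][j] = 0.0;
        }

    start = clock();

    /* Naive triple-loop multiply: c = a * b. */
    for (i = 0; i < N; i++)
        for (k = 0; k < N; k++)
            for (j = 0; j < N; j++)
                c[i][j] += a[i][k] * b[k][j];

    seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("N = %d  c[0][0] = %g  time = %.3f seconds\n", N, c[0][0], seconds);
    return 0;
}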

Fortran works the same way. For the program mm.f, gfortran (or g77) can be used to compile the binary:

$ gfortran -o mm mm.f
$ ./mm

A Makefile and program file can be found here.

Building Parallel Programs Using MPI

Suppose you want to compile a program using gcc and the MPICH MPI library. This example illustrates how modules can be used; we will use the cpi.c and fpi.f examples. First, load the MPICH module by entering:

$ module load mpich/3.1.3/gnu4

Then build the program with the familiar mpicc wrapper:

$ mpicc -o cpi cpi.c
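
If the cpi.c source is not at hand, the sketch below gives a rough idea of what is being compiled. It is a minimal approximation of the classic MPI pi example (a hypothetical file named cpi_sketch.c, not the distributed source): each rank integrates part of 4/(1+x^2) and rank 0 prints the result and the wall-clock time.

/* cpi_sketch.c -- a minimal MPI pi calculation, in the spirit of the
 * classic cpi.c example (a sketch, not the distributed source file).
 * Build: mpicc -o cpi_sketch cpi_sketch.c
 * Run:   mpiexec -n 4 -f hosts ./cpi_sketch
 */
#include <stdio.h>
#include <math.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    const double PI25DT = 3.141592653589793238462643;
    int n = 10000;              /* number of integration intervals */
    int rank, size, i, namelen;
    double h, sum, x, mypi, pi, t0, t1;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &namelen);

    printf("Process %d on %s\n", rank, name);

    t0 = MPI_Wtime();

    /* Midpoint-rule integration of 4/(1+x^2) over [0,1], split by rank. */
    h = 1.0 / (double)n;
    sum = 0.0;
    for (i = rank + 1; i <= n; i += size) {
        x = h * ((double)i - 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    mypi = h * sum;

    /* Combine the partial sums on rank 0. */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    t1 = MPI_Wtime();

    if (rank == 0) {
        printf("pi is approximately %.16f, Error is %.16f\n",
               pi, fabs(pi - PI25DT));
        printf("wall clock time = %f\n", t1 - t0);
    }

    MPI_Finalize();
    return 0;
}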

If you enter which mpicc, you will see that it points to the MPICH directory. To run your program, create or edit a file named "hosts" containing a machine list. You can get a list of available nodes with the "pdsh hostname" command. For example, your hosts file may look like:

  headnode
  n0
  n1
  n2

Now enter the mpiexec command:

$ mpiexec  -n 4 -f hosts ./cpi

You should see output similar to that below:

Process 0 on headnode
Process 1 on n0
Process 2 on n1
Process 3 on n2
pi is approximately 3.1415926535902168, Error is 0.0000000000004237
wall clock time = 0.609375

The same can be done for Fortran programs.

$ mpif77 -o fpi fpi.f

$ mpiexec  -n 8 -f hosts ./fpi

The screen should scroll until the following is displayed:

 .
 .
 .
 9998 points: pi is approximately: 3.1415926544234614 error is: 0.0000000008336682
 9999 points: pi is approximately: 3.1415926544232922 error is: 0.0000000008334990
10000 points: pi is approximately: 3.1415926544231239 error is: 0.0000000008333307

The power of modules is that you can change the module and run the same tests with other MPI libraries (the procedure for starting each MPI may vary). For example, to use Open MPI instead of MPICH, the following can be done:

$ module load openmpi/1.8.4/gnu4

$ mpicc -o cpi cpi.c

$ mpiexec -np 4 -machinefile hosts cpi

Note the similarity to the MPICH example; essentially, only the MPI module needs to be changed. The following links provide step-by-step instructions for the cpi.c and fpi.f examples.
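
When switching between MPI modules it is easy to lose track of which library a freshly built binary is actually using. In addition to which mpicc, a small test program can report this directly. The sketch below (a hypothetical helper named whichmpi.c) uses MPI_Get_library_version, which is part of MPI-3 and therefore available in both MPICH 3.1.3 and Open MPI 1.8.4.

/* whichmpi.c -- print the MPI library this binary was built against
 * (a hypothetical helper, not part of the cluster examples).
 * Build: mpicc -o whichmpi whichmpi.c
 * Run:   mpiexec -n 1 ./whichmpi
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    char version[MPI_MAX_LIBRARY_VERSION_STRING];
    int len;

    MPI_Init(&argc, &argv);

    /* MPI_Get_library_version was added in MPI-3.0. */
    MPI_Get_library_version(version, &len);
    printf("%s\n", version);

    MPI_Finalize();
    return 0;
}

Rebuilding and running this under each loaded module should report MPICH or Open MPI accordingly.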

Learning More About MPI

More information on MPI can be found at the following links:

Module Commands

The module package has a number of commands. There are four commands, however, that are the most useful and may be all you will ever need: avail, load, list, and rm.

The following command sequence illustrates the use of these commands:


$ module load sge6

$ module load mpich/3.1.3/gnu4 

$ module list
Currently Loaded Modulefiles:
  1) sge6                  2) mpich/3.1.3/gnu4

$ module avail

------------------------------------ /opt/Modules/modulefiles --------------------------------------
atlas/3.11.31/gnu4/ivybridge            null
blas/3.5.0/gnu4                         openblas/0.2.13/gnu4/haswell
dot                                     openmpi/1.8.4/gnu4
fftpack/5.0/gnu4                        padb/3.3
fftw/3.3.4/gnu4                         petsc/3.5.3/gnu4/mpich/atlas
fftw-mpi/3.3.4/gnu4/mpich               petsc/3.5.3/gnu4/mpich/openblas
fftw-mpi/3.3.4/gnu4/mpich-omx           petsc/3.5.3/gnu4/mpich-omx/atlas
fftw-mpi/3.3.4/gnu4/openmpi             petsc/3.5.3/gnu4/mpich-omx/openblas
gnu-dts                                 petsc/3.5.3/gnu4/openmpi/atlas
gsl/1.16/gnu4                           petsc/3.5.3/gnu4/openmpi/openblas
intel13                                 scalapack/2.0.2/gnu4/mpich/atlas
julia/0.3.6                             scalapack/2.0.2/gnu4/mpich/openblas
lapack/3.5.0/gnu4                       scalapack/2.0.2/gnu4/mpich-omx/atlas
module-git                              scalapack/2.0.2/gnu4/mpich-omx/openblas
module-info                             scalapack/2.0.2/gnu4/openmpi/atlas
modules                                 scalapack/2.0.2/gnu4/openmpi/openblas
mpich/3.1.3/gnu4                        sge6
mpich-omx/3.1.3/gnu4                    use.own

$ module rm mpich/3.1.3/gnu4

$ module load openmpi/1.8.4/gnu4

$ module list

Currently Loaded Modulefiles:
  1) sge6                 2) openmpi/1.8.4/gnu4

$ module purge

$ module list
No Modulefiles Currently Loaded.

A note about GNU modules: for historical reasons, all GNU-compiled modules carry either a gnu3 or gnu4 tag to indicate which major version of the GNU compilers is to be used. In general, RHEL 4 and older use GNU 3, while RHEL 5 and above use GNU 4. You can check your system by entering gcc -v at the command prompt. Other compilers are supported, and their modules are tagged accordingly.
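
In addition to gcc -v, the compiler major version can be confirmed from a small test program using GCC's predefined __GNUC__ macro. The sketch below (a hypothetical file named gnuver.c) prints the version of the compiler that built it.

/* gnuver.c -- print the GCC version used to compile this file
 * (a small sketch; gcc -v gives the same information).
 * Build: gcc -o gnuver gnuver.c && ./gnuver
 */
#include <stdio.h>

int main(void)
{
#ifdef __GNUC__
    /* __GNUC__ and __GNUC_MINOR__ are predefined by GCC. */
    printf("Compiled with GNU C %d.%d\n", __GNUC__, __GNUC_MINOR__);
#else
    printf("Not compiled with GCC\n");
#endif
    return 0;
}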

Module Commands Across Cluster Nodes

The Modules package has been configured to work across the cluster nodes. For instance, the following sequence loads the fftpack module, lists the modules on the head node (limulus), then logs into n0 and lists the modules. The fftpack module is preserved across the login.

[user@limulus ~] $ module load fftpack/5.0/gnu4
[user@limulus ~] $ module list
Currently Loaded Modulefiles:
  1) fftpack/5.0/gnu4
[user@limulus ~]$ ssh n0

[user@n0 ~]$ module list
Currently Loaded Modulefiles:
  1) fftpack/5.0/gnu4
[user@n0 ~]$ 

This feature may be turned off by setting the NOMODULES environment variable, as shown below.

[user@limulus ~] $ module load fftpack/5.0/gnu4
[user@limulus ~] $ module list
Currently Loaded Modulefiles:
  1) fftpack/5.0/gnu4
[user@limulus ~]$ export NOMODULES=1
[user@limulus ~]$ ssh n0
[user@n0 ~]$ module list
No Modulefiles Currently Loaded.

NOMODULES can be unset by entering the following command on the head node (limulus):

[user@limulus ~]$ unset NOMODULES

Which MPI?

There are many versions of MPI in use on clusters. Some or all of the following may be installed on your cluster:

  MPICH
  MPICH-OMX (MPICH over the Open-MX Ethernet transport)
  Open MPI

Further MPI package documentation can be found on the Main Documentation Page.

The Open-MX Transport

Open-MX is a user-space transport for Ethernet. It must be started before it can be used. To start Open-MX on the head node and then on the worker nodes, enter (as root):

  # service open-mx start
  # pdsh service open-mx start

Consult the Open-MX README for more information.

Use a Batch Scheduler

Creating host files is fine, but if you are using a shared cluster or don't want to bother with managing nodes, it is highly recommended that you use the installed batch scheduler to submit jobs. The batch scheduler will also handle the MPI start-up issues (starting an MPI application often varies from one MPI implementation to another). See the Sun Grid Engine quick start.


This page, and all contents, are Copyright © 2007-2015 by Basement Supercomputing, Bethlehem, PA, USA, All Rights Reserved. This notice must appear on all copies (electronic, paper, or otherwise) of this document.