1.1-4 Morrison Hotel
Compiling Programs
Clusters offer a wide variety of software choices. There are many possible combinations of compilers and MPI libraries. The Modules package is a convenient way to manage this and other problems, and its use is demonstrated below.
The default compilers on all Linux systems are the GNU tools. These include (depending on your distribution) gcc, g++, g77, gfortran, etc. Given the files mm.c, clock.c, and fcycmm.c, a program that tests various sizes and methods of matrix multiplication can be built using the following command line:
$ gcc -o mm mm.c clock.c fcycmm.c
The program can be run by entering:
$ ./mm
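If you do not have the example files handy, the following minimal stand-in (an illustration only, not the actual mm.c, which also uses clock.c and fcycmm.c for timing) can be compiled and run the same way:

/* mm-standin.c - minimal naive matrix multiplication example.
   A sketch for illustration; not the actual mm.c referenced above. */
#include <stdio.h>

#define N 256

static double a[N][N], b[N][N], c[N][N];

int main(void)
{
    int i, j, k;

    /* Fill the input matrices with simple known values */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            a[i][j] = 1.0;
            b[i][j] = 2.0;
        }

    /* Naive i-j-k triple loop: c = a * b */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            double sum = 0.0;
            for (k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }

    /* Each element should be 2.0 * N = 512.0 */
    printf("c[0][0] = %f\n", c[0][0]);
    return 0;
}

It builds and runs the same way: gcc -o mm-standin mm-standin.c, then ./mm-standin.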
Fortran can be used in the same way. For the program mm.f, gfortran (or g77) can be used to compile the binary:
$ gfortran -o mm mm.f
$ ./mm
A Makefile and program file can be found here.
Suppose you want to compile a program using gcc and MPICH2. This example illustrates how modules can be used. We will use the cpi.c and fpi.f examples. First we will need to load the MPICH2 module by entering:
$ module load mpich2/1.4.1p1/gnu4
Then we build the program with the familiar mpicc command:
$ mpicc -o cpi cpi.c
If you enter which mpicc, you will see that it points to the MPICH2 directory. To run your program, create (or edit) a file named "hosts" containing a machine list. You can get a list of available nodes using the pdsh hostname command. For example, your hosts file may look like:
headnode
n0
n1
n2
Now enter the mpiexec command:
$ mpiexec -bootstrap ssh -n 4 -f hosts ./cpi
You should see output similar to that below:
Process 0 on headnode
Process 1 on n0
Process 2 on n1
Process 3 on n2
pi is approximately 3.1415926535902168, Error is 0.0000000000004237
wall clock time = 0.609375
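For reference, below is a minimal sketch of a cpi-style program using the standard MPI C API. It is an assumption about the example's structure, not the exact cpi.c shipped with MPICH2; it approximates pi by integrating 4/(1+x*x) over [0,1], with each process summing a share of the rectangles:

/* cpi-sketch.c - approximate pi with the midpoint rule in parallel.
   An illustrative sketch, not the actual cpi.c example.
   Build with: mpicc -o cpi cpi-sketch.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, i, n = 100000, namelen;
    double h, sum = 0.0, x, mypi, pi;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &namelen);
    printf("Process %d on %s\n", rank, name);

    /* Each rank handles every size-th rectangle of the midpoint rule */
    h = 1.0 / (double)n;
    for (i = rank + 1; i <= n; i += size) {
        x = h * ((double)i - 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    mypi = h * sum;

    /* Combine the partial sums on rank 0 and report the result */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}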
The same can be done for Fortran programs.
$ mpif77 -o fpi fpi.f
$ mpiexec -bootstrap ssh -n 8 -f hosts ./fpi
The screen should scroll until the following is displayed:
. . .
9998 points: pi is approximately: 3.1415926544234614 error is: 0.0000000008336682
9999 points: pi is approximately: 3.1415926544232922 error is: 0.0000000008334990
10000 points: pi is approximately: 3.1415926544231239 error is: 0.0000000008333307
The power of modules is that you can change the module and run the same tests with other MPI implementations (the procedure for starting each MPI may vary). For example, to use Open MPI instead of MPICH2, the following can be done:
$ module load openmpi/1.6.3/gnu4
$ mpicc -o cpi cpi.c
$ mpiexec -np 4 -machinefile hosts cpi
Note the similarity to the MPICH2 example. Essentially only the MPI module needs to be changed. The following links provide examples for various installed MPIs.
The module package has a number of commands. There are four commands that are most useful, however, and may be all you will ever need. These are:
module avail - display the modules available on the system
module load  - load a module into your environment
module list  - display the currently loaded modules
module rm    - remove (unload) a module (module purge removes all loaded modules)
The following command sequence illustrates the use of these commands:
$ module load sge6
$ module load mpich2/1.4.1p1/gnu4
$ module list
Currently Loaded Modulefiles:
  1) sge6                  2) mpich2/1.4.1p1/gnu4
$ module avail
------------------------------------- /opt/Modules/modulefiles -------------------------------------
atlas/3.10.0/gnu4/i5-2400S              null
blacs/1.1/gnu4/mpich2                   openblas/0.2.3/gnu4/i5-2400S
blacs/1.1/gnu4/mpich2-omx               openmpi/1.6.3/gnu4
blacs/1.1/gnu4/openmpi                  padb/3.3
blas/3.4.2/gnu4                         petsc/3.3/gnu4/mpich2/atlas
dot                                     petsc/3.3/gnu4/mpich2/openblas
fftw/3.3.2/gnu4                         petsc/3.3/gnu4/mpich2-omx/atlas
fftw-mpi/3.3.2/gnu4/mpich2              petsc/3.3/gnu4/mpich2-omx/openblas
fftw-mpi/3.3.2/gnu4/mpich2-omx          petsc/3.3/gnu4/openmpi/atlas
fftw-mpi/3.3.2/gnu4/openmpi             petsc/3.3/gnu4/openmpi/openblas
gsl/1.15/gnu4                           scalapack/1.7.5/gnu4/mpich2/atlas
julia/0.2.0                             scalapack/1.7.5/gnu4/mpich2/openblas
lapack/3.4.2/gnu4                       scalapack/1.7.5/gnu4/mpich2-omx/atlas
module-cvs                              scalapack/1.7.5/gnu4/mpich2-omx/openblas
module-info                             scalapack/1.7.5/gnu4/openmpi/atlas
modules                                 scalapack/1.7.5/gnu4/openmpi/openblas
mpich2/1.4.1p1/gnu4                     sge6
mpich2-omx/1.4.1p1/gnu4                 use.own
$ module rm mpich2/1.4.1p1/gnu4
$ module load openmpi/1.6.3/gnu4
$ module list
Currently Loaded Modulefiles:
  1) sge6                  2) openmpi/1.6.3/gnu4
$ module purge
$ module list
No Modulefiles Currently Loaded.
A note about GNU modules: For historical reasons, all GNU compiled modules have either a -gnu3 or -gnu4 tag to indicate which major version of the compilers is to be used. In general, RHEL 4 and older use GNU 3, while RHEL 5 and above use GNU 4. You can check your system by entering gcc -v at the command prompt. Other compilers are supported and modules will be tagged as such.
There are many versions of MPI that are used on clusters. Some or all of the following may be installed on your cluster:
MPICH2
MPICH2-OMX (MPICH2 over the Open-MX Ethernet transport)
Open MPI
Further MPI documentation can be found on the Main Documentation Page.
Open-MX is a user space transport for Ethernet. It must be started before it can be used. To start Open-MX, enter (as root):
# service open-mx start
# pdsh service open-mx start
Consult the Open-MX README for more information.
Creating host files is fine, but if you are using a shared cluster or don't want to bother with managing nodes, it is highly recommended that you use the installed batch scheduler to submit jobs. The batch scheduler will also handle all of the MPI start-up issues (starting an MPI application often varies by MPI implementation). See the Sun Grid Engine quick start.
This page, and all contents, are Copyright © 2007-2013 by Basement Supercomputing, Bethlehem, PA, USA, All Rights Reserved. This notice must appear on all copies (electronic, paper, or otherwise) of this document.