
Compiling Programs


Handling Multiple Compilers and Libraries

Clusters offer a wide variety of choices: there are many possible combinations of compilers and MPI libraries. The Modules package is a convenient way to solve this and other problems. The Modules package will be demonstrated by way of example.

Building Sequential Programs Using C and Fortran

The default compilers on virtually all Linux systems are the GNU tools. These include (depending on your distribution) gcc, g++, g77, gfortran, etc. Given the files mm.c, clock.c, and fcycmm.c, a program that tests various sizes and methods of matrix multiplication can be built using the following command line:

$ gcc -o mm mm.c clock.c fcycmm.c

The program can be run by entering:

$ ./mm

If you want to use another compiler, you can switch compiling environments using modules. For example, to use the Portland Group C Version 6.2 compiler, you would enter the following (assuming Portland Group Version 6.2 is installed on your system):


$ module load pgi62
$ pgcc -o mm mm.c clock.c fcycmm.c
mm.c:
clock.c:
fcycmm.c:
$ ./mm



Fortran programs are handled similarly. For the program mm.f, gfortran (or g77) can be used to compile the binary:

$ gfortran -o mm mm.f
$ ./mm

If the Portland Group module is still loaded, then you can use pgf90:

$ pgf90 -o mm mm.f
$ ./mm


To remove a module, enter:

$ module rm pgi62

Modules are also valuable if you have multiple versions of the same compiler. For instance, if for historical reasons you need to use Portland Group Version 5.2, it is simple to change the module (pgi52) and compile your code. Modules eliminate excessive editing of login scripts, where pathnames to compilers are often set.

Building Parallel Programs Using MPI

Suppose you want to compile a program using gcc and MPICH2.

We will use the cpi.c and fpi.f examples. First we need to load the MPICH2 module by entering:

$ module load mpi/mpich2-gnu4

Then we build the program with the familiar mpicc wrapper:

$ mpicc -o cpi cpi.c

If you enter which mpicc, you will see that it points to the MPICH2 directory. To run your program, create or edit a file named "machines" containing your machine list. You can get a list of available nodes using the wwlist command. For example, your machines file may look like:

  limulus
  n0
  n1
  n2
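A file like the one above can be created directly from the shell (the hostnames are the example nodes from the text; substitute the nodes reported by wwlist on your cluster):

```shell
# Create the "machines" file, one hostname per line
# (limulus and n0-n2 are the example nodes from the text)
cat > machines <<'EOF'
limulus
n0
n1
n2
EOF

wc -l machines    # should report 4 lines
```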

Now enter the mpirun command:

$ mpirun -np 4 -machinefile machines cpi

You should see output similar to that below:

Process 0 on hydra-ww
Process 1 on n0
Process 2 on n1
Process 3 on n2
pi is approximately 3.1415926535902168, Error is 0.0000000000004237
wall clock time = 0.609375

The same can be done for Fortran programs.

$ mpif77 -o fpi fpi.f

$ mpirun -np 8 -machinefile machines fpi

The screen should scroll until the following is displayed:

 .
 .
 .
 9998 points: pi is approximately: 3.1415926544234614 error is: 0.0000000008336682
 9999 points: pi is approximately: 3.1415926544232922 error is: 0.0000000008334990
10000 points: pi is approximately: 3.1415926544231239 error is: 0.0000000008333307

The power of modules is that you can change the module and run the same tests for other MPIs (the procedure for starting each MPI may vary).
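Such a sweep might be sketched as follows. The module names are taken from the module avail listing shown later; the mpirun invocation may need adjusting for each MPI implementation, so treat this as illustrative only:

```shell
#!/bin/sh
# Sketch: rebuild and rerun the same test under two MPI stacks.
# Module names are examples from the 'module avail' output; the
# start-up command may differ per MPI implementation.
for mpi in mpi/mpich2-gnu4 mpi/openmpi-gnu4; do
    module purge
    module load "$mpi"
    mpicc -o cpi cpi.c
    mpirun -np 4 -machinefile machines cpi
done
```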

Module Commands

The module package has a number of commands. There are four commands, however, that are the most useful and may be all you ever need: module avail, module list, module load, and module rm.

The following command sequence illustrates the use of these commands:

$ module list
Currently Loaded Modulefiles:
  1) mpi/mpich-gnu4

$ module avail

------------------------------------- /opt/Modules/modulefiles -------------------------------------
blacs-mpich2-gnu4         gsl-gnu4                  null
blacs-mpich2-omx-gnu4     lapack-gnu4               padb-3.3
blacs-openmpi-gnu4        module-cvs                scalapack-mpich2-gnu4
blas-gnu4                 module-info               scalapack-mpich2-omx-gnu4
dot                       modules                   scalapack-openmpi-gnu4
fftpack-gnu4              mpi/mpich2-gnu4           sge6
fftw2-gnu4                mpi/mpich2-omx-gnu4       use.own
fftw3-gnu4                mpi/openmpi-gnu4

$ module rm mpi/mpich-gnu4

$ module load mpi/mpichgm-gnu4

$ module list

Currently Loaded Modulefiles:
  1) mpi/mpichgm-gnu4

$ module purge

$ module list
No Modulefiles Currently Loaded.

A note about GNU modules: all GNU compiler modules have either a -gnu3 or -gnu4 tag to indicate which major version of the compilers is to be used. In general, RHEL4 and older use GNU 3, while Fedora 4 and above use GNU 4. You can check your system by entering gcc -v at the command prompt.

Which MPI?

There are many versions of MPI in use on clusters, and some or all of them may be installed on your cluster (see the module avail listing above). Further MPI documentation can be found on the Main Documentation Page. MPI libraries can be changed by switching to the appropriate module. For instance, if you wish to run over Myrinet, use the mpichgm-compiler module (e.g., mpi/mpichgm-gnu4).

MPI and Compilers: In order to ensure interoperability, each MPI module has a compiler associated with it. All clusters support either the -gnu3 or -gnu4 modules (see above). Check with your system administrator to see which compilers are available on your cluster.

But, Use a Batch Scheduler

If you are using a shared cluster, it is highly recommended that you use a batch scheduler to submit jobs. The batch scheduler will also handle all of the MPI start-up issues (starting an MPI application often varies by MPI implementation). See the Sun Grid Engine quick start.
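For reference, a minimal SGE submission script might look like the sketch below. The parallel environment name ("mpich") and the machine file location are assumptions, not part of the text above; check your site's SGE configuration:

```shell
#!/bin/sh
# Minimal SGE job script sketch (the PE name "mpich" is an assumption)
#$ -N cpi-job           # job name
#$ -cwd                 # run in the submission directory
#$ -pe mpich 4          # request 4 slots from the assumed "mpich" PE
# SGE sets $NSLOTS and, for a tightly integrated PE, typically
# provides a machine file under $TMPDIR
mpirun -np $NSLOTS -machinefile $TMPDIR/machines cpi
```

The script would then be submitted with qsub rather than run directly.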


This page, and all contents, are Copyright © 2007,2008 by Basement Supercomputing, Bethlehem, PA, USA, All Rights Reserved. This notice must appear on all copies (electronic, paper, or otherwise) of this document.