How to compile pimc++

In this document, we describe how to compile the pimc++ program on su-ahpcrc.stanford.edu (a Linux parallel cluster) using the intel and gnu compilers.

The style of this wiki follows that of How to compile Qbox, so that you can in principle copy and paste the code blocks into your terminal. Custom files are linked from this wiki and can be downloaded from the command line with 'wget'.


General Remarks

Based on the pimc++ installation page, we give some additional details here. Before installing pimc++, we first need to build and install the dependency libraries. In particular:

  1. blitz++: http://www.oonumerics.org/blitz/download/
  2. gmp: ftp://ftp.gnu.org/gnu/gmp/gmp-4.2.4.tar.gz
  3. sprng-2.0: http://sprng.fsu.edu/Version2.0/sprng2.0b.tar.gz
  4. gsl: http://ftp.gnu.org/gnu/gsl/gsl-1.9.tar.gz
  5. hdf5: see Install HDF5
  6. fftw3: see Install FFTW3

These libraries have to be installed in your user-space directory. To achieve this, I usually create a local directory to store the libraries and header files. When using intel compilers (e.g. icc), I use ~/usr-intel

 export USR=usr-intel
 mkdir -p $HOME/$USR

When I want to use gnu compilers (e.g. gcc) to compile the libraries, I use

 export USR=usr
 mkdir -p $HOME/$USR

I also usually download software packages to ~/soft. All this is consistent with the wiki page How to compile Qbox.

 export SOFT=soft
 mkdir -p $HOME/$SOFT

You may want to put the lines export USR=usr; export SOFT=soft in your ~/.bash_profile file, so that they are automatically set when you log in.
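For example, a minimal sketch that appends these settings to your profile (here assuming the gnu setup, i.e. USR=usr):

cat >> $HOME/.bash_profile << 'FIN'
# directories for privately installed libraries and downloaded packages
export USR=usr
export SOFT=soft
FIN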

This document does not cover the usage of pimc++, only its compilation; a manual of pimc++ is provided on the pimc++ main page.

The first step is to choose a compiler and stick to it for compiling the different libraries and the final pimc++ executable. On su-ahpcrc, the intel mpi compiler is selected by

 mpi-selector --set openmpi14_intel-1.4.1

Pimc++ is known to have issues with mpich2 [1], which is why we choose the openmpi version here.
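If you are not sure which MPI stacks are installed, mpi-selector can list them (a quick check; the available options depend on your cluster):

 mpi-selector --list    # list the MPI stacks installed on this machine
 mpi-selector --query   # show the currently selected default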

To use gnu mpi compilers, I use the following on su-ahpcrc

 mpi-selector --set openmpi_gcc-1.2.6

After the mpi-selector command, you need to log out of the cluster and log in again. Use the command which mpicc to make sure you are indeed using the desired mpi compiler.
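For example, with the intel selection you should see something like the following (the exact path is an assumption, inferred from the openmpi paths used later in this document):

 which mpicc
 # e.g. /usr/mpi/intel/openmpi-1.4.1-1/bin/mpicc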

Install blitz++

Simply download it from sourceforge and decompress it:

if [ -n "$SOFT" ]; then
 cd $HOME/$SOFT
 wget http://sourceforge.net/projects/blitz/files/blitz/Blitz%2B%2B%200.9/blitz-0.9.tar.gz/download
 tar -zxvf blitz-0.9.tar.gz
 cd blitz-0.9
fi

Then do the usual configure and make install procedure, indicating that we want a private installation in our $HOME/$USR directory. For intel compilers, use

if [ -n "$USR" ]; then
 ./configure --prefix=$HOME/$USR CC=icc CXX=icpc F77=ifort
fi

For gnu compilers, use

if [ -n "$USR" ]; then
 ./configure --prefix=$HOME/$USR CC=gcc34 CXX=g++34 F77=gfortran
fi
if [ -n "$USR" ]; then
 make clean
 make lib              
 make check-testsuite  
 make check-examples   
 make check-benchmarks 
 rm -rf $HOME/$USR/lib/libblitz*  $HOME/$USR/include/blitz/*
 make install
fi

This will install the blitz++ libraries in $HOME/$USR/include and $HOME/$USR/lib.
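As a quick sanity check (assuming the standard blitz-0.9 install layout), the following should list the library and the main header:

 ls $HOME/$USR/lib/libblitz* $HOME/$USR/include/blitz/blitz.h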

Install gmp

The gmp libraries are needed for sprng2. Download gmp from the gmp download page and decompress it:

if [ -n "$SOFT" ]; then
 cd $HOME/$SOFT
 wget ftp://ftp.gnu.org/gnu/gmp/gmp-4.2.4.tar.gz
 tar -zxvf gmp-4.2.4.tar.gz
 cd gmp-4.2.4
fi

Then do the usual configure and make install procedure, indicating that we want a private installation in our $HOME/$USR directory. For intel compilers, use

if [ -n "$USR" ]; then
 ./configure --prefix=$HOME/$USR  CC=icc CXX=icpc F77=ifort
fi

For gnu compilers, use

if [ -n "$USR" ]; then
 ./configure --prefix=$HOME/$USR CC=gcc34 CXX=g++34 F77=gfortran
fi
if [ -n "$USR" ]; then
 make clean
 make
 make check
 rm -f $HOME/$USR/lib/libgmp*  $HOME/$USR/include/gmp*.h  # clear any previous installation
 make install
fi

This will install the gmp libraries in $HOME/$USR/include and $HOME/$USR/lib.

Install sprng2

Download sprng2 from the sprng2 download page and decompress it:

if [ -n "$SOFT" ]; then
 cd $HOME/$SOFT
 wget http://sprng.fsu.edu/Version2.0/sprng2.0b.tar.gz
 tar -zxvf sprng2.0b.tar.gz
 cd sprng2.0
fi

The sprng2 libraries do not have a configure utility. To compile, you need to modify the make.CHOICES file to specify your platform (PLAT = SU_AHPCRC), turn off MPI (#MPIDEF = -DSPRNG_MPI), and create your own SRC/make.SU_AHPCRC file.

mv make.CHOICES make.CHOICES.sav
cat > make.CHOICES << FIN
PLAT = SU_AHPCRC
#MPIDEF = -DSPRNG_MPI
PMLCGDEF = -DUSE_PMLCG
GMPLIB = -L\$(HOME)/\$(USR)/lib -lgmp
LIB_REL_DIR = lib
FIN

For intel compilers, use

cat > SRC/make.SU_AHPCRC << FIN
AR = ar
ARFLAGS = cr
RANLIB = ranlib
CC = icc
CLD = \$(CC)
F77 = ifort
F77LD = \$(F77)
FFXN = -DAdd_
FSUFFIX = F
MPIF77 = ifort
MPICC = icc
MPIDIR = -I/usr/mpi/intel/openmpi-1.4.1-1/include
MPILIB = -L/usr/mpi/intel/openmpi-1.4.1-1/lib64 -lmpi_cxx -lmpi -lopen-rte -lopen-pal
CFLAGS = -O3 -DLittleEndian \$(PMLCGDEF) \$(MPIDEF) -D\$(PLAT) 
CLDFLAGS =  -O3 
FFLAGS = -O3 \$(PMLCGDEF) \$(MPIDEF) -D\$(PLAT) 
F77LDFLAGS =  -O3 
CPP = cpp -P
FIN

For gnu compilers, use the following (note that we need to fix some bugs in the source before compiling)

cat > SRC/make.SU_AHPCRC << FIN
AR = ar
ARFLAGS = cr
RANLIB = ranlib
CC = gcc34
CLD = \$(CC)
F77 = gfortran
F77LD = \$(F77)
FFXN = -DAdd_
FSUFFIX = F
MPIF77 = gfortran
MPICC = gcc34
MPIDIR = -I/usr/mpi/gcc/openmpi-1.2.6/include
MPILIB = -L/usr/mpi/gcc/openmpi-1.2.6/lib64 -lmpi_cxx -lmpi -lopen-rte -lopen-pal
CFLAGS = -O3 -DLittleEndian \$(PMLCGDEF) \$(MPIDEF) -D\$(PLAT) 
CLDFLAGS =  -O3 
FFLAGS = -O3 \$(PMLCGDEF) \$(MPIDEF) -D\$(PLAT) 
F77LDFLAGS =  -O3 
CPP = cpp -P
FIN
# fix some 'bugs' in SRC (can be avoided if we use F77=g77 in the above)
sed -i -e '205 s/\t print/      print/' SRC/check_genf_ptr.F   
sed -i -e '186 s/   print/  print/'     SRC/check_genf_ptr.F 

It is sometimes difficult to find out exactly which MPI libraries (e.g. -lmpi_cxx -lmpi -lopen-rte) and paths (e.g. -I/usr/mpi/intel/openmpi-1.4.1-1/include) to specify, because the system hides that information behind an mpicc command that handles it automatically. The trick is to first compile an executable using the mpicc command and then use the ldd command to see which libraries the executable is linked to. (See the section Install libcommon.)
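As a minimal sketch of this trick (probe.c is a hypothetical throw-away test file, not part of sprng2):

# compile a trivial MPI program with the mpicc wrapper, then inspect its shared libraries
cat > probe.c << 'FIN'
#include <mpi.h>
int main(int argc, char **argv) { MPI_Init(&argc, &argv); MPI_Finalize(); return 0; }
FIN
mpicc -o probe probe.c
ldd ./probe    # the libmpi*, libopen-rte, libopen-pal lines give the -L and -l values
rm -f probe probe.c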

if [ -n "$USR" ]; then
 make realclean
 make
 ./checksprng
 rm $HOME/$USR/lib/libsprng* 
 cp lib/libsprng* $HOME/$USR/lib
 rm -rf $HOME/$USR/include/sprng2
 mkdir -p $HOME/$USR/include/sprng2 
 cp include/*.h $HOME/$USR/include/sprng2
fi

If you don't see the checksprng executable after make, something went wrong. This is usually because some library paths are not specified correctly in the makefile, e.g. SRC/make.SU_AHPCRC. The checksprng utility will report that almost all of the Fortran Interface checks fail. This is not a concern for building pimc++, as long as the C Interface checks all pass. Note that there is no make install target, so we need to manually copy the library and header files to $HOME/$USR/lib and $HOME/$USR/include.

Install gsl

Download gsl from the gsl download page and decompress it:

if [ -n "$SOFT" ]; then
 cd $HOME/$SOFT
 wget http://ftp.gnu.org/gnu/gsl/gsl-1.9.tar.gz
 tar -zxvf gsl-1.9.tar.gz
 cd gsl-1.9
fi

Then do the usual configure and make install procedure, indicating that we want a private installation in our $HOME/$USR directory.

For intel compilers, use

if [ -n "$USR" ]; then
 ./configure --prefix=$HOME/$USR  CC=icc CXX=icpc F77=ifort
 make clean
 make 
 make check
 rm $HOME/$USR/lib/libgsl*  $HOME/$USR/include/gsl/*
 make install
fi

For gnu compilers, use

if [ -n "$USR" ]; then
 ./configure --prefix=$HOME/$USR  CC=gcc34 CXX=g++34 F77=gfortran
 make clean
 make 
 make check
 rm $HOME/$USR/lib/libgsl*  $HOME/$USR/include/gsl/*
 make install
fi

This will install the gsl libraries in $HOME/$USR/include and $HOME/$USR/lib.

When using intel compilers, we see errors during make check in the sys folder, e.g.

FAIL: gsl_isinf(inf) (0 observed vs 1 expected) [102]
FAIL: gsl_isinf(-inf) (0 observed vs -1 expected) [103]
FAIL: gsl_finite(inf) (1 observed vs 0 expected) [111]
FAIL: gsl_finite(nan) (1 observed vs 0 expected) [112]

as well as in the sort folder and the specfunc folder, e.g.

FAIL: gsl_sf_bessel_j0_e(100.0, &r) [149]
 expected: -0.005063656411097588
 obtained: -0.005063656411097553 +/- 4.497430349113033e-18 (rel=8.88178e-16)
 fracdiff: 3.425831721471003e-15
 value/expected not consistent within reported error
 -0.00506365641109755328  4.49743034911303255e-18
FAIL: gsl_sf_Ci_e(50.0, &r) [1161]
 expected: -0.005628386324116306
 obtained: -0.005628386324116268 +/- 7.821960802221456e-18 (rel=1.38973e-15)
 fracdiff: 3.390307121240641e-15
 value/expected not consistent within reported error
 -0.00562838632411626766  7.82196080222145558e-18

The only gsl functions called by libcommon-0.5 (nm libmpicommon.so | grep gsl) are

  U gsl_sf_Ci
  U gsl_sf_Si
  U gsl_sf_bessel_Inu_scaled_e
  U gsl_sf_bessel_il_scaled_e
  U gsl_sf_legendre_Pl_e

i.e. the purpose of gsl here is to provide the special functions Ci, Si, bessel_Inu, bessel_il and legendre_Pl. While errors are reported for some of these functions during make check, the magnitude of the errors is really small (around 1e-18).

Furthermore, it is likely that pimc++ does not use any of these special functions. The only reference to the special functions in pimc++ can be found by

grep Special src/*.cc src/*/*.cc

The result is a single line:

src/Actions/KineticSphereClass.cc:#include <Common/SpecialFunctions/SpecialFunctions.h>

However, even if this line is commented out, pimc++ still compiles. This suggests that pimc++ does not use any of the special (gsl) functions.

If you are still concerned about these errors, you can try linking the gnu-compiled gsl library into the intel-compiled pimc++.
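A sketch of how that might look (untested here; it assumes the gnu-built gsl is binary-compatible with the intel-built objects): when running the libcommon configure command of the next section with the intel compilers, point the gsl flags at the gnu-compiled tree in ~/usr instead of ~/usr-intel,

 GSL_CFLAGS="-I$HOME/usr/include/gsl" GSL_LIBS="-L$HOME/usr/lib -lgsl -lgslcblas"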

Install libcommon

After all the supporting libraries (blitz++, gmp, sprng2, gsl, hdf5, fftw3) are installed, we are ready to install pimc++. The pimc++ source is separated into two parts: libcommon-0.5 and pimc++-0.5. In this section, we first install libcommon-0.5.

Download libcommon from sourceforge and decompress it. You may want to change where the code is decompressed by modifying the $CODES variable.

export CODES=$HOME/group/`whoami`/Codes
mkdir -p $CODES/pimc++
cd $CODES/pimc++
wget http://sourceforge.net/projects/pimcpp/files/pimcpp/Development%20Version%200.5/libcommon-0.5.tar.gz/download 
tar -zxvf libcommon-0.5.tar.gz
cd libcommon-0.5

Then do the usual configure and make install procedure, indicating that we want a private installation in our $HOME/$USR directory.

For intel compilers, use the following (note that we need to fix some bugs in the Makefile and the source code).

if [ -n "$USR" ]; then
 #remove any dependencies from the previous configure
 find . -name ".deps" -exec rm -rf {} \;
 # mpi version, shared library, use icc instead of mpicc
 ./configure --prefix=$HOME/$USR --enable-mpi=yes FFLAGS="-fPIC -g" \
CFLAGS="-fPIC -g" CPPFLAGS="-fPIC -DMPICH_IGNORE_CXX_SEEK -I$HOME/$USR/include \
-I$HOME/$USR/include/sprng2" CXX=icpc CC=icc F77=ifort  \
CXXFLAGS="-fPIC -g -I$HOME/$USR/include/sprng2/ -I$HOME/$USR/include/blitz" \
--with-mpi-cflags="-I/usr/mpi/intel/openmpi-1.4.1-1/include" \
--with-mpi-libs="-L/usr/mpi/intel/openmpi-1.4.1-1/lib64 \
-lmpi_cxx -lmpi -lopen-rte -lopen-pal" \
--with-mpi-run="mpiexec" --with-blas-libs="-lblas " --with-lapack-cflags=" " \
--with-lapack-libs="-llapack " --with-hdf5-libs="-L$HOME/$USR/lib -lhdf5" \
--with-hdf5-cflags="-I$HOME/$USR/include"  LDFLAGS="-L$HOME/$USR/lib -lgmp" \
BLITZ_CFLAGS="-I$HOME/$USR/include/blitz" BLITZ_LIBS="-L$HOME/$USR/lib -lblitz" \
SPRNG2_CFLAGS="-I$HOME/$USR/include/sprng2" SPRNG2_LIBS="-L$HOME/$USR/lib -lsprng" \
GSL_CFLAGS="-I$HOME/$USR/include/gsl" GSL_LIBS="-L$HOME/$USR/lib -lgsl -lgslcblas" \
FFTW3_CFLAGS="-I$HOME/$USR/include" FFTW3_LIBS="-L$HOME/$USR/lib -lfftw3" \
--disable-python
 make clean
 make -j 8
 # fix a few bugs before running check
 sed -i -e '576 s/(LIBS)/(LIBS)  -lgsl -lgslcblas -L..\/.libs -lmpicommon -lmpi_cxx -lmpi -lopen-rte -lopen-pal/' \
  Splines/Makefile
 sed -i -e 's/fstream.h/fstream/' PH/TestAziz.cc
 sed -i -e '354,360 s/(LIBS)/(LIBS) -llapack -lblas/' Atom/Makefile
 make check
 make install
fi


For gnu compilers, use the following (note that we need to fix another bug in the configure script itself, in addition to the bugs in the Makefile and the source code).

if [ -n "$USR" ]; then
 # remove any dependencies from the previous configure
 find . -name ".deps" -exec rm -rf {} \;
 # fix a bug in configure script itself
 sed -i -e 's/FFLAGS -fpp/FFLAGS/' configure
 # mpi version, shared library, use gcc34 instead of mpicc
 ./configure --prefix=$HOME/$USR --enable-mpi=yes FFLAGS=" " \
CFLAGS=" " CPPFLAGS="-fPIC -DMPICH_IGNORE_CXX_SEEK -I$HOME/$USR/include \
-I$HOME/$USR/include/sprng2" CXX=g++34 CC=gcc34 F77=gfortran  \
CXXFLAGS="-I$HOME/$USR/include/sprng2/ -I$HOME/$USR/include/blitz" \
--with-mpi-cflags="-I/usr/mpi/gcc/openmpi-1.2.6/include" \
--with-mpi-libs="-L/usr/mpi/gcc/openmpi-1.2.6/lib64  \
-lmpi_cxx -lmpi -lopen-rte -lopen-pal" \
--with-mpi-run="mpiexec" --with-blas-libs="-lblas " --with-lapack-cflags=" " \
--with-lapack-libs="-llapack " --with-hdf5-libs="-L$HOME/$USR/lib -lhdf5" \
--with-hdf5-cflags="-I$HOME/$USR/include"  LDFLAGS="-L$HOME/$USR/lib -lgmp" \
BLITZ_CFLAGS="-I$HOME/$USR/include/blitz" BLITZ_LIBS="-L$HOME/$USR/lib -lblitz" \
SPRNG2_CFLAGS="-I$HOME/$USR/include/sprng2" SPRNG2_LIBS="-L$HOME/$USR/lib -lsprng" \
GSL_CFLAGS="-I$HOME/$USR/include/gsl" GSL_LIBS="-L$HOME/$USR/lib -lgsl -lgslcblas" \
FFTW3_CFLAGS="-I$HOME/$USR/include" FFTW3_LIBS="-L$HOME/$USR/lib -lfftw3" \
--disable-python
 make clean
 make -j 8
 # fix a few bugs before running check
 sed -i -e '576 s/(LIBS)/(LIBS) -lgsl -lgslcblas -L..\/.libs -lmpicommon -lmpi_cxx -lmpi -lopen-rte -lopen-pal/' \
  Splines/Makefile
 sed -i -e 's/fstream.h/fstream/' PH/TestAziz.cc
 sed -i -e '354,360 s/(LIBS)/(LIBS) -llapack -lblas/' Atom/Makefile
 make check
 make install
fi


This will install the libcommon-0.5 libraries in $HOME/$USR/include and $HOME/$USR/lib.
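As a quick sanity check (the library name libmpicommon comes from the nm command used in the gsl section above), the following should list the installed library:

 ls $HOME/$USR/lib/libmpicommon*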

Notice here that we are using the icc/icpc/ifort compilers and specifying all the mpi libraries and paths manually. (I suspect that this is necessary to compile pimc++ in parallel.) Sometimes it is difficult to find out exactly which MPI libraries and paths are correct, when the system hides that information by providing an mpicc command that handles it automatically. The trick is to first use CXX=mpicxx CC=mpicc F77=mpif77 in the above configure line, compile the code (by make -j 8), and then compile the test case MPI/TestComm and see which libraries it is linked to.

cd MPI 
make TestComm MPI_LIBS=
ldd ./TestComm
cd ..

(Notice here that we unset the MPI_LIBS variable because it should be taken care of by the mpicc command.) After that, repeat the long configure command above using CXX=icpc CC=icc F77=ifort and compile the code again (using make -j 8). Make sure you have the following line (or something equivalent) in your .bash_profile file.

export LD_LIBRARY_PATH=/usr/mpi/intel/openmpi-1.4.1-1/lib64:$HOME/usr-intel/lib

To see whether the build is successful, try to compile and run the MPI/TestComm program.

cd MPI 
make TestComm
 qsub -I -l nodes=1:ppn=8,walltime=01:00:00 # enter compute node
 export CODES=$HOME/group/`whoami`/Codes
 cd $CODES/pimc++/libcommon-0.5/MPI
 mpiexec -np 4 ./TestComm
 exit # return to head node
cd ..  # return to libcommon-0.5 directory

After the qsub -I line, we are on a compute node, where we can run parallel jobs interactively. If MPI works correctly, your screen output should look something like this

 Number of procs = 4
 AllGatherRows() check:  passed.
 AllGatherVec() check:   passed.
 Sums per second = 328947

Install pimc++

We are now ready to build the pimc++ main program.

Download pimc++ from sourceforge and decompress it. You may want to change where the code is decompressed by modifying the $CODES variable.

export CODES=$HOME/group/`whoami`/Codes
mkdir -p $CODES/pimc++
cd $CODES/pimc++
wget http://sourceforge.net/projects/pimcpp/files/pimcpp/Development%20Version%200.5/pimc-0.5.tar.gz/download
tar -zxvf pimc-0.5.tar.gz
cd pimc-0.5

Then do the usual configure and make install procedure, indicating that we want a private installation in our $HOME/$USR directory.

For intel compilers, use

if [ -n "$USR" ]; then
 #remove any dependencies from the previous configure
 find . -name ".deps" -exec rm -rf {} \;
 # mpi version, shared library, use icc instead of mpicc
 ./configure --prefix=$HOME/$USR --enable-mpi=yes  \
CPPFLAGS="-wd654 -wd175 -DUSE_MPI -DMPICH_IGNORE_CXX_SEEK -fPIC" \
CXX=icpc CC=icc F77=ifort FFLAGS="-fPIC -g" CFLAGS="-fPIC -g"  \
COMMON_CFLAGS="-I$HOME/$USR/include" COMMON_LIBS="-L$HOME/$USR/lib -lmpicommon" \
CXXFLAGS="-fPIC -g -I/usr/mpi/intel/openmpi-1.4.1-1/include -I$HOME/$USR/include/sprng2 -I$HOME/$USR/include/blitz" \
LDFLAGS="-L/opt/intel/fce/10.1.015/lib -L/usr/mpi/intel/openmpi-1.4.1-1/lib64 -L$HOME/$USR/lib" \
LIBS="-lmpi_cxx -lmpi -lopen-rte -lopen-pal -llapack -lblas -lm -lblitz -lhdf5 -lz  -lfftw3 -lsprng -lgmp -lgsl -lgslcblas -lifcore"
 make clean
 make -j 8
 make check
 make install
fi

For gnu compilers, use

if [ -n "$USR" ]; then
 #remove any dependencies from the previous configure
 find . -name ".deps" -exec rm -rf {} \;
 # mpi version, shared library, use gcc34 instead of mpicc
 ./configure --prefix=$HOME/$USR --enable-mpi=yes  \
CPPFLAGS="-DUSE_MPI -DMPICH_IGNORE_CXX_SEEK -fPIC" \
CXX=g++34 CC=gcc34 F77=gfortran FFLAGS="-fPIC -g" CFLAGS="-fPIC -g"  \
COMMON_CFLAGS="-I$HOME/$USR/include" COMMON_LIBS="-L$HOME/$USR/lib -lmpicommon" \
CXXFLAGS="-fPIC -g -I/usr/mpi/gcc/openmpi-1.2.6/include -I$HOME/$USR/include/sprng2 -I$HOME/$USR/include/blitz" \
LDFLAGS="-L/opt/intel/fce/10.1.015/lib -L/usr/mpi/gcc/openmpi-1.2.6/lib64 -L$HOME/$USR/lib" \
LIBS="-lmpi_cxx -lmpi -lopen-rte -lopen-pal -llapack -lblas -lm -lblitz -lhdf5 -lz  -lfftw3 -lsprng -lgmp -lgsl -lgslcblas -lifcore"
 make clean
 make -j 8
 make check
 make install
fi


There is also a copy of the executable at src/pimc++.

(Waiting to see whether make check is successful.)

pimc++ Test Run

Go to the pimc++ tutorials page for more information about the test run. First, copy the SingleParticle.in and zero.PairAction files to your local folder.

mkdir -p examples
cd examples
 wget http://micro.stanford.edu/mediawiki/images/3/3b/Pimc_SingleParticle.in.txt -O SingleParticle.in 
 wget http://micro.stanford.edu/mediawiki/images/1/1d/Pimc_zero.PairAction.txt -O zero.PairAction 
cd ..

To test your executable, you can use the following commands.

cd examples
 qsub -I -l nodes=1:ppn=8,walltime=01:00:00 # enter compute node
 export CODES=$HOME/group/`whoami`/Codes
 cd $CODES/pimc++/pimc-0.5/examples
 mpiexec -np 4 ../src/pimc++ SingleParticle.in
 exit # return to head node
cd ..  # return to pimc-0.5 directory

Make sure you have the following line (or something equivalent) in your .bash_profile file.

export LD_LIBRARY_PATH=/usr/mpi/intel/openmpi-1.4.1-1/lib64:$HOME/usr-intel/lib


The test case runs correctly for the intel build (e.g. 10000 steps) but crashes after about 18 steps for the gnu build. (We may need to recompile FFTW3 and HDF5 using the gnu compilers.)
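As a first (untested) diagnostic, you can check whether the gnu-built executable accidentally picked up intel-compiled dependency libraries:

 ldd src/pimc++ | grep -E 'fftw|hdf5|usr-intel'
 # entries resolving to $HOME/usr-intel/lib would indicate a compiler mismatch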