Revision as of 11:22, 30 August 2011
In this document, we describe how to compile the pimc++ program on su-ahpcrc.stanford.edu (a Linux parallel cluster) and other systems.
The style of this wiki follows that of How to compile Qbox, so that you can, in principle, copy and paste the code blocks into your terminal. Custom files are linked from this wiki and can be downloaded from the command line with 'wget'.
General Remarks
Based on the pimc++ installation page, we will give some additional details. Before installing pimc++, we first need to build and install the dependency libraries. In particular:
- blitz++: http://www.oonumerics.org/blitz/download/
- gmp: ftp://ftp.gnu.org/gnu/gmp/gmp-4.2.4.tar.gz
- sprng-2.0: http://sprng.fsu.edu/Version2.0/sprng2.0b.tar.gz
- gsl: http://ftp.gnu.org/gnu/gsl/gsl-1.9.tar.gz
- hdf5: see Install HDF5
- fftw3: see Install FFTW3
These libraries will have to be installed in your user-space directory. To this end, I usually create a directory called ~/usr-intel where the libraries and header files will live:
mkdir -p $HOME/usr-intel
I also usually download software packages to ~/soft. Both conventions are consistent with the wiki page How to compile Qbox.
mkdir $HOME/soft
This document covers only the compilation of pimc++, not its usage; a manual is provided on the pimc++ main page.
The first step is to choose a compiler and stick to it when compiling the various libraries and the final pimc++ executable. In this tutorial we will use the Intel compilers. On su-ahpcrc, this is selected with:
mpi-selector --set openmpi14_intel-1.4.1
Pimc++ is known to have issues with mpich2 [1], which is why we choose the openmpi version here.
Install blitz++
Simply download it from sourceforge and decompress it:
cd ~/soft
wget http://sourceforge.net/projects/blitz/files/blitz/Blitz%2B%2B%200.9/blitz-0.9.tar.gz/download
tar -zxvf blitz-0.9.tar.gz
cd blitz-0.9
Then do the usual configure and make install procedure, indicating that we want a private installation in our ~/usr-intel directory
./configure --prefix=$HOME/usr-intel CC=icc CXX=icpc F77=ifort
make clean
make lib
make check-testsuite
make check-examples
make check-benchmarks
rm -rf ~/usr-intel/lib/libblitz* ~/usr-intel/include/blitz/*
make install
This will install the blitz++ libraries in ~/usr-intel/include and ~/usr-intel/lib.
Install gmp
The gmp libraries are needed for sprng2. Download gmp from the gmp download page and decompress it:
cd ~/soft
wget ftp://ftp.gnu.org/gnu/gmp/gmp-4.2.4.tar.gz
tar -zxvf gmp-4.2.4.tar.gz
cd gmp-4.2.4
Then do the usual configure and make install procedure, indicating that we want a private installation in our ~/usr-intel directory
./configure --prefix=$HOME/usr-intel CC=icc CXX=icpc F77=ifort
make clean
make
make check
rm ~/usr-intel/lib/libgmp.* ~/usr-intel/include/gmp.h
make install
This will install the gmp libraries in ~/usr-intel/include and ~/usr-intel/lib.
Install sprng2
Download the sprng2 libraries from the sprng2 download page and decompress it:
cd ~/soft
wget http://sprng.fsu.edu/Version2.0/sprng2.0b.tar.gz
tar -zxvf sprng2.0b.tar.gz
cd sprng2.0
The sprng2 libraries do not come with a configure utility. To compile, you need to modify the make.CHOICES file to specify your platform (PLAT = SU_AHPCRC) and turn off MPI by commenting out the MPIDEF line (#MPIDEF = -DSPRNG_MPI), and then create your own SRC/make.SU_AHPCRC file.
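The two edits to make.CHOICES can also be done from the command line. The sketch below applies the sed commands to a sample file with hypothetical contents (the real make.CHOICES that ships with sprng2 has more entries); on the real tree, point the same commands at make.CHOICES instead.

```shell
# Hypothetical two-line stand-in for make.CHOICES, just to demonstrate the edits
cat > make.CHOICES.sample << 'EOF'
PLAT = INTEL
MPIDEF = -DSPRNG_MPI
EOF
# Select our platform and comment out the MPI define
sed -i 's/^PLAT *=.*/PLAT = SU_AHPCRC/' make.CHOICES.sample
sed -i 's/^MPIDEF/#MPIDEF/' make.CHOICES.sample
cat make.CHOICES.sample
```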
# note the quoted 'FIN': it stops the shell from expanding $(CC), $(F77), etc.
cat > SRC/make.SU_AHPCRC << 'FIN'
AR = ar
ARFLAGS = cr
RANLIB = ranlib
CC = icc
CLD = $(CC)
F77 = ifort
F77LD = $(F77)
FFXN = -DAdd_
FSUFFIX = F
MPIF77 = ifort
MPICC = icc
MPIDIR = -I/usr/mpi/intel/openmpi-1.4.1-1/include
MPILIB = -L/usr/mpi/intel/openmpi-1.4.1-1/lib64 -lmpi_cxx -lmpi -lopen-rte -lopen-pal
CFLAGS = -O3 -DLittleEndian $(PMLCGDEF) $(MPIDEF) -D$(PLAT)
CLDFLAGS = -O3
FFLAGS = -O3 $(PMLCGDEF) $(MPIDEF) -D$(PLAT)
F77LDFLAGS = -O3
CPP = cpp -P
FIN
It is sometimes difficult to find out exactly which MPI libraries (e.g. -lmpi_cxx -lmpi -lopen-rte) and paths (e.g. -I/usr/mpi/intel/openmpi-1.4.1-1/include) to specify, because the system hides that information behind an mpicc command that supplies them automatically. The trick is to first compile an executable with the mpicc command and then use the ldd command to see which libraries the executable is linked to. (See the section Install libcommon.)
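As a concrete sketch of this trick (hello_mpi.c is a made-up file name, and the grep pattern is only a guess at typical MPI library names):

```shell
# Build a trivial MPI program with the wrapper, then inspect its linkage
cat > hello_mpi.c << 'EOF'
#include <mpi.h>
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    MPI_Finalize();
    return 0;
}
EOF
if command -v mpicc > /dev/null; then
    mpicc hello_mpi.c -o hello_mpi
    ldd ./hello_mpi | grep -i mpi   # paths and library names to copy into MPILIB
else
    echo "mpicc not found on this machine"
fi
```

Many MPI wrappers will also print their full compile/link line directly, e.g. mpicc --showme (Open MPI) or mpicc -show (MPICH), which yields the same information without the ldd step.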
make realclean
make
./checksprng
rm ~/usr-intel/lib/libsprng*
cp lib/libsprng* ~/usr-intel/lib
rm -rf ~/usr-intel/include/sprng2
mkdir -p ~/usr-intel/include/sprng2
cp include/*.h ~/usr-intel/include/sprng2
If you don't see the checksprng executable after make, something went wrong. This is usually because some library paths are not specified correctly in the makefile, e.g. SRC/make.SU_AHPCRC. Note that there is no make install target, so we need to copy the library and header files to ~/usr-intel/lib and ~/usr-intel/include manually.
Install gsl
Download gsl from the gsl download page and decompress it:
cd ~/soft
wget http://ftp.gnu.org/gnu/gsl/gsl-1.9.tar.gz
tar -zxvf gsl-1.9.tar.gz
cd gsl-1.9
Then do the usual configure and make install procedure, indicating that we want a private installation in our ~/usr-intel directory
./configure --prefix=$HOME/usr-intel CC=icc CXX=icpc F77=ifort
make clean
make
rm ~/usr-intel/lib/libgsl* ~/usr-intel/include/gsl/*
make install
This will install the gsl libraries in ~/usr-intel/include and ~/usr-intel/lib.
Install libcommon
After all the supporting libraries (blitz++, gmp, sprng2, gsl, hdf5, fftw3) are installed, we are ready to install pimc++. The pimc++ sources are split into two parts: libcommon-0.5 and pimc++-0.5. In this section, we first install libcommon-0.5.
Download libcommon from sourceforge and decompress it. You may want to modify the location to decompress the code through the $CODES variable.
export CODES=$HOME/group/`whoami`/Codes
mkdir -p $CODES/pimc++
cd $CODES/pimc++
wget http://sourceforge.net/projects/pimcpp/files/pimcpp/Development%20Version%200.5/libcommon-0.5.tar.gz/download
tar -zxvf libcommon-0.5.tar.gz
cd libcommon-0.5
Then do the usual configure and make install procedure, indicating that we want a private installation in our ~/usr-intel directory
# remove any dependencies from the previous configure
find . -name ".deps" -exec rm -rf {} \;
# mpi version, shared library, use icc instead of mpicc
./configure --prefix=$HOME/usr-intel --enable-mpi=yes \
    FFLAGS="-fPIC -g" CFLAGS="-fPIC -g" \
    CPPFLAGS="-fPIC -DMPICH_IGNORE_CXX_SEEK -I$HOME/usr-intel/include -I$HOME/usr-intel/include/sprng2" \
    CXX=icpc CC=icc F77=ifort \
    CXXFLAGS="-fPIC -g -I$HOME/usr-intel/include/sprng2 -I$HOME/usr-intel/include/blitz" \
    --with-mpi-cflags="-I/usr/mpi/intel/openmpi-1.4.1-1/include" \
    --with-mpi-libs="-L/usr/mpi/intel/openmpi-1.4.1-1/lib64 -lmpi_cxx -lmpi -lopen-rte -lopen-pal" \
    --with-mpi-run="mpiexec" \
    --with-blas-libs="-lblas" --with-lapack-cflags=" " --with-lapack-libs="-llapack" \
    --with-hdf5-libs="-L$HOME/usr-intel/lib -lhdf5" --with-hdf5-cflags="-I$HOME/usr-intel/include" \
    LDFLAGS="-L$HOME/usr-intel/lib -lgmp" \
    BLITZ_CFLAGS="-I$HOME/usr-intel/include/blitz" BLITZ_LIBS="-L$HOME/usr-intel/lib -lblitz" \
    SPRNG2_CFLAGS="-I$HOME/usr-intel/include/sprng2" SPRNG2_LIBS="-L$HOME/usr-intel/lib -lsprng" \
    GSL_CFLAGS="-I$HOME/usr-intel/include/gsl" GSL_LIBS="-L$HOME/usr-intel/lib -lgsl -lgslcblas" \
    FFTW3_CFLAGS="-I$HOME/usr-intel/include" FFTW3_LIBS="-L$HOME/usr-intel/lib -lfftw3" \
    --disable-python
make clean
make -j 8
make install
This will install the libcommon-0.5 libraries in ~/usr-intel/include and ~/usr-intel/lib.
Notice here that we are using the icc/icpc/ifort compilers and specifying all the MPI libraries and paths manually. (I suspect this is necessary to compile pimc++ in parallel.) Sometimes it is difficult to find out exactly which MPI libraries and paths are correct, because the system hides that information behind an mpicc command that supplies them automatically. The trick is to first use CXX=mpicxx CC=mpicc F77=mpif77 in the above configure line, compile the code (with make -j 8), then compile the test case MPI/TestComm and see which libraries it is linked to.
cd MPI
make TestComm MPI_LIBS=
ldd ./TestComm
cd ..
(Notice that we unset the MPI_LIBS variable here because the mpicc command should take care of it.) After that, repeat the long configure command above using CXX=icpc CC=icc F77=ifort and compile the code again (using make -j 8). Make sure you have the following line (or something equivalent) in your .bash_profile file.
export LD_LIBRARY_PATH=/usr/mpi/intel/openmpi-1.4.1-1/lib64:$HOME/usr-intel/lib
To see whether the build is successful, try to compile and run the MPI/TestComm program.
cd MPI
make TestComm
qsub -I -l nodes=1:ppn=8,walltime=01:00:00   # enter a compute node
export CODES=$HOME/group/`whoami`/Codes
cd $CODES/pimc++/libcommon-0.5/MPI
mpiexec -np 4 ./TestComm
exit    # return to the head node
cd ..   # return to the libcommon-0.5 directory
After the qsub -I line, we are on a compute node where we can run parallel jobs interactively. If MPI works correctly, the output should look something like this:
Number of procs = 4
AllGatherRows() check: passed.
AllGatherVec() check: passed.
Sums per second = 328947
Install pimc++
We are now ready to build the pimc++ main program.
Download pimc++ from sourceforge and decompress it. You may want to modify the location to decompress the code through the $CODES variable.
export CODES=$HOME/group/`whoami`/Codes
mkdir -p $CODES/pimc++
cd $CODES/pimc++
wget http://sourceforge.net/projects/pimcpp/files/pimcpp/Development%20Version%200.5/pimc-0.5.tar.gz/download
tar -zxvf pimc-0.5.tar.gz
cd pimc-0.5
Then do the usual configure and make install procedure, indicating that we want a private installation in our ~/usr-intel directory
# remove any dependencies from the previous configure
find . -name ".deps" -exec rm -rf {} \;
# mpi version, shared library, use icc instead of mpicc
./configure --prefix=$HOME/usr-intel --enable-mpi=yes \
    CPPFLAGS="-wd654 -wd175 -DUSE_MPI -DMPICH_IGNORE_CXX_SEEK -fPIC" \
    CXX=icpc CC=icc F77=ifort FFLAGS="-fPIC -g" CFLAGS="-fPIC -g" \
    COMMON_CFLAGS="-I$HOME/usr-intel/include" \
    COMMON_LIBS="-L$HOME/usr-intel/lib -lmpicommon" \
    CXXFLAGS="-fPIC -g -I/usr/mpi/intel/openmpi-1.4.1-1/include -I$HOME/usr-intel/include/sprng2 -I$HOME/usr-intel/include/blitz" \
    LDFLAGS="-L/opt/intel/fce/10.1.015/lib -L/usr/mpi/intel/openmpi-1.4.1-1/lib64 -L$HOME/usr-intel/lib" \
    LIBS="-lmpi_cxx -lmpi -lopen-rte -lopen-pal -llapack -lblas -lm -lblitz -lhdf5 -lz -lfftw3 -lsprng -lgmp -lgsl -lgslcblas -lifcore"
make clean
make -j 8
make install
There is also a copy of the executable at src/pimc++. To test your executable, you can use the following commands.
mkdir -p examples
cd examples
qsub -I -l nodes=1:ppn=8,walltime=01:00:00   # enter a compute node
export CODES=$HOME/group/`whoami`/Codes
cd $CODES/pimc++/pimc-0.5/examples
mpiexec -np 4 ../
exit    # return to the head node
cd ..   # return to the pimc-0.5 directory
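As an additional sanity check (my suggestion, not part of the original instructions), ldd can confirm that the freshly built binary is actually linked against the intended OpenMPI libraries rather than some other MPI on the system:

```shell
# From the pimc-0.5 top-level directory: check the binary's MPI linkage
if [ -x src/pimc++ ]; then
    ldd src/pimc++ | grep -i mpi   # should list libraries under /usr/mpi/intel/openmpi-1.4.1-1
else
    echo "src/pimc++ not built yet"
fi
```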
Make sure you have the following line (or something equivalent) in your .bash_profile file.
export LD_LIBRARY_PATH=/usr/mpi/intel/openmpi-1.4.1-1/lib64:$HOME/usr-intel/lib