Micro and Nano Mechanics Group

In this document, we describe how to compile the VASP program on several computer clusters in the Mechanical Engineering Department of Stanford University: mc-cc, wcr, and su-ahpcrc, all of which are Linux clusters. We then give examples of how to use VASP to compute the bulk modulus of Au and ZrO2.


VASP on MC-CC

In general, we follow the instructions given at http://cms.mpi.univie.ac.at/vasp/vasp/node16.html

First, go to the vasp.4.lib/ directory. Copy the file vasp.4.lib/makefile.mc-cc to Makefile and then compile. This can be done with the following commands.

rm makefile
wget http://micro.stanford.edu/mediawiki/images/4/40/Vasp.4.lib_makefile.mc-cc.txt -O Makefile
make clean
make

This will create libdmy.a in this directory.

Next, go to the vasp.4.6/ directory. Copy the file vasp.4.6/makefile.mc-cc to Makefile and then compile. This can be done with the following commands.

rm makefile
wget http://micro.stanford.edu/mediawiki/images/8/8f/Vasp.4.6_makefile.mc-cc.txt -O Makefile
make clean
make

This will create the executable vasp in this directory.

Notice that both makefiles use the /opt/mpich/intel/bin/mpif90 compiler. Different clusters provide different MPI compilers, and their speeds differ; Intel compilers usually perform better than generic compilers on Intel Linux clusters. Make sure that your directories do not contain another file named makefile, which GNU make reads in preference to Makefile.
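A quick way to check for a competing make file before building (GNU make looks for GNUmakefile, then makefile, then Makefile, in that order) is:

$ ls GNUmakefile makefile Makefile 2>/dev/null

Only Makefile should be listed.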

The binary (executable) file vasp can run in both serial mode (e.g. ./vasp) and parallel mode (e.g. mpiexec -np 4 vasp in a PBS script). The following table compares the time to run a simple benchmark case (one Au atom, LDA, ENCUT=400, ISMEAR=1, SIGMA=0.1, a 21x21x21 k-point mesh) using the executable compiled here and the one available at /share/apps/vasp.4.6/bin/vasp; the input files for this benchmark are sketched after the table. Our executable is about 70% faster.

Number of CPUs   vasp compiled here   /share/apps/vasp.4.6/bin/vasp
1                68 seconds           116 seconds
2                50 seconds           86 seconds
4                56 seconds           (cannot run -- killed)
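For reference, here is a minimal sketch of INCAR and KPOINTS files matching the benchmark settings above; a one-atom Au POSCAR and an LDA POTCAR are also required but are not shown here.

INCAR:

ENCUT  = 400      ! plane-wave cutoff (eV)
ISMEAR = 1        ! Methfessel-Paxton smearing, order 1
SIGMA  = 0.1      ! smearing width (eV)

KPOINTS (21x21x21 Monkhorst-Pack mesh):

Automatic mesh
0
Monkhorst-Pack
21 21 21
0  0  0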

VASP on WCR

The procedure is similar to that on MC-CC, except that different compilers need to be used.

First, the command mpi-selector allows us to choose among different MPI compilers installed on the cluster.

$ mpi-selector --list
mvapich_gcc-0.9.9
mvapich_gcc-1.0
mvapich_intel-0.9.9
mvapich_intel-1.0
mvapich_pgi-0.9.9
mvapich_pgi-1.0
openmpi_gcc-1.2.2
openmpi_intel-1.2.2
openmpi_pgi-1.2.2

This gives us a list of choices. Next, we choose mvapich_intel-0.9.9 by

$ mpi-selector --set mvapich_intel-0.9.9

You can double-check that your choice has been made by

$ mpi-selector --query
default:mvapich_intel-0.9.9
level:user

Now you need to log out of the cluster. When you log in again, all MPI library paths will be set up correctly for you, e.g.

$ which mpif90
/usr/mpi/intel/mvapich-0.9.9/bin/mpif90
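To double-check which underlying compiler the wrapper invokes, MPICH-derived wrappers such as this mpif90 generally accept a -show option that prints the underlying compile command without running it (treat the flag as an assumption for this particular build):

$ mpif90 -show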

Now we go to the vasp.4.lib/ directory and execute the following commands.

rm makefile
wget http://micro.stanford.edu/mediawiki/images/1/1a/Vasp.4.lib_makefile.wcr.txt -O Makefile
make clean
make

This will create libdmy.a in this directory.

Next, go to vasp.4.6/ directory and execute the following commands.

rm makefile
wget http://micro.stanford.edu/mediawiki/images/e/e4/Vasp.4.6_makefile.wcr.txt -O Makefile
make clean
make

This will create the executable vasp in this directory. This time the executable vasp cannot run interactively; it can only run through the queue via a PBS script (e.g. mpiexec -np 4 vasp), as sketched below. Make sure that your directories do not contain another file named makefile, which takes precedence over Makefile.
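A minimal PBS submission script might look like the following sketch; the job name, resource requests, and walltime are placeholders to adapt to your own run.

#!/bin/bash
#PBS -N vasp-test                 # job name (placeholder)
#PBS -l nodes=1:ppn=4             # 1 node, 4 CPUs; match -np below
#PBS -l walltime=01:00:00         # adjust to your job
cd $PBS_O_WORKDIR                 # run in the submission directory
mpiexec -np 4 vasp

Submit it with qsub from the directory containing INCAR, POSCAR, KPOINTS, and POTCAR.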

There is another executable at /share/apps/vasp.4.6/vasp, which can run in both serial and parallel mode. To see that this executable contains MPI functions, use the command

nm /share/apps/vasp.4.6/vasp | grep -i "MPI"

The following table compares the time to run the same benchmark case as above using both executables; the last column is for an alternative build described below. Our executable shows a speedup with multiple CPUs.

Number of CPUs   vasp compiled here   /share/apps/vasp.4.6/vasp   alternative build (below)
1                76 seconds           75 seconds                  48 seconds
2                59 seconds           72 seconds                  34 seconds
4                35 seconds           64 seconds                  29 seconds
8                37 seconds           65 seconds                  31 seconds

An alternative way to compile vasp on WCR is as follows. First, unset the MPI selection:

$ mpi-selector --unset

Log out of WCR and log in again. Then go to vasp.4.lib/.

rm makefile
wget http://micro.stanford.edu/mediawiki/images/3/3c/Vasp.4.lib_makefile.wcr-intel.txt -O Makefile
make clean
make

This will re-create libdmy.a in this directory. Next, go to vasp.4.6/.

rm makefile
wget http://micro.stanford.edu/mediawiki/images/c/c5/Vasp.4.6_makefile.wcr-intel.txt -O Makefile
make clean
make

This will create the executable vasp in this directory. The performance of this executable is listed in the fourth column of the table above.

VASP on SU-AHPCRC

The procedure is similar to that on WCR. First, we choose the mvapich2_intel-1.2 compiler by

$ mpi-selector --set mvapich2_intel-1.2

Now you need to log out of the cluster and log in again to have the MPI library paths set up correctly. Next, go to the vasp.4.lib/ directory and execute the following commands.

rm makefile
wget http://micro.stanford.edu/mediawiki/images/9/92/Vasp.4.lib_makefile.su-ahpcrc.txt -O Makefile
make clean
make

This will create libdmy.a in this directory.

Next, go to the vasp.4.6/ directory and execute the following commands.

rm makefile
wget http://micro.stanford.edu/mediawiki/images/9/9f/Vasp.4.6_makefile.su-ahpcrc.txt -O Makefile
make clean
make

This will create the executable vasp in this directory. Again, the executable vasp cannot run interactively; it can only run through the queue via a PBS script (e.g. mpiexec --comm=ib -np 4 vasp). Notice that here we need to specify the communication channel: for mvapich1 use --comm=ib, and for mvapich2 use --comm=pmi, as illustrated below.
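For clarity, the two launch variants mentioned above are:

# with mvapich1, select the InfiniBand channel
mpiexec --comm=ib -np 4 vasp

# with mvapich2, use the PMI startup mechanism
mpiexec --comm=pmi -np 4 vasp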

The following table provides the timing information for the same benchmark case studied above.

Number of CPUs   vasp compiled here
1                40 seconds
2                28 seconds
4                27 seconds
6                29 seconds
8                31 seconds
16               215 seconds

A word of caution: make sure to run a few test cases to confirm that your executable not only runs but also produces correct numerical results. For example, we found that on su-ahpcrc the function BRMIX (broyden.f) was giving serious errors. This was solved by changing the compilation options to "OFLAG=-O1 -mtune=core2 -axW -unroll" and by changing "ICHARG = 0" to "ICHARG = 2" in INCAR. ("ICHARG=2" is the default when "ISTART=0" or when there are no CHG, CHGCAR, or WAVECAR files in the folder.)
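As a sketch, these fixes amount to a one-line change in each file.

In vasp.4.6/Makefile:

OFLAG = -O1 -mtune=core2 -axW -unroll

In INCAR:

ICHARG = 2    ! superposition of atomic charge densities; works around the BRMIX errors seen here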

Here are several test cases for VASP:

VASP Computing Bulk Modulus of Au

VASP Computing Bulk Modulus of ZrO2

Intel MKL instructions

There are "official" instructions to compile VASP with the Intel Compiler family, named Using Intel MKL in VASP. Those instructions are unrelated with the present instructions but can be a good reference for future builds.