In this document, we describe how to compile the VASP program on several computer clusters in the Mechanical Engineering Department at Stanford University: mc-cc, wcr, and su-ahpcrc, all of which are Linux clusters. We then give an example of how to use VASP to compute the bulk modulus of Au.
VASP on MC-CC
In general we follow the instructions given at http://cms.mpi.univie.ac.at/vasp/vasp/node16.html
First, go to the vasp.4.lib/ directory. Copy the vasp.4.lib/makefile.mc-cc file to Makefile and then compile. This can be done with the following commands.
wget http://micro.stanford.edu/mediawiki-1.11.0/images/Vasp.4.lib_makefile.mc-cc.txt -O Makefile
make clean
make
This will create libdmy.a in this directory.
Next, go to the vasp.4.6/ directory. Copy the vasp.4.6/makefile.mc-cc file to Makefile and then compile. This can be done with the following commands.
wget http://micro.stanford.edu/mediawiki-1.11.0/images/Vasp.4.6_makefile.mc-cc.txt -O Makefile
make clean
make
This will create the executable vasp in this directory.
Notice that both makefiles use the /opt/mpich/intel/bin/mpif90 compiler. Different clusters provide different MPI compilers, and their speeds differ; Intel compilers usually perform better than generic compilers on Intel Linux clusters. Also make sure that your directories do not contain another file named makefile, which takes precedence over Makefile during the build.
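A quick way to check is to list both names; ls will complain about whichever one does not exist:

ls -l makefile Makefile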
The binary (executable) file vasp can run in both serial mode (e.g. ./vasp) and parallel mode (e.g. mpiexec -np 4 vasp in a PBS script). The following table compares the time to run a simple benchmark case (one Au atom, LDA, ENCUT = 400, ISMEAR = 1, SIGMA = 0.1, 21x21x21 k-points) using the executable built here and the one available at /share/apps/vasp.4.6/bin/vasp. Our executable is about 70% faster.
Number of CPUs | vasp compiled here (seconds) | /share/apps/vasp.4.6/bin/vasp (seconds)
---|---|---
1 | 68 | 116
2 | 50 | 86
4 | 56 | (cannot run; killed)
VASP on WCR
The procedure is similar to that on MC-CC, except that different compilers need to be used.
First, the command mpi-selector allows us to choose among different MPI compilers installed on the cluster.
$ mpi-selector --list
mvapich_gcc-0.9.9
mvapich_gcc-1.0
mvapich_intel-0.9.9
mvapich_intel-1.0
mvapich_pgi-0.9.9
mvapich_pgi-1.0
openmpi_gcc-1.2.2
openmpi_intel-1.2.2
openmpi_pgi-1.2.2
This gives us a list of choices. Next, we choose mvapich_intel-0.9.9 by
$ mpi-selector --set mvapich_intel-0.9.9
You can double-check that your choice has been made by
$ mpi-selector --query
default:mvapich_intel-0.9.9
level:user
Now you need to log out of the cluster. When you log in again, all MPI library paths will be set up correctly for you, e.g.
$ which mpif90
/usr/mpi/intel/mvapich-0.9.9/bin/mpif90
Now we go to the vasp.4.lib/ directory and execute the following commands.
wget http://micro.stanford.edu/mediawiki-1.11.0/images/Vasp.4.lib_makefile.wcr.txt -O Makefile
make clean
make
This will create libdmy.a in this directory.
Next, go to the vasp.4.6/ directory and execute the following commands.
wget http://micro.stanford.edu/mediawiki-1.11.0/images/Vasp.4.6_makefile.wcr.txt -O Makefile
make clean
make
This will create the executable vasp in this directory. This time the executable cannot run interactively; it can only run through the queue via a PBS script (e.g. mpiexec -np 4 vasp).
There is another executable at /share/apps/vasp.4.6/vasp. Unfortunately, this executable can only run in serial mode. This can be seen by typing the command
nm /share/apps/vasp.4.6/vasp | grep "MPI"
This command gives no output, showing that this executable does not use any MPI functions. Try the same command on our own vasp executable and you will see lots of MPI functions.
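For example, from the vasp.4.6/ build directory (head just truncates the long list of symbols):

nm ./vasp | grep "MPI" | head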
The following table compares the time to run the same benchmark case as above using both executables. Our executable speeds up with multiple CPUs, while the other one does not. That /share/apps/vasp.4.6/vasp is a serial executable can also be seen in the OUTCAR file, which reports distr: one band on 1 nodes, 1 groups even for a parallel run (a quick way to extract this line is shown after the table).
Number of CPUs | vasp compiled here (seconds) | /share/apps/vasp.4.6/vasp (seconds)
---|---|---
1 | 76 | 75
2 | 59 | 72
4 | 35 | 64
8 | 27 | 65
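To pull out the band-distribution line from a finished run, one can search the OUTCAR directly; this is where the distr: line quoted above comes from:

grep "distr:" OUTCAR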
VASP on SU-AHPCRC
The procedure is similar to that on WCR. First, we choose the mvapich_intel-0.9.9 compiler by
$ mpi-selector --set mvapich_intel-0.9.9
Now you need to log out of the cluster and log in again to have the MPI library paths set up correctly. Next, go to the vasp.4.lib/ directory and execute the following commands.
wget http://micro.stanford.edu/mediawiki-1.11.0/images/Vasp.4.lib_makefile.su-ahpcrc.txt -O Makefile
make clean
make
This will create libdmy.a in this directory.
Next, go to the vasp.4.6/ directory and execute the following commands.
wget http://micro.stanford.edu/mediawiki-1.11.0/images/Vasp.4.6_makefile.su-ahpcrc.txt -O Makefile
make clean
make
This will create the executable vasp in this directory. This time the executable cannot run interactively; it can only run through the queue via a PBS script (e.g. mpiexec -np 4 vasp).
The following table provides the timing information for the same benchmark case studied above.
Number of CPUs | vasp compiled here (seconds)
---|---
1 | 68
2 | 50
4 | 56
Computing Bulk Modulus of Au
In the following, we give an example of how to use VASP to compute the bulk modulus of Au within the LDA. We performed this calculation in the runs/Au/LDA/perfect.21x21x21 directory, which contains the following files.
INCAR
ENCUT = 400
ISMEAR = 1
SIGMA = 0.1

KPOINTS

21x21x21
0              = automatic generation of k-points
Monkhorst
21 21 21
0 0 0
POSCAR
POSCAR for FCC Au (created manually)
4.068
0    0.5  0.5
0.5  0    0.5
0.5  0.5  0
1
Cartesian (real coordinates r)
0 0 0
To do this calculation, you also need to put the LDA pseudopotential file as POTCAR in this directory.
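For example (the source path below is only a placeholder for wherever the LDA pseudopotential library is unpacked on your cluster):

cp /path/to/pot_LDA/Au/POTCAR .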
Now we are ready to run
vasp
To compute the equilibrium lattice constant, cohesive energy, and bulk modulus, we use the following script, auto.B.serial, to run vasp repeatedly with different lattice constants.
#!/bin/bash
rm WAVECAR
for a in 4.056 4.058 4.060 4.062 4.064 4.066 4.068
do
# write a POSCAR with lattice constant $a
cat > POSCAR << FIN
POSCAR for FCC Au (created manually)
$a
0    0.5  0.5
0.5  0    0.5
0.5  0.5  0
1
Cartesian (real coordinates r)
0 0 0
FIN
echo "a=$a"
./vasp
# the last line of OSZICAR holds the final free energy F and energy E0
E=`tail -1 OSZICAR`
echo $a $E | sed -e 's/F=//; s/E0=//; s/d E =//' >> Elatt.B.dat
# the external pressure reported in OUTCAR
p=`grep pressure OUTCAR | cut -b 25-34`
echo $a $p >> platt.B.dat
done
After making the script executable (chmod +x auto.B.serial) and running it as ./auto.B.serial, it creates the data files Elatt.B.dat and platt.B.dat.
Launch octave and run the two functions fit_a0EB.m and fit_a0B.m:
fit_a0EB('Elatt.B.dat');
fit_a0B('platt.B.dat');
The first command fits the energy data to a quadratic curve and computes the equilibrium lattice constant, cohesive energy, and bulk modulus. The second fits the pressure data to a straight line and computes the equilibrium lattice constant and bulk modulus. In this example, the result is a0 = 4.060 angstrom, Ecoh = -4.39 eV, B = 190 GPa.
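For reference, both fits estimate the same quantity. With one atom per FCC primitive cell the atomic volume is V = a^3/4, and the bulk modulus follows from either the curvature of the energy or the slope of the pressure at the equilibrium volume V_0:

\[ B \;=\; V \left.\frac{d^2 E}{dV^2}\right|_{V=V_0} \;=\; -\,V \left.\frac{dP}{dV}\right|_{V=V_0}, \qquad V = \frac{a^3}{4}. \]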
To run vasp in parallel, you need to submit vasp.pbs as
qsub vasp.pbs
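The vasp.pbs script itself is not reproduced on this page. A minimal sketch, assuming a 4-CPU job as in the timings above (the job name, resource request, and walltime are placeholders to adapt to your cluster), might look like this:

#!/bin/bash
# job name (placeholder)
#PBS -N vasp-Au
# request one node with four CPUs for one hour (placeholders)
#PBS -l nodes=1:ppn=4
#PBS -l walltime=01:00:00

# PBS starts the job in the home directory; move to the submission directory
cd $PBS_O_WORKDIR
mpiexec -np 4 vasp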