This blog is a collection of notes about implementing the
lammps engine in our RMCProfile package for energy
calculation, using the wrapper routines provided with the
lammps distribution. The notes cover
the installation of MPI compilers, building the
lammps wrapper libraries, the implementation in the caller
program, shipping the main caller program together with the shared dynamic libraries, proper execution of the main caller
program, and the hybrid OMP & MPI implementation.
To build and run an MPI program, we have to install a specific MPI implementation;
Open MPI is the one most often used. We can download the
latest version of Open MPI from here.
Compiling Open MPI is straightforward and instructions can be found here,
N.B. The compiler used for compiling Open MPI should be consistent with the one used for compiling our program that will use the MPI libraries. For example, our Fortran program imports the MPI library via
`USE MPI`, so we have to specify the compiler we want Open MPI to be built with. Here follows the example case where we were using
openmpi-4.1.4. After unzipping the package to obtain the
`openmpi-4.1.4` directory, we can cd into it and execute
`./configure --help` to list all the available flags, among which we can find how to set a specific compiler for each language.
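As a sketch (the exact variable names can be confirmed in the `--help` output), the compiler for each language is selected via variables passed directly to `configure`; the compiler paths below are illustrative:

```shell
# Compiler-selection variables accepted by Open MPI's configure script
# (CC = C compiler, CXX = C++ compiler, FC = Fortran compiler)
./configure CC=gcc CXX=g++ FC=/opt/intel/bin/ifort --prefix=/opt/bin/
```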
Concerning the specification of the Fortran compiler, initially
`FC=ifort` was used, which however turned out not to work for some reason. It seems that we have to give the full path to the
`ifort` compiler to make it work.
At the end of the day, the following commands were executed in sequence to compile Open MPI (note that `configure` is run from a separate build directory, hence the `../` prefix),
tar xzvf openmpi-4.1.4.tar.gz
../configure FC=/opt/intel/bin/ifort --prefix=/opt/bin/
sudo make all install
`--prefix=/opt/bin/` tells the compiler where to install the compiled executables.
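As a quick, non-authoritative sanity check after installation, we can ask the freshly installed wrappers which compilers they sit on top of:

```shell
# Verify the new Open MPI wrappers are picked up and wrap the intended compilers
/opt/bin/mpifort --version   # should report the ifort compiler underneath
/opt/bin/mpirun --version
```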
Alternatively, we can also follow the instructions below to compile Open MPI,
Demo program for LAMMPS interface with Fortran
Untar the downloaded tarball containing the LAMMPS source code and then go into the resulting directory, e.g.,
Here is the source of lammps for this release: https://github.com/lammps/lammps/releases/tag/stable_29Sep2021_update2
Build LAMMPS shared library,
The current note is a clean summary of the steps we need to go through to build lammps. For detailed instructions, refer to the link below,
Location of the built shared library needs to be specified when building the final caller program.
Also, the shared library needs to be shipped together with the main caller program, and the location of the shipped shared library needs to be exported before running the main caller program (using a command like `export LD_LIBRARY_PATH`).
mkdir build && cd build
cmake ../cmake -D BUILD_SHARED_LIBS=yes -DCMAKE_CXX_COMPILER=/opt/bin/mpicxx
Refer to the link for more information about building the lammps library.
The `-DCMAKE_CXX_COMPILER` flag specifies the C++ compiler to use for compiling lammps. If the compiler is not specified explicitly,
`cmake` will find a usable C++ compiler on the build machine. However, in our case for building the lammps wrapper (see section-3.1 below), we specified the C++ compiler as
`/opt/bin/mpicxx`. Therefore, it is better to keep the C++ compiler consistent across the multiple spots where it is used. I am not sure whether using different versions of the C++ compiler will work, but at least using the same version turned out to work without problems.
make -j 24
`24` here means we want to use 24 cores to build lammps in parallel, to speed up the compilation.
2.4. Taking the main directory
`lammps-stable_29Sep2021_update2` as the example, we should be able to find
`liblammps.so.0` (together with a soft link
`liblammps.so`, in the same directory as
`liblammps.so.0`) in the following directory,
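Assuming the `build` directory layout from the cmake step above, a quick way to confirm the library and its soft link were produced (a sketch; paths may differ on your machine):

```shell
# List the built shared library and its soft link
ls -l build/liblammps.so*        # expect: liblammps.so -> liblammps.so.0
# Optional: confirm it is linked against our Open MPI build
ldd build/liblammps.so.0 | grep -i mpi
```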
Build the main caller program
Here, we will take the
`fortran2` example included in the shipped lammps source code. Again, taking stable_29Sep2021_update2 as the example, the
`fortran2` directory can be found here,
N.B. for the following instructions, we will assume that we are located in this directory,
The code in this directory has been modified into a minimal example to be used as the template for further development.
A `Makefile` is provided with the lammps distribution, which basically packages up the compile steps shown below.
In our case of compiling the lammps wrapper for the RMCProfile package, the
`Makefile` being used can be found here.
In the Makefile, the C++ compiler is specified as
`/opt/bin/mpicxx`, as can be found here,
/opt/bin/mpifort -c simple.f90
/opt/bin/mpifort -c serial.f90
/opt/bin/mpifort -c main.f90
/opt/bin/mpifort main.o serial.o LAMMPS.o LAMMPS-wrapper.o simple.o -L /home/y8z/BBird_Ext/Temp_Ext/lammps-stable_29Sep2021_update2/build -L . -llammps -llammps_fortran -lmpi_cxx -lstdc++ -lm -o main
`main` is the final compiled executable.
`/opt/bin/` may be removed if
`mpifort` is in our system path. However, in my case, for the reason given below (see step 4), adding the path to our environment path variable won't work. The workaround may be to create aliases for
`/opt/bin/mpifort` and all the other commands.
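The alias workaround mentioned above might look like the following sketch (added to, e.g., `~/.bashrc` for interactive shells):

```shell
# Aliases pointing the bare command names at the full-path executables
alias mpifort='/opt/bin/mpifort'
alias mpicxx='/opt/bin/mpicxx'
alias mpirun='/opt/bin/mpirun'
```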
In this link, we can see that the
`LAMMPS_MACHINE` flag can be specified. If we do that and, say, give it the name
`myname`, the file name of the compiled shared library will also change accordingly, from the default
`liblammps.so.0` to `liblammps_myname.so.0` (the same thing happens to the soft link as well). In this case, the library link flag in the final compile command should be changed from `-llammps` to `-llammps_myname`.
The same comment as above applies to
`-llammps_fortran` as well - in the included
`Makefile`, the compiled shared library is by default given the name
`liblammps_fortran.so`. If we change its name in the
`Makefile`, we need to change the flag accordingly.
For the flag
`-llammps`, there is a pitfall to notice - the actual shared library name is
`liblammps.so.0` (taking its default name as the example) and we have
`liblammps.so` as a soft link to
`liblammps.so.0`. When compiling the final executable (here,
`main`), the compile command actually expects
`liblammps.so`. However, when executing the final executable (see the step below for an example of execution), it instead expects
`liblammps.so.0`, simply because
`liblammps.so.0` is the file containing the actual contents of the library while
`liblammps.so` is just a soft link to it.
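The relationship between the two file names can be illustrated (and repaired, if only `liblammps.so.0` was shipped) with a plain `ln -s`; the scratch directory below is purely illustrative:

```shell
# Demonstrate the library-file vs. soft-link relationship in a scratch directory
mkdir -p /tmp/liblammps_demo
cd /tmp/liblammps_demo
touch liblammps.so.0                 # stands in for the real library file
ln -sf liblammps.so.0 liblammps.so   # the name the compile/link step expects
readlink liblammps.so                # prints: liblammps.so.0
```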
Execute the main executable as,
/opt/bin/mpirun -np 4 ./main
According to the notes in the following link,
prepending the full path to the
`mpirun` executable is equivalent to adding the
`--prefix` flag to specify the prefix option. It seems that this flag specifies the library location, and in principle we can export the path of the Open MPI libraries to the
`LD_LIBRARY_PATH` variable. Following the example above, the command would be `export LD_LIBRARY_PATH=/opt/bin/lib:$LD_LIBRARY_PATH` - refer to the
Compiler Setup section of the current note for the path specified to contain the compiled Open MPI libraries.
However, given the current configuration on the BigBird machine, it seems that the multiple existing Open MPI builds on the system messed up the libraries, so for some reason exporting the library path as above won't work. In such a situation, we have to prepend the executable with the full path. A workaround may be to create an alias for the full path of the command.
Unfortunately, the approach of creating an alias for the full path won't work when submitting jobs via, e.g., slurm - it seems that for some reason the job submission routine won't be able to find the right libraries even if we have already created the alias for the full path of the
`mpirun` command. In such cases, we have to use the full path of the
`mpirun` executable in the job submission script.
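A minimal slurm submission script consistent with the above might look like this sketch (job name and resource numbers are placeholders):

```shell
#!/bin/bash
#SBATCH --job-name=rmc_lammps
#SBATCH --nodes=1
#SBATCH --ntasks=4
# Aliases are not expanded in batch scripts, so use the full path to mpirun
/opt/bin/mpirun -np 4 ./main
```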
The following is a list of the libraries that need to be shipped with the RMCProfile package to guarantee they can be found and called at run time.
liblammps.so -> liblammps.so.0
libmpi.so -> libmpi.so.40.30.4
As a personal note, those libraries can be found in the following directory on our build machine BigBird,
In the public domain, those libraries can be found here,
With the MPI program compiled the way described in this doc, it should only be executed via
`/opt/bin/mpirun`, i.e., directly executing the compiled executable like a serial program should NOT work.
In our case of compiling the RMCProfile package with lammps, we are linking against the MPI C++ dynamic library (the
`-lmpi_cxx` flag in the link below).
So, we include the
`libmpi_cxx.so.40.20.1` library file together with its soft link in our shipped RMCProfile package as well, under the shared library directory which will be exported as the
`LD_LIBRARY_PATH` environment variable while running RMCProfile. I am not sure whether this is necessary, but it should be safe to include it anyway.
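Exporting the shipped library directory before launching RMCProfile might look like this (the directory path is an assumption for illustration):

```shell
# Prepend the shipped shared-library directory to the loader search path
RMC_LIB_DIR=/path/to/RMCProfile/shared_libs   # hypothetical install location
export LD_LIBRARY_PATH="$RMC_LIB_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```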
About the hybrid MPI and OMP parallel implementation.
It seems that the OMP section can be safely included in the MPI code without much special attention, as long as we are not playing around with MPI communications within the OMP section.
For compiling, when using the Intel compiler, the flag for OMP is
`-qopenmp`, whereas for the GNU compiler, the flag is `-fopenmp`.
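Putting the two flags in context, compiling and launching the hybrid build might be sketched as follows (paths, file names, and thread/rank counts are illustrative):

```shell
# Intel toolchain underneath mpifort: OMP flag is -qopenmp
/opt/bin/mpifort -qopenmp -c main.f90
/opt/bin/mpifort -qopenmp main.o -o main
# A GNU toolchain would use -fopenmp in both commands instead

export OMP_NUM_THREADS=6        # OMP threads per MPI rank
/opt/bin/mpirun -np 4 ./main    # 4 MPI ranks x 6 OMP threads each
```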