MPB (MIT Photonic Bands) is a free, open-source program that was originally designed to calculate the dispersion diagrams of photonic crystals.
MEEP is a similar free, open-source program, used to simulate the behavior of electromagnetic waves in various media (photonic crystals, waveguides, resonators, and the like).
Both programs were developed at the Massachusetts Institute of Technology (MIT), and both are constantly gaining new features. MPB was written by Steven G. Johnson during his graduate studies; MEEP was written a little later, with his participation.
Both programs calculate the distribution of the electric and magnetic components of the electromagnetic field, using a combination of numerical and analytical methods to solve the system of Maxwell's equations (in one-, two-, or three-dimensional structures), but each of them does this in its own way. While MPB is designed for periodic and quasi-periodic structures and for calculating the frequencies of standing waves (modes) in them, MEEP is designed to simulate the propagation of electromagnetic waves through those same photonic crystals and dielectric mirrors, through waveguides, and inside resonators. It lets you calculate the same dispersion diagrams of photonic crystals, the frequencies of standing waves in both photonic crystals and non-periodic structures, the transmission and reflection spectra of various structures, the losses at waveguide bends, and much more. For this, MEEP provides a whole arsenal of radiation sources, boundary conditions, and radiation absorbers (PML).
The latest versions of MPB and MEEP can interact with each other. For example, you can write a program for MEEP that asks MPB to calculate the field components of the fundamental mode of a waveguide, and then uses these components to excite that mode in an optical fiber. As a result, you can simulate the propagation of the fundamental mode along the waveguide and display the result of the calculations in third-party programs. An example is shown below, where you can see the calculated components of the wave leaving the optical fiber. The free program Paraview was used to display this result.
I use these programs in my work, install them, and help other people install them. Questions from Russian-speaking users about installing these programs occasionally pop up on their mailing lists. To my surprise, I did not find installation instructions in the Russian-language part of the Internet, so I decided to publish them here.
General information
In the world of commercial software these programs have competitors: the analogue of MPB is BandSOLVE, and the analogue of MEEP is FullWAVE. BandSOLVE and FullWAVE have user-friendly graphical interfaces, but they cost money. MPB and MEEP, unlike BandSOLVE and FullWAVE, are free, have no graphical interface, use the Guile scripting language, and are distributed under the GNU license along with their source code.
If you are using Debian or Ubuntu, you can install MEEP and MPB from the repositories. This makes life easier for users who have not yet learned how to compile and install these programs on their own. Sometimes the versions from the repositories work fine, but it also happens that some functions do not work, or the packaged version is simply old. The best way, therefore, is to install the programs from source.
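For reference, the repository route is a one-liner (a sketch; these package names were current for Debian/Ubuntu at the time of writing and may differ in your release):

sudo apt-get install meep mpb h5utils paraview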
These programs can also be installed on Windows computers using Cygwin. If anyone is interested in how to do this, I can describe it separately, but it is a laborious business that was relevant 10-15 years ago. Nowadays it is easier to put one of the Linux distributions on your computer and use these programs in their native environment. Which Linux distribution you choose for your work is up to you.
In this note, CentOS 7 will be used. The repositories of this distribution already contain the HDF5 library we need, compiled with support for parallel computing. This simplifies our task, since it is often this library that is the source of problems when it does not work as expected. The repositories also contain other necessary libraries, for example FFTW, but the packaged FFTW does not support MPI, so we will have to compile it ourselves.
The main sources of information for this note are the official installation instructions for MPB and MEEP, but here a lot will be simplified. All programs will be installed into the /usr/local/ directory.
Step 1: Install Compilers and Libraries
In a terminal window, run:
sudo yum install libtool* mpich-devel.* lapack* guile guile-devel readline-devel hdf5-* gcc-c++ scalapack-* paraview*
For Ubuntu 14.04 LTS:
sudo apt-get install libtool* mpich-dev* lapack* guile-2.0 guile-2.0-dev readline-dev fftw3-* hdf5-* gcc-c++ scalapack-* paraview*
I suspect that some unnecessary packages get installed this way, but this method just works. The most attentive of you can work out exactly which packages can be skipped (I suspect scalapack-* could be omitted).
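To make sure the MPI compiler wrappers were actually installed, you can query their versions (a quick check; on CentOS 7 the mpich wrappers live in /lib64/mpich/bin, the path used throughout this note):

/lib64/mpich/bin/mpicc --version
/lib64/mpich/bin/mpirun --version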
Step 2: add the necessary environment variables to .bashrc (this applies to both CentOS 7 and Ubuntu 14.04 LTS)
You can use any text editor you like for this. If you use vim, type in the terminal window:
vim .bashrc
After that, press the i key to enter insert mode, move the cursor to the end of the file, and append the following:
LDFLAGS="-L/usr/local/lib -lm"
export LDFLAGS
CPPFLAGS="-I/usr/local/include"
export CPPFLAGS
LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
PATH=/lib64/mpich/bin:$PATH
export PATH
Press the Esc key, type :wq, and press Enter to save the changes to .bashrc and exit the editor. Then create a temporary directory t, in which temporary files will be stored, and change into it:
mkdir t
cd t
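To make the new variables take effect in the current session without logging out and back in, reload .bashrc and check them (a minimal sanity check, assuming the variables were appended as shown above):

source ~/.bashrc
echo $LD_LIBRARY_PATH
which mpicc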
Step 3: download, compile, and install the MPI-enabled FFTW library
To do this, run in the terminal window:
wget http://www.fftw.org/fftw-3.3.4.tar.gz
tar -xf fftw-3.3.4.tar.gz
cd fftw-3.3.4
CC=/lib64/mpich/bin/mpicc CXX=/lib64/mpich/bin/mpicxx F77=/lib64/mpich/bin/mpif77 ./configure --enable-mpi --enable-openmp
make -j4
sudo make install
cd ..
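A quick way to verify that the MPI-enabled FFTW really ended up in /usr/local is to list the installed libraries (a sketch; exact file names can vary between FFTW builds):

ls /usr/local/lib | grep fftw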
For Ubuntu 14.04 LTS: the FFTW library was already installed in step 1, but for some reason the HDF5 library from the repositories does not work correctly, so it has to be compiled and installed from source:
wget http://www.hdfgroup.org/ftp/HDF5/current/src/hdf5-1.8.14.tar.gz
tar -xf hdf5-1.8.14.tar.gz
cd hdf5-1.8.14
CC=mpicc CXX=mpicxx F77=mpif77 ./configure --enable-parallel --prefix=/usr/local
make -j4
sudo make install
cd ..
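To confirm that the freshly built HDF5 really has parallel support, you can query its compiler wrapper (a hedged check; assumes /usr/local/bin is on your PATH and that the parallel build installed the h5pcc wrapper):

h5pcc -showconfig | grep -i parallel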
All subsequent instructions work for both CentOS 7 and Ubuntu 14.04.
Step 4: Libctl Library
In the same directory, run:
wget http://ab-initio.mit.edu/libctl/libctl-3.2.2.tar.gz
tar -xf libctl-3.2.2.tar.gz
cd libctl-3.2.2
CC=/lib64/mpich/bin/mpicc CXX=/lib64/mpich/bin/mpicxx F77=/lib64/mpich/bin/mpif77 ./configure
make -j4
sudo make install
cd ..
Step 5: MPB
Compile and install without MPI and OpenMP support:
wget http://ab-initio.mit.edu/mpb/mpb-1.5.tar.gz
tar -xf mpb-1.5.tar.gz
cd mpb-1.5/
CC=/lib64/mpich/bin/mpicc CXX=/lib64/mpich/bin/mpicxx F77=/lib64/mpich/bin/mpif77 ./configure
make -j4
sudo make install
make distclean
With MPI and OpenMP support:
CC=/lib64/mpich/bin/mpicc CXX=/lib64/mpich/bin/mpicxx F77=/lib64/mpich/bin/mpif77 ./configure --with-mpi --with-openmp
make -j4
sudo make install
cd ..
Step 6: Harminv
Run:
wget http://ab-initio.mit.edu/harminv/harminv-1.4.tar.gz
tar -xf harminv-1.4.tar.gz
cd harminv-1.4/
CC=/lib64/mpich/bin/mpicc CXX=/lib64/mpich/bin/mpicxx F77=/lib64/mpich/bin/mpif77 ./configure
make
sudo make install
cd ..
Step 7: MEEP
Without MPI and OpenMP support:
wget http://ab-initio.mit.edu/meep/meep-1.3.tar.gz
tar -xf meep-1.3.tar.gz
cd meep-1.3/
CC=/lib64/mpich/bin/mpicc CXX=/lib64/mpich/bin/mpicxx F77=/lib64/mpich/bin/mpif77 ./configure
make -j4
sudo make install
make distclean
With MPI and OpenMP support:
CC=/lib64/mpich/bin/mpicc CXX=/lib64/mpich/bin/mpicxx F77=/lib64/mpich/bin/mpif77 ./configure --with-mpi
make -j4
sudo make install
cd ..
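At this point all four executables should be available (a quick sanity check; assumes /usr/local/bin is on your PATH):

which mpb mpb-mpi meep meep-mpi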
Step 8: h5utils
MPB and MEEP save the results of their calculations to files with the .h5 extension. The h5utils package contains a set of programs for working with these files, such as h5topng (converts h5 files to the PNG graphics format), h5tovtk (converts to the VTK format, convenient for display in Paraview), and h5totxt (converts to a text format). If you do not install these programs, many calculation results will simply not be viewable.
wget http://ab-initio.mit.edu/h5utils/h5utils-1.12.1.tar.gz
tar -xf h5utils-1.12.1.tar.gz
cd h5utils-1.12.1
CC=/lib64/mpich/bin/mpicc CXX=/lib64/mpich/bin/mpicxx F77=/lib64/mpich/bin/mpif77 ./configure
make -j4
sudo make install
cd ..
If the "make -j4" command fails with the error "[writepng.o] Error 1", then run the following instead of the last three commands:
make h5totxt
make h5tovtk
sudo mv h5tovtk /usr/local/bin/
sudo mv h5totxt /usr/local/bin/
cd ..
Checking that MEEP works
Move to the directory with examples:
cd meep-1.3/examples/
First, launch one of the examples (ring resonator model) without using MPI:
meep ring.ctl
If MEEP was installed correctly, then after the calculations finish you will see text like this in the terminal window:
creating output file "./ring-ez-000403.50.h5" ...
creating output file "./ring-ez-000403.85.h5" ...
creating output file "./ring-ez-000404.20.h5" ...
creating output file "./ring-ez-000404.55.h5" ...
creating output file "./ring-ez-000404.90.h5" ...
creating output file "./ring-ez-000405.25.h5" ...
creating output file "./ring-ez-000405.60.h5" ...
creating output file "./ring-ez-000405.95.h5" ...
creating output file "./ring-ez-000406.30.h5" ...
creating output file "./ring-ez-000406.65.h5" ...
run 1 finished at t = 406.70000000000005 (8134 timesteps)
Elapsed run time = 4.02319 s
The last line shows the time spent on the calculation (in seconds).
Now we can test how much MPI reduces the calculation time. To do this, run meep-mpi (MEEP with MPI support) on one processor core:
mpirun -np 1 meep-mpi ring.ctl
After the end of the calculations we see:
creating output file "./ring-ez-000405.95.h5" ...
creating output file "./ring-ez-000406.30.h5" ...
creating output file "./ring-ez-000406.65.h5" ...
run 1 finished at t = 406.70000000000005 (8134 timesteps)
Elapsed run time = 3.81012 s
With two cores:
mpirun -np 2 meep-mpi ring.ctl
After the end of the calculations we see:
creating output file "./ring-ez-000405.25.h5" ...
creating output file "./ring-ez-000405.60.h5" ...
creating output file "./ring-ez-000405.95.h5" ...
creating output file "./ring-ez-000406.30.h5" ...
creating output file "./ring-ez-000406.65.h5" ...
run 1 finished at t = 406.70000000000005 (8134 timesteps)
Elapsed run time = 3.20775 s
With three cores:
mpirun -np 3 meep-mpi ring.ctl
After the end of the calculations we see:
creating output file "./ring-ez-000405.95.h5" ...
creating output file "./ring-ez-000406.30.h5" ...
creating output file "./ring-ez-000406.65.h5" ...
run 1 finished at t = 406.70000000000005 (8134 timesteps)
Elapsed run time = 4.67524 s
That is, increasing the number of cores to two speeds up the calculation, but involving still more cores makes the run time grow again. Two is the optimal number of cores for this example, giving the largest speedup. With mpb-mpi (MPB with MPI support, which we also installed), the picture is usually different, and a better speedup can be achieved. The small shell loop below automates this comparison.
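A minimal sketch of the benchmark performed above (assumes meep-mpi is on the PATH and ring.ctl is in the current directory):

for n in 1 2 3; do
    echo "--- $n core(s) ---"
    mpirun -np $n meep-mpi ring.ctl | grep "Elapsed run time"
done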
On computing clusters and supercomputers the optimal number of processes will be different, say 8-12. You may ask: why use MPI and OpenMP at all if the achieved speedup is so modest? First of all, the speedup depends on the model itself and on how well the calculations can be parallelized. But the main point is that on supercomputers and clusters the available memory is tied to the number of nodes involved in the calculation, for example 2 GB per node. This means that by involving 2 nodes in the calculation, the program gets access to 4 GB of memory; involving 10 nodes, it gets access to 20 GB. Thus, using both MPI and OpenMP (enabled above with the --with-mpi --with-openmp flags) lets us speed up the calculations somewhat and, at the same time, use more memory for them. And while on your home computer you may only be able to use, say, 4 GB, and your office computer may not have enough slots for additional memory modules, on a supercomputer you can get access to 64 GB and more.
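On a cluster the run is usually submitted through a batch scheduler rather than started by hand. A minimal sketch, assuming your cluster uses SLURM (the node and task counts here are placeholders; pick them for your machine):

#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=2
# SLURM_NTASKS is set by SLURM to nodes * ntasks-per-node
mpirun -np $SLURM_NTASKS meep-mpi ring.ctl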
Now we can look at the calculation results. First, convert all the .h5 files created in the working directory during the calculations into the .vtk format with the command:

h5tovtk *.h5
After that, you can start Paraview (by running paraview on the command line) and open the calculation results for viewing. To look at the distribution of the dielectric constant in the simulated ring-resonator structure, open the file ring-eps-000000.00.vtk. The electric-field distributions are stored in files like ring-ez-000400.35.vtk.
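If you do not need 3D viewing, the h5topng utility from the same h5utils package gives quick 2D pictures directly from the .h5 files (a sketch; -S3 scales the image up and -Zc dkbluered selects a colormap, as in the MEEP tutorial):

h5topng -S3 ring-eps-000000.00.h5
h5topng -S3 -Zc dkbluered ring-ez-000400.35.h5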
A script for automatically performing steps 3-8 (written and tested on Scientific Linux 6.5)
P.S. Instructions for Ubuntu 14.04 LTS have been added (the latest versions of these programs are not always available in the repositories).