Sometimes it is necessary to run an application on multiple machines (or processors) in order to improve performance, that is, to reduce execution time. One option is to build a network of computers and launch the application distributed across all of its nodes. When developing such an application, you need to organize message passing between its processes. I know of two ways to implement this:
- using sockets and working with the OS API directly,
- using MPI.
The first option offers great flexibility, but the MPI library is much simpler and is, in fact, more widely used in parallel computing. MPI is a standard for a set of functions for passing messages between processes of a single application. There is a free implementation of this standard, the MPICH2 library, which is what this article uses. You can find plenty of manuals and guides on the library's functions, so here I will focus only on installation and on verifying that everything works.
Installing the MPICH2 library on Windows
To get started with the MPICH2 library, download the version of the product compatible with your operating system here. For Windows it is an MSI installation package, so the library is installed in the standard way. It is important that the installation be performed for all users of the system.
Now you need to add the two main executables, mpiexec.exe and smpd.exe, to the firewall's list of allowed programs. This is necessary because a cluster communicates over the network, and each network node must be reachable by the MPI components. The specific settings depend on the firewall you use.
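For example, on Windows XP / Server 2003 the built-in firewall can be configured straight from the console, roughly like this (the installation path here is an assumption; adjust it to your system):

rem allow the MPI launcher and the process manager through the Windows firewall
netsh firewall add allowedprogram "C:\Program Files\MPICH2\bin\mpiexec.exe" MPIEXEC ENABLE
netsh firewall add allowedprogram "C:\Program Files\MPICH2\bin\smpd.exe" SMPD ENABLE

Third-party firewalls will have their own dialogs for the same thing; the point is simply that both programs must be allowed on every node.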
The next step is to create a user account in the system under which the library components will run. This user must have a password, because MPICH2 does not allow registering an executing user with a blank password. Registration is performed with the wmpiregister.exe component, located in the library's bin folder, which has a self-explanatory window interface:

However, the same can be done with the console command [path_to_library]\bin\mpiexec -register.
Installation is almost complete; it remains to verify that everything is set up correctly. For this purpose, the examples folder contains sample programs with parallel algorithms. To launch them you can use the wmpiexec.exe component, whose window interface needs no further explanation.

Another way to execute applications using MPI is through the console, for example with a command like [path_to_library]\bin\mpiexec -n 2 cpi.exe, where -n 2 specifies the number of processes involved (here there are 2) and cpi.exe is the name of the executable application. For convenient work in the console, I advise adding the path to mpiexec.exe to the PATH environment variable. If an application is executed on a single-processor machine, multiprocessing is emulated, so you can check your applications right on the spot.
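A console session might then look like this (the installation path is an assumption; note that set only affects the current console, so for a permanent change add the path through the system environment-variable settings):

rem make mpiexec visible in this console session
set PATH=%PATH%;C:\Program Files\MPICH2\bin
rem run the bundled cpi example on 2 processes
mpiexec -n 2 cpi.exe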
Checking that it works
Microsoft Visual Studio 2005 is used as the IDE for development. We will write a program in which every newly spawned process greets the world. To do this, create an empty project and change a few of the project settings.
First of all, add the directories where the header files and library files are located. For anyone who does not know how, it is done as follows:
- Select the menu item Tools => Options.
- In the “Show directories for:” dropdown, select “Include files”.
- Add [path_to_library]\Include.
- In the same dropdown, select “Library files”.
- Add [path_to_library]\Lib.
- In Solution Explorer, right-click the project and select Add => Existing Item, then add all files with the .lib extension from the [path_to_library]\Lib folder (or link them from source, as in the sketch right after this list).
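As an alternative to adding the .lib files to the project by hand, MSVC lets you request a library directly from source code. A minimal sketch, assuming the import library in your Lib folder is named mpi.lib (check the actual file names in your distribution):

// ask the MSVC linker to pull in the MPI import library
#pragma comment(lib, "mpi.lib")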
The project is now fully ready to work with MPI. I can advise creating a filter for the library files added to the project so that they do not get in the way. Now add a .cpp file with the application code:
#include "stdio.h"
#include "mpi.h"
#include "stdlib.h"
#include "math.h"
int ProcNum;
int ProcRank;
int main( int argc, char *argv[]){
MPI_Status status;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &ProcRank);
MPI_Comm_size(MPI_COMM_WORLD, &ProcNum);
printf( "From process %i: Hello, World!\n" , ProcRank);
MPI_Finalize();
}
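For those who prefer the console to the IDE, here is a sketch of building from the command line with the Visual C++ compiler. The file name hello.cpp, the installation path, and the import library name mpi.lib are all assumptions; check them against your setup:

rem compile and link against the MPICH2 import library (paths are assumptions)
cl hello.cpp /I"C:\Program Files\MPICH2\include" /link /LIBPATH:"C:\Program Files\MPICH2\lib" mpi.lib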
Compile it and run the resulting binary through wmpiexec on 4 processes.

As we can see, every spawned process greeted the world.
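The example above only prints, while the real point of MPI is passing messages between processes. As a teaser, here is a minimal sketch of point-to-point communication with MPI_Send and MPI_Recv; the message text and buffer size are arbitrary choices for illustration:

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    char msg[64];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) printf("Run this example with at least 2 processes.\n");
    } else if (rank == 0) {
        strcpy(msg, "Hello from process 0");
        // blocking send of the buffer (including '\0') to process 1, tag 0
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        // blocking receive from process 0, tag 0
        MPI_Recv(msg, (int)sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
        printf("Process 1 received: %s\n", msg);
    }

    MPI_Finalize();
    return 0;
}

Launched with mpiexec -n 2, process 0 sends the string and process 1 prints what it received.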
I gave the code with only minimal comments, purely to demonstrate that the library works. In the future I plan to devote a separate article to the list of MPI functions. Also interesting is the topic of excessive parallelism and, more generally, the question of when it is worth parallelizing an application and when it is not; those investigations will also be presented later. Hence a basic question I have for readers: what is the scope of applicability of all this in web technologies? So far my own interest in parallel computing stems from a different problem: speeding up the simulation of various kinds of processes.