Foreword
Good day, dear Habr readers.
For quite a long time now I have been studying the area of mathematics known as optimization, more specifically the optimization of functions and dynamic systems (the synthesis of optimal open-loop control and full-state-feedback control). I have paid special attention to heuristic optimization methods, because the applied problems I encounter pose several difficulties:
- high dimensionality,
- nonlinearity,
- multimodality.
After going through the foreign literature, I realized that many of these methods are either not mentioned in the Russian literature at all or are not described in sufficient detail. Hence the desire to cover this area of mathematics more thoroughly. So here is a short survey of existing heuristic methods for optimizing functions that may be of interest to the reader.
Harmony Search
The idea of the method was inspired by improvising jazz musicians. A musician can either play something completely new (generate a random point in the search space) or play something similar to what he has heard before (a modification or combination of points stored in memory).
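To make the analogy concrete, here is a minimal sketch of that loop in Python: with probability hmcr a component is taken from the harmony memory (and occasionally "pitch-adjusted"), otherwise it is improvised at random. The parameter values, function names, and the test function are my own illustrative assumptions, not canonical settings.

```python
import random

def harmony_search(f, bounds, memory_size=20, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iterations=1000):
    """Minimize f over the box `bounds` with a basic harmony search.
    hmcr: probability of drawing a component from memory;
    par: probability of pitch-adjusting a drawn component."""
    # Harmony memory: a set of random starting points.
    memory = [[random.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(memory_size)]
    scores = [f(x) for x in memory]
    for _ in range(iterations):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:
                # "Play something heard before": take a component from memory...
                value = random.choice(memory)[j]
                if random.random() < par:
                    # ...and slightly adjust the pitch.
                    value += random.uniform(-1, 1) * bandwidth * (hi - lo)
                    value = min(max(value, lo), hi)
            else:
                # "Improvise something new": a random component.
                value = random.uniform(lo, hi)
            new.append(value)
        score = f(new)
        # Replace the worst harmony in memory if the new one is better.
        worst = max(range(memory_size), key=lambda i: scores[i])
        if score < scores[worst]:
            memory[worst], scores[worst] = new, score
    best = min(range(memory_size), key=lambda i: scores[i])
    return memory[best], scores[best]

# Example: minimize a shifted sphere function in three dimensions.
x, fx = harmony_search(lambda x: sum((v - 1) ** 2 for v in x),
                       bounds=[(-5, 5)] * 3)
```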
Artificial Immune Systems
As the name implies, these methods mimic the principles of immunological theories. Currently this class includes the following algorithms (a sketch of the first one follows the list):
- clonal selection algorithms,
- negative selection algorithms,
- immune networks,
- dendritic cell algorithms.
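As an illustration of the first class, here is a rough CLONALG-style clonal selection sketch: better antibodies produce more clones, and clones of better parents mutate less (hypermutation). The cloning and mutation schedules below are simplified assumptions of mine, not the exact published algorithm.

```python
import random

def clonal_selection(f, bounds, pop_size=30, n_select=10, clone_factor=5,
                     generations=200):
    """Minimize f with a simplified clonal-selection loop."""
    def rand_point():
        return [random.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_point() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)                      # antibodies ranked by affinity
        clones = []
        for rank, ab in enumerate(pop[:n_select]):
            # Better (lower-rank) antibodies get more clones...
            n_clones = max(1, clone_factor * (n_select - rank) // n_select)
            # ...and their clones mutate with a smaller step.
            rate = 0.5 * (rank + 1) / n_select
            for _ in range(n_clones):
                clone = [min(max(v + random.gauss(0, rate * (hi - lo)), lo), hi)
                         for v, (lo, hi) in zip(ab, bounds)]
                clones.append(clone)
        # Keep the best of parents + clones; refresh the tail with new cells.
        pop = sorted(pop + clones, key=f)[:pop_size - 5]
        pop += [rand_point() for _ in range(5)]
    best = min(pop, key=f)
    return best, f(best)
```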
Gravitational Search
The Gravitational Search (GS) algorithm, proposed by Rashedi and co-authors in 2009, was inspired by the behavior of massive bodies under gravitational interaction. The algorithm is a development of central force optimization (CFO); the main difference is that GS is stochastic, whereas CFO is deterministic.
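A simplified sketch of the idea follows; it is not the exact update rule from the 2009 paper (which also uses pairwise mass products and a shrinking "Kbest" set of attractors), but it shows the core mechanism: agents attract one another with fitness-derived masses under a decaying gravitational constant.

```python
import random

def gravitational_search(f, bounds, n_agents=20, iterations=300, g0=100.0):
    """Minimize f with a simplified GS: agents attract each other with a
    force proportional to fitness-derived masses; G decays over time."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_agents)]
    vel = [[0.0] * dim for _ in range(n_agents)]
    for t in range(iterations):
        fit = [f(p) for p in pos]
        best, worst = min(fit), max(fit)
        # Masses: the best agent gets mass ~1, the worst ~0 (minimization).
        raw = [(worst - fi) / (worst - best + 1e-12) for fi in fit]
        mass = [m / (sum(raw) + 1e-12) for m in raw]
        g = g0 * (1 - t / iterations)        # decaying gravitational constant
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                dist = sum((a - b) ** 2
                           for a, b in zip(pos[i], pos[j])) ** 0.5
                for d in range(dim):
                    # Stochastic weighting of each pairwise attraction.
                    acc[d] += (random.random() * g * mass[j]
                               * (pos[j][d] - pos[i][d]) / (dist + 1e-12))
            for d in range(dim):
                lo, hi = bounds[d]
                vel[i][d] = random.random() * vel[i][d] + acc[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
    best_i = min(range(n_agents), key=lambda i: f(pos[i]))
    return pos[best_i], f(pos[best_i])
```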
Scatter Search
Scatter search maintains a reference set of the good solutions found so far. Processing this set amounts to combining, improving, and updating its members. The method is built on five component methods:
- a diversification generation method (produces diverse trial solutions),
- an improvement method,
- a reference set update method,
- a subset generation method,
- a solution combination method.
The method thus works as follows: initial solutions are generated, and a reference set is built from them. Then the iterative part begins: subsets are generated, their solutions are combined, the resulting solutions are improved, and the reference set is updated. Iterations stop when a full pass fails to add any new element to the reference set. A bare-bones sketch is given below.
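In the sketch, each of the five component methods gets a deliberately naive stand-in of my own choosing: random generation for diversification, one round of coordinate perturbation for improvement, all pairs for subset generation, and a random convex blend for combination.

```python
import itertools
import random

def scatter_search(f, bounds, ref_size=10, n_initial=50, iterations=100):
    """Minimize f with a bare-bones scatter search."""
    def rand_point():
        return [random.uniform(lo, hi) for lo, hi in bounds]
    def improve(x):
        # Improvement method: one round of coordinate-wise perturbation.
        best = x[:]
        for d, (lo, hi) in enumerate(bounds):
            for step in (-0.05, 0.05):
                y = best[:]
                y[d] = min(max(y[d] + step * (hi - lo), lo), hi)
                if f(y) < f(best):
                    best = y
        return best
    # Diversification generation + improvement -> initial reference set.
    pool = [improve(rand_point()) for _ in range(n_initial)]
    refset = sorted(pool, key=f)[:ref_size]
    for _ in range(iterations):
        updated = False
        # Subset generation: all pairs; combination: random convex blend.
        for a, b in itertools.combinations(list(refset), 2):
            w = random.random()
            child = improve([w * u + (1 - w) * v for u, v in zip(a, b)])
            worst = max(refset, key=f)
            if f(child) < f(worst) and child not in refset:
                refset[refset.index(worst)] = child
                updated = True
        if not updated:   # reference set unchanged during the pass: stop
            break
    best = min(refset, key=f)
    return best, f(best)
```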
Cross-Entropy Method
This method was first proposed by Rubinstein in 1997. It was originally developed for combinatorial optimization problems, although it was later applied to the optimization of continuous functions. Its main feature is to sample test points over the search space and approximate the distribution of the good ones (usually with a normal law). At each step of the algorithm, random points are generated according to the current distribution and are then used to adjust it. As the algorithm proceeds, the distribution becomes more and more concentrated, which eventually lets one read off an approximate solution to the optimization problem (for example, the mean of the final distribution).
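A minimal sketch of the continuous variant, assuming an independent per-coordinate normal distribution; the elite fraction, sample counts, and initial spread are illustrative defaults, not prescribed values.

```python
import random
import statistics

def cross_entropy(f, dim, mean=None, std=3.0, n_samples=100,
                  elite_frac=0.1, iterations=50):
    """Minimize f with the cross-entropy method: sample from a per-coordinate
    normal distribution, keep the elite fraction of samples, and refit the
    distribution to them so it tightens around good regions."""
    mu = list(mean) if mean else [0.0] * dim
    sigma = [std] * dim
    n_elite = max(2, int(elite_frac * n_samples))
    for _ in range(iterations):
        samples = [[random.gauss(m, s) for m, s in zip(mu, sigma)]
                   for _ in range(n_samples)]
        elite = sorted(samples, key=f)[:n_elite]
        # Refit the sampling distribution to the elite points.
        for d in range(dim):
            vals = [x[d] for x in elite]
            mu[d] = statistics.mean(vals)
            sigma[d] = statistics.stdev(vals) + 1e-6  # keep a little spread
    return mu, f(mu)

# Example: the mean converges toward the minimizer of a quadratic.
x, fx = cross_entropy(lambda x: sum((v - 2) ** 2 for v in x), dim=4)
```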
Afterword
This list of methods is, of course, not exhaustive. There is still a huge number of optimization techniques, each with its own characteristics. What can I say: new algorithms appear almost every day. Covering the newest methods and their applications is exactly what I would like to do next.