
Search for a Hamiltonian cycle in a large graph (traveling salesman problem). Part 2

This article continues my first one: here I will describe more advanced algorithms for finding a Hamiltonian cycle in a large complete graph.

If you have not read the first article, I recommend reading it first: this is a continuation, not a standalone article.
1. Ant algorithm, aka Ant Colony Optimization (ACO)

The idea of this algorithm belongs to Marco Dorigo, who proposed it in 1992.
The algorithm belongs to the class of metaheuristic optimization methods.
The idea of the algorithm

Have you ever watched ants, at least once as a child? Ants go out for food, often quite far from the nest. But how do they tell each other where to go for it? After all, it would be wasteful to search from scratch every time if, say, an ant found an apple somewhere yesterday. So the ant that found the apple marks the road with pheromones on its way home. Only a little: it is just one ant. The next day n ants go there, each finds food, and each marks the path. Now there is more pheromone on that path, and the probability that the next ants will follow it grows. However, pheromones evaporate, and after a few days no trace of the marks remains. And the longer the path, the faster its trail fades, since fewer ants manage to reinforce it per unit of time. This fact of ant life is what the algorithm is built on.
[Image: more ants traverse the shortest path per unit of time, so in the end the pheromone on all other paths evaporates, and this path becomes the route for the whole colony.]
How do we apply this to the traveling salesman problem?
We take the branch and bound method described in the first article and make the following changes (a code sketch follows the list):
1) Whenever, in the take / don't take an edge decision tree, we walk down one of the paths, we "deposit" a certain amount of pheromone on the path from the root to that vertex.
2) On each edge, the pheromone of all paths passing through that edge is summed up.
3) At each iteration of the algorithm, the amount of pheromone on every path is reduced by a certain percentage (evaporation).
4) When choosing the next vertex, we are guided not only by its cost estimate but also by the amount of pheromone on the way to it.
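For concreteness, here is a minimal sketch of the pheromone bookkeeping and the probabilistic vertex choice. This is not the author's code: the constants (ALPHA, BETA, EVAPORATION, DEPOSIT), the data structures (dist as a distance matrix, pheromone as a dict keyed by edges) and the function names are my own assumptions, and a real implementation would call choose_next_vertex from inside the branch-and-bound vertex selection.

```python
import random

# Hypothetical constants -- tuning them is exactly the hard part mentioned below.
ALPHA = 1.0        # weight of the pheromone trail
BETA = 2.0         # weight of the (inverse) cost estimate
EVAPORATION = 0.1  # fraction of pheromone that evaporates each iteration
DEPOSIT = 100.0    # total pheromone deposited along one traversed path

def choose_next_vertex(current, unvisited, dist, pheromone):
    """Pick the next vertex with probability proportional to
    pheromone^ALPHA * (1/distance)^BETA."""
    candidates = list(unvisited)
    weights = [
        (pheromone[(current, v)] ** ALPHA) * ((1.0 / dist[current][v]) ** BETA)
        for v in candidates
    ]
    return random.choices(candidates, weights=weights, k=1)[0]

def update_pheromone(pheromone, paths, dist):
    """Evaporate a bit everywhere, then deposit pheromone along each traversed path,
    shorter paths getting more pheromone per edge."""
    for edge in pheromone:
        pheromone[edge] *= (1.0 - EVAPORATION)
    for path in paths:
        edges = list(zip(path, path[1:] + path[:1]))
        length = sum(dist[a][b] for a, b in edges)
        for a, b in edges:
            pheromone[(a, b)] += DEPOSIT / length
            pheromone[(b, a)] += DEPOSIT / length
```

The ratio of ALPHA to BETA controls how much the search trusts the trail versus the plain cost estimate.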

The algorithm is considered one of the most effective polynomial-time heuristics for finding a Hamiltonian cycle in large graphs.
With the branch and bound method already written, it is quite simple to bolt the simplest version of an ant colony onto it; the only difficulty is choosing the right constants: the evaporation rate, how much pheromone to deposit on a traversed path, and so on.
It beats the greedy algorithm by about 17% and runs for about an hour.
2. Genetic algorithm

The idea of the algorithm

The main source of ideas for this algorithm is biological evolution.
The main focus of this algorithm is the so-called crossover operator, which recombines candidate solutions.
1) The main requirement is to store a solution as a vector of something (bits, numbers, symbols).
In our problem, the i-th cell stores the number of the vertex we go to from vertex i.
2) New candidate solutions, from which solutions for crossover will later be picked, are produced by mutating old solutions and by crossover.
3) A mutation is a function that randomly changes some genes (cells of the array, in our case); the result is added to the solution pool.
4) Crossover is a combination of (randomly chosen) genes from several of the best solutions of the previous iteration.
5) At each stage we must discard the unfit solutions, those that are simply not Hamiltonian cycles (for example, they split into two cycles of 250 vertices each); a sketch of such a check is given after this list.
6) Solutions for mutation are chosen at random with a certain probability.
7) Solutions for crossover are also chosen at random, but the best solutions of the current iteration (those with the smaller cycle weight) should have a higher probability of being picked.
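As an illustration of step 5, here is a minimal check, written for the representation from step 1 (cell i holds the vertex we go to from vertex i), that a candidate encodes one Hamiltonian cycle rather than several smaller ones. The function name is my own.

```python
def is_hamiltonian_cycle(successor):
    """successor[i] is the vertex we go to from vertex i.
    The vector is a valid solution only if following the successors from
    vertex 0 visits every vertex exactly once before returning to 0."""
    n = len(successor)
    visited = [False] * n
    v = 0
    seen = 0
    while not visited[v]:
        visited[v] = True
        seen += 1
        v = successor[v]
    return v == 0 and seen == n
```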
If memory is a concern, at each stage you can also throw out the worst of the fit solutions.
The easiest way to start such an algorithm is to seed it with a set of valid solutions, for example those produced by the greedy algorithm started from different vertices.
The stop condition is a set of parameters telling the algorithm that it is time to stop and output the answer (the best solution found so far).
In my case, the stop parameter was a time limit; a sketch of the whole loop is given below.
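Below is a minimal sketch of how these pieces could fit together. It reuses is_hamiltonian_cycle from the sketch above; all names, the population size, the mutation probability and the selection scheme are my own assumptions rather than the article's code, and the crossover and mutation are the naive ones described in the list, so most offspring are invalid and simply get filtered out.

```python
import random
import time

def cycle_weight(successor, dist):
    """Total weight of the cycle encoded by the successor vector."""
    return sum(dist[i][successor[i]] for i in range(len(successor)))

def mutate(successor):
    """Randomly change a couple of genes (here: swap the contents of two cells)."""
    child = successor[:]
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def crossover(parents):
    """Each gene is taken from a randomly chosen parent."""
    n = len(parents[0])
    return [random.choice(parents)[i] for i in range(n)]

def genetic_tsp(dist, seed_solutions, time_limit=7200, pop_size=200):
    # seed_solutions: valid tours, e.g. greedy tours started from different vertices
    population = list(seed_solutions)
    deadline = time.time() + time_limit        # the stop condition is a time limit
    while time.time() < deadline:
        population.sort(key=lambda s: cycle_weight(s, dist))
        population = population[:pop_size]     # throw out the worst of the fit solutions
        best = population[:max(1, pop_size // 10)]
        offspring = []
        for _ in range(pop_size):
            child = crossover(random.sample(best, min(2, len(best))))
            if random.random() < 0.3:          # mutation probability (assumed)
                child = mutate(child)
            if is_hamiltonian_cycle(child):    # discard unfit solutions (step 5)
                offspring.append(child)
        population.extend(offspring)
    return min(population, key=lambda s: cycle_weight(s, dist))
```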
The simplest implementation ran for 2 hours and beat the greedy algorithm by only ~4%.
It is worth noting that this is largely due to the very simple implementation; many sources claim that the genetic algorithm is one of the best, if not the best.
It is hard to say for sure, because half of the sources claim that the current record holder is ACO, and the other half that it is the genetic algorithm.

P.S. This continuation was written largely thanks to the comments on the first article, which encouraged me to try writing at least the simplest implementations of these algorithms. Thanks to mephistopheies and Diversus.


Source: https://habr.com/ru/post/160167/

