
Haskell - knight's move




One of the amusements I remember from my school days was filling an m x n rectangle on squared notebook paper with knight's moves. Usually it was a 10x10 square: biting your lip (or sticking out the tip of your tongue), you carefully write numbers into the cells, then at some step realize you have hit a dead end and start all over again. That was long ago and perhaps embellished by memory, but while studying Haskell I ran into this very problem as number 91 in the well-known list https://wiki.haskell.org/99_questions .



So how might we approach this problem? Recall the commandment of the author of "Learn You a Haskell for Great Good!": think recursively. We take some cell from our set of cells, somehow fill in the rest, and check that the chosen cell connects to the filled part by a knight's move. "Somehow fill in the rest" means recursing on the reduced set of cells. Sooner or later the set shrinks to a single element, and that is the base of our recursion. With a list comprehension this can be written, for example, like this:


knightComb [x] = [[x]]
knightComb xs = [x:ks | x <- xs, ks <- knightComb $ delete x xs, near x $ head ks]
  where near (x1,y1) (x2,y2) = abs ((x2-x1)*(y2-y1)) == 2


The delete function is not in the Prelude, but it can be imported from the Data.List module.



The input of the function is a set of cells given as a list of coordinate pairs (x, y); the output is a list of all possible ways to traverse this set. If we do not need all the variants and one is enough, we can take the first with head, and thanks to Haskell's laziness the rest will never be computed. BUT! It was smooth only on paper. The algorithm really works, and the 3x3 square without its central cell is filled in a flash. On a 3x4 rectangle the first result appears in a couple of minutes, but adding just one more cell stretches that to half an hour. Larger boards are out of the question.
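As a sanity check, here is a minimal self-contained sketch of this brute force: the same knightComb, with near lifted to top level, plus a small test board (the name ring for the 3x3 square without its centre is my own addition).

```haskell
import Data.List (delete)

-- knight's-move adjacency: |dx * dy| == 2
near :: (Int, Int) -> (Int, Int) -> Bool
near (x1, y1) (x2, y2) = abs ((x2 - x1) * (y2 - y1)) == 2

-- brute force: pick any cell, tour the rest, check the junction afterwards
knightComb :: [(Int, Int)] -> [[(Int, Int)]]
knightComb [x] = [[x]]
knightComb xs = [x : ks | x <- xs, ks <- knightComb (delete x xs), near x (head ks)]

-- the 3x3 square without the central cell: small enough for the factorial search
ring :: [(Int, Int)]
ring = [(x, y) | x <- [1..3], y <- [1..3], (x, y) /= (2, 2)]

main :: IO ()
main = do
  let tour = head (knightComb ring)            -- laziness: only the first tour is computed
  print tour
  print (and (zipWith near tour (tail tour)))  -- prints True: every step is a knight's move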



Well, in principle the result is explainable. The validity of the chosen cell is only checked after returning from the recursion; initially there are many cells and far from every one fits, so despite laziness the complexity of the search is on the order of N!. Still, the first valid variant is produced as soon as it is found, so laziness does help; we just need to somehow restrict the excess search.



How exactly? The first thing that comes to mind is to start not from an arbitrary cell but from one reachable from the previous move. That is, along with the set of free cells we pass into the recursion the coordinates of the last occupied cell, build the list of possible moves from it, check that those cells are free, and recurse again with the new move, until the cells run out.



knightsTo x [] = [[x]]
knightsTo x xs = [x:ks | k <- next x, elem k xs, ks <- knightsTo k $ delete k xs]
  where next (x,y) = [(x+dx,y+dy) | (dx,dy) <- [(-2,-1),(-2,1),(-1,-2),(-1,2),(1,-2),(1,2),(2,-1),(2,1)]]


Something like that. For convenience, let us also write an interface function:



knights n = head . knightsTo (1,1) $ tail [(x,y) | x <- [1..n], y <- [1..n]]


And I must say, this is already a significant step forward!



*Main> knights 5
[(1,1),(2,3),(1,5),(3,4),(1,3),(2,1),(4,2),(5,4),(3,5),(1,4),(2,2),(4,1),(3,3),(2,5),(4,4),(5,2),(3,1),(1,2),(2,4),(4,5),(5,3),(3,2),(5,1),(4,3),(5,5)]



*Main> knights 6
[(1,1),(2,3),(1,5),(3,4),(1,3),(2,1),(3,3),(1,2),(2,4),(1,6),(3,5),(5,6),(6,4),(5,2),(3,1),(4,3),(6,2),(4,1),(2,2),(1,4),(2,6),(4,5),(6,6),(5,4),(4,2),(6,1),(5,3),(3,2),(5,1),(6,3),(5,5),(3,6),(4,4),(2,5),(4,6),(6,5)]



*Main> knights 7
[(1,1),(2,3),(1,5),(2,7),(3,5),(1,4),(2,2),(3,4),(1,3),(2,1),(3,3),(1,2),(2,4),(1,6),(3,7),(2,5),(1,7),(3,6),(4,4),(3,2),(5,1),(4,3),(3,1),(5,2),(6,4),(7,2),(5,3),(4,1),(6,2),(7,4),(5,5),(7,6),(5,7),(4,5),(2,6),(4,7),(6,6),(5,4),(4,6),(6,7),(7,5),(5,6),(7,7),(6,5),(7,3),(6,1),(4,2),(6,3),(7,1)]



*Main> knights 8
[(1,1),(2,3),(1,5),(2,7),(3,5),(1,4),(2,2),(3,4),(1,3),(2,1),(3,3),(1,2),(2,4),(1,6),(2,8),(3,6),(1,7),(2,5),(3,7),(1,8),(2,6),(3,8),(4,6),(5,4),(4,2),(6,1),(5,3),(3,2),(4,4),(5,2),(3,1),(4,3),(5,1),(7,2),(6,4),(4,5),(5,7),(7,8),(8,6),(6,5),(8,4),(7,6),(8,8),(6,7),(4,8),(5,6),(6,8),(4,7),(5,5),(7,4),(8,2),(6,3),(7,1),(8,3),(7,5),(8,7),(6,6),(5,8),(7,7),(8,5),(7,3),(8,1),(6,2),(4,1)]



And if you have patience, then



*Main> knights 9
[(1,1),(2,3),(1,5),(2,7),(1,9),(3,8),(1,7),(2,5),(1,3),(2,1),(3,3),(1,2),(2,4),(1,6),(2,8),(3,6),(4,4),(3,2),(5,1),(4,3),(2,2),(1,4),(2,6),(1,8),(3,7),(2,9),(4,8),(5,6),(3,5),(4,7),(3,9),(5,8),(4,6),(3,4),(4,2),(5,4),(6,2),(4,1),(5,3),(4,5),(5,7),(4,9),(6,8),(7,6),(5,5),(6,3),(7,1),(9,2),(8,4),(6,5),(8,6),(9,8),(7,9),(6,7),(5,9),(7,8),(9,9),(8,7),(6,6),(7,4),(9,5),(8,3),(9,1),(7,2),(9,3),(8,1),(7,3),(6,1),(8,2),(9,4),(7,5),(9,6),(8,8),(6,9),(7,7),(8,9),(9,7),(8,5),(6,4),(5,2),(3,1)]



On the 10x10 square I never got a result. Nevertheless, at each recursion step the number of new calls now depends not on the number of free cells but on the number of possible moves, of which there are at most eight (in fact at most seven, since one candidate is the cell we have just come from), and often fewer. That is, the complexity of the algorithm became O(p^N). Still not polynomial, but progress.



The next function can produce moves off the board, which are then cut off by the elem check, but scanning the free list up to eight times, once per candidate, may be inefficient. It seems more reasonable to run through the list of free cells once and filter out the ones that fit.



knightsTo x [] = [[x]]
knightsTo x xs = [x:ks | k <- filter (near x) xs, ks <- knightsTo k $ delete k xs]
  where near (x1,y1) (x2,y2) = abs ((x2-x1)*(y2-y1)) == 2


Practice shows, however, that this does not affect performance: in about the same time we get exactly the same answers. In fact the complexity of the algorithm has not changed at all; the only difference is that the possible moves are now tried not in the order hard-coded in next, but in the order they appear in the list of free cells. On a partially ordered set the solution will be found faster; on an unordered one, correspondingly, we will hit dead ends more often.



How else can we curb the excess search? We can, for example, add at each recursion step a check that the input list contains no isolated cells: every free cell must have at least one free neighbour, unless the whole set consists of a single cell. But even after this check, orphaned pairs, triples and so on of cells can remain. Generalizing, we should check the whole remaining region for connectivity, and connectivity here means that from the current position there must be a path to every remaining cell. Such a check can be done, for example, with breadth-first traversal, for which we write an auxiliary function.



connected _ [] = True
connected [] _ = False
connected (x:xs) ws = let ns = filter (near x) ws
                      in  connected (xs++ns) (ws\\ns)


The function takes two lists: the cells already reached (a queue) and the cells not yet reached. If the second list empties out, the region of the graph is connected. (The list-difference operator (\\) also comes from Data.List.)
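To see the traversal in action, here is a self-contained sketch (using the same near predicate the article defines globally below; the example cell sets are my own):

```haskell
import Data.List ((\\))

-- knight's-move adjacency, as defined in the article
near :: (Int, Int) -> (Int, Int) -> Bool
near (x1, y1) (x2, y2) = abs ((x2 - x1) * (y2 - y1)) == 2

-- breadth-first reachability: first list is the queue of reached cells,
-- second the cells not yet reached; connected iff the second empties first
connected :: [(Int, Int)] -> [(Int, Int)] -> Bool
connected _ [] = True
connected [] _ = False
connected (x:xs) ws =
  let ns = filter (near x) ws   -- unreached neighbours of the current cell
  in  connected (xs ++ ns) (ws \\ ns)

main :: IO ()
main = do
  print (connected [(1,1)] [(2,3), (3,1)])  -- True: (1,1) -> (2,3) -> (3,1)
  print (connected [(1,1)] [(1,2)])         -- False: (1,2) is not a knight's move away
```

Note that (3,1) is not reachable from (1,1) directly, only via (2,3); the growing queue is what makes this a breadth-first search rather than a one-step neighbour check.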



Add a new check to the main function.



knightsTo x [] = [[x]]
knightsTo x xs = [x:ks | connected [x] xs, k <- filter (near x) xs, ks <- knightsTo k $ delete k xs]


The neighbourhood check is now defined globally:



 near (x1,y1) (x2,y2) = abs ((x2-x1)*(y2-y1)) == 2 


And it works! The solution for the 9x9 square now appears faster than the 8x8 square used to take. And that is despite the additional check being quadratic in the size of its input. But since the overall complexity of the algorithm is exponential, cutting off hopeless branches, even at such a price, reduces the computation dramatically.



If you examine the solutions produced, you can see that the move-selection algorithm tries to fill the left part of the area densely, and at least at the beginning it does so quite well. This suggests breaking large areas into smaller compact pieces. A great idea! Our problematic 10x10 square is four 5x5 squares, and those we now crack like nuts. We only need to learn not just to start from a given cell, but also to finish at one we specify. To that end, we modify the interface function.



knights5 ((m,n), st, fin) = head . filter (end fin) . knightsTo st $
                            delete st [(x,y) | x <- [m..m+4], y <- [n..n+4]]
  where end x xs = x == last xs


The size of the area is fixed: the input is the coordinates of the lower-left corner of the 5x5 square, plus the start and end of the desired path. Then we apply this function four times with suitable parameters.



knights10 = concatMap knights5 [ ((1,1),(5,3),(5,5))
                               , ((1,6),(3,6),(5,6))
                               , ((6,6),(6,8),(6,6))
                               , ((6,1),(8,5),(6,5)) ]


Voila!



*Main> knights10
[(5,3),(4,1),(2,2),(1,4),(3,3),(4,5),(2,4),(1,2),(3,1),(5,2),(4,4),(2,5),(1,3),(2,1),(4,2),(5,4),(3,5),(4,3),(5,1),(3,2),(1,1),(2,3),(1,5),(3,4),(5,5),(3,6),(1,7),(2,9),(4,10),(3,8),(5,7),(4,9),(2,10),(1,8),(2,6),(4,7),(5,9),(3,10),(1,9),(2,7),(4,6),(5,8),(3,7),(1,6),(2,8),(1,10),(3,9),(5,10),(4,8),(5,6),(6,8),(7,6),(8,8),(7,10),(9,9),(10,7),(8,6),(6,7),(7,9),(9,10),(10,8),(9,6),(7,7),(6,9),(8,10),(10,9),(9,7),(7,8),(6,10),(8,9),(10,10),(9,8),(10,6),(8,7),(6,6),(8,5),(6,4),(7,2),(9,1),(8,3),(10,4),(9,2),(7,1),(6,3),(7,5),(9,4),(10,2),(8,1),(6,2),(7,4),(9,5),(10,3),(8,4),(10,5),(9,3),(10,1),(8,2),(6,1),(7,3),(6,5)]


A few seconds of waiting and we have a solution to an even harder problem. Moreover, from the end of the found path the knight can move back to its beginning, i.e. our solution is a closed (cyclic) tour.
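Claims like that are easy to verify mechanically. Below is a sketch of a checker for the pruned algorithm: validTour and isClosed are helper names of my own, and the test runs on the quick 5x5 case rather than the multi-second 10x10 one.

```haskell
import Data.List (delete, sort, (\\))

near :: (Int, Int) -> (Int, Int) -> Bool
near (x1, y1) (x2, y2) = abs ((x2 - x1) * (y2 - y1)) == 2

connected :: [(Int, Int)] -> [(Int, Int)] -> Bool
connected _ [] = True
connected [] _ = False
connected (x:xs) ws = let ns = filter (near x) ws
                      in  connected (xs ++ ns) (ws \\ ns)

-- the final version with the connectivity cut-off
knightsTo :: (Int, Int) -> [(Int, Int)] -> [[(Int, Int)]]
knightsTo x [] = [[x]]
knightsTo x xs = [x : ks | connected [x] xs
                         , k <- filter (near x) xs
                         , ks <- knightsTo k (delete k xs)]

knights :: Int -> [(Int, Int)]
knights n = head . knightsTo (1,1) $ tail [(x, y) | x <- [1..n], y <- [1..n]]

-- a tour is valid when it visits every cell of the n x n board exactly once
-- and each consecutive pair of cells is a knight's move
validTour :: Int -> [(Int, Int)] -> Bool
validTour n xs = sort xs == [(x, y) | x <- [1..n], y <- [1..n]]
              && and (zipWith near xs (tail xs))

-- a valid tour is closed (cyclic) when its last cell is a knight's move from its first
isClosed :: [(Int, Int)] -> Bool
isClosed xs = near (last xs) (head xs)

main :: IO ()
main = print (validTour 5 (knights 5))  -- prints True
```

Running `isClosed knights10` against the full program above is what would confirm the cyclicity claim; here we only check it on a hand-built closed cycle around the 3x3 ring.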



Actually, for complete happiness it only remains to automate the partitioning of large areas into small ones, together with building suitable chains between them. But that, as they say, is an entirely different problem.



Part two

Part three

Source: https://habr.com/ru/post/357694/


