“Simplexity is a process by which nature achieves simple results in complex ways.” - Bruce Schiff

1. Introduction
Imagine the following task: we have a table of n integer keys A1, A2, ..., An, and an integer value X. We need to find the index of the number X in this table, but we are in no particular hurry. In fact, we would like the search to take as long as possible.
For this task we could settle on the most trivial algorithm: go through all the Ai in order and compare each of them with X. But it may happen that X = A1, and the algorithm stops at the very first step. Thus we see that, in the best case, the naive algorithm has time complexity O(1). The question arises: can we improve (that is, worsen) this result?
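(A minimal sketch of the naive algorithm just described, in modern notation and with names of our own choosing:)

    def naive_search(a, x):
        # Naive search: best case O(1), when x happens to be the very first key.
        for i, value in enumerate(a):
            if value == x:
                return i
        return None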
Of course, we could slow this algorithm down considerably by adding empty loops before the first test of the equality of X and A1. But, unfortunately, this method does not suit us, because any fool will notice that the algorithm is simply wasting time. We therefore need an algorithm that keeps moving towards the goal despite its complete lack of enthusiasm for, or even desire of, ever reaching it.
We can build an algorithm that satisfies this requirement, and which is much better (that is, slower) than the naive algorithm, if we first sort the table A in ascending order. Then we can use the slow search procedure presented below:
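(The original listing has not survived this reprint; the following sketch is our guess at the procedure, reconstructed from the description around it: equality is tested only on one-element ranges, and the half of the table that cannot contain X is always searched first, in vain.)

    def research(a, x, lo, hi):
        # Leisurely search for x in the sorted slice a[lo..hi] (inclusive bounds).
        # Speculative reconstruction; assumes the keys are distinct.
        if lo > hi:
            return None                           # empty table: nothing to (not) find
        if lo == hi:
            return lo if a[lo] == x else None     # equality is tested only here
        mid = (lo + hi) // 2
        if x <= a[mid]:
            # x can only be in a[lo..mid]; dutifully search a[mid+1..hi] first, in vain
            found = research(a, x, mid + 1, hi)
            return found if found is not None else research(a, x, lo, mid)
        else:
            # x can only be in a[mid+1..hi]; search a[lo..mid] first, in vain
            found = research(a, x, lo, mid)
            return found if found is not None else research(a, x, mid + 1, hi)

The search of the whole table is started with research(A, X, 0, n - 1).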

The number of comparisons performed by this algorithm depends on neither X nor the Ai, and is given by the following recursive formula:
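(The displayed formula itself is lost; for the reconstruction sketched above, counting one comparison with a table element per call on a nonempty range, it would read:)

    T(0) = 0,  T(1) = 1,  T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 1  for n ≥ 2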

From this we get that
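(the lost display presumably gave the closed form, which follows by induction)

    T(n) = 2n − 1 = Θ(n)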

Thus we obtain a slowdown by a factor of n compared with the naive algorithm. Note that the slow search algorithm's lack of enthusiasm is completely undetectable from its behaviour: it tests the equality of X and an Ai every O(1) operations, never compares the same values, and stops when the desired value is found. Few search algorithms, honest or not, can match this performance.
2. Summary
The research procedure is an example of a completely new direction in Computer Science: the design and analysis of leisurely algorithms. Intuitively, a leisurely algorithm for a problem P is an algorithm that wastes time in such a clever way as to fool the average naive observer. We can state this in strict mathematical language by saying that A is a leisurely algorithm for problem P if and only if
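(The formula itself is missing here; judging from the explanations that follow, it must have been of the form:)

    ∀ o ∈ N  ∃ w :  W(A, t, w, P) ∧ F(w, o)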

where N is the set of naive observers, t is time, W(A, t, w, P) is a predicate meaning "A wastes time t in vain, in the way w, while solving problem P", and F(w, o) is a predicate meaning "the way w is confusing enough to fool the observer o". We make no assumptions about the finiteness of the set N.
When studying slow algorithms, the performance of an algorithm A is best described by its inefficiency, or best-case running time: the minimum (as a function of n) of the running time of A over all possible inputs of length n. The complexity of a problem P is the maximum inefficiency among all the slow algorithms that solve P. An algorithm A is called pessimal for a problem P if the best-case inefficiency of A is asymptotically equal to the complexity of P.
Leisurely algorithms have many important practical applications. For example, the leisurely search algorithm is particularly well suited to real keys (real not in the mathematical sense, but in the sense of physical keys that open doors and cabinets). The leisurely search algorithm is the only known algorithm that accurately emulates the behaviour of a bunch of such keys.
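(In our own notation, not the authors', the definitions above can be summarized as follows, where T_A(x) denotes the running time of A on input x:)

    I_A(n) = min { T_A(x) : |x| = n }                          (the inefficiency of A)
    complexity_P(n) = max { I_A(n) : A is a leisurely algorithm solving P }
    A is pessimal for P  ⇔  I_A(n) = Θ(complexity_P(n))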
3. The task of finding a path in interesting graphs

Searching in a table can be considered a special case of the following, more general problem. Suppose we have a “labyrinth”: an undirected graph G with n vertices, and a vertex u marked as the “entrance”. Our task is to find a path from the vertex u to the vertex v marked as the “exit”, moving through the maze one edge per unit of time. Following the classical analysis of algorithms, we would most likely turn at once to one of the efficient shortest-path algorithms. Suppose, however, that the labyrinth is quite interesting inside, and that we would not object, in principle, to spending a few extra cycles of the algorithm in search of the vertex v. In fact, we hope... no, we positively want the search to take as long as possible. Since our sense of duty does not allow us to abandon the search altogether, we cannot ignore the primitive needs of our human nature for long either. Besides, what is wrong with a relaxed approach to solving a problem, if we still do what is required? Moreover, we have always been told that “if you hurry, you will only make people laugh”, and in any case nobody needs to be perfect. Well, and so on...
Taking all this into account, we see that this task fits very well into the subject area of our theory.
In fact, this problem has been widely studied in Graph Theory, where it is known as the most negligent path problem. An important branch of Mathematical Methods of Operations Research, called Anemic Programming, is entirely devoted to inefficient methods of solving it. What do we know about its complexity? As Wagner showed earlier [Wagner], if we know nothing about the location of v, the best-case time can be as small as O(1): at every step, even the very first, we risk stumbling upon v and falling out of the maze, however much we wish to avoid it. However, as Homer showed [Homer], if the graph lies in the plane (or on a flat globe), and we have an oracle that, like a compass, points in the direction of the goal, we can delay the moment of arrival until we have visited most of the vertices of the graph. In fact, the time for which we can delay this moment is limited not so much by the complexity of the problem as by its monotony [1]. The inefficiency of Homer's algorithm is Ω(n), which gives a lower bound on the complexity of the most negligent path problem.
————————
[1] Also known as boredom.
The leisurely search method and Homer's negligent path algorithm are both based on the same idea, known as the Powerless Descent Method. We would also like to mention in passing another important paradigm for the design of leisurely algorithms, described by Homer in the same work, to which the author gave the vivid name of Penelope's Trick. It is based on the use of a for loop whose step oscillates between positive and negative values from iteration to iteration. Unfortunately, this technique (now better known as backtracking) has become so famous that even the most naive observer easily recognizes it, so today it is only of historical interest.
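(Purely as an illustration, and not the authors' code: one way to realize such an oscillating step, biased slightly forwards so that the loop still terminates, might look as follows.)

    def penelope_scan(a, x):
        # The index steps +1, +1, -1, +1, +1, -1, ...: it keeps oscillating,
        # yet creeps towards the end, taking roughly three times as many steps as needed.
        i, phase = 0, 0
        while i < len(a):
            if a[i] == x:
                return i
            phase = (phase + 1) % 3
            i += 1 if phase != 0 else -1    # every third step goes backwards
            if i < 0:
                i = 0
        return None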
4. Searching backwards
A rather similar problem is the systematic enumeration of all n vertices of a connected graph G. This problem has been well studied in the classical Theory of Algorithms and is usually solved by the well-known depth-first search [Vern1] or breadth-first search [Vern2] algorithms, whose best-case time complexity is Ω(n).
For a long time this was believed to be an upper bound on the complexity of the problem, until, on October 4, 1984 at 14:17, the community of leisurely algorithms was shaken by the discovery of a search algorithm whose inefficiency is Ω(n²) on an important class of graphs.
The backwards search method, as its inventor named it, is described below. Like its predecessors, the algorithm assigns to the vertices v1, v2, ..., vn of the graph G integers λ(v1), λ(v2), ..., λ(vn) from 1 to n. The algorithm is given by the recursive procedure bwfs. We assume that all the numbers λ(v) are initially zero. The recursion starts with the call bwfs(v1, 1).

We leave it to the reader, as an instructive exercise, to prove the correctness of the algorithm and to show that its inefficiency is indeed Θ(n²) on graphs that are straight lines (simple paths). An interesting feature, from the point of view of the pessimality of the memory consumed, is that the recursion depth of bwfs can reach Θ(n²), depending on the starting vertex. The inefficiency of this algorithm on general graphs remains an open problem, but it appears that the algorithm never runs faster than O(n√n).
It is interesting to note that it suffices to remove one of the for loops in bwfs to obtain the familiar depth-first search. However, the order in which bwfs visits the vertices of the graph for the first time is very strange and confusing, and not at all like the order produced by the depth-first algorithm.
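(For reference, and not to be confused with the lost bwfs listing: ordinary depth-first numbering, which the text says bwfs turns into when one of its two loops is removed, can be sketched as follows.)

    def dfs_numbering(graph, start):
        # graph is a dict mapping each vertex to the list of its neighbours.
        numbering = {}          # numbering[v] plays the role of λ(v); missing means 0
        counter = 1

        def dfs(v):
            nonlocal counter
            numbering[v] = counter
            counter += 1
            for u in graph[v]:
                if u not in numbering:
                    dfs(u)

        dfs(start)
        return numbering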
By definition, the backwards numbering of the graph's vertices is the set of values λ(v) assigned to the vertices by this algorithm. Like the depth-first and breadth-first numberings, it has several interesting properties; for reasons of space we quote only a couple of them. If the edges of the graph are directed so that the graph is acyclic, then either λ(head(e)) ≥ λ(tail(e)) for all edges e, or λ(head(e)) ≤ λ(tail(e)) for all edges e. Moreover, if d is the maximum degree of the vertices of the graph, then for any two adjacent vertices u and v we have |λ(u) − λ(v)| ≤ d log min(λ(u), λ(v)). These and other properties make the backwards numbering very important from the point of view of combinatorics.
5. Slowsort
No other problem shows the power and elegance of leisurely algorithmics as well as the sorting of n given numbers. This problem has a long and rich history, whose beginnings can be traced back a very long way, certainly well before leisurely algorithmics became a recognized discipline in the second half of last Wednesday. Thanks to the efforts of many pioneers of the field, the inefficiency of sorting algorithms has grown steadily, from the modest Ω(n log n) of Merge Sort, through the Ω(n√n) of Shell Sort with certain increments and the Ω(n²) of Bubble Sort, and finally to the tricky Ω(n³) algorithm recently described by Bentley [Bentley]. (Apparently, it was first published by Steele, Woods, Finkel, Crispin, and Goodfellow [SWFCG].)
One of the most important results of modern complexity theory is the proof that the sorting problem can be solved in time Ω(n^(log n/(2+ε))). This is the first problem found to have non-polynomial complexity. An elegant algorithm that achieves this kind of inefficiency is called Slowsort and is described below.
The Slowsort algorithm is an ideal example of the Multiply and Surrender paradigm, which is perhaps the most important paradigm in the development of slow algorithms. The idea of the Multiply and Surrender strategy is to replace the given problem with two or more subproblems, each only slightly simpler than the original, and then to keep multiplying the subproblems recursively for as long as possible. At some point all the subproblems will be so simple that their solution can no longer be postponed, and at that point we will be forced to surrender. Experience shows that, in most cases, by that time the total amount of work wasted will be considerably greater than if we had used a more direct approach.
To better understand the Multiply and Surrender strategy, let us follow, step by step, the development of the Slowsort algorithm. We can divide the problem of sorting the numbers A1, A2, ..., An in ascending order into two subproblems: (1) find the maximum of these numbers, (2) sort the rest. Subproblem (1) can be further divided into: (1.1) find the maximum of the first ⌈n/2⌉ elements, (1.2) find the maximum of the last ⌊n/2⌋ elements, (1.3) find the larger of these two values. Finally, subproblems (1.1) and (1.2) can be solved by sorting the corresponding elements and taking the last (that is, the maximum) element of the sorted array. Thus we have multiplied the original problem into three only slightly simpler subproblems (plus overhead): sort the first half, sort the second half, sort all the elements except one. We continue doing this recursively until all the multiplied arrays contain at most one element each. At that point we have to surrender.
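(The listing itself is missing from this reprint; the sketch below is our reconstruction, which simply follows the derivation above: sort each half, move the larger of the two maxima to the last position, then sort everything except the maximum.)

    def slowsort(a, i, j):
        # Multiply and Surrender: sort a[i..j] (inclusive bounds) in place.
        if i >= j:
            return                       # at most one element: time to surrender
        m = (i + j) // 2
        slowsort(a, i, m)                # (1.1) sort the first half
        slowsort(a, m + 1, j)            # (1.2) sort the second half
        if a[m] > a[j]:                  # (1.3) the larger of the two maxima...
            a[m], a[j] = a[j], a[m]      # ...goes to the last position
        slowsort(a, i, j - 1)            # (2) sort everything except the maximum

A call slowsort(A, 0, len(A) - 1) sorts the whole array in ascending order.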

The recursive definition of the running time of Slowsort will look familiar to those who have read Section 3. In general, we obtain T(n) = 2T(n/2) + T(n − 1).
The Hamming distance between this formula and the well-known Merge Sort recurrence T(n) = 2T(n/2) + cn is only 5, but a simple argument about finite differences shows that this is enough for the former to have no polynomially bounded solution. In fact, it can be shown that the solution satisfies the following inequalities [1]:
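(The display is lost here; reassembled from the two bounds quoted at the end of the proof, it must have read:)

    Ca · n^(log n/(2+ε))  ≤  T(n)  ≤  Cb · n^((log n)/2)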

for any fixed ε > 0 and some constants Ca and Cb. The idea of the proof (and we were told that our article would be published only if it contained at least one proof) is to assume that T(n) = C1 n^(C2 ln n) for suitable constants. Then
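(The computation that followed is lost; the key point, filled in by us, is the identity n^(C2 ln n) = n^((C2 ln 2) log n), so that the two choices of C2 below give exactly the exponents (log n)/2 and (log n)/(2+ε).)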

————————
[1] We use “log” for log base 2 and “ln” for natural logarithms.
Setting C2 = 1/(2 ln 2), we get T(n) ≤ Cb n^((log n)/2), and setting C2 = 1/((2+ε) ln 2), we get T(n) ≥ Ca n^(log n/(2+ε)) for sufficiently large n. (The constants Ca and Cb are merely fudge factors needed to get the induction started.) In the true spirit of leisurely algorithmics, the authors will provide a more detailed proof in the near future, in the form of scanned images of punched cards containing the proof coded in EQN, written on 7-track EBCDIC tapes with odd parity.
As for practical applications, Slowsort is obviously the most suitable algorithm if your boss suddenly sends you to Paris to sort something out there. One of the nice features of this algorithm is that the number of inversions in A never increases. So, in a certain sense (namely, the sense in which you are in Paris with all expenses covered), Slowsort never takes a wrong step.
6. Conclusions and unsolved problems
For a long time Computer Science studied only the worst-case and average-case behaviour of algorithms. In this publication we try, for the first time, to correct this obvious discrimination against the study of best cases, and we can only hope that others will follow us.
Our analysis of Slowsort has led us to the following conjecture, which we call the Increasing Hypothesis (IH): if the complexity of a problem is O(gf), where g and f are functions of the length of the input and f = o(g), then the complexity of that problem is Θ(g^f).
The Augmented Increasing Hypothesis (AIH) states that if the complexity of a problem is O(g + f), then its complexity is Θ(g^f). Obviously, the AIH implies the IH.
Proving or disproving the IH is one of the main tasks of Complexity Theory. However, we have to end with the sad observation that, most likely, the IH cannot be proved because of the well-known incompleteness of Peano arithmetic.
Acknowledgements
We would like to thank Ed Lazowska and Lyle Ramshaw for their unhurried help.
References
[Bentley] J. Bentley, Programming Pearls, CACM 27 (1984), 287-291.
[Wagner] R. Wagner, Tannhäuser (French and German libretto), L'avant-scène, Paris, 1984.
[Vern1] J. Verne, Journey to the Center of the Earth (English translation), Heritage Press, 1966.
[Vern2] J. Verne, Around the World in 80 Days (English translation), International Collectors Library, 1956.
[Homer] Homer, The Iliad and The Odyssey, translated by Samuel Butler, Encyclopaedia Britannica, Chicago, 1952.
[SWFCG] G. Steele et al., The Hacker's Dictionary, Harper and Row, 1983.
Note
This article was published in ACM SIGACT NEWS, 16(3):49-53, 1984. It was later retyped from a bad photocopy by Gordon Lack in 2000, and at the end of 2013 it was translated into Russian by Valentin Simonov (valyard).
Unfortunately, on careful reading, the backwards search algorithm appears to be broken. Perhaps some details of the algorithm were lost in reprinting, and we will never know what the authors meant. Or perhaps it is a deliberate test of attentiveness, entirely in keeping with the general spirit of the publication.
In any case, in places this text made me laugh for a very long time, and I hope the translation gives you as much pleasure.