In the remarkable novel by Arkady and Boris Strugatsky, "Monday Begins on Saturday", there is the following dialogue:
"My dear fellows," said Fyodor Simeonovich anxiously, deciphering the handwriting. "This is Ben Bezalel's problem. Cagliostro proved that it has no solution."
"We know ourselves that it has no solution," said Junta, bristling at once. "We want to know how to solve it."
"How strangely you reason, Cristo... How can you look for a solution when there is none? It makes no sense..."
"Excuse me, Theodore, but it is you who reason strangely. It makes no sense to look for a solution if it already exists. The question is how to deal with a problem that has no solution. This is a deeply fundamental question which, as I see, is not accessible to you, an applied scientist."
Solving unsolvable problems is not just an artistic metaphor. The only point to dispute here is that the question is purely fundamental: it is no less important in practice. Paradoxically, the entire field of artificial intelligence is, in essence, devoted to solving unsolvable and ill-posed problems. But what are these problems?
The first results on the unsolvability of certain mathematical problems were obtained by Gödel in 1931. The best known are Gödel's two incompleteness theorems for formal arithmetic. Their essence is as follows: if we take a consistent set of axioms describing the properties of numbers and the arithmetic operations on them, there will be statements about numbers that can be neither proved nor disproved from the chosen axioms (such statements are called underivable). Moreover, some of them are quite natural statements that a human regards as true. To "prove" such statements, we have to introduce new axioms. But whatever formal system we take, there will always be statements about the objects of that system whose truth cannot be established within the system itself. A human, in some "mystical" way, determines whether such statements should be considered true or false, and can extend the formal system so as to obtain the desired result. What should this mean? Some thinkers conclude that thinking, especially its creative component, cannot be formalized. And since the Turing machine is also a formal system, they believe that thinking cannot be implemented on a computer.
Indeed, some problems have no complete algorithmic solution. The most widely known is the halting problem: determine whether a given program halts on given input data, or in some cases "loops" (runs forever). It has been proved that no algorithm exists that can solve the halting problem for an arbitrary program; in other words, the halting problem is algorithmically unsolvable. The importance of the halting problem stems from the convention that an algorithm is considered to have solved a problem if it has halted with the contents of the tape corresponding to the answer. If no halt occurs, the problem is considered unsolved, since it is unclear at what moment the solution appears on the tape. An algorithm that solves the halting problem must itself halt, and the proof of the unsolvability of the halting problem rests on this. The proof uses an analogue of the liar paradox, which arises when a statement refers to itself: for example, is the liar's statement "I always lie, even now" false? How can the same trick be applied to the halting problem?
Consider the idea of proving the unsolvability of the halting problem by contradiction. Suppose it is solvable: there is an algorithm T that takes some program as input and prints 0 if the program loops and 1 if it halts. Now construct a program P that simply calls the algorithm T, passing it its own description as input, T(P); if T returns 0, P halts, and if T returns 1, P deliberately "loops" by entering an infinite loop. For any algorithm T such a program P is easy to write, so it certainly exists. But what happens if we run it? Will it loop or halt? If P halts, then T, which supposedly solves the halting problem for any program, must return 1 from the call T(P), but then P should loop. If P loops, then T(P) should return 0, but then P should halt. We arrive at a contradiction, so the initial assumption that the algorithm T exists is false.
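The diagonal argument above can be sketched in code. This is only an illustration: the decider `halts` and the self-referential program `P_source` are hypothetical names introduced here, and the stub deliberately cannot be implemented, which is precisely the point of the proof.

```python
def halts(program_source: str) -> bool:
    """Hypothetical decider T: True if the program halts, False if it loops.
    The proof shows no such total function can exist; this stub only
    marks the place where it would have to go."""
    raise NotImplementedError("provably impossible for all programs")

# The contrarian program P, written out as source text. It halts exactly
# when T claims it loops, and loops exactly when T claims it halts,
# refuting any answer T could give about it.
P_source = '''
if halts(P_source):   # T says P halts ...
    while True:       # ... so P deliberately loops
        pass
else:                 # T says P loops ...
    pass              # ... so P halts immediately
'''
```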
There are many other algorithmically unsolvable problems. In particular, Hilbert's tenth problem, on Diophantine equations, is algorithmically unsolvable; the proof was completed in 1970 by the Soviet mathematician Yuri Vladimirovich Matiyasevich. It turns out that if a solution exists, it can be found in a finite (though unbounded) number of steps, but if there is no solution, no algorithm can establish this in the general case.
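This asymmetry, where a solution is findable but its absence is not certifiable, is easy to demonstrate. The following sketch (the function name and search strategy are my own, not from the original) enumerates integer tuples in growing boxes: if a root of the polynomial exists it is reached after finitely many steps, while for an equation with no integer roots the loop simply never returns.

```python
from itertools import count, product

def diophantine_search(poly, n_vars):
    """Semi-decision procedure for poly(...) == 0 over the integers.
    Enumerates all integer tuples in boxes [-b, b]^n_vars for b = 0, 1, 2, ...
    Finds a solution in finite time if one exists; otherwise runs forever,
    and by Matiyasevich's theorem no algorithm can always detect that case."""
    for bound in count(0):
        for point in product(range(-bound, bound + 1), repeat=n_vars):
            if poly(*point) == 0:
                return point
    # unreachable when no solution exists: the search never terminates

# x^2 + y^2 = 25 has integer solutions, so the search halts:
print(diophantine_search(lambda x, y: x * x + y * y - 25, 2))
```

Calling it on, say, `x*x - 2` (no integer root of 2 exists, but the search cannot know that) would loop forever, which is exactly the unprovable "no" side of the problem.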
Solutions to some problems have not yet been found, and it is not known whether those problems are unsolvable. Sometimes they have surprisingly simple formulations. One of the oldest is Euler's problem from the mid-eighteenth century: every even number no less than four can be represented as the sum of two primes (4 = 2 + 2, ... 18 = 5 + 13, ...). Euler formulated this problem in response to Goldbach's hypothesis, according to which every odd number no less than seven can be represented as the sum of three primes. Goldbach's hypothesis was proved not long ago, while Euler's problem remains open. This probably does not apply to Euler's problem, but some mathematical statements may be true yet unprovable (underivable) within a given axiomatics, which is exactly what Gödel's theorem says.
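The simplicity of the formulation is easy to see in code: checking any particular even number is a few lines (the helper names below are mine), while the open question is whether the search succeeds for *every* even number.

```python
def is_prime(n: int) -> bool:
    """Trial division, sufficient for small illustrative numbers."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n: int):
    """Return primes (p, q) with p + q == n, or None if no pair exists.
    For every even n >= 4 tested so far, a pair has always been found;
    whether one always exists is Euler's still-open problem."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

print(goldbach_pair(4))   # (2, 2)
print(goldbach_pair(18))  # (5, 13)
```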
Are the capabilities of algorithms really so limited, so that on the basis of computers only non-universal, weak AI can be created? Does this mean that thinking is genuinely non-algorithmic? To answer, several points must be taken into account.
The first point is that algorithmic problems are usually formulated as mass problems: problems that require a single algorithm for solving an infinite series of problems of the same type. Algorithmically unsolvable problems are "too mass": the halting problem, for example, demands a solution for every algorithm and every input, in finite time. Obviously, no human can solve the halting problem for arbitrary algorithms either; otherwise there would be no "hung" operating systems and other programs (and these are still relatively simple compared with the unimaginably complex programs contained in the set of "all algorithms"). A human does solve the halting problem, but with errors, whose probability grows with the complexity of the programs. The ability of a human to solve algorithmically unsolvable problems (as mass problems) is thus extremely doubtful, and the ability to find solutions in individual particular cases proves nothing, since a computer can do that too. Without this, grand claims about the fundamental limitations of computers compared with humans become poorly grounded.
Imagine yourself in a situation like the one the algorithm T finds itself in within our proof. Suppose there is an automaton that prints 0 if you say 1 and prints 1 if you say 0, and you are asked to say what the automaton will print, being allowed to answer only 0 or 1. Naturally, the automaton prints the opposite of what you say; you know this, but you can do nothing about it. Doesn't such a "task" seem like sheer mockery?
This experiment has an interesting practical counterpart: the awareness of many decisions occurs somewhat later (sometimes by more than half a second) than the decision itself arises in the brain. The simplest decisions (a choice between two alternatives, say, raising the right or the left hand) can be detected in the form of a readiness potential even before they become conscious. In fact, the moment of awareness does not coincide with the moment the decision is made; it is "postponed" until the action itself is performed, which is why we do not notice the delay between a conscious command and its execution by the body (patients with an impaired delay mechanism may feel that they are being controlled from outside). Experiments of this kind, first with electromyographs recording muscle biocurrents and later with electroencephalographs "reading" some information directly from the brain, have been carried out since the 1970s. Interestingly, the person is asked to choose for himself which hand to raise and at what moment, so free will is left to him. And nevertheless, the machine receiving the EEG signal learns of the subject's decision before he does. Now let one hand mean 0 and the other 1, and let the machine print the number opposite to the one the person chooses. If the subject has to guess what the machine will print, he will never be able to choose the right answer, even though the machine displays the answer almost a second before the choice appears in his consciousness. This problem is "humanly unsolvable."
But the algorithm T is in exactly this situation: information about its choice is read off, just as it is from the brain in the human example. You cannot choose the correct answer when the correct answer changes depending on the choice itself. Yet we demand that the algorithm T necessarily print 0 or 1, and only the program P was given to it as input. Some other algorithm, "seeing" that the program under analysis calls it on itself, could print something else entirely, say, 42. Even if the algorithm T were superintelligent and fully understood the situation, it could only describe to the experimenters everything it thinks of them; it could not give a correct answer about whether the program P loops. And the point here is hardly the limitations of the concept of algorithm. Indeed, if we suppose that some non-algorithmic device has been created that solves the halting problem, the above reasoning applies to it as well, simply by replacing the term "algorithm" with another, say, "non-algorithmic information-processing device".
In fact, Turing considered such hypothetical devices as early as 1939: the oracle machine. An oracle is understood as some entity capable of "computing" non-computable functions or solving algorithmically unsolvable problems. Turing showed that for such machines the problems formulated with respect to the machines themselves (for example, the halting problem for oracle machines) are likewise unsolvable. This recalls the ancient paradox: can an almighty god create a stone that he himself cannot lift? So even if super-Turing computations are physically realizable, unsolvable problems exist for them too. Apparently, something is wrong with the very way these problems are posed, if they can be solved neither algorithmically nor divinely.
It should also be borne in mind that proofs of the algorithmic undecidability of particular problems often work by reduction to the halting problem, which is unsolvable simply because no fixed algorithm can be more complex than itself. Nevertheless, the fact that "algorithmically unsolvable" problems exist remains; the question is only how to interpret it.
The second essential point is that executing computer programs, although they have the form of algorithmic processes, are not formal algorithms. There is an important difference between a real computer and a Turing machine: interaction with the outside world. For a Turing machine the contents of the tape are fixed in advance and changed only by the machine itself, whereas a computer may receive new data, and even new programs, while it runs. How does a human go beyond the bounds of formal systems; how, for example, does he know which statements underivable in formal arithmetic should be considered true and which false? The answer is simple: from experience. For a human, the "input tape" is potentially the entire Universe (on which the algorithms of thinking are "written" in some language, the brain, as a universal machine, merely executing them). The choice of axioms in mathematics is arbitrary, but the interesting axioms are those chosen to describe something in the real world. Dirac wrote: "...those principles that the mathematician finds interesting are just those chosen by Nature." But in fact mathematicians have nowhere to take their axioms and principles from except by generalizing experience. The harder question is what the algorithm of this generalization is, and whether it is an algorithm at all.
Computer-based artificial intelligence will be limited only if it is completely isolated from the world. Then it really will be a formal system, to which the result of Gödel's theorem applies, able to do only what we have put into it. But if for a physically realized algorithmic process the entire Universe is likewise its input tape, then the stated formulation of the halting problem is simply incorrect for it: the input tape of the software implementation of the algorithm T would contain the algorithm T itself, and much else besides. Moreover, the program P would not be able to access the working copy of the algorithm T, only to invoke another copy of it, thereby removing the paradox! In this case the conclusions from the unsolvability of the halting problem, like the conclusions from Gödel's theorem, would simply not apply to such a program. This by no means proves the algorithmic solvability of the halting problem, but it removes the difference between a computer and a human when they are placed in the same conditions.
The third point is the requirement that the algorithm halt. For classical algorithms it is entirely natural to demand halting with output, but it is absurd to demand of an artificial intelligence that it stop after a bounded time and produce some final result. Non-halting algorithms are more powerful than classical ones and can also be quite useful: such an "algorithm" can, for example, produce a result and keep working. Many scientists dislike the idea of non-halting algorithms, since the question remains of when the answer should be considered formed, so the properties of such algorithms are less studied in mathematics. But for them one can introduce the notion of "computability in the limit": if at some step an answer is formed that does not change during the subsequent work of the non-halting program, the problem is considered solved. It is, unfortunately, little known that the classical halting problem is solvable in the limit. Thus the non-halting nature of a potential artificial intelligence's algorithm is not a defect but an essentially necessary property: the complexity of non-halting algorithms can grow without bound, whereas algorithms required to halt within a number of steps fixed in advance (in the algorithm itself or in the input data) have, in fact, limited capabilities.
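Limit computability of the halting problem can be sketched directly (the function names and the step cap are mine, added to keep the demo finite): run the program under observation step by step, keep announcing a current guess of "loops", and switch to "halts" once the program stops. The guess may be wrong for a while, but it stabilizes on the correct answer, which is exactly what computability in the limit requires.

```python
def halts_in_the_limit(step, max_demo_steps=1000):
    """Limit-computable 'solution' of the halting problem: the current guess
    starts at 'loops' and changes at most once, to 'halts', after which it
    never changes again. The limit of the sequence of guesses is always
    correct, although no step is announced as the final one.
    `step()` advances the watched program one step; True means it finished.
    (max_demo_steps caps the genuinely unbounded run for demonstration.)"""
    guess = "loops"
    for _ in range(max_demo_steps):
        if step():
            guess = "halts"  # stable from now on: the limit answer
            break
    return guess

# A toy watched 'program' that halts after 10 steps:
counter = {"t": 0}
def toy_step():
    counter["t"] += 1
    return counter["t"] >= 10

print(halts_in_the_limit(toy_step))  # halts
```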
So the existence of algorithmically unsolvable problems and the limitations that Gödel's theorem imposes on formal systems are apparently not an indisputable argument against the possibility of implementing strong artificial intelligence as the physical realization of some algorithmic process, provided that process is non-halting and open.
There are other "arguments" against the algorithmic nature of thinking. For example, it is often said that the result of an algorithm is completely and unambiguously determined by the input data, whereas a human has "free will". However, determinism is not a key feature of the algorithm: there are probabilistic and non-deterministic Turing machines whose results are ambiguous, yet it has been established that they do not extend the notion of algorithm in the sense of the algorithmic solvability of problems. And although some source of chance may be important for realizing thinking, this does not refute the idea of algorithmicity. Besides, data coming into a program from the outside world carries considerable uncertainty.
More interesting is something else: the mechanical application of an algorithm we already know does not look like a manifestation of mental activity. If we do not know the algorithm for multiplying two numbers "in a column", multiplying each particular pair of numbers requires intellectual effort; but once the algorithm is known, multiplying any two numbers becomes a task requiring little intellect. Perhaps thinking is the process of constructing new algorithms in cases where the way to solve some mass problem is not yet known?
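The column-multiplication algorithm mentioned above, once written down, really is purely mechanical; every step is fixed in advance. A minimal sketch over digit strings (the function name is mine):

```python
def long_multiply(a: str, b: str) -> str:
    """Schoolbook 'column' multiplication of two non-negative integers
    given as digit strings: multiply digit by digit, carry, and sum the
    shifted partial results. No judgment is needed at any step."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        carry = 0
        for j, db in enumerate(reversed(b)):
            total = result[i + j] + int(da) * int(db) + carry
            result[i + j] = total % 10  # keep one digit in this column
            carry = total // 10         # carry the rest leftwards
        result[i + len(b)] += carry
    return "".join(map(str, reversed(result))).lstrip("0") or "0"

print(long_multiply("12", "34"))  # 408
```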
The search for an algorithm that solves the halting problem, and a number of other problems, in full generality is futile. But this does not prevent us from investigating algorithms that solve these problems under certain natural constraints. Often the search for such constraints is the main part of solving a real "unsolvable" problem; it is not for nothing that a correct statement of a problem is said to be half its solution. Although it is not yet clear how to implement approximate solutions of algorithmically unsolvable problems, at least Gödel's theorem imposes no restrictions on the possibility.