
Richard Hamming: Chapter 6. Artificial Intelligence - 1

“The goal of this course is to prepare you for your technical future.”

Hi, Habr. Remember the awesome article "You and Your Work" (+219, 2394 bookmarks, 380k reads)?

Well, Hamming (yes, the Hamming of the self-checking and self-correcting Hamming codes) has a whole book based on his lectures. We are translating it, because the man talks sense.

This book is not just about IT; it is a book about the thinking style of incredibly cool people. "This is not just a charge of positive thinking; it describes the conditions that increase the chances of doing great work."
We have already translated 19 (out of 30) chapters, and we are working on a paper edition.

Chapter 6. Artificial Intelligence - 1


(Thanks for the translation to Alexey Ivannikov, who responded to my call in the previous chapter. Anyone who wants to help with the translation, layout and publication of the book, write a PM or email magisterludi2016@yandex.ru)

Looking over the history of computer applications, let us consider the limits of machines not so much in terms of computational complexity as in terms of the classes of problems a computer can, or may never be able to, solve. Before going further, it must be recalled that computers manipulate symbols, not information; we cannot even say, let alone write, a program in terms of "information". We all believe we know what the word means, but careful reflection will convince you that, at best, "information" is a fuzzy concept that cannot be given a definition convertible into a program.

Although Babbage and Augusta Ada Lovelace made some conjectures about the computational limits of machines, real studies actually began in the late 1940s and early 1950s, in particular by Allen Newell and Herbert Simon at the RAND Corporation. For example, they investigated the solving of puzzles, such as the missionaries-and-cannibals problem*. Can a computer solve such puzzles? And how will it do it? They examined the approaches people use in solving such problems and tried to write a program that would reproduce the same behavior. One should not expect identical results, since in general no two people solve the puzzle through the same chain of reasoning in the same order, though the program did look for the same solution patterns. Rather than just solving the problem, they tried to model the methods people use and check the model's results against people's results.

(* The missionaries-and-cannibals puzzle is known in our culture as the puzzle about the wolf, the goat, and the cabbage.)
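The missionaries-and-cannibals puzzle itself is small enough for a computer to solve by brute-force search. Here is a minimal sketch in Python, a breadth-first search over states; note this is exhaustive search, which finds a shortest solution, and not the human-imitating approach Newell and Simon were after:

```python
from collections import deque

def safe(m, c):
    # A bank is safe when missionaries are absent or not outnumbered.
    return m == 0 or m >= c

def solve():
    # State: (missionaries on left bank, cannibals on left bank, boat on left?)
    start, goal = (3, 3, True), (0, 0, False)
    crossings = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # boat holds 1 or 2
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (m, c, boat), path = queue.popleft()
        if (m, c, boat) == goal:
            return path
        sign = -1 if boat else 1  # passengers leave the boat's bank
        for dm, dc in crossings:
            nm, nc = m + sign * dm, c + sign * dc
            state = (nm, nc, not boat)
            if (0 <= nm <= 3 and 0 <= nc <= 3 and state not in seen
                    and safe(nm, nc) and safe(3 - nm, 3 - nc)):
                seen.add(state)
                queue.append((state, path + [state]))

path = solve()
print(len(path) - 1)  # minimum number of crossings
```

The search reports the classic answer of 11 crossings. Newell and Simon's interest was precisely in why people do not search this way.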

They also began to develop the concept of the General Problem Solver (GPS). The idea was that to solve a problem with a computer you need a handful of universal problem-solving principles plus a set of particular details from the specific domain. Although the method did not work well, valuable by-products came out of it, such as list processing. After the initial investigation (which, it was expected, would greatly ease the programming task), the question was set aside for a decade. When they returned to it, about 50 universal principles were proposed. When that proposal did not work either, a decade later the number of universal principles reached 500. The idea is now known as rule-based logic; sometimes 5,000 rules are needed for a description, and, as I have heard, in some domains up to 50,000.
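Rule-based logic of this kind can be sketched as a tiny forward-chaining engine: a rule fires whenever its premises are among the known facts, and facts accumulate until nothing new can be derived. The two rules below are invented for illustration; real systems carry thousands:

```python
# A toy forward-chaining rule engine: each rule is (premises, conclusion);
# conclusions are added to the fact set until no rule adds anything new.
# These rules are invented for illustration, not drawn from GPS.
RULES = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "nests_in_trees"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"has_feathers", "can_fly"}, RULES)))
```

Starting from two facts, the engine derives the other two; with 50,000 rules the scheme is the same, only the bookkeeping gets harder.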

A whole field called Expert Systems has since developed. The idea is to interview the experts of a particular area of knowledge, compile a set of rules, feed these rules into a program, and obtain an expert program! One problem with this approach is that in many areas, especially in medicine, the recognized experts are in fact only slightly better than the beginners, a fact recorded in many studies! Another problem is that experts draw on their subconscious experience when making decisions. It has been found that about 10 years of intensive work are needed to become an expert. During this time, apparently, a great many patterns become fixed in the mind, and when solving a problem the expert, instead of reasoning step by step, makes an initial subconscious choice based on those patterns.

In some areas rule-based logic has shown impressive achievements, and in other, seemingly similar areas it has failed completely, which suggests that success depends on a good deal of luck. There is as yet no general rule, and no basic understanding, of when the rule-based approach will work, when it will not, and how successful it will be.

In Chapter 1 I already developed the theme that all our "knowledge" probably cannot be expressed in words (instructions); impossible in principle, not in the sense that we are too stupid or uneducated to do it. Some of the established features of expert systems clearly support this point.

After several years of development, the field studying the limits of the intellectual abilities of computers acquired the vague name artificial intelligence (AI), which has no single meaning. First, it is an answer to the question

Can machines think?


Although this is a narrower question than artificial intelligence as a whole, it identifies the problem more sharply and is more accessible to most people. The question matters to you: if you believe computers cannot think, you will not use computers fully in solving your problems; but if you believe computers obviously can think, you will surely make mistakes! Thus one cannot afford simply to believe or disbelieve; one must ask the more unpleasant question, "To what extent can machines think?"

Note that even this form of the question is not quite right; a more careful formulation is "Can we write a program for a von Neumann machine that will generate 'intelligence'?" The reason for asking it this way is the current trend toward neural networks, which may, or may not, be able to do what a digital computer cannot. We will discuss neural networks later, after reviewing more technical material.

The AI problem can also be stated as, "Which of the things a person can do can a computer do?" Or, in the form I prefer, "Which of the problems a person must solve can a computer take over or significantly ease?" Consider pacemakers: machines directly connected to the human nervous system that keep many people alive while automatically doing their routine job. People who say they do not want their life to depend on a machine conveniently forget this example. It seems to me that in the long run machines will contribute significantly to the quality of human life.

Why is the topic of artificial intelligence so important? Let me give a concrete example of the need for AI. Without going into details (one cannot, without first defining "thinking" and "machine"), I believe it highly probable that in the future we will have machines exploring the surface of Mars. The distance between Earth and Mars is so large that the round-trip signal time can be 20 minutes or more. Therefore such a machine must have reliable local control. Having passed between two rocks, made a slight turn, and felt the ground give way under its front wheels, the machine needs a quick, "intelligent" response to the ambiguity. In other words, obvious approaches such as falling back on a stored program, combined with the lack of time to get instructions from Earth, will not be enough to prevent destruction; some degree of "intelligence" must be programmed into the machine.

This is not the only example; the task becomes ever more typical as computers are used more widely in high-speed situations. Here one often cannot rely on a person, in part because of boredom, which strongly affects people. Piloting an aircraft is said to consist of hours of boredom and a few seconds of sheer panic; that is not what people are made for, though they manage. Reaction speed is also essential. Modern fast airplanes are largely unstable, and computers have been installed to stabilize them millisecond by millisecond; no pilot can provide such accuracy. The human can only set the overall plan of control and leave the details to the machine.

Earlier I noted the need to understand, at least roughly, what we mean by the terms "machine" and "thinking". We discussed these terms at Bell Telephone Laboratories in the early 1940s. Some argued that a machine could contain no organic parts, and I objected that this definition would exclude wooden parts! That definition was dropped, and to sharpen matters I proposed a thought experiment: extract the nervous system from a frog while keeping it alive. If it could be adapted as a storage device, would the result be a machine or not? If it were an addressable storage device, on what grounds would we deny that it is a "machine"?

In the same discussions an engineer with a Jesuit education** defined the matter thus: "Thinking is what humans can do and machines cannot." Apparently this definition satisfied everyone. But do you like it? Is it true? We observed that, comparing the current level of machines with the foreseeable improvements in machines and programming techniques, the gap between people and machines keeps shrinking.

(** Translator's note: the original says "Jesuit trained engineer", probably meaning a person educated at a Jesuit high school (https://www.jesuittampa.org/page.cfm?p=1) or a similar institution.)

Undoubtedly the term "thinking" needs clarifying. For most people, defining thinking as an ability humans have and stones, trees, and similar objects lack would be acceptable. Opinions differ on whether the higher animals should be included. One might go the other way and say "Thinking is what Newton and Einstein did," but by that standard most of us do not think, and that certainly does not suit us! Turing, sidestepping the question itself, argued that if you put a person at the end of one teletype line and a programmed machine at the end of another, and the average person cannot tell them apart, that is proof of the "thinking" of the machine (program).

The Turing test is a popular approach, but it does not stand up to scientific scrutiny, where one prefers to settle simpler questions before complex ones. Early on I asked myself, "What is the smallest program that can think?" To sharpen the question: if you cut it into two parts, neither part alone can think. I pondered this every night before sleep, and after about a year of such attempts I concluded it was the wrong question! Perhaps "thinking" is not a yes-or-no matter but one of degree.

Let me digress a little and recall some facts from the history of chemistry. For a long time it was believed that organic compounds could arise only from living things; this was the vitalistic view of animate matter as opposed to inanimate things such as stones and rocks. But in 1828 the chemist Wöhler synthesized urea, a standard by-product of the human body, and the laboratory synthesis of organic compounds began. Even so, into the late 1850s most chemists still held the vitalistic theory that organic compounds could come only from living things. As you know, chemists have since swung to the diametrically opposite opinion, that any compound can in principle be made in the laboratory; of course there is, and can be, no proof of this. The opinion grew out of successes in synthesis: chemists see no reason why they could not synthesize any substance that occurs in Nature. Chemistry thus passed from the vitalistic theory to the non-vitalistic one.

Unfortunately, religion has entered the discussion of thinking machines, and we now have, in effect, vitalistic and non-vitalistic theories of "machine versus man." The Christian Bible says God created man in His own image and likeness. If we can create a machine in our own image, then in a sense we become godlike, and that is a bit unsettling! Most religions in one form or another hold that a person is more than a collection of molecules; indeed, that man differs from the animals by having a soul, among other properties. As for the soul: in the late Middle Ages some investigators, wishing to detect the soul leaving a dead body, put the corpse on a scale and watched for a sudden drop in weight, but they saw only the slow weight loss of decay. Apparently the soul, as they conceived it, has no material weight.

Even if you believe in evolution, there must then have been a moment when God, or the gods, stepped in and gave man the faculty that distinguishes him from all other living beings. This belief in an essential difference between man and the rest of the world leads people to insist that machines can never think as people do, at least not until we ourselves become godlike. People such as the Jesuit-trained engineer mentioned earlier define thinking as precisely what machines cannot do. As a rule such claims are not stated so baldly; they are hidden in a web of argument, but the import is the same!

To a classical physicist you are a collection of molecules in a field of radiant energy, and nothing more. The ancient Greek philosopher Democritus (born around 460 BC) said, "All is atoms and void." This is the position of the hard-AI researchers: there is no essential difference between machines and people, hence properly programmed machines can do everything people do. Their failures so far to reproduce human thinking in detail are attributed to programming errors, not to any essential limitation of the thesis.

The opposite approach to AI rests on the claim that we have self-awareness and self-consciousness, though we can offer no satisfactory test proving their existence. I can easily make a machine print "I have a soul," "I am self-aware," or "I have self-consciousness," and you would not be impressed by such statements from a machine. The same statements from a person carry more weight, because they agree with your own introspection and with the similarity between yourself and other people learned over a lifetime. But perhaps this is mere "racism" of man against machine: we are the superior beings!

Our reasoning about AI has reached something of a dead end; at this point one can assert almost anything, and for most people the assertions will carry no meaning. Let us turn instead to the actual successes and failures of AI.

AI researchers have always made extravagant and, in most cases, unfulfilled claims. In 1958 Newell and Simon predicted that within 10 years a computer program would be world chess champion. Unfortunately, like similar unrealized claims by AI researchers, the prediction was widely publicized. Still, remarkable results have been achieved.

I must digress again to explain the importance of games in AI research. The rules of a game, and what counts as a win or a loss, are perfectly explicit; in other words, games are, in a definite sense, well defined. This does not mean we actually want to teach machines to play games; rather, games give us a good, well-understood test bed for our ideas about AI.

From the very beginning chess has been considered a good test for AI, since playing chess, it is believed, unquestionably requires thinking. Shannon proposed a way of writing chess-playing programs (we speak of chess-playing computers, though in fact it is just a class of programs). At Los Alamos they achieved acceptable results on the primitive MANIAC computer, using a 6x6 board with the bishops removed. We will return to the history of chess computers a little later.

Let us see how to write a program for a simpler game, three-dimensional tic-tac-toe. We shall not consider the two-dimensional game, because its strategy is known and always forces a draw; you can never beat a careful player. Games with a known strategy, we feel, do not illustrate thinking.

In the 4x4x4 cube there are 64 cells in all, through which 76 straight lines can be drawn. A line wins when it is completely filled with your own marks. Note the 8 corner cells and the 8 central cells: more straight lines pass through each of them than through the other cells; in fact there is an inversion of the cube that exchanges the central cells with the corner cells while preserving all the straight lines.
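These counts are easy to verify by brute force. A short sketch that enumerates the winning lines of the 4x4x4 cube:

```python
from itertools import product

N = 4
lines = set()
for start in product(range(N), repeat=3):
    for step in product((-1, 0, 1), repeat=3):
        if step == (0, 0, 0):
            continue
        cells = [tuple(s + i * d for s, d in zip(start, step)) for i in range(N)]
        # Keep only runs of 4 that stay inside the cube; a frozenset
        # makes the two directions of each line count once.
        if all(0 <= v < N for cell in cells for v in cell):
            lines.add(frozenset(cells))

print(len(lines))  # 76 straight lines

# How many lines pass through each cell?
through = {cell: 0 for cell in product(range(N), repeat=3)}
for line in lines:
    for cell in line:
        through[cell] += 1
hot = [cell for cell, n in through.items() if n == max(through.values())]
print(max(through.values()), len(hot))  # 7 lines through each of 16 hot cells
```

The run confirms 76 lines, with 7 lines through each of the 16 "hot" cells (the 8 corners and the 8 central cells) and 4 through each of the rest.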

A program for three-dimensional 4x4x4 tic-tac-toe must first generate the legal moves. Among them it should prefer those that occupy the "hot" cells. It must choose at random among moves of equal value, because otherwise the opponent can work out the weaknesses of a deterministic algorithm and exploit them systematically in his strategy. Random choice among essentially equivalent moves is a central part of game-playing programs.

We formulate a set of rules that must be applied sequentially:

  1. If you have three marks in a line and the fourth cell is open, fill it and win.
  2. If there is no winning move and the opponent has three marks in a line, block it.
  3. If you can form a fork (see Fig. 6.1), take it: on the next move you can win, and the opponent cannot block both threats.
  4. If the opponent can form a fork, block it.


Figure 6.1
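The four numbered rules can be turned directly into a move chooser. Below is a hypothetical sketch (the board representation and helper functions are mine, not Hamming's); note the random choice among equally good moves, as discussed above:

```python
import random
from itertools import product

N = 4
# The 76 winning lines of the 4x4x4 board, found by brute force.
LINES = set()
for start in product(range(N), repeat=3):
    for step in product((-1, 0, 1), repeat=3):
        if step != (0, 0, 0):
            cells = [tuple(s + i * d for s, d in zip(start, step))
                     for i in range(N)]
            if all(0 <= v < N for cell in cells for v in cell):
                LINES.add(frozenset(cells))

def completing_moves(board, player):
    """Empty cells that finish a line already holding three of player's marks."""
    moves = set()
    for line in LINES:
        marks = [board.get(cell, 0) for cell in line]
        if marks.count(player) == 3 and marks.count(0) == 1:
            moves.update(c for c in line if board.get(c, 0) == 0)
    return list(moves)

def fork_moves(board, player):
    """Empty cells that would create two or more simultaneous threats."""
    forks = []
    for cell in product(range(N), repeat=3):
        if board.get(cell, 0) == 0:
            board[cell] = player
            if len(completing_moves(board, player)) >= 2:
                forks.append(cell)
            del board[cell]
    return forks

def choose_move(board, player):
    # Rules 1-4 in priority order, then a random empty cell
    # (a fuller program would prefer the "hot" cells here).
    for candidates in (completing_moves(board, player),    # 1. win now
                       completing_moves(board, -player),   # 2. block a win
                       fork_moves(board, player),          # 3. make a fork
                       fork_moves(board, -player)):        # 4. block a fork
        if candidates:
            return random.choice(candidates)
    empty = [c for c in product(range(N), repeat=3) if board.get(c, 0) == 0]
    return random.choice(empty)

# Player 1 has three marks on one line: rule 1 finds the winning cell.
board = {(0, 0, 0): 1, (1, 0, 0): 1, (2, 0, 0): 1}
print(choose_move(board, 1))  # (3, 0, 0)
```

The same position seen from the other side exercises rule 2: `choose_move(board, -1)` blocks at the same cell.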

Beyond these there appear to be no further specific rules from which the next move can be computed. So you begin to look for "forcing moves," which keep the initiative and build toward a winning combination. Thus with two of your marks in a line you take a third, and the opponent is forced to block (but you must take care that his blocking cell does not give him three in a line, forcing you to defend in turn). Through a sequence of forcing moves you can create a fork, and then you win! But all these rules are vague. It seems that if you make forcing moves through the "important" cells and keep the opponent defending, you will lead the game, though with no guarantee of victory. If you lose the initiative in a sequence of forcing moves, the opponent will likely seize it, put you on the defensive, and improve his chances of winning. A very interesting question is the timing of the attack: too early and you may lose the initiative; too late and the opponent starts first and wins. As far as I know, no exact rule for choosing the moment of attack can be given.

The algorithm of such a game program has several stages. The program must first check that a move is legal; then it usually applies the more or less formal rules; and then comes the stage of the vague rules. Thus the program contains many heuristics (heuristic: serving invention or discovery), moves that probably, but not certainly, lead toward victory.

In the early days of AI research, Arthur Samuel, then at IBM, wrote a checkers-playing program; checkers was believed to be easier to program than chess, which had become a real stumbling block. The evaluation formula he wrote included many terms, each with its own weight: control of the center of the board, piece advancement, kings, mobility, blocked pieces, and so on. Samuel made a copy of the program and slightly changed one (or more) of the weights. Then the program with one evaluation formula played against the program with the other, say ten times. The program that won the larger number of games was taken (with some risk of error) to be the better one. The computer varied one weight until it reached a local optimum, then moved on to the next weight. Run again and again in this fashion, the procedure gave the program a significantly better evaluation formula, definitely better than Samuel could have written himself. The program even beat a Connecticut checkers champion!
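Samuel's weight-tuning loop can be sketched schematically. Since a real checkers engine will not fit here, the `match()` below is an invented stand-in that merely lets the evaluation formula closer to a hidden "true" one win more often; everything else (the coordinate search, keeping the winner of each match) follows the scheme described above:

```python
import random

random.seed(0)  # reproducible run

# Hidden "true" importance of three board features (centre control,
# mobility, ...).  These numbers are invented; in Samuel's program the
# only feedback was games won or lost, which match() stands in for.
TRUE_WEIGHTS = [3.0, 1.0, 2.0]

def error(w):
    return sum((wi - ti) ** 2 for wi, ti in zip(w, TRUE_WEIGHTS))

def match(wa, wb, games=10):
    """Crude stand-in for ten games of checkers: the formula closer to
    the true weights wins each game with higher probability."""
    p = (error(wb) + 1e-9) / (error(wa) + error(wb) + 2e-9)
    wins = sum(random.random() < p for _ in range(games))
    return wins > games // 2

def tune(weights, step=0.5, rounds=200):
    """Samuel-style coordinate search: nudge one weight, play a match,
    keep whichever formula won, move on to the next weight."""
    weights = list(weights)
    for r in range(rounds):
        i = r % len(weights)
        challenger = list(weights)
        challenger[i] += random.choice((-step, step))
        if match(challenger, weights):
            weights = challenger
    return weights

tuned = tune([0.0, 0.0, 0.0])
print(tuned, error(tuned))
```

Starting from all-zero weights, the tuned formula ends up much closer to the hidden one; the program has, in exactly the sense discussed next, "learned from experience."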

Wouldn't it be fair to say that the program "learned from experience"?

Naturally, you object that a program taught the computer to learn. But doesn't a teacher install a similar program in students during a course on Euclidean geometry? Think carefully about what a geometry course really is. At the start the student cannot solve geometric problems; the teacher installs a program; and by the end of the course the student can solve such problems. If you deny that the machine learned from experience because another program (written by a person) taught it, isn't that exactly your own situation, except that at birth you start with more built in than a computer has when it leaves the assembly line? Are you sure you are not being programmed throughout life by the events that happen to you?

We are beginning to question the proper definition not only of the term "thinking" but of many related terms as well: machine, learning, information, ideas, decisions (indeed, the branch points of a program are often called decision points by programmers).


  1. Intro to The Art of Doing Science and Engineering: Learning to Learn (March 28, 1995)
  2. «Foundations of the Digital (Discrete) Revolution» (March 30, 1995)
  3. «History of Computers — Hardware» (March 31, 1995)
  4. «History of Computers — Software» (April 4, 1995)
  5. «History of Computers — Applications» (April 6, 1995)
  6. «Artificial Intelligence — Part I» (April 7, 1995) (this chapter)
  7. «Artificial Intelligence — Part II» (April 11, 1995)
  8. «Artificial Intelligence III» (April 13, 1995)
  9. «n-Dimensional Space» (April 14, 1995)
  10. «Coding Theory — The Representation of Information, Part I» (April 18, 1995)
  11. «Coding Theory — The Representation of Information, Part II» (April 20, 1995)
  12. «Error-Correcting Codes» (April 21, 1995)
  13. «Information Theory» (April 25, 1995)
  14. «Digital Filters, Part I» (April 27, 1995)
  15. «Digital Filters, Part II» (April 28, 1995)
  16. «Digital Filters, Part III» (May 2, 1995)
  17. «Digital Filters, Part IV» (May 4, 1995)
  18. «Simulation, Part I» (May 5, 1995)
  19. «Simulation, Part II» (May 9, 1995)
  20. «Simulation, Part III» (May 11, 1995)
  21. «Fiber Optics» (May 12, 1995)
  22. «Computer Aided Instruction» (May 16, 1995)
  23. «Mathematics» (May 18, 1995)
  24. «Quantum Mechanics» (May 19, 1995)
  25. «Creativity» (May 23, 1995)
  26. «Experts» (May 25, 1995)
  27. «Unreliable Data» (May 26, 1995)
  28. «Systems Engineering» (May 30, 1995)
  29. «You Get What You Measure» (June 1, 1995)
  30. «How Do We Know What We Know» (June 2, 1995)
  31. Hamming, «You and Your Research» (June 6, 1995)


Source: https://habr.com/ru/post/351404/

