
Why does the comparison “behaves like a robot” imply “behaves like a fool”?

When we compare a person to a robot (or to an automaton), we mean that the person's behavior is described by a blind algorithm: they mechanically perform a set of actions without asking “why are these actions performed?” or “why is it expected that these actions will lead to the desired result?”

Almost all programs behave in this same mechanistic way. But there is another approach.

Algorithms can be divided into two large groups: purposeful and blind.
- A purposeful algorithm has an explicit goal that it tries to achieve.
- A blind algorithm does not explicitly set a goal for itself, but executing it still achieves the goal.

A homing missile uses a purposeful algorithm: the ultimate goal, hitting the target, is set explicitly and then pursued. A robot vacuum cleaner, by contrast, uses a blind algorithm to cover a room (it performs a series of straight and random movements), which in simple cases achieves the goal of covering the entire room.

Most software today is built on blind algorithms, while humans quite actively apply purposeful ones.

A purposeful algorithm essentially contains a feedback control loop:
1. sense the current state of the world;
2. compute the mismatch between the current state and the previously set goal;
3. select and execute the action that is expected to best reduce the mismatch.
Blind algorithms have no such control loop; they are based on executing an initially specified sequence of steps.
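The difference can be sketched in a few lines of code. This is a minimal hypothetical example (a cart moving along a line); the function names and the one-dimensional world are my own illustration, not from the original article:

```python
# Hypothetical 1-D world: move a cart from `position` to `goal`.

def blind_algorithm(position, steps):
    """Blind: execute a pre-recorded list of moves, never checking the world."""
    for step in steps:
        position += step
    return position

def purposeful_algorithm(position, goal, sense, max_iters=100):
    """Purposeful: a feedback loop that senses, computes the mismatch, acts."""
    for _ in range(max_iters):
        state = sense(position)                 # 1. sense the current state
        mismatch = goal - state                 # 2. compute the mismatch
        if mismatch == 0:
            break
        position += 1 if mismatch > 0 else -1   # 3. act to reduce the mismatch
    return position

# The blind plan succeeds only if conditions match its built-in assumptions:
print(blind_algorithm(0, [1, 1, 1]))                  # 3
# The feedback loop reaches the goal from any starting point:
print(purposeful_algorithm(7, 3, sense=lambda p: p))  # 3
```

If the starting position changes, the blind plan lands somewhere else entirely, while the feedback loop still converges on the goal.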

Both groups have their pros and cons:
1. Blind algorithms
“+”: more economical; provide maximum performance.
“−”: stop achieving the goal after even a slight change in external conditions.
2. Purposeful algorithms
“−”: costly, because environmental data must be collected and analyzed.
“+”: achieve the goal over a wide range of changing external conditions.

Real complex algorithms have elements of both.

For example, chickens cannot solve the task “get the grain from behind a fence when the fence must be walked around to the left or to the right,” while dogs and rats cope with it easily, as do some of the more intelligent bird species.
Both use a purposeful algorithm, but chickens use a blind criterion to compute the mismatch: the straight-line distance to the food (which guarantees reaching the goal only in the absence of obstacles). More developed animals are more flexible here and use a more complex scheme: a purposeful reflexive algorithm with search.
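The chicken's failure mode can be reproduced on a toy grid. This is a hypothetical sketch of my own: a greedy walker that only accepts moves reducing the straight-line (Manhattan) distance, versus a breadth-first search that is willing to move away from the grain to get around the fence:

```python
from collections import deque

# Illustrative 3x3 grid: 'S' start, 'G' grain, '#' fence (passable from below).
GRID = ["S#G",
        ".#.",
        "..."]

def find(ch):
    for r, row in enumerate(GRID):
        if ch in row:
            return (r, row.index(ch))

def neighbours(pos):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        m = (pos[0] + dr, pos[1] + dc)
        if 0 <= m[0] < 3 and 0 <= m[1] < 3 and GRID[m[0]][m[1]] != '#':
            yield m

def greedy(start, goal):
    """Chicken-style: only take moves that shrink the straight-line mismatch."""
    pos, seen = start, {start}
    while pos != goal:
        dist = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        better = [m for m in neighbours(pos) if m not in seen and dist(m) < dist(pos)]
        if not better:
            return None          # no move reduces the mismatch: stuck at the fence
        pos = better[0]
        seen.add(pos)
    return pos

def bfs(start, goal):
    """Search: explore moves even when they first increase the distance."""
    frontier, seen = deque([start]), {start}
    while frontier:
        pos = frontier.popleft()
        if pos == goal:
            return pos
        for m in neighbours(pos):
            if m not in seen:
                seen.add(m)
                frontier.append(m)
    return None

print(greedy(find('S'), find('G')))  # None: the blind criterion gives up
print(bfs(find('S'), find('G')))     # (0, 2): the grain is reached
```

The greedy walker stalls immediately, because every legal first move increases the straight-line distance to the grain; the search happily detours below the fence.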

The purposeful reflexive algorithm adds a fourth step:
1. sense the current state of the world;
2. compute the mismatch between the current state and the previously set goal;
3. select and execute the action that is expected to best reduce the mismatch;
4. monitor the result of the action, comparing it with the expected effect; judge whether the action was effective, and if it had no effect, find the reason.

For example, when we take a step we expect to move through space. If in fact no movement occurred, we conclude that something is holding us, and therefore simply repeating the action “take a step” is pointless.
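The four-step reflexive loop, including the “stuck walker” situation, can be sketched as follows. The walker and the wall at position 5 are hypothetical illustrations, not from the source:

```python
# Hypothetical walker: step 4 checks whether the action had its expected effect.

def reflexive_walk(position, goal, try_step, max_iters=100):
    for _ in range(max_iters):
        mismatch = goal - position            # steps 1-2: sense and compare
        if mismatch == 0:
            return position, "goal reached"
        step = 1 if mismatch > 0 else -1      # step 3: pick the best action
        new_position = try_step(position, step)
        if new_position == position:          # step 4: expected movement absent
            return position, "stuck: repeating the step is pointless"
        position = new_position
    return position, "gave up"

# A world where an invisible wall blocks everything past position 5:
def try_step(pos, step):
    return pos + step if pos + step <= 5 else pos

print(reflexive_walk(0, 3, try_step))   # (3, 'goal reached')
print(reflexive_walk(0, 9, try_step))   # (5, 'stuck: repeating the step is pointless')
```

A non-reflexive loop would keep issuing “take a step” against the wall until its iteration budget ran out; the reflexive one notices after a single wasted step that the action had no effect.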

The purposeful reflexive algorithm can be represented as a composition of two purposeful algorithms:
- the main purposeful algorithm, aimed at achieving the goal;
- an auxiliary purposeful algorithm, which tracks whether each completed step led to what was expected.
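One natural way to express this composition in code is as a wrapper: the auxiliary algorithm wraps any action of the main one and checks its effect. This is my own illustrative sketch; the names are hypothetical:

```python
# Sketch of the composition: a step monitor wrapped around any action
# of the main purposeful algorithm.

def with_step_monitor(act):
    """Wrap an action so each call compares its effect with the expectation."""
    def monitored(state, expected):
        new_state = act(state)
        if new_state == expected:
            return new_state, True    # the step did what the main loop expected
        return new_state, False       # mismatch: hand control to reflexion
    return monitored

step = with_step_monitor(lambda s: s + 1)
print(step(4, expected=5))   # (5, True)
print(step(4, expected=6))   # (5, False): the effect was not the expected one
```

The main loop stays unchanged; reflexion is layered on from outside, which is exactly what makes it possible to stack further reflexions on top.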

In other words, reflection about the method of achieving the goal is added to the main purposeful algorithm. Infinitely many such reflections can be added, for example:
- reflection on the adequacy of sensing the current state of the world;
- reflection on the adequacy of the description of the current state of the world;
- reflection on the adequacy of the description of the goal;
- reflection on the adequacy of the goal setting itself;
- reflection on the adequacy of the decomposition of the goal into subgoals;
- reflection on the assessment of the goal's attainability;
- reflection on the adequacy of reflection itself;
- and so on.

If you keep adding reflections to a purposeful algorithm, at some stage such an algorithm turns into consciousness. It is hard to say today at what stage this happens, or which reflections are necessary for consciousness to emerge, but it can already be said that programs with elements of consciousness should be built on purposeful algorithms, not on blind ones.

Source: https://habr.com/ru/post/182136/
