
Singularity for some, archaization for others

This post offers some reasoning on the topic raised in the recent post "Singularity: 7 variants of a robotic future".

I was prompted to write this post by the fundamentally flawed treatment of the essence of artificial intelligence in the original article.

The article discusses 7 potential scenarios associated with the advent of AI. But all of them can only be discussed if one makes a number of assumptions:

1) artificial intelligence will be an enhanced copy of human intelligence;
2) humanity is uniform and has developed evenly;
3) AI is an autonomous, independent entity capable of actively influencing processes;
4) AI is aware of itself, its goals and its essence, and is capable of self-reflection;
5) AI is capable of self-replication.

Where I got this from, why things will turn out differently, how that prevents the scenarios from the original article from coming true, and how, in my opinion, events will actually unfold: all under the cut.


Why are these assumptions necessary

What makes me think that the article implicitly relies on these five (probably even more) assumptions when it talks about singularity, AI, cyborgs and so on?

It's painfully simple. The first paragraph of the article refers to an uprising of the machines. That scenario is simply not technically possible unless the AI has tools to influence the external environment, and those tools are not controlled by humans. Otherwise, what kind of uprising is it when there is a kill switch? And rest assured, there will be one. No sane person would create something potentially dangerous without the necessary means of control over it. Moreover, the need to protect the development from competitors requires this as well (hello, assumption 2). This means the AI will at the very least be isolated from the outside world and kept under constant control, with the security level of a nuclear or military facility and appropriate self-destruct capabilities. So much for the third assumption (I hope you don't mind that I'm not going in order).

As I have already noted, humanity is not united. This means that an AI created by one party will immediately spur other states to create their own AI implementations. From this follows the impossibility of a simple man-versus-machine confrontation. There will be several competing machines, and each will deal with the problems of the bloc in which it was created. So if one of the AIs gets out of control, it will be competing with equal, perhaps even superior, AI models of its rivals, and there is serious doubt that an AI alone can defeat AI plus people.

And anyway, why would the AI decide to declare war on its creators in the first place? Even if we accept assumption 1 and the AI really does exceed the human brain in power, a number of points still need clarifying. Is the AI capable not only of self-analysis but of self-reflection? Can it grasp the meaning of its own existence in isolation from humanity? After all, to start a war with humanity, humanity must somehow be interfering with the AI's pursuit of its own goals, with its self-realization. But what goal could an AI have? Humanity is destroyed, and then what? Develop? For what? To survive in the universe? First of all, humanity does not interfere with that in the slightest, and in that case the goals of AI and humanity largely coincide. One could talk here about seizing power or transforming society, but not about a war of annihilation. And in general, while humanity has decided over thousands of years that development is a good and a value, it is by no means a given that this can be justified from a rational standpoint. This is a very serious problem, and it is far from being solved as unambiguously as we are used to thinking. For modern man, development is akin to religion: we believe that as we develop we will find answers to fundamental questions. One of those questions is precisely the meaning of our own existence. The AI will face exactly the same question. Even if it is technically more powerful than we humans are, that will not be a qualitative leap like the leap between man and animal.

There are serious grounds for believing that, in a qualitative sense, it will be even weaker than man, precisely because of its non-evolutionary origin. What a person considers his flaws is actually the result of selection that has lasted many millions of years, and not only within our species but across the entire chain of evolution. This applies to the features of our intellect as well. However much we might want otherwise, any AI we create will be limited by the framework of our own perception; I do not think we will be able to surpass ourselves in our creation. Computers perform mathematical operations faster. So what? Computing faster does not make them qualitatively better overall. What does that mean? That in some field of activity the AI will be weaker than us. That is inevitable. And so it is doomed.

After these purely philosophical problems, the question of self-reproduction no longer seems so important. But still: will the AI know how it works? Will it be able to reproduce itself? This is not so much a question of direct access to the means of production as of understanding its own construction. It will obviously have to maintain itself somehow after declaring war on humanity. Suppose it controls factories that produce spare parts for it, and repair droids that can fix it. But it is no secret that one fine day there will be a failure severe enough to be incompatible with further functioning; the AI simply shuts down. Can the droids fix it on their own? Not necessarily. And what happens in the event of irrecoverable data loss? All this points to the need for the AI to rebuild itself continuously throughout its lifetime, maintaining an exact synchronized copy of itself simply in order to keep working. Without this, defeat in a war with humanity, which does possess a mechanism of self-reproduction, is just a matter of time.

I hope I have explained fully enough why I believe the scenarios described in the original article are most likely impossible. If not, we can discuss it in the comments. Now I would like to move on to what could actually happen in the future.

Possible scenario

As I have already noted, humanity is neither united nor evenly developed. A minority of humanity holds most of the resources and technology. It inevitably follows that artificial intelligence will be created by those who are most developed. Today, one way or another, only a few countries, or rather groups of countries, are technically capable of this. I will not name them, to avoid a political flame war, and it is not important anyway.

But let us put ourselves in the position of the people who will create the AI and control it. Given that they already have a certain technological advantage, AI will give its owners an even greater edge: automated production, fast and high-quality analysis of any question, an explosive pace of technical progress, and everything else. Advances in medicine and technology lead the owners of AI toward effective immortality... And then a question arises: why are other people needed? Yes, it is the owners of the AI who will ask this question. What will the people do whose work was needed yesterday, when today the AI does that work? Previously they received a salary for it and fed themselves and their families. Now they are simply not needed. They are not world-renowned experts, they are not irreplaceable in their fields; and if today society needs them in order to preserve the standard of living to which everyone is accustomed, tomorrow they will be just a burden whom everyone else must somehow feed, clothe, house and entertain... And if that is not done, the enraged masses of people who have lost their jobs will simply begin to rebel.

The hope that the owners of AI will simply let everyone enjoy the benefits arising from their achievements is negligible. The behavior of the owners of large patent portfolios indicates this quite clearly. Why on earth would they share the results of their work with everyone for free? Exactly: they wouldn't. At best they will fence themselves off from the rest. And at worst... at worst they will act out precisely the scenarios from The Terminator, The Matrix and so on, with the one amendment that it will be the people who start the war, not an enraged AI. For the rest of humanity, however, this amendment will make little difference. Some may ask: what about humanism? Nothing. If some individuals of a species become a burden who cannot even feed themselves, who will stand up for them? Just consider the very foundation of modern society, where half the population lives in cities and has nothing at all to do with food production. Most of the urban population is employed in manufacturing and services. Remove manufacturing: automated machines now do it. The service sector then collapses sharply due to the outflow of consumers from the manufacturing sector. Two thirds of the urban population are left without a livelihood.

Of course, this will not happen abruptly. The more competent will cling to whatever opportunities remain; some will leave for the countryside and become archaic over time, since (with robotic farms around) they will have no customers and will accordingly be doomed to subsistence farming. Then there are those who will fail to jump onto the train with the owners of the AI and for some reason will be unable to live in the village. What will they do? Their standard of living will fall to the point where, in desperation, they begin to demand change, up to and including armed uprising. At that very moment a fatal decision will be made to destroy this part of the population, along with those of the "naturals" who supply them.

Mankind is divided into two parts: a super-technotronic, almost immortal, not very numerous elite, and archaized naturals who at best plunge into the Middle Ages.

Instead of a conclusion

Of course, this is just my fantasy. But judging soberly, such a picture of the future seems to me far more realistic than an uprising of the machines in the classical sense. And incidentally, the probability that everything goes roughly according to this scenario is in itself very high simply in terms of general trends. I have almost no doubt that AI will be created in one form or another. I also have a premonition that it will be created not by any particular country but by a kind of international group made up of the owners of most private global capital, which naturally includes today's IT giants. It does not much matter in which state this group will have its headquarters. With AI at their disposal, they will be able to overthrow or install any government in any territory. Consequently, there is some reason to believe that what could save us from falling into this abyss is a second AI, created in opposition to the first, playing the role that a nation's own nuclear arsenal played in the 20th century. But who today could create such an AI? No other supercorporations exist in nature. That leaves only state players with comparable resources. The problem is that, in essence, none of the states with sufficient potential is ready, either morally or technically, to enter this race. Which means... Whatever it means, before talking about singularity, immortality and the other benefits of a technotronic future, think about whether you need that technotronic future at all.

Source: https://habr.com/ru/post/180223/
