I occasionally receive letters from people who, after reading Accelerando, assume that I am some kind of fire-breathing extropian zealot, convinced of the inevitability of the singularity and the uploading of consciousness, to the full delight of geeks everywhere. I find this a bit sad; it is probably time to dot the i's and explain what I actually think.
Short version:
Santa Claus does not exist.
Long version:
I assume that you have read Vernor Vinge's essay on the coming technological singularity (1993), are familiar with Hans Moravec's ideas about the uploading of consciousness, and know Nick Bostrom's simulation argument.
If not, stop reading right now and go study these concepts before proceeding. Otherwise you will not see the foundation on which the whole area of science fiction devoted to the singularity rests, not to mention posthumanism. It would also be useful to get acquainted with the concept of extropy and to read the posthumanism FAQ, otherwise you will miss the important social mission that posthumanism carries.
(By the way, let me say that I am not an extropian, although I was a visible participant in their online discussions from the early 1990s. I am definitely not a libertarian: economic libertarianism rests on the same limited picture of people as rational economic agents as nineteenth-century classical economics, which is a radical oversimplification of human behavior. Like communism, libertarianism is a superficially comprehensive theory of human behavior built on flawed premises. If this model were implemented, it would lead either to failure or to a hellishly unpleasant state of post-industrial feudalism.)
Anyway…
I cannot prove that there will not be a sharp technological takeoff leading to a singularity, in which an artificial intelligence at least the equal of a human rapidly assumes the functions of a de facto god. Nor can I prove that the uploading of consciousness is impossible, or that we are not living in a historical "matrix" run by our descendants. All these claims would require me to prove the impossibility of extremely complex events that no one has yet attempted to bring about.
However, I can make some estimates of the likelihood of these events, and their chances are not great.
First: superintelligent AI is unlikely, because on Vinge's scenario it would have to emerge from the gradual development of human-level AI, and human-level AI itself is unlikely to appear.
The reason is that the human mind is a phenomenon derived from human psychology. Intelligence passed the filter of evolution only because it somehow aided the survival of human beings; that was its main task, the reason for its emergence and persistence. Improving the survival rate of primates is not an appropriate task for a machine, nor for the people who, having spent so much time and effort developing such a machine, want to profit from it. We may well want machines that can understand and respond to our desires, but we are unlikely to enjoy digital systems that want to sleep 30% of the time, are lazy or emotionally unstable, and have certain desires of their own.
(I am not even touching the pile of problems around the ethical status of artificial intelligence. If we believe that intelligent life is defined by the presence of self-awareness, then before creating a sentient AI we need to ask: what rights will it have? Does it feel? Can the use of genetic algorithms to improve software components that, as a result of such evolution, acquire consciousness be considered genocide? There are powerful taboos on such things; perhaps this research will end up as over-regulated and legally constrained as research on human embryos is now. Perhaps it will be easier for society simply to ban research into the borderline states from which autonomous minds arise, so as not to accidentally open the door to inhumane treatment of them by people.)
We obviously like machines that do human work. We want computers that understand our language and our desires, that can grasp a task from half a hint rather than requiring detailed instructions in the form of a list of commands spelled out to the smallest detail. But whether we want them to have consciousness and will is an entirely different question. Personally, I don't want my autonomous car to argue with me about where we should go today. I don't want my household robot to sit in front of the TV all day watching contact sports or music videos. And I certainly don't want to be forced through the courts to pay maintenance for an abandoned software project.
Karl Schroeder offered one interesting solution to the problem of AI self-awareness, which I used in the novel "Rule 34". Consciousness resembles a mechanism for recursively modeling one's internal states. In most people that model maps reflexively onto the person himself, but some people with serious neurological damage (from cancer or injury) project their sense of self onto external objects. Or they may be convinced that they are dead, even while recognizing that their body is physically alive and functioning.
If the subject of consciousness is not fundamentally tied to the platform it runs on, but can be arbitrarily redirected, then we may want to build AIs whose attention is focused solely on the needs of the person to whom they are attached: in other words, their sense of self would represent us, not themselves. They would experience our desires as their own, without conflict with any goals of their own. Such an AI could accidentally endanger the life of the person it is in symbiosis with, but no more readily than an ordinary person risks suicide. And the likelihood that such a machine would try to bootstrap itself to a higher level of intelligence with different motivational parameters is about the same as the likelihood that your right hand will suddenly turn into a motorcycle and ride off to explore the world without you.
The digitization of consciousness (uploading) ... cannot be called obviously impossible unless you believe in mind-body dualism, according to which consciousness and matter (the physical body) are two complementary and equally fundamental substances. However, if we ever approach it in practice, we can expect furious theological disputes. If you think the public debate over abortion was heated, wait until the debate over digital immortality begins. Uploading implicitly contradicts the thesis of an immortal soul and is therefore material for refuting those religions that preach life after death. People who believe in an afterlife will defend to the bitter end the doctrine that says their deceased loved ones are in heaven rather than rotting in the ground.
But even if the digitization of consciousness is possible and someday begins, as Hans Moravec noted: "Exploration and colonization of the universe awaits us, but Earth-adapted humans are ill-equipped for the task ... Imagine that most of the inhabited universe has been converted to a computer network, a cyberspace, where such programs live side by side with uploaded human beings and the accompanying simulations of human bodies. A human would likely fare poorly in such a world. Unlike the streamlined AIs that zip about, making discoveries and deals, reconfiguring themselves as needed to handle data in new formats, a human mind locked in a body simulation could only watch, like a diver in a deep-sea suit plodding clumsily among a troupe of acrobatic dolphins. Every interaction with the world of data would first have to be rendered into some quasi-physical analog form before our quasi-physical being could accept it ... Such manipulations raise the cost of doing business, as does the machinery that reduces the physical simulations to mental abstractions in the uploaded person's mind. Though a few humans may find a niche creating unique art with the help of the human senses, others will feel pressure to streamline their interfaces" ("Pigs in Cyberspace", 1993).
Our type of conscious mind emerged through evolution, which in turn was shaped by the conditions of the biological environment. We are not adapted to exist as disembodied beings, and we ignore Edward O. Wilson's biophilia hypothesis at our own risk. I strongly suspect that the hardest part of uploading will not be the consciousness itself, but the body and its interaction with the outside world.
Moving on to the simulation argument: I cannot disprove it either. And it has a deeply attractive aspect, because it promises an afterlife even to the godless, once the ethical problems of building historical simulators are ignored. (Can the creation of a computer "matrix" of an entire world and its conscious inhabitants be considered genocide, if those inhabitants are made participants in acts of genocide?) Leaving aside the vague suspicion that anyone who gains the ability to build a historical simulator will start making people as primitive as himself, this concept could serve as the basis for a postmodern high-tech religion. Unfortunately, it is an unfalsifiable hypothesis, at least for the population of the hypothetical "matrix" (that is, us).
So, the conclusions ...
This is my view of the singularity: neither a fast takeoff nor a slow one, nor any exponential growth driven by the advent of AI, awaits us. What awaits us is ever more helpful machines that define our environment: machines that sense and respond to our needs "intelligently". But it will be the intelligence of the servant, not the master, and any threat to us will arise only from our own self-destructive impulses.
We may someday see the digitization of consciousness, but a genuine holy war will break out before uploading reaches the masses: it amounts, after all, to an overthrow of the religions. That could be a singularity of sorts, but even once it becomes possible to run Nozick's experience machine in practice, I am not sure we would actually take the pleasures on offer: innate biophilia will pull us back to the real world, or to a "matrix" model indistinguishable from the real world.
In the end, the "matrix" hypothesis itself assumes that if we already live in a cybernetic historical simulator (and not in the philosopher's hedonistic thought experiment), we may be unable to perceive the "real" reality beyond the simulator. Indeed, there may be no passage between that reality and this one. In any case, we can prove nothing here, unless the designers of the historical simulator are kind enough to grant us an afterlife.
Thus, in short, these three ideas give insufficient grounds for hoping for a happy future, especially if they turn out to be wrong or impossible (the null hypothesis). So I conclude: even without ruling these hypotheses out, it is unreasonable to assume that they will become reality within my lifetime.
I am done with computational theology. I think we can drink now!
About the author: Charles Stross is a British science fiction writer. Winner of the Hugo (2005, 2010), Locus (2006, 2007), Sidewise (2006), Prometheus (2007), and Skylark (2008) awards. He holds degrees in pharmacy and computer science.