
A prerequisite for the onset of the technological singularity is the creation of "strong artificial intelligence" (artificial superintelligence, ASI) that can independently modify and improve itself. An important question is whether such an AI must work like a human mind, or at least run on a platform designed like a brain.
Animal brains (including the human brain) and computers work differently. The brain is a three-dimensional network optimized for parallel processing of huge amounts of data, while today's computers process information serially, although millions of times faster than brains do. Microprocessors can perform calculations with speed and efficiency far beyond the capabilities of the human brain, but they use a completely different approach to information processing. Traditional processors cope poorly with the parallel processing of large amounts of data, which is necessary for solving complex multifactor problems or, for example, for pattern recognition.
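To make the serial-versus-parallel contrast concrete, here is a minimal Python sketch (the array size and the doubled-and-shifted operation are arbitrary choices for illustration): the same element-wise computation run one value at a time in a serial loop, and then all at once through NumPy's vectorized, data-parallel machine code.

```python
import time
import numpy as np

data = np.random.rand(2_000_000)  # two million values, e.g. pixels or sensor readings

# Serial: process one element at a time, as a conventional program would
start = time.perf_counter()
serial_result = [x * 2.0 + 1.0 for x in data]
serial_time = time.perf_counter() - start

# Data-parallel: apply the same operation to the whole array at once
start = time.perf_counter()
parallel_result = data * 2.0 + 1.0
parallel_time = time.perf_counter() - start

print(f"serial loop: {serial_time:.3f} s, vectorized: {parallel_time:.3f} s")
```

On typical hardware the vectorized version is one to two orders of magnitude faster, and the gap only widens on hardware built for massive parallelism.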
The neuromorphic chips currently being developed are designed to process information in parallel, much as animal brains do, using, in particular, neural networks. Neuromorphic computers will probably use optical technologies capable of performing trillions of simultaneous calculations, which would make it possible to model the whole human brain more or less accurately.
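As a rough sketch of the kind of unit such chips implement in silicon, here is a leaky integrate-and-fire neuron, a standard simplified model used in spiking neural networks (all parameter values below are arbitrary assumptions, not taken from any particular chip):

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_threshold=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron; return its spike times.

    A minimal sketch: a real neuromorphic chip realizes millions of such
    units side by side in hardware, whereas this loop simulates one serially.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input
        v += dt / tau * (v_rest - v) + i_in * dt
        if v >= v_threshold:      # threshold crossed: emit a spike
            spikes.append(t * dt)
            v = v_reset           # reset after spiking
    return spikes

# A constant drive produces regular, periodic spiking
current = np.full(200, 0.08)
print(lif_neuron(current))
```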
The Blue Brain Project and the Human Brain Project, funded by the European Union, the Swiss government, and IBM, are tasked with building a fully fledged computer model of the human brain using biologically realistic neuron modeling. The Human Brain Project aims to achieve functional modeling of the human brain by 2016.
On the other hand, neuromorphic chips will allow computers to process data from their "sense organs", detect and predict patterns, and learn from experience. This is a huge step forward for artificial intelligence, bringing us noticeably closer to full-fledged strong AI: a computer that could successfully solve any problem a person could theoretically solve.
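As a toy illustration of what "learning from experience" means at its simplest, here is a perceptron that discovers a classification rule purely from labeled examples rather than from explicit programming (the synthetic data, the hidden rule, and the learning rate are all invented for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "experience": 2-D points labeled by a rule the machine is never told
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hidden rule: x1 + x2 > 0

w = np.zeros(2)   # weights, learned from scratch
b = 0.0           # bias
lr = 0.1          # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)
        err = target - pred        # learn only from mistakes
        w += lr * err * xi
        b += lr * err

accuracy = np.mean((X @ w + b > 0).astype(int) == y)
print(f"learned weights {w}, accuracy {accuracy:.0%}")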
Imagine such an AI inside a humanoid robot that looks and behaves like a human but learns much faster and can perform almost any task better than Homo sapiens. Such robots might have self-awareness and/or feelings, depending on how we decided to program them. Robots as workers is one thing, but what about "social" robots living with us and caring for children, the sick, and the elderly? Of course, it would be great if they could communicate with us fully; better still if they possessed consciousness and emotions like ours. A bit like the AI in Spike Jonze's movie Her.
In the not too distant future, perhaps less than twenty years from now, such robots could replace humans in almost any job, creating a society of abundance where people can spend their time as they please. In this reality, high-end robots will drive the economy. Food, energy, and most consumer goods will be free or very cheap, and people will receive a fixed monthly allowance from the state.
It all sounds very beautiful. But what about an AI that significantly surpasses the human mind? An artificial superintelligence (ASI), or strong AI, able to learn and improve itself, potentially becoming millions or billions of times smarter than the smartest human? Creating such an entity could, in theory, trigger the technological singularity.
Futurologist Ray Kurzweil believes the singularity will come around 2045. Among Kurzweil's critics is Microsoft co-founder Paul Allen, who believes the singularity is still far away. Allen argues that to build such a computer we would first need a thorough understanding of the principles of the human brain, and that this research would have to accelerate dramatically for some reason, the way digital technology did in the 1970s-90s, or medicine somewhat earlier. In reality, the opposite is happening: studies of the brain demand more and more effort while yielding less and less, a problem he calls the "complexity brake".
Without wading into the dispute between Paul Allen and Ray Kurzweil (his answer to Allen's criticism), I would like to discuss whether creating an ASI really requires fully understanding and simulating the human brain.
It is quite natural for us to consider ourselves the pinnacle of evolution, intellectual evolution included, simply because that is how things turned out in Earth's biological world. But this does not mean that our brain is perfect, or that other forms of higher intelligence cannot work in an entirely different way.
On the contrary, if aliens with superior intelligence exist, it is almost impossible that their minds function just like ours. Evolution is random and depends on countless factors; even if life were started anew on a planet identical to Earth, it would not develop the same way, and after N billion years we would observe completely different biological species. Had the Permian mass extinction, or any other global extinction, not happened, we would not exist. But that does not mean other animals would not have evolved advanced intellect in our place (and quite possibly a more developed intellect, given a head start of millions of years). Perhaps it would be some intelligent octopus with a completely different brain structure.
Human emotions and limitations push us toward the idea that everything good and intelligent must be arranged the way we are. This error in thinking led to the development of religions with anthropomorphic gods. Primitive or simplified religions, such as animism or Buddhism, often either have a non-human deity or no gods at all. More anthropocentric religions, polytheistic or monotheistic, tend to represent their god or gods as superhumans. We should not make the same mistake when creating an artificial superintelligence. A superhuman mind need not be an "enlarged" copy of the human one, and the computer need not be an analogue of our biological brain.
The human brain is a brilliant result of four billion years of evolution. Or, more precisely, a tiny branch of the great tree of evolution. Birds have much smaller brains than mammals and are often considered very dumb animals. Yet ravens, for example, have cognitive skills roughly at the level of a human preschooler. They show conscious, proactive, purposeful behavior, develop problem-solving skills, and can even use tools. And all this with a bean-sized brain. In 2004, research in the Department of Animal Behavior and Experimental Psychology at the University of Cambridge showed that ravens are almost as smart as apes.
Clearly, there is no need to replicate the human brain in detail for consciousness and initiative to emerge. Intelligence depends not only on brain size, neuron count, or cortical complexity, but also, for example, on the ratio of brain size to body weight. That is why cows, whose brains are similar in size to those of chimpanzees, are dumber than ravens and mice.
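This brain-to-body relationship is often quantified as Jerison's encephalization quotient (EQ): actual brain mass divided by the brain mass expected for an animal of that body size. A quick sketch, using rough, illustrative mass figures rather than precise measurements:

```python
def encephalization_quotient(brain_g: float, body_g: float) -> float:
    """Jerison's EQ: actual brain mass over expected brain mass,
    where expected mass ~= 0.12 * body_mass ** (2/3), in grams."""
    return brain_g / (0.12 * body_g ** (2 / 3))

# Rough, illustrative masses in grams: (brain, body)
animals = {
    "human": (1350, 65_000),
    "chimpanzee": (400, 50_000),
    "cow": (440, 500_000),
    "raven": (15, 1_200),
}
for name, (brain, body) in animals.items():
    print(f"{name:10s} EQ = {encephalization_quotient(brain, body):.2f}")
```

With these figures the cow, despite a chimpanzee-sized brain, scores below the raven, matching the intuition above; EQ is, of course, only one crude proxy for intelligence.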
But what about computers? Computers are, in effect, brains without bodies. And as computers become faster and more efficient, their size, as a rule, decreases rather than increases. This is another reason not to compare biological brains and computers directly.
As Kurzweil explains in his answer to Allen, knowledge of how the human brain works can at best suggest approaches to specific problems in AI development, and most of those problems are gradually being solved without the help of neurophysiologists. We already know that the "specialization" of brain regions arises mainly through learning and processing one's own experience, not through "programming". Modern AI systems can already learn effectively from experience; IBM Watson, for example, gathered most of its "knowledge" by reading books on its own.
So there is no reason to believe that an artificial superintelligence cannot be created without first understanding how our own brain works. A computer chip is by definition built differently from biochemical neural networks, and a machine will never feel emotions exactly as we do (though it may experience other emotions beyond human understanding). Despite these differences, computers can already acquire knowledge on their own, and they will most likely keep getting better at it, even if they do not learn the way people do. And if we give them the ability to improve themselves, machines may well launch a non-biological evolution leading to superhuman intelligence, and ultimately to the singularity.