
Do we live inside a computer model?



Back in 2003, Nick Bostrom, a philosopher and professor at Oxford, published a lengthy article examining the hypothesis that all of us may, in fact, live inside a computer model. Ever since, this work has given many scientists no rest: they periodically publish articles either supporting or refuting Bostrom's argument.

His article argues that at least one of the following statements is true:
1) The human race is very likely to go extinct before reaching the posthuman stage of development.
2) It is extremely unlikely that any posthuman civilization will run a significant number of models of its evolutionary history (or variations thereof).
3) We almost certainly live inside a computer model.

It follows that the belief that we will one day reach the posthuman level and run such models ourselves is false, unless we are currently living in one.

We found the scientist's ideas, assumptions, and arguments interesting, and decided to share his conclusions with you (in abridged form).

Introduction


Science fiction, along with serious scientific and futurological work, predicts that in the future we will have access to incredible computing power. Suppose that is so. With such supercomputers, future generations will be able to run many models of the lives of their predecessors. If the virtual world is crafted in enough detail, these simulated "people" will probably be conscious. In that case there is a chance that the vast majority of minds like ours are products of a model, and we should then conclude that we most likely live not in the real world but in a virtual one. And if we do not believe that this is the case, then we have no right to believe that our descendants will ever run such models. That is the basic idea.

Substrate-independence assumption




In philosophy, "substrate-independence" means that mind can be implemented in matter made of any substance. At the logical level, our consciousness can be reduced to a system of computational structures and processes, and these are not unique to the biological neural networks housed in our skulls. Silicon processors are, in theory, also capable of supporting them. We will not go into the disputes around this; let us simply accept the substrate-independence thesis as it is. That is, we do not claim it is true; we merely suppose that a hypothetical computer running the right program could become conscious.

Moreover, there is no need to assume that a computer mind must behave in every situation exactly as a person would, or even pass the Turing test. It is enough to assume that in the future the computational processes occurring in the human brain can be modeled with sufficient completeness and detail, down to the level of individual synapses.

Of course, our learning and cognitive functions are also affected by various chemical factors. The substrate-independence thesis does not deny their role; it merely holds that they affect subjective experience only through their direct or indirect influence on the brain's computational activity.

Technological limitations


At the moment we lack computers and software fast enough to create an artificial mind. But if technological progress continues unabated, the necessary technologies will eventually appear. This will probably take at least a few decades, but the time factor does not matter for the purposes of this article. The simulation assumption would still "work" even if reaching the "posthuman" stage of civilization took hundreds of thousands of years: the stage at which our technological capabilities are limited only by fundamental physical laws and by the availability of raw materials and energy.

At that level of development, it will be possible to turn celestial bodies, up to entire planets, into incredibly powerful computers. Today we can hardly even guess what computing power might be available to a posthuman civilization. Since we have not yet developed a "theory of everything," we cannot rule out the future discovery of physical phenomena that bypass the limits on information processing as we understand them today. With far greater confidence we can set lower bounds on posthuman computing power, taking into account only the mechanisms known today. For example, in 1992 Eric Drexler, in his book Nanosystems: Molecular Machinery, Manufacturing, and Computation, described a sugar-cube-sized computer (excluding power supply and cooling) capable of 10^21 operations per second. Robert Bradbury estimates that a planet-sized computer would achieve a performance of about 10^42 operations per second. Seth Lloyd believes that quantum or plasma computers could bring us even closer to the theoretical limits of computation: for a computer weighing 1 kg, he calculated an upper bound of 5×10^50 operations per second on about 10^31 bits.



It is also very difficult to estimate how much computing power would suffice to emulate the human mind. In his 1988 book Mind Children, Hans Moravec concluded that full brain emulation requires about 10^14 operations per second. Based on the number of synapses in the brain and their firing rates, Bostrom himself put the required performance at 10^16–10^17 operations per second. Simulating the inner workings of synapses and dendritic structures would most likely demand even more. However, at the micro level our central nervous system apparently has a high degree of redundancy to compensate for the unreliability and "noisiness" of neural components.

The environment model will also require computing power, depending on its scale and level of detail. Modeling the entire universe down to the quantum level is impossible, unless new physics is discovered. But realistically modeling our habitat requires far less power. The main thing is that an artificial intelligence interacting with the virtual environment, in the same way a living person interacts with the real world, notices no deviations. The interior of our planet at the microscopic level, for instance, need not be modeled at all.

Distant astronomical objects do not require much effort either: it is enough to "feed in" the information that could be obtained from observations made from the planet or from spacecraft.

The planet's surface will need to be modeled fully at the macroscopic level, and at the microscopic level only as the situation demands. The main thing is that the picture in a microscope's eyepiece look authentic. It gets harder when we use systems and machines that interact at the micro level to obtain the results we expect. In addition, power will be required to continuously track the beliefs of all the simulated minds, so that, for instance, all the necessary details can be filled in the moment a "computer person" looks into a microscope.

If any of us suddenly notices some inconsistency in this world, it is easy to "fix": simply edit that person's mind. Or rewind a few seconds and rerun events in a way that avoids detection.



Since it is impossible to calculate precisely the computing power required to model all of human history, we can roughly estimate it at 10^33–10^36 operations (100 billion people × 50 years per person × 30 million seconds per year × 10^14–10^17 operations per second per brain). But even if we are off by several orders of magnitude, it matters little for the universal-model hypothesis under consideration.

As noted above, a planet-scale computer could perform 10^42 operations per second. Such a "device" could simulate the entire mental history of humankind using less than one millionth of its capacity for one second. A posthuman civilization could build an enormous number of such computers and run a correspondingly enormous number of full-scale models. This conclusion holds even allowing for large errors in all of our estimates.
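The back-of-the-envelope arithmetic behind these figures is easy to reproduce. A minimal Python sketch, using only the article's own estimates (the variable names are illustrative):

```python
# Rough total compute needed to simulate all of human history,
# using the article's figures.

people = 100e9            # ~100 billion humans who have ever lived
years_per_person = 50     # average simulated lifespan, in years
seconds_per_year = 30e6   # ~30 million seconds in a year
brain_ops_low = 1e14      # ops/sec per brain (Moravec's estimate)
brain_ops_high = 1e17     # ops/sec per brain (synapse-level estimate)

person_seconds = people * years_per_person * seconds_per_year
total_low = person_seconds * brain_ops_low    # ≈ 1.5e34 operations
total_high = person_seconds * brain_ops_high  # ≈ 1.5e37 operations

# A planet-sized computer at 1e42 ops/sec would replay the whole of
# human mental history in a tiny fraction of a second:
planet_ops_per_sec = 1e42
print(f"{total_low:.1e} .. {total_high:.1e} operations total")
print(f"{total_high / planet_ops_per_sec:.1e} seconds on a planetary computer")
```

Even the high estimate occupies the planetary computer for only about a hundred-thousandth of a second, which is why being off by several orders of magnitude does not change the conclusion.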

The essence of the assumption of existence within the model


So, if there is a noticeable chance that our civilization will one day reach the posthuman level of development and create complete models of humanity, why are we not already living inside such a model?

Here is the mathematical formulation of the argument.

Notation:

f_p is the fraction of all human-level technological civilizations that survive to reach the posthuman stage.
N is the average number of models run by a posthuman civilization.
H is the average number of people that lived in a civilization before it became posthuman.

Then the fraction of all human minds that are simulated is:

f_sim = f_p N H / (f_p N H + H)

Let:

f_1 be the fraction of posthuman civilizations interested in running models of humanity (or containing at least some individuals who are interested in doing so and have the necessary resources).
N_1 be the average number of models run by such a civilization.

Then:

N = f_1 N_1

Consequently:

f_sim = f_p f_1 N_1 / (f_p f_1 N_1 + 1)

From this equation it follows that at least one of three assumptions must be true:

(1) f_p ≈ 0
(2) f_1 ≈ 0
(3) f_sim ≈ 1
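The trilemma is easy to check numerically: unless f_p or f_1 is pushed toward zero, even modest values drive f_sim toward 1. A minimal sketch in Python (the sample numbers are purely illustrative):

```python
def f_sim(f_p, f_1, n_1):
    """Fraction of human-like minds that are simulated:
    f_sim = f_p*f_1*N_1 / (f_p*f_1*N_1 + 1)."""
    x = f_p * f_1 * n_1
    return x / (x + 1)

# If even 1% of civilizations reach posthumanity, 1% of those run
# models, and each runs a million of them, almost all minds are simulated:
print(f_sim(0.01, 0.01, 1e6))   # 100 / 101 ≈ 0.99

# The only ways to keep f_sim small are f_p ≈ 0 or f_1 ≈ 0:
print(f_sim(1e-12, 0.01, 1e6))  # ≈ 1e-8: civilizations die out first
print(f_sim(0.01, 1e-12, 1e6))  # ≈ 1e-8: posthumans do not run models
```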

The bland indifference principle


So, let us consider assumption (3). Let x be the fraction of all observers who live inside models. If we have no evidence that our own consciousness is more (or less) likely than anyone else's to be computer-generated rather than belonging to a living person, then our credence (Cr) that we live inside a model should equal:

Cr(SIM | f_sim = x) = x

This statement follows from the bland indifference principle. Consider two cases. The first and simplest is when all the minds inside the model are completely identical: they have the same knowledge, memories, and experiences. The second is when the minds are similar but qualitatively differ from one another in their experiences. The principle works in both cases, since we have no reliable information about which of the minds inhabiting our world are virtual and which belong to living people.
(A deeper analysis of this position can be found in Bostrom's book Anthropic Bias: Observation Selection Effects in Science and Philosophy.)

For a better understanding, consider the following analogy. Suppose a certain fraction of the human population carries a certain gene sequence S, commonly referred to as "junk DNA." Suppose this sequence is not detected by genome analysis and has no obvious external manifestations. Then it is logical to believe that you might be a carrier of S. And this does not depend on whether the minds of people with S differ from the minds of people without S, since every person has a unique life experience regardless of the presence or absence of S in their genes.

The same holds if S marks not a biological property but the fact of being inside a computer model, given that we cannot distinguish living people from "computer" ones.

Interpretations


If assumption (1), f_p ≈ 0, is true, then humanity will almost certainly not reach the posthuman stage. In that case we must assign a high credence to DOOM, the hypothesis that humanity will go extinct before reaching posthumanity:

Cr(DOOM | f_p ≈ 0) ≈ 1

One can imagine hypothetical situations in which we possess evidence that trumps our knowledge of f_p, for example, the discovery of a giant meteor about to strike Earth. In such cases we might assign DOOM a credence even higher than our expectation of the fraction of civilizations that fail to reach the posthuman stage.

Assumption (1) by itself does not mean that we will die out soon, only that we are unlikely to reach the posthuman stage; it is compatible with humanity remaining at roughly our current level of technological development for a long time before going extinct. Another scenario in which assumption (1) is true is the collapse of technological civilization, with primitive human communities persisting on Earth indefinitely.

There are many ways for us to fail to reach the posthuman stage. One of the most natural is the development of some powerful but dangerous technology. A current candidate for this role is molecular nanotechnology, which may one day allow the creation of self-replicating nanobots capable of extracting resources from soil and organic matter, a kind of mechanical bacteria. Such nanobots, built with malicious intent, could destroy all life on the planet.

If assumption (2), f_1 ≈ 0, is correct, then the fraction of posthuman civilizations interested in modeling humanity is negligible. This requires a strong convergence among the development paths of civilizations: since the number of models each interested civilization could run is extremely large, the number of interested civilizations must be correspondingly small. That is, virtually all of them chose not to spend resources on it, or contain virtually no individuals who are both interested and equipped to do so, or outright prohibit the creation of such models.
What could make civilizations evolve along such similar paths? Some will say that all advanced civilizations arrive at an ethical prohibition on modeling, so as not to inflict suffering on simulated minds. However, from today's point of view the creation of an "electronic" civilization is not considered immoral. Besides, an ethical explanation alone is not enough; it would also require a high degree of convergence in the social structures of different civilizations.

Another possible point of convergence is the assumption that nearly all inhabitants of nearly every posthuman civilization reach a level of development at which they simply lose the desire to create universal models of humanity. This would require very serious changes in the motivations of our descendants. Who knows, perhaps in the future it will be considered simply a silly idea, say, because of the insignificance of the scientific benefit, which does not look too implausible given the presumed immeasurable intellectual superiority of future civilizations. Or perhaps future "posthumans" will consider such models too inefficient a way to obtain pleasure: after all, it is much easier to stimulate the relevant areas of the brain than to build huge "game servers." Thus, if assumption (2) is correct, it follows that posthuman societies will be very different from human ones.

The most intriguing scenarios open up if assumption (3), f_sim ≈ 1, is correct. If you and I are living inside a computer model right now, then the space we can observe is only a small part of the physical universe. Moreover, the physical laws of the world that hosts the supercomputer running our virtual world may not coincide with "our" physical laws.

There is also the possibility that a simulated civilization will itself reach the posthuman stage within its model and launch its own supercomputers to create models of its own. Such models can be compared to today's "virtual machines," which can likewise be nested: a virtual machine can emulate another virtual machine, which emulates a third, and so on. Therefore, if we ourselves one day create such a model, that fact alone will be strong evidence against assumptions (1) and (2), and we will have to conclude that we ourselves exist within a model. Moreover, we will have to assume that the posthuman civilization that created our world is itself a computer model, and its creators, in turn, as well.



In other words, reality may consist of many levels, and we can assume that their number grows over time. There is, however, an argument against this hypothesis: the computational resources consumed for modeling at the top, "real" level would become too large, since even modeling a single posthuman civilization could be incredibly costly. In that case we may assume our model will simply be switched off once we reach the posthuman stage. Amen.

To some extent, the posthumans who launched the model can be compared to gods in relation to us "living" inside the computer: after all, they created the world we know, their intellect exceeds our comprehension, they are omnipotent within our world, and they know everything that happens to each of us. Following this logical chain further could lead us to justify "good behavior" as a way to earn the favor of the demiurges.

Besides the universal model, one can also consider the possibility of selectively modeling a small group of people, or even a single person. In such cases the rest of humanity consists of "shadow people," modeled only at a sufficient level of fidelity. It is hard to say how much cheaper shadow people would be compared to "full-fledged" ones; it is not even clear whether an entity could behave indistinguishably from a "real" person while lacking full-fledged consciousness.

We can also suppose that, to save computational resources, the creators replace the simulated people's "life experience" with fake memories. The conclusion would then be that there is no real pain or suffering in the world, and all our bad memories are an illusion. Of course, such reasoning is comforting only if you are not suffering from anything at the moment.

Conclusion


Well, suppose we do live inside a computer model: what then? Actually, nothing special: live as before, make plans, and dream about the future. The saddest option for us is that assumption (1) is correct; compared to that, it is preferable that we live in a computer model after all. Still, the limited computing power of our "creators" could lead to our world being "switched off" once it reaches the posthuman stage, and in that case the most favorable option for us is the truth of assumption (2).

Source: https://habr.com/ru/post/236281/

