In human history there have been three great technological revolutions, and many smaller ones. The three great ones are the Neolithic revolution (the transition from foraging to agriculture), the industrial revolution, and the software revolution now in full swing. [1]
The great technological revolutions have affected every aspect of daily life and the structure of society. The most recent, the industrial revolution, created many jobs, because the new technology required the labor of huge numbers of people. But this is not a normal property of new technology; in a sense, it was an anomaly. Many people now assume, perhaps subconsciously, that technological revolutions are always good for the personal welfare of the majority.
It seems likely that the software revolution will do what technology usually does: increase wealth, but destroy jobs. We will probably be able to find new activities to satisfy endless human needs. But we should stop pretending that the software revolution, by itself, will raise most people's incomes.
Technology is a lever that multiplies ability and luck, and in the process it distributes wealth unevenly and drives inequality. I think radical wealth inequality may become one of the main social problems of the next twenty years. [2] We can, and will, redistribute wealth, but that still does not solve the real problem of what people whose work has disappeared will do with their time.
Trying to preserve jobs that are no longer needed is a terrible but popular idea. Trying to find new work for billions of people is a good idea, but very difficult to carry out, because whatever the new professions are, they will probably be so different from anything that exists today that rational planning is almost impossible. But the current strategy — pretend that self-driving cars are a distant joke, and that Uber really will keep creating millions of jobs forever — is the wrong answer.
The second serious problem of the software revolution is the concentration of enormous power in the hands of small groups of people. This too happens after most technological revolutions, but the last truly powerful technology — the atomic bomb — taught us a misleading lesson, in the same way the industrial revolution did with job creation.
Making an atomic bomb is difficult not because the information is classified (although it is — if I somehow knew how to build one, I would have no right to talk about it), but because of the enormous amount of energy required to enrich uranium. It takes literally the resources of an entire country. [3]
Again, this is not a normal property of new technology; it is a peculiarity of nuclear technology. The software revolution is likely to behave in the standard way, and will put more power in the hands of small groups of people.
The two main risks I see in the software revolution, AI and synthetic biology, could put enormous destructive potential in the hands of small groups, or even individuals. It is probably already possible to design and produce a terrible disease in a small laboratory. Creating an AI capable of wiping out the human race might require only a few hundred people in an office building anywhere in the world, with no special equipment beyond laptops.
New existential threats will not require the resources of an entire state to carry out. Many things that once required the resources of a state — building a rocket, for example — are now done by private companies, at least in part thanks to software. And a rocket can destroy anything on Earth.
What can we do? We cannot ban the dissemination of this knowledge and simply hope the ban will hold. We cannot stop technological progress.
I think the best strategy is to legislate safeguards for the most dangerous elements, while working hard to ensure that the benefits we gain from using these technologies outweigh the benefits available to attackers. If new diseases can be synthesized, perhaps vaccines can be synthesized too. If someone can build a bad AI, perhaps someone can build a good AI that stops the bad one.
The current strategy is fundamentally wrong. The sooner we stop assuming that this will play out the way the atomic bomb did, the better. It is astonishing that we are not making a serious effort to counter the threats from synthetic biology and AI.
To be clear, I support the software revolution and am happy to be alive now. But I worry that we have learned the wrong lessons from recent examples, and that these two problems — mass job destruction and the concentration of power — are not getting enough attention.
[1] Many smaller technological revolutions were also important: the hand axe (incidentally, the oldest technology we still use), writing, gunpowder weapons, the internal combustion engine, atomic bombs, fishing (many believe it was fishing that allowed humans to develop the brain to its current level), and many others.
[2] It is true that life today is better in an absolute sense than it was a hundred years ago, even for the poorest people. Most of the arguments people make in defense of the current level of wealth inequality are also true — highly paid people do, after all, produce cheap goods and services for the poor.
However, to ignore how people's quality of life compares with that of others is to ignore part of what makes us human. I think it is fine for some people to earn thousands of times more than others, provided they pay their taxes without resentment, as I do. And I think much more effort should go toward helping those who actually live in poverty. The social safety net will have to be strengthened as technology develops.
[3] In at least one case:
http://en.wikipedia.org/wiki/Separation_of_isotopes_by_laser_excitation