
How Elon Musk and Y Combinator plan to stop the tyranny of computers

They are funding a new organization, OpenAI, to develop the most advanced forms of artificial intelligence and then give the results away to society.



As if the field of AI development lacked competition (giants such as Google, Apple, Facebook, Microsoft, and even automakers like Toyota are already scrambling to hire researchers), it now welcomes a new and unusual player: the non-profit organization OpenAI, announced in December 2015, which pledges to release all of its results, including patents, into the public domain, all in order to avoid a dystopia in which computers surpass human intelligence.
Funding will come from a group of tech celebrities, including Elon Musk, Reid Hoffman, Peter Thiel, Jessica Livingston, and Amazon Web Services. Together they plan to commit a billion dollars over the long term. The project will be led by Musk and Sam Altman, president of Y Combinator, whose new research group will also contribute to the effort.

It is not surprising to find Musk, famous for his criticism of AI, on this list. But what about Y Combinator? The startup incubator opened ten years ago as a summer project that funded six startups, paying their founders' salaries and giving them valuable advice on building a business. Since then, YC has helped nearly a thousand companies, including Dropbox, Airbnb, and Stripe, and recently opened a research division. For the last two years it has been run by Altman, whose own company, Loopt, launched in 2005 and was sold in 2012 for $43.2 million. Although YC and Altman helped found OpenAI, and Altman will co-lead it, the organization itself is independent.

In essence, OpenAI is a research laboratory intended as a counterweight to large corporations that might gain outsized profits by controlling advanced AI, and to governments that might use AI to entrench their power and oppress their citizens. It may sound idealistic, but the team has already hired several serious people, including former Stripe CTO Greg Brockman, who will be OpenAI's CTO, and world-class researcher Ilya Sutskever, who worked at Google and belonged to a famous group of young scientists who studied in Toronto under neural-network pioneer Geoff Hinton; Sutskever will be OpenAI's research director. The other positions have been filled by talented young people whose résumés include research work at Facebook AI and DeepMind. The organization also boasts stellar advisors, including Alan Kay, one of the pioneers of computer science.

The project's leaders talked with me about the effort and its aspirations. I first spoke with Altman alone, and then again with him together with Musk and Brockman. I have combined the two conversations into one article.

How did it all start?


Sam Altman: We launched YC Research about a month and a half ago, but I had been thinking about AI for a long time, and so had Elon. If you think about the things most important to the future of the world, in my opinion good AI is among the very most important. So we are creating OpenAI. The organization will try to develop an AI that is friendly to people. And because it is a non-profit, it will be owned by the world.

Elon Musk: As you know, I have been worried about AI for some time. I have talked a lot with Sam, with Reid [Hoffman], Peter Thiel, and others. We kept asking: is there a way to ensure, or at least increase the likelihood, that AI turns out to be beneficial to us? Out of those conversations we concluded that creating a non-profit would be a good idea. And we will take safety very seriously.

There is also an important philosophical element: we want AI to be widespread. There are two schools of thought: do you want many AIs, or just a few? We think many is better. And it is good if they can be tied to an extension of individual human will.

Human will?


IM: AI as an extension of yourself, meaning that a person lives in symbiosis with AI rather than AI being some central intelligence somewhere. Think about how you relate to applications on the internet today: you have email, social networks, mobile apps. They essentially make you superhuman, and you do not think of them as something separate; you think of them as an extension of yourself. To the extent that we can steer AI in that direction, we want to do it. And we have found a lot of like-minded engineers and AI researchers who feel the same way.

SA: We think it is better if AI develops in the direction of enhancing individual abilities and improving people, and is accessible to everyone, rather than being a single entity a million times more powerful than any human. Because we are a non-profit, we will focus not on enriching shareholders but on what is best for the future of humanity.

Doesn't Google already share its work with the public, as it just did with its machine-learning software?


SA: They do share a lot. But over time, as we get closer to systems that surpass human intelligence, the question is how much Google will choose to share.

And won't your own project surpass human intelligence?


SA: I expect it will, but it will be open source, usable by anyone rather than, say, only by Google. Anything the group develops will be available to everyone. If you take our work and adapt it to your own needs, you don't necessarily have to share the results. But everything we do ourselves will be available to all.

And if I am Dr. Evil and I use it for my own purposes? Aren't you helping me do that?


IM: That is a great question, and we debated it a lot.

SA: There are several thoughts on this. Just as humanity is protected from any single Dr. Evil by the fact that there are many of us and the evil doctor is only one, we think a world with many AIs will keep the rare villains in check. That arrangement is better than one in which a single AI is more powerful than everything else. If that one mighty thing goes off the rails, or Dr. Evil gets hold of it and there is nothing to counter it, that will be a big problem.

Will you oversee what comes out of OpenAI?


SA: We want to build up a governance structure over time. For now it will be just me and Elon. Real AI is still a long way off, so I think we will have plenty of time to create an oversight function.

IM: I plan to spend time with the team in the office roughly once a week or two, getting updates, giving my opinion, and gauging how far the AI has progressed and how close we are getting to something dangerous. I personally take safety very seriously; it worries me a great deal. And if we see something risky, we will absolutely make it public.

Do you have examples of evil AI?


SA: There is a lot of science fiction that is still many years from being realized, the Terminator and the like. I am not worried about that kind of thing in the near future. One problem we will face, though it is not an evil AI, is large-scale automation and the disappearance of jobs. Another example people raise is programs smart enough to hack computers better than humans can. And that is already starting to happen.

Are you starting from some kind of existing system?


SA: No. The work will begin the way it would in any research lab, and for a long time it will look like a lab, because no one yet knows how to build this. We already have eight researchers, and a few more will join in the coming months. For now they will work out of the YC office, and as the group grows it will move into its own space. They will play with ideas and write software to see whether they can advance the current state of the art in AI.

Will people from outside be able to participate?


SA: Of course. One of the advantages of being open is that the lab can collaborate with anyone, since it can freely share information. It is very hard to collaborate with Google employees, for example, because they are bound by a pile of confidentiality restrictions.

Sam, if OpenAI works out of the YC office, will your startups have access to its work?


SA: If OpenAI develops interesting technology that everyone can use for free, that will benefit any tech company, but no more than that. We will, however, ask our startups to contribute whatever data they see fit to OpenAI. And Elon will likewise consider what Tesla and SpaceX can share.

What kind of information, for example, could be shared?


SA: Lots of things. All of the data from Reddit, for example, would make an excellent training set. Video from the training of Tesla's self-driving cars would also be very valuable. Huge volumes of data matter a great deal. Humans get smarter by reading books, but only the person who actually reads the book benefits. With AI it is different: if one Tesla car, say, learns something about a new situation, every Tesla car automatically benefits from it.

IM: We don't have many specific plans yet, since the organization is only just being formed; it is still in an embryonic state. But Tesla really does have a lot of real-world data, thanks to the millions of miles our fleet accumulates every day. Tesla may have more real-world data than any other company in the world.
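To make the fleet-learning idea above concrete, here is a minimal sketch of how experiences gathered by many cars could be pooled into one shared model that every car then draws on. This is purely illustrative; the class and method names (FleetModel, CarClient, and so on) are hypothetical and are not Tesla's or OpenAI's actual architecture.

```python
# Minimal sketch of the "one car learns, all cars benefit" idea.
# All names here are hypothetical illustrations.

class FleetModel:
    """A single shared model that aggregates what every car reports."""

    def __init__(self):
        # situation -> list of outcomes the fleet has observed for it
        self.experience = {}

    def report(self, situation, outcome):
        """A car uploads one observed situation and its outcome."""
        self.experience.setdefault(situation, []).append(outcome)

    def advice(self, situation):
        """Any car can query what the whole fleet has learned so far."""
        outcomes = self.experience.get(situation, [])
        return max(set(outcomes), key=outcomes.count) if outcomes else None


class CarClient:
    """One car: it learns locally but shares and reuses fleet knowledge."""

    def __init__(self, shared_model):
        self.shared_model = shared_model

    def encounter(self, situation, outcome):
        # The moment one car learns something new, it reports it.
        self.shared_model.report(situation, outcome)

    def decide(self, situation):
        # Every other car can immediately benefit from that report.
        return self.shared_model.advice(situation)


if __name__ == "__main__":
    fleet = FleetModel()
    car_a, car_b = CarClient(fleet), CarClient(fleet)

    # Car A learns how to handle an unusual road situation.
    car_a.encounter("unmarked roundabout", "yield then proceed")

    # Car B, which has never seen it, benefits automatically.
    print(car_b.decide("unmarked roundabout"))  # -> "yield then proceed"
```

The point of the sketch is only the data flow: individual experience goes into a shared model, and the shared model is what every individual queries, which is what distinguishes this from a human reading a book alone.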



AI requires serious computing power. What infrastructure will you have?


SA: We are partnering with Amazon Web Services. They are contributing a very large amount of computing capacity to the project.

And the project will get a billion dollars?


IM: You could say it will be even more than that. We are not giving exact figures, but everyone on the list is making a substantial contribution.

Over what time frame?


SA: Over however long it takes to build the system. We will be frugal, but the project will likely run for decades and will require a lot of people and hardware.

And you are not looking for a profit?


IM: Right. This is not an investment made for profit. It is possible that revenue will appear in the future, just as the non-profit Stanford Research Institute earns revenue. But there will be no profits enriching owners, no shares, none of that. We think that is essential.

Elon, you previously invested in the AI company DeepMind, as I understand it for roughly the same reasons: to keep an eye on the development of AI. Then Google bought it. Is this a second attempt?


IM: I am not really an investor in the usual sense; I don't invest for financial return. I put money into companies I help create, or to help a friend, or for the sake of a cause I believe in, or because something worries me. I don't diversify outside my own companies in any material way. But my "investment" in DeepMind was made to better understand and keep track of AI, if you like.

Will you be competing for the best scientists, who could otherwise go to work at DeepMind or Facebook or Microsoft?


SA: Hiring has gone well. Researchers are drawn to the freedom, the openness, and the ability to share their work, none of which they get in a typical corporate lab. We have assembled such a strong initial team that others will join simply for the chance to work with them. And I also think our mission, vision, and structure resonate strongly with people.

How many researchers will you need? Hundreds?


SA: Maybe.

Let's return to the idea that widespread access to AI can protect us from its downsides. Isn't there a risk that by making it more accessible you increase the potential danger?


SA: I wish I could count all the hours I have spent arguing about this with Elon and others. And I am still not 100 percent sure; you can never be 100 percent sure, right? But play out the different ways it could develop. Security through secrecy has rarely worked for technology. And if only one party gets to have it, how do you decide who: Google, the American government, the Chinese government, ISIS, or someone else? There are plenty of villains in the world, yet humanity keeps flourishing. But what happens if one of them becomes a billion times more powerful than any other person?

IM: I think the best defense against the misuse of AI is to put it in the hands of as many people as possible. If everyone has AI capabilities, there won't be a small group of people holding superpowers that no one else has.

Elon, you run two companies and sit on the board of a third. It doesn't seem like you have much free time to devote to a new project.


IM: That is true. But AI safety has been weighing on my mind for a long time, and I think I will accept the trade-off for the sake of my peace of mind.

Source: https://habr.com/ru/post/396621/

