As specialists in artificial general intelligence (AGI) joke: "If a person has nothing to say about the essence of the AGI problem, they talk about the problem of its safety, or lack thereof." This problem is far clearer and closer to a wide audience than subtle technical issues. Even a well-known physicist can voice an opinion on it, even if the source is obscure. Recently, a statement by Elon Musk about the dangers of AI made the rounds, and his press secretary said that Musk would soon publish a more detailed opinion on artificial intelligence. That answer turned out to be not merely verbal, but backed by 10 million dollars.
The distribution of this money will be handled by the Future of Life Institute (whose expert panel includes Stuart Russell, a well-known specialist and author of one of the most popular AI textbooks), which will open its grant-application portal on January 22. The institute's site also hosts an open letter on the priority of research into the reliability and benefit of (as yet nonexistent) artificial intelligence. The letter is signed by many well-known specialists working in both academic and commercial organizations. Does it turn out, then, that the problem of AI safety already worries people who do have something to say about the essence of AGI?
Moreover, people like Elon Musk do not throw money around, even sums that are not especially large for them. And yet the proposal looks more like PR and fashion. It is doubtful that the problem of AI reliability and safety can be addressed in the abstract, without an advanced AGI prototype of one's own. Judging by what is presented at AGI conferences, the people who might be interested in such a grant have no prototype advanced enough that it would be time to fear it. And if some corporation does have one, it is unlikely to be interested in this grant. So what, then, is the point?