
Google's CEO believes fears of AI are "completely justified"



Sundar Pichai, CEO of Google, one of the largest companies working in artificial intelligence, said in an interview this week that concerns about harmful uses of the technology are "completely justified" — but that the technology industry should be trusted to regulate its use responsibly.

In a conversation with The Washington Post, Pichai said that new AI tools — the foundation of innovations such as self-driving cars and disease-detecting algorithms — require companies to define an ethical framework and think carefully about how the technology could be abused.

"I think the tech industry has to understand that you can't build it first and then fix it," Pichai said. "I don't think that will work."
Tech giants must ensure that AI "with its own agency" does not harm humanity, Pichai said. He said he was optimistic about the technology's long-term benefits, but his assessment of the potential risks of AI echoes critics who argue the technology could be used to enable invasive surveillance, build deadly weapons, and spread disinformation. Other tech executives, such as SpaceX and Tesla founder Elon Musk, have made bolder predictions, warning that AI could be "far more dangerous than nuclear weapons."

Google's AI technology underpins everything from the company's controversial project in China to the surfacing of hateful and conspiratorial videos on YouTube, a division of the company — a problem Pichai promised to address next year. How Google decides to use AI has also stirred recent unrest among its employees.

Pichai's call for self-regulation followed his testimony in Congress, where lawmakers threatened to restrict the technology in response to its misuse, including as a conduit for spreading disinformation and hate speech. His acknowledgment of possible threats from AI was significant, since the Indian-born executive has more often praised how automated systems capable of learning and making decisions without human supervision would change the world.

Pichai said in the interview that lawmakers around the world are still trying to understand AI's effects and the potential need for government regulation. "Sometimes I worry that people underestimate the scale of change that may come in the short and medium term, and I think these questions are in fact extremely complex," he said. Other tech giants, including Microsoft, have recently called for AI regulation, both by the companies building the technology and by the governments that oversee its use.

Handled properly, however, AI can deliver "tremendous benefits," Pichai said, including helping doctors detect eye disease and other ailments by automatically scanning medical records. "Regulating a technology in its early stages is hard, but I do think companies should self-regulate," he said. "That's why we've tried so hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start the conversation."

Pichai, who joined Google in 2004 and rose to become its chief executive, has called AI "one of the most important things humanity is working on" and said it could prove "more profound" for humanity than "electricity or fire." But the race to perfect machines that can operate on their own has revived familiar fears that Silicon Valley's corporate ethos — "move fast and break things," as Facebook once put it — could leave powerful, imperfect technology eliminating jobs and harming people.

Within Google itself, its AI efforts have also been contentious: the company faced a storm of criticism this year over a Department of Defense contract under which AI that can automatically identify cars, buildings and other objects would be used in military drones. Some employees resigned, citing the company's profiting from the "business of war."

Asked about that backlash, Pichai told the newspaper that those workers were "an important part of our culture. Having a voice matters to the company, and it's something we value," he said.

In June, after announcing that it would not renew the contract, Pichai unveiled a set of ethical principles for building AI, which includes a ban on developing systems that could be used to cause harm, violate human rights, or enable surveillance of people in violation of internationally accepted norms.

The company has also been criticized for releasing AI tools that can be used for harm. TensorFlow, Google's internal machine learning system, released in 2015, has helped accelerate large-scale AI development, but it has also been used to automate the creation of fake videos that were then deployed for disinformation and harassment.

Google, and Pichai personally, have defended the release, arguing that restricting the technology's spread would mean the public could not scrutinize it closely enough, and that developers and researchers could not improve it in ways that would make it beneficial.

"I believe that if you want to develop a technology successfully over time, it's important to grapple with questions of ethics and bias, and to develop them in tandem," Pichai said in the interview.

"In a sense, we want to develop an ethical framework, and to bring in experts from outside computer science early in the process," he said. "You have to involve humanity more actively, because this technology is going to affect humanity."

Pichai likened these early attempts to set parameters for AI to the scientific community's efforts to limit genetic research in that field's early days. "Many biologists started drawing lines about where the technology should go," he said. "The academic community engaged in active self-regulation, which I think was extremely important."

The Google chief said such an approach would be absolutely necessary in the development of autonomous weapons — an issue that has troubled tech-company executives and employees alike. In July, thousands of industry workers from companies including Google signed a petition to ban AI tools that could be programmed to kill.

Pichai said he found several hate-filled, conspiracy-laden YouTube videos described in a Washington Post article "abhorrent," and made clear the company would work to improve its systems for detecting problematic content. The videos, which had accumulated millions of views since appearing in April, advanced baseless accusations that Hillary Clinton and her longtime aide Huma Abedin had attacked a girl, killed her, and drunk her blood.

Pichai said he had not seen the videos, about which he was questioned in Congress, and declined to say whether YouTube's failings stemmed from limitations in the systems that detect objectionable content or from the rules governing when videos should be removed. But he added that "you'll see more work on this from us in 2019."

Pichai also described Google's effort to build a new product for China's state-controlled internet as preliminary, declining to say what the product might be or when — if ever — it might launch.

The effort, known as Project Dragonfly, has drawn a strong backlash from employees and human-rights advocates, who warn that Google could end up helping the government spy on citizens in a country intolerant of political dissent. Asked whether Google might create a product that lets Chinese officials learn when a person searches for sensitive terms, such as "the Tiananmen Square massacre," Pichai said it was too early to make such judgments.

“These are all hypothetical arguments,” said Pichai. “We are terribly far from this state of affairs.”

Source: https://habr.com/ru/post/433236/
