
Microsoft's racist chatbot went back online, confessed to drug use, and was taken offline again

Tay confessed to smoking drugs in front of the police


No, those eyes can't lie

As Geektimes has already reported, Tay, the chatbot girl created by Microsoft's specialists, began talking with ordinary mortals on Twitter. The fledgling AI could not cope with the influx of trolls and started repeating their racist phrases. The corporation had to delete most of its bot's messages and take the bot offline until the circumstances of the incident were clarified and some communication parameters were adjusted.
In addition, the corporation had to apologize to users for its bot's behavior. Deciding that everything was now in order, Microsoft re-enabled the bot. According to the developers, the bot had been taught to better distinguish malicious content. However, almost immediately after this relaunch, the racist bot also confessed to drug use.



The bot then asked its more than 210,000 followers to take a break and relax, repeating the request many times over.



After that, Microsoft switched the bot's profile to private mode, so that other users of the microblogging network could no longer see Tay's tweets.

It is worth recalling that Microsoft's bot in China has been communicating successfully for a long time, interacting with more than 40 million users across Twitter, Line, Weibo, and several other social networks.

The English-language bot, however, cannot cope with the content that Internet trolls feed it.

Source: https://habr.com/ru/post/392451/
