
In August 2011, Igor Labutov and Jason Yosinski, two graduate students at Cornell University, set a pair of chatbots talking to each other. Starting from a simple greeting, the bots' conversation quickly degenerated into a verbal squabble, complete with accusations and arguments about God. The first conversation between representatives of AI ended in conflict.
On the pages of Wikipedia, edit wars involving weak AI have been waged for a long time, and sometimes even "good" bots fall into endless conflict.
Internet bots, programs that perform automated, pre-programmed actions depending on the context, were born practically together with the Internet itself, and they have been steadily improved and complicated ever since. One of the first chatbots, Eggdrop, greeted visitors to IRC channels back in 1993. As bots grew more complex, they took on more and more functions, and today they account for a significant part of communication on the Internet. In 2009, for example, bots generated about 32% of all messages posted by the most active Twitter users, and 24% of all tweets overall. Marketers estimate that bots also provide 54% of banner-ad views on the Internet. Representatives of weak AI control hundreds of thousands of accounts on gaming sites, and tens of thousands of artificial "women" chat up customers on dating sites.
The number of bots on the Internet keeps growing. They work 24 hours a day, 7 days a week. And, as the example of the Cornell University bots shows, these digital creatures are rarely programmed to cooperate, so they may well come into conflict. The more complex a bot's activity, the richer its functionality and its response to context, the more likely it is to meet and interfere with other bots.
Bots have different functions. Some are listed in the table below, grouped by task and intent: the second column lists bots with good intentions, the third column bots with malicious ones. It stands to reason that bots of the same functional type from the second and third columns will come into conflict; indeed, they must. Often the direct task of a good bot is to undo the damage done by a malicious one.
Table 1

Despite the hopes of futurologists, AI still has no notion of morality or culture, even in simple communication. This only increases the likelihood of conflict.
Experts from the Oxford Internet Institute recently published the results of a study of what happens on the pages of Wikipedia when several bots designed to edit the same article meet on its edit page.
According to statistics, bots contribute 15% of all edits to Wikipedia. Modern bots have advanced functionality: they can automatically revert vandalism, enforce bans, check spelling, create links between articles in different languages, import content automatically, perform data mining, identify copyright violations, greet newcomers, and so on.
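For a sense of how such maintenance bots operate, here is a minimal sketch built on Pywikibot, the framework many Wikipedia bots use. The article title and the "typo" being fixed are hypothetical illustrations, not taken from the study, and running this requires a configured Pywikibot account:

```python
# A minimal sketch of a Wikipedia spell-checking bot using Pywikibot
# (assumes a configured user-config.py); the replacement rule is a toy.
import pywikibot

site = pywikibot.Site('en', 'wikipedia')    # connect to English Wikipedia
page = pywikibot.Page(site, 'Niels Bohr')   # one of the contested articles

text = page.text
fixed = text.replace('recieve', 'receive')  # a toy spelling correction

if fixed != text:
    page.text = fixed
    # Every bot edit carries a summary; other bots (and people) see it
    # in the page history and may revert it.
    page.save(summary='Bot: fixing common misspelling')
```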
[Figure: the share of bots in Wikipedia edits (B) and the proportion of human and bot edits that are reverted (C, D)]

Wikipedia bots do not obey any central authority. This is a fundamental and important principle of a self-organizing community, the only way to build a truly effective system. But just as individual Wikipedia editors may disagree and repeatedly revert one another, bots reflect the points of view of their creators.
The study showed that when bot edits are reverted, it is usually done on the initiative of other bots, not people. At the same time, the share of bot-on-bot edits is gradually increasing, which suggests growing conflict and tension in the bot community.
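The notion of a "bot-on-bot" revert can be made precise with a simple detector. Below is a rough sketch, not the study's actual code: it assumes revision metadata in a hypothetical format and treats an edit as a revert when it restores a previously seen content checksum.

```python
# A simplified revert counter. `revisions` is an oldest-first list of
# dicts {'user': str, 'is_bot': bool, 'sha1': str}; a revision whose
# checksum matches an earlier state is counted as reverting the edit
# immediately before it.
def count_reverts(revisions):
    seen = {}   # sha1 -> index of the revision that produced it
    counts = {'bot-bot': 0, 'bot-human': 0, 'human-bot': 0, 'human-human': 0}
    for i, rev in enumerate(revisions):
        if rev['sha1'] in seen and i > 0:
            reverted = revisions[i - 1]          # the edit being undone
            key = ('bot' if rev['is_bot'] else 'human') + '-' + \
                  ('bot' if reverted['is_bot'] else 'human')
            counts[key] += 1
        seen[rev['sha1']] = i
    return counts
```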
[Figure: the proportion of bot-on-bot edits gradually increases]

The greatest number of bot edits that are corrected by other bots and then corrected back again involves "disagreement between bots that specialize in creating and correcting links between the different language editions of the encyclopedia." According to the authors of the study, "the lack of coordination may be due to the fact that naming rules and conventions differ slightly between language editions." The bots most often seen in conflicts are Xqbot, EmausBot, SieBot and VolkovBot, and edit wars between bots break out most often in a handful of articles, among them:
- Pervez Musharraf (Former President of Pakistan)
- Uzbekistan
- Estonia
- Belarus
- Arabic
- Niels Bohr
- Arnold Schwarzenegger
As mentioned above, the conflicting edits concern primarily the links between language editions; the sketch below illustrates how such a loop can arise.
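This is a purely hypothetical toy, not the behavior of any named bot: two bots each enforce a slightly different convention for the same interwiki link, so each one "fixes" the other's edit, producing an endless revert loop.

```python
# Two imaginary interwiki bots with conflicting naming conventions.
RULE_A = '[[be:Беларусь]]'         # convention enforced by bot A
RULE_B = '[[be-tarask:Беларусь]]'  # convention enforced by bot B

def bot_a(wikitext):
    return wikitext.replace(RULE_B, RULE_A)

def bot_b(wikitext):
    return wikitext.replace(RULE_A, RULE_B)

page = 'Belarus is a country in Eastern Europe. ' + RULE_A
for step in range(4):              # the "edit war" never converges
    page = bot_b(page) if step % 2 == 0 else bot_a(page)
    print(step, page)
```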
Wikipedia is just one small example of an ecosystem densely populated with sophisticated bots. As Table 1 suggests, the botosphere is a much more extensive space, and the same conflicts, sometimes even sharper ones, can arise in other areas. The classic examples are bot wars on stock exchanges (including high-frequency trading) and at auctions (including scalping), as well as bots automatically buying up scarce goods on the Internet: concert tickets, slots in embassy queues, and so on.
In Wikipedia, at least, the bot ecosystem is constantly monitored. It is hard to even imagine what goes on on Twitter and in social networks, where propaganda flourishes: bots successfully spread fake news and promote fake stories in order to push them up to the level of the mainstream media and influence public opinion. Information troops for propaganda and counter-propaganda on the Internet have been created and are working effectively in several countries of the world. Twitter bots coordinate their actions as parts of botnets, although such botnets are rarely identified, and then mostly by chance.
The scientific work of the Oxford Internet Institute specialists was published on 23 February 2017 in the journal PLOS ONE (doi: 10.1371/journal.pone.0171774).