
Google talked about the development of AI to identify terrorism on YouTube

Google recently found itself at the center of a scandal over weak moderation of videos on YouTube. It turned out that the platform not only hosts videos with extremist content, but also displays ads from well-known brands alongside them. This caused an uproar among advertisers: Google was literally accused of financing terrorism, racism, fascism and other sins. As a result, YouTube had to tighten its checks on the content that carries advertising. In its official blog, Google promised to hire more people to monitor compliance with the rules and prevent ads from appearing on pages with inappropriate content.

A new wave of accusations has risen in recent days: after the recent attacks, Google, Facebook and Twitter are now even threatened with fines in the UK and France for publishing extremist content.

Google and Facebook have been forced to respond.

Google outlined four steps it takes to locate and delete pro-terrorist content on its sites, especially on YouTube.
Engineers have developed image-matching technology that prevents known terrorist content from being re-uploaded to YouTube. Such a re-upload might be a clip from a TV news report or a video glorifying violence, uploaded by any user.
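Google does not disclose its fingerprinting method, but one common image-matching technique works roughly like this: compute a compact perceptual hash of each frame and compare hashes by Hamming distance. Below is a minimal difference-hash ("dHash") sketch in pure Python; it assumes the frame has already been decoded and downscaled to a 9×8 grayscale grid, and the sample data is invented.

```python
# Sketch of difference-hash ("dHash") matching for re-upload detection.
# Assumes frames are pre-decoded into 9x8 grayscale grids; real systems
# use far more robust, transformation-resistant fingerprints.

def dhash(pixels):
    """Build a 64-bit hash: one bit per adjacent-pixel brightness comparison."""
    bits = 0
    for row in pixels:                         # 8 rows
        for left, right in zip(row, row[1:]):  # 8 comparisons per row
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance suggests the same image."""
    return bin(a ^ b).count("1")

# Two nearly identical 9x8 grids (e.g. a frame and its re-encoded copy).
frame = [[(x * 7 + y * 13) % 256 for x in range(9)] for y in range(8)]
copy = [[v + 1 if (x, y) == (0, 0) else v
         for x, v in enumerate(row)] for y, row in enumerate(frame)]

print(hamming(dhash(frame), dhash(copy)))  # small distance despite the edit
```

Because the hash encodes brightness gradients rather than raw pixels, small re-encoding artifacts barely move the fingerprint, which is what makes this family of techniques useful against trivially modified re-uploads.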

Google is putting more effort into developing machine-learning systems that detect terrorist material automatically. Unlike re-upload matching, this technology can detect even previously unseen terrorist videos by analyzing their content (content-based signals). Over the past six months, these video-analysis models have flagged more than 50% of the terrorist videos removed from YouTube.
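The post does not describe the model, but a classifier over content-based signals can be sketched in miniature as logistic regression on per-video features. The feature names and toy data below are invented purely for illustration; a production system would use learned representations and vastly more data.

```python
# Hedged sketch of a content-based classifier: tiny logistic regression
# trained by SGD over hypothetical per-video signals. All features and
# samples here are invented for illustration.
import math

def predict(w, x):
    """Probability that a video is terrorist content, per the toy model."""
    return 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

def train(samples, labels, lr=0.5, epochs=200):
    """Plain stochastic gradient descent on the logistic loss."""
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = y - predict(w, x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

# Toy signals: [bias, weapon_detection_score, flagged_phrase_rate]
videos = [[1, 0.9, 0.8], [1, 0.8, 0.7], [1, 0.1, 0.0], [1, 0.2, 0.1]]
labels = [1, 1, 0, 0]

w = train(videos, labels)
print(predict(w, [1, 0.85, 0.75]) > 0.5)  # high-signal video gets flagged
```

The point of the sketch is the pipeline shape: extracted signals go in, a score comes out, and videos above a threshold are queued for removal or human review.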

Finally, YouTube is expanding its Trusted Flagger program of human moderators, which now numbers in the thousands. "Computers can help identify dubious videos, but human experts still play a role in making difficult decisions where a line must be drawn between violent propaganda and legitimate religious speech," Google writes. The company notes that across all users the share of correct "flags" can be low, but among Trusted Flagger experts it exceeds 90%. Google has also partnered with another 50 expert groups (bringing the total to 113), counter-terrorism agencies and other technology companies to share information and improve joint work on identifying terrorist content.

In addition, Google intends to take a tougher stance on videos that do not break the law but proclaim superiority, for example, over Western civilization and culture. If marked as "inappropriate", such videos will lose access to monetization, recommendations, comments and likes, although they will not be deleted. Google considers this the right balance between freedom of speech and access to information: this way the company cannot be accused of distributing radical videos and making money on them.

Facebook has also described its development of AI to identify terrorism. The social network uses image-matching technology (detecting duplicates of already known terrorist photographs), speech and text analysis (a neural network is being trained on the audio and text of old videos from the terrorist organizations Al-Qaeda and ISIL), removal of terrorist clusters (linked accounts are grouped into clusters), detection of fake accounts, and cross-platform collaboration.
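The "cluster removal" idea can be illustrated with a graph traversal: once one account is confirmed as terrorist-linked, its entire connected component of associated accounts is surfaced for review. The sketch below uses breadth-first search over an invented connection graph; account names are hypothetical.

```python
# Rough sketch of cluster detection: starting from a confirmed account,
# BFS over the connection graph collects the whole linked cluster.
# The graph and account names are invented for illustration.
from collections import deque

def component(graph, seed):
    """Return all accounts reachable from a confirmed seed account."""
    seen, queue = {seed}, deque([seed])
    while queue:
        node = queue.popleft()
        for friend in graph.get(node, ()):
            if friend not in seen:
                seen.add(friend)
                queue.append(friend)
    return seen

graph = {
    "acct_a": ["acct_b", "acct_c"],
    "acct_b": ["acct_a"],
    "acct_c": ["acct_a", "acct_d"],
    "acct_d": ["acct_c"],
    "acct_x": ["acct_y"],  # unrelated cluster, left untouched
}

print(sorted(component(graph, "acct_a")))  # → the four linked accounts
```

In practice the edges would be weighted by many signals (shared media, messaging patterns, login metadata), not just declared friendships, and the component would go to human reviewers rather than automatic removal.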

Source: https://habr.com/ru/post/370559/
