
YouTube workers search videos for hate. The AI watches and learns


An advertisement for a BBC film shown next to a video by the Polish nationalist group National Rebirth of Poland

Every day, hundreds of people across the United States (and possibly abroad) turn on their computers and start watching YouTube. They do it not for pleasure, quite the opposite: the work makes them sick, but it is what their temporary employer, Google, requires. They have to watch videos, look for hateful expressions in the captions and speech, and mark each clip as "offensive" or "sensitive", all so that, God forbid, a major advertiser's ad does not end up next to such a video and the brand gets pilloried for financially supporting racists and similar sins. Walmart, PepsiCo and Verizon, among others, have already run into this.

YouTube and Google recently found themselves at the center of a scandal over the activities of marketer Eric Feinberg, who has made it his goal to root out this evil on the Internet by tracking down websites and videos belonging to hate groups and communities using keywords such as "kill the Jews". Feinberg published links and screenshots of such sites and YouTube videos that were successfully earning money by serving ads, including ads from well-known brands. Thanks to the activist, the problem reached the national media, and a wave of publications appeared with examples of companies' ads running on videos containing profanity or hate speech.

As a result, YouTube was forced to tighten its vetting of the content that ads run against. Google promised in an official blog post to hire more staff to monitor compliance with the rules and keep ads off pages with inappropriate content.
Google kept its promise and hired many temporary workers ("temps") through ZeroChaos and other third-party staffing agencies. They work remotely, watching YouTube videos from home and tagging and categorizing aggressive content.

Of course, even a thousand "temps" cannot watch everything humanity uploads to YouTube. According to recent statistics, 400 hours of video are uploaded every minute, which is roughly 600,000 hours per day. If we assume that rating a video requires watching about 10% of its length, full moderation would take 7,500 employees working 8-hour shifts (600,000 × 0.1 / 8 = 7,500). Paying that many staff would cost almost a million dollars a day.
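The arithmetic is easy to verify. Here is a quick back-of-envelope check in Python; the upload rate and the 10%-viewing assumption come from the article, and the $15/hour rate is the ZeroChaos pay mentioned further down.

```python
# Back-of-envelope check of the staffing estimate above.
upload_hours_per_minute = 400
upload_hours_per_day = upload_hours_per_minute * 60 * 24    # ≈ 576,000, rounded to 600,000
review_fraction = 0.10                                      # watch ~10% of each video
shift_hours = 8
hourly_rate_usd = 15                                        # ZeroChaos rate cited later

review_hours_per_day = 600_000 * review_fraction            # 60,000 hours of viewing
moderators_needed = review_hours_per_day / shift_hours      # 7,500 people
daily_payroll_usd = review_hours_per_day * hourly_rate_usd  # $900,000 — "almost a million"

print(moderators_needed, daily_payroll_usd)
```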

So Google treats its "protein" staff as a temporary solution until a proper silicon AI system is in place to classify the videos. In effect, the "temps" are now training a neural network by example, feeding it samples of "offensive" and "sensitive" videos.
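As a rough illustration of what "training by example" means in practice, here is a minimal, hypothetical sketch: human labels attached to video text (titles or transcripts) become supervised training data for a classifier. The scikit-learn pipeline below is an illustration only, not YouTube's actual system.

```python
# Sketch: human rater labels turned into supervised training data for a text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical transcript snippets with labels assigned by human moderators.
texts = [
    "family cooking tutorial with holiday recipes",
    "extremist speech calling for violence against a group",
    "toy unboxing video for children",
    "slurs and threats shouted at a rally",
]
labels = ["acceptable", "offensive", "acceptable", "offensive"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)                              # learn from the raters' judgments
print(model.predict(["travel vlog about hiking"]))    # classify a new, unlabeled video
```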

Eric Feinberg says that over years of research he has compiled a database of thousands of words and phrases associated with vile activity. One of them, for example, is the Serbian word "hanwa", which correlates with jihadist activity. In his view, Google would need a long time to build such a database, so it would be better to license his. But Google, as we can see, has gone its own way. "The problem cannot be solved by people and should not be solved by people," said Philipp Schindler, Google's chief business officer, in a recent interview with Bloomberg.

The workers who label the videos understand perfectly well that they are training an AI. At the same time, they are convinced the AI will not cope with the task, because classifying this kind of content is a very delicate matter: it takes human eyes and a human brain to determine exactly what is offensive, they say. The famous definition of obscenity given by an American judge, "I know it when I see it", comes to mind.

But Google thinks differently, and it now demands maximum throughput from the moderators who classify videos. Priorities have changed: speed takes precedence over accuracy, and in some cases a rating for an hour-long video is expected after just a few minutes of viewing. Moderators use various tricks to save time: they skim titles and skip through the video in 10-second jumps. They work against a timer that constantly shows the time spent on a task and the expected deadline. It is not enough simply to mark a video as "unsuitable"; a specific category has to be chosen: "Inappropriate language" (subcategories: profanity, hate speech), "Cruelty" (subcategories: terrorism, war and conflict, death and tragedy, etc.), "Drugs", "Sex / nudity" (subcategories: offensive content, nudity, other). The moderator must also flag "other sensitive content" if the video is sexually provocative or "sensational and shocking". Some material simply does not fit these categories, which complicates the work. It is especially hard to judge when the people in a video speak a foreign language (Google actively recruits moderators who know foreign languages).
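To make the taxonomy above concrete, here is a small sketch of how a moderator-facing tool might encode it. Only the category and subcategory names come from the article; the data structure and helper function are assumptions for illustration.

```python
# Hypothetical encoding of the rating taxonomy described in the article.
RATING_CATEGORIES = {
    "Inappropriate language": ["profanity", "hate speech"],
    "Cruelty": ["terrorism", "war and conflict", "death and tragedy"],
    "Drugs": [],
    "Sex / nudity": ["offensive content", "nudity", "other"],
    "Other sensitive content": ["sexually provocative", "sensational and shocking"],
}

def validate_label(category, subcategory=None):
    """Check that a moderator's label fits the taxonomy."""
    subs = RATING_CATEGORIES.get(category)
    if subs is None:
        return False                      # unknown category
    return subcategory is None or subcategory in subs

print(validate_label("Cruelty", "terrorism"))   # True
print(validate_label("Drugs", "profanity"))     # False
```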

Google also inserts test tasks with answers known in advance to check the quality of the moderators' work.
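One plausible way such test tasks could be scored is to compare each moderator's labels against the known answers; the sketch below is an assumption for illustration, not Google's actual method.

```python
# Hypothetical scoring of "gold" test tasks with known answers.
def rater_accuracy(rater_labels, gold_labels):
    """Fraction of gold-standard videos the moderator labeled correctly."""
    scored = [vid for vid in gold_labels if vid in rater_labels]
    if not scored:
        return 0.0
    correct = sum(rater_labels[vid] == gold_labels[vid] for vid in scored)
    return correct / len(scored)

gold = {"vid_1": "hate speech", "vid_2": "acceptable"}
answers = {"vid_1": "hate speech", "vid_2": "profanity"}
print(rater_accuracy(answers, gold))   # 0.5
```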

The moderators say they have seen it all by now: suicides, and cruelty so extreme that after watching it they need a break of several hours to recover.

When the content rating program launched in 2004, ABE paid moderators $20 per hour for the hard remote work, and those whose nerves held up could take overtime. The program later passed to WorkForceLogic, which ZeroChaos acquired in 2012. Working conditions have deteriorated since then: pay was cut to a demeaning $15 per hour, and the working week was capped at 29 hours. These Google workers get no paid vacation and can lose the job at any moment. Saddest of all, every day they bring their own dismissal closer by training the machine to do their job.

Source: https://habr.com/ru/post/403399/

