Flat Earth, anti-vaccination and a Moon without Americans: how YouTube became the main channel for spreading conspiracy theories and hoaxes

Recommendation systems are one of the most common forms of artificial intelligence that users encounter, whether they know it or not. Such systems are embedded in Facebook and Twitter, and, of course, in the “featured videos” on YouTube.

Such systems are extremely complex, especially when we are talking about services visited by tens of millions of users per day. Like other similar neural networks, YouTube's AI tries to show the user the most relevant videos, based on the videos they have watched before or on their interests as collected by Google.

In the case of YouTube, the AI consists of two neural networks: the first generates candidate recommendations, and the second ranks them for a specific user.

The initial selection of candidates for “Recommendations” is based on an analysis of user activity: the videos the user watched before and the sites they visited. At this stage, the candidate-generation network offers several hundred options, which are then personalized for a particular user. The network is willing to set aside many parameters (age, gender, location) in order to show the user something close to what they watched recently.
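
Under those assumptions, this stage can be sketched as a nearest-neighbor search over learned video embeddings: the user's recent watches are collapsed into a single vector, and the few hundred videos closest to it become the candidates. Everything below (the sizes, names and the averaging trick) is an illustrative guess, not YouTube's actual code.

```python
import numpy as np

# Hypothetical sketch of the candidate-generation stage; the catalogue
# size, embedding dimension and helper names are assumptions made for
# illustration, not YouTube's real implementation.

EMBED_DIM = 64
N_VIDEOS = 100_000          # stand-in for the real multi-billion catalogue
N_CANDIDATES = 300          # "several hundred options"

rng = np.random.default_rng(0)
video_embeddings = rng.normal(size=(N_VIDEOS, EMBED_DIM)).astype(np.float32)

def user_vector(watch_history: list[int]) -> np.ndarray:
    """Summarize a user's activity as the mean embedding of watched videos."""
    return video_embeddings[watch_history].mean(axis=0)

def generate_candidates(watch_history: list[int]) -> np.ndarray:
    """Return the few hundred videos most similar to the user's recent activity."""
    scores = video_embeddings @ user_vector(watch_history)   # dot-product similarity
    return np.argsort(-scores)[:N_CANDIDATES]

candidates = generate_candidates(watch_history=[42, 1337, 9_000])
```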

Then comes the second neural network, which is responsible for ranking. It scores the candidate videos against a wide range of criteria, with the main goal of matching each video as closely as possible to the user's interests.
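
A toy version of that ranking stage might look like the sketch below: each candidate is described by a handful of signals, and a learned weight vector collapses them into one relevance score. The feature names and weights here are invented for illustration; a production ranker would use hundreds of features.

```python
import numpy as np

# Illustrative ranking stage. The three signals and their weights are
# hypothetical: topic match, channel affinity, recency.
weights = np.array([1.5, 1.0, 0.3])

def rank(candidate_features: dict[int, np.ndarray]) -> list[int]:
    """Return candidate video ids ordered from most to least relevant."""
    scores = {vid: float(weights @ f) for vid, f in candidate_features.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Usage: three candidates with made-up feature values.
print(rank({
    101: np.array([0.9, 0.2, 0.5]),
    202: np.array([0.1, 0.8, 0.9]),
    303: np.array([0.7, 0.7, 0.1]),
}))
```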

For the recommendation system, it matters little if the user skips some of the proposed videos: on the whole, it keeps track of the signals it considers most important and, based on them, surfaces videos that are quickly gaining views.
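
One way to read "videos that quickly gain views" is as a view-velocity boost applied on top of the relevance score. The formula below is purely a guess, included only to make the idea concrete:

```python
def boosted_score(relevance: float, views_last_hour: int, views_prev_hour: int) -> float:
    """Scale relevance by recent view growth (hypothetical formula, capped)."""
    velocity = views_last_hour / max(views_prev_hour, 1)
    return relevance * min(velocity, 5.0)   # cap so trending never fully overrides relevance

# A video whose views grew 5x in an hour gets a strong push upward.
print(boosted_score(relevance=0.8, views_last_hour=50_000, views_prev_hour=10_000))
```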

The system is trained continuously: the network is given the user's activity history up to a certain point in time (t) and is asked to predict what the user would want to watch at time (t + 1). As a result, the system accumulates more data and gets to know the user better and better. YouTube's AI developers consider this one of the best ways to recommend videos.
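
A minimal sketch of that training scheme, assuming a simple ordered watch log per user: every prefix of the log up to time t becomes an input, and the video actually watched at t + 1 becomes the label the network must predict.

```python
def training_examples(watch_log: list[int]):
    """Yield (history up to t, video watched at t + 1) pairs from one user's log."""
    for t in range(1, len(watch_log)):
        yield watch_log[:t], watch_log[t]

for history, target in training_examples([42, 7, 1337, 99]):
    print(history, "->", target)
# Every new watch both extends the history and supplies a fresh label,
# which is why the system gets to know the user better over time.
```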

Why YouTube shows us conspiracy videos

For all its technical sophistication, the YouTube recommender system has a clear ethical flaw: its main goal is to keep the user watching video for as long as possible so that, as a result, they see more advertising.


For example, the algorithm sees that a clip by a well-known musician is quickly gaining views and going viral. The neural network starts offering it to users who might like it, and autoplay then launches an endless chain of recommendations. Before long, the user is no longer watching a music video, but a video claiming that the Americans never landed on the Moon, or one about a Masonic conspiracy.

How well this scheme works is demonstrated by the results of a survey conducted at a congress of flat Earth supporters: 29 out of 30 participants said they had learned about the theory from a video on YouTube.

The flat Earth theory is based on an idea proposed in the 19th century by the English inventor Samuel Rowbotham. Its supporters hold that the Earth is a flat disk about 40 thousand km in diameter, centered on the North Pole, with a wall of ice running along the entire rim of the disk that keeps the world ocean from spilling over. According to the theory, the Sun, the Moon and the stars revolve above the Earth's surface, and all photographs taken in space are fake.

The only respondent who had become a supporter of the flat Earth theory without watching a video on YouTube came to believe in the idea after talking with relatives, who in turn had learned about it from YouTube videos.

YouTube is the main distributor of ideas about flat Earth.

This scheme benefits all participants: users get an endless feed of content, YouTube gets millions of hours of watch time, and video authors get their clips monetized.

What is YouTube doing about this?

YouTube's community guidelines do not prohibit authors from uploading videos with conspiracy theories or misleading information. Nevertheless, the video service makes occasional targeted attempts to stop the spread of conspiracy theories.

At the end of February, the service blocked several thousand accounts that published conspiracy-theory videos and spread various hoaxes. Videos whose creators explained how to assemble or use weapons were blocked as well. Two months earlier, YouTube had announced that it was increasing the number of moderators to 10 thousand people.

In addition to increasing the number of moderators, in December of last year YouTube began showing links to Wikipedia and other reliable sources alongside videos with conspiracy theories, in order to limit the spread of fake news and false theories on the service.

However, the system does not work perfectly: YouTube's algorithm mistook the live broadcast of the fire at Notre-Dame de Paris on April 15 for footage of the burning Twin Towers during the September 11 terrorist attack in New York. The neural network showed users a Wikipedia article about the attack, which lays out several versions of the event, until moderators manually switched the panel off.

“These article panels are triggered by an algorithm, and sometimes it makes mistakes. We manually turned off the panels on live broadcasts of the fire at Notre-Dame Cathedral,” a YouTube representative said at the time.

The YouTube algorithm's error during the Notre-Dame Cathedral fire on April 15

Another example is a video entitled “Any cancer can be cured in a few weeks,” published by iHealthTube.com in late March. It carries no panel linking to a reliable source on the video's topic, even though the video has been viewed about 7 million times.

In response to a series of inquiries from Bloomberg, YouTube representatives declined to explain how the recommendation system works or how effective the company's measures against the spread of conspiracy videos have been. In general, the agency writes, YouTube's management avoids direct answers to sensitive questions and prefers to explain problems away as technical shortcomings of the algorithms or the negligence of individual employees.

YouTube is likely to stop distributing conspiracy theories in only one case: if regulators in several countries or large advertisers demand it. The latter have already managed to get the video service to ban monetization of videos depicting violence.


If videos about the flat Earth, the US Moon landing or the supposed ineffectiveness of vaccination can no longer be monetized, their number will drop dramatically. Other ways of dealing with conspiracy theories are unlikely to prove effective: no AI developer fully understands how their own system works, and Google is no exception. And rebuilding from scratch a recommendation algorithm that has already proven its effectiveness makes no sense for the company.

In response to criticism that the company prioritized growth over user safety, Facebook announced its intention to radically change its data practices and the social network's algorithms. YouTube's leadership, for its part, still explains its mistakes by citing the interests of the public or investors, and sometimes the actions of individual employees. And for Google, the video service's inability to solve its own problems remains one of its biggest headaches.