Mozilla wants to expose YouTube's algorithms. The company calls on people to help

What is Mozilla's plan?

The company has announced a new browser extension that will help users better understand how YouTube's recommendation algorithm works and let them provide detailed information about the patterns it produces.

YouTube recommendations can be amazing, but they can also be dangerous. The platform has long recommended harmful content, from pandemic conspiracies to political disinformation, even to users who previously viewed innocuous content.

In October 2019, Mozilla published its own research on the topic, which found that YouTube recommended harmful videos, including misinformation. With the global pandemic and the upcoming US elections, the consequences of this problem are real and growing every day, the company said.

Despite these serious consequences, the algorithm behind YouTube's recommendations remains completely opaque to users. What will YouTube recommend to US users in the final days before the election? Or in the days after, when the results may still be unclear?

With these questions in mind, Mozilla is giving YouTube users a way to act when harmful videos are recommended to them: it is launching RegretsReporter, a browser extension created to crowdsource research into YouTube's recommendation problem.

Why is Mozilla doing this?

YouTube's recommendation AI is one of the most influential "curators of information" on the Internet. YouTube is the second most visited site in the world, and algorithm-driven viewing accounts for 70% of total viewing time on the site. It is no exaggeration to say that YouTube has a significant impact on public awareness and understanding of key issues around the world.

For years, people have been sounding the alarm about YouTube recommending videos that contain conspiracy theories, misinformation, and other harmful content. YouTube has said that it has made progress on the issue and managed to reduce harmful recommendations by 70%. But it is impossible to verify these claims or figure out where YouTube still has work to do.

And every day that these claims go unverified, videos such as "Civil War Is Coming" or "Plandemic" may be offered to more and more people. Mozilla said it is "recruiting YouTube users to become the platform's watchdogs." People can share data about the recommendations they receive to help Mozilla understand what YouTube recommends and how to make recommendation systems more trustworthy overall.

What specific information does Mozilla want to uncover?

  • Which recommended videos contain racist, violent, or conspiratorial content?
  • Are there patterns in video recommendations in terms of frequency or severity of malicious content?
  • Are there specific patterns of YouTube use that lead to harmful content being recommended?

With this information, Mozilla, along with fellow researchers, journalists, politicians, and even YouTube engineers, can work to build more trustworthy systems for recommending content.

In the spirit of open source, Mozilla will publicly share the results of the study and hopes that YouTube and others will use the information to improve their products. Follow the updates on Mozilla's website.

How will the RegretsReporter extension work?

When you browse YouTube, Mozilla's new extension automatically sends data about how much time you spend on the platform, without collecting any information about what you watch or search for. This may offer some insight into how often YouTube's recommendations leave users with regrets.
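
What such privacy-preserving time tracking could look like is sketched below in TypeScript. This is a minimal illustration under stated assumptions, not RegretsReporter's actual code: it assumes a WebExtension background script with the "tabs" permission and the `browser` global provided by Firefox or the webextension-polyfill.

```typescript
// Minimal sketch, not RegretsReporter's actual code.
declare const browser: any; // provided by Firefox / webextension-polyfill

let sessionStart: number | null = null;
let totalTimeMs = 0; // only an aggregate duration is kept, never URLs or titles

function isYouTube(url?: string): boolean {
  if (!url) return false;
  const host = new URL(url).hostname;
  return host === "youtube.com" || host.endsWith(".youtube.com");
}

// Re-check the active tab whenever it changes or navigates.
async function onTabChanged(): Promise<void> {
  const [tab] = await browser.tabs.query({ active: true, currentWindow: true });
  const onYouTube = isYouTube(tab?.url);

  if (onYouTube && sessionStart === null) {
    sessionStart = Date.now(); // a YouTube session begins
  } else if (!onYouTube && sessionStart !== null) {
    totalTimeMs += Date.now() - sessionStart; // record the duration only
    sessionStart = null;
  }
}

browser.tabs.onActivated.addListener(onTabChanged);
browser.tabs.onUpdated.addListener(onTabChanged);
// totalTimeMs would periodically be reported to Mozilla's servers
```

The key point is that no URL or title ever leaves the handler; only elapsed time is accumulated.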

The Mozilla browser extension prompts the user to enter data about the YouTube algorithm. (Screenshot: Mozilla)

Users also have the option to send a report directly to Mozilla. The report form asks you to describe your complaint in more detail and collects information about the video you are reporting and how you arrived at it. This is important for understanding how YouTube's recommendation system leads people to certain videos and what impact those videos can have on them.
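
Conceptually, such a report is a small structured payload. The sketch below is hypothetical; the article does not describe Mozilla's actual schema, and every field name here is an assumption.

```typescript
// Hypothetical report payload; field names are illustrative,
// not Mozilla's actual schema.
interface RegretReport {
  extensionUserId: string;       // random ID, not tied to a YouTube account
  videoId: string;               // the video being reported
  recommendationTrail: string[]; // the chain of videos that led here
  userComment: string;           // free-text description of the complaint
  reportedAt: string;            // ISO 8601 timestamp
}

// Example of what one report might look like:
const example: RegretReport = {
  extensionUserId: "3f2b2c9e-0000-4000-8000-000000000000",
  videoId: "abc123xyz",
  recommendationTrail: ["cooking-video-1", "cooking-video-2", "abc123xyz"],
  userComment: "Conspiracy video recommended after harmless cooking videos.",
  reportedAt: new Date().toISOString(),
};
```

The `recommendationTrail` field is the interesting part: it is what would let researchers study how the recommendation system moves users from innocuous videos to the ones they report.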

Is it safe?

Throughout this process, user privacy is of utmost importance to the company, Mozilla said.

The data that Mozilla collects is tied to a randomly generated user ID rather than to a specific YouTube account. Only Mozilla will have access to the raw data, which minimizes the risk of identifying users. The extension does not collect data while videos are watched in private mode. More detailed information about user privacy is provided in the RegretsReporter Privacy Notice.
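
Both privacy measures map naturally onto standard WebExtensions APIs. Here is a minimal sketch under that assumption; the storage key and function names are illustrative, not Mozilla's actual implementation.

```typescript
// Sketch only; names are illustrative, not Mozilla's implementation.
declare const browser: any; // provided by Firefox / webextension-polyfill

// A random ID, generated once and persisted locally; never derived
// from the user's YouTube account.
async function getOrCreateUserId(): Promise<string> {
  const stored = await browser.storage.local.get("userId");
  if (typeof stored.userId === "string") return stored.userId;
  const userId = crypto.randomUUID();
  await browser.storage.local.set({ userId });
  return userId;
}

// Skip data collection entirely in private (incognito) windows.
async function shouldCollect(tabId: number): Promise<boolean> {
  const tab = await browser.tabs.get(tabId);
  return !tab.incognito;
}
```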

An important aspect of the research

"We ask users not to changeyour behavior on YouTube when using this extension. Don’t specifically look for provocative ones. Instead, use YouTube as usual,” explains Mozilla. Judging by the company’s statement, this is the only way to “collectively” understand whether YouTube’s problem with recommending inappropriate content is being solved and in what areas the video service needs to improve.

YouTube's reaction

Mozilla has no formal agreement with Google or YouTube to research the recommendation algorithm, but Mozilla's Ashley Boyd says they have been in contact with the company and intend to share their findings.

YouTube, however, said that Mozilla's methodology appears to be "questionable."

A YouTube spokesperson said in a statement to The Verge that the company is always interested in research into its recommendation system. "However, it is difficult to draw general conclusions from anecdotal examples, and we are constantly updating our recommendation systems to improve the user experience," the spokesperson said, adding that over the past year YouTube has launched "more than 30 different changes to reduce recommendations of borderline content."

What's wrong with YouTube's algorithms?

At the end of 2019, YouTube finally admitted that its recommendation system surfaces harmful content. That was a small step in the right direction: for a long time, the platform had faced complaints about its dubious recommendation algorithms.

YouTube ignores warnings

Bloomberg reports that YouTube executives ignored repeated warnings, allowing "toxic videos to run rampant." According to company employees, proposals to change the guidelines and curb conspiracy theories were sacrificed in favor of engagement.

Back in 2018, YouTube's chief executive, Susan Wojcicki, took the stage at the South by Southwest conference to defend the platform. Her company, which had been criticized for months for promoting falsehoods online, had just taken another reputational hit: a conspiracy video about the Parkland, Florida high school shooting, whose authors claimed the victims were "crisis actors," had spread widely on the site.

Wojcicki is generally reluctant to speak in public, but at the conference in Austin she presented a solution she hoped would help suppress conspiracy theories: small text boxes drawn from sites like Wikipedia, placed beneath videos that question conventional wisdom, such as the moon landing, to help viewers find the truth.

YouTube, which is striving to overtake television, is estimated to generate over $16 billion in advertising sales annually. But that day, Wojcicki compared her video site to a different kind of institution. "We're more like a library," she said, taking the familiar position of a free-speech advocate. "If you look at libraries, there has always been controversy."

How does YouTube hide the truth?

YouTube brushes off the independent researchers who try to understand the problem with its recommendation algorithms. The Mozilla Foundation compiled a timeline to demonstrate this.

  • First, news emerges that YouTube's recommendation system harms users: the algorithm has radicalized young people in the US, sown division in Brazil, spread state-sponsored propaganda in Hong Kong, and more.
  • YouTube then responds, but not by admitting a mistake or offering a detailed solution. Instead, the company issues a statement criticizing the methodologies used to study its recommendations and makes vague promises that it is "working on it."
  • Only later, in a blog post, did YouTube admit that its recommendation engine was suggesting borderline content to users, publishing a chart to show that it had devoted significant resources to the problem for several years.

    "What they don't realize is how theyare shying away from and firing journalists and scientists who have been highlighting this problem for years. Additionally, there remains a clear lack of publicly verified data to support YouTube's claims that they are addressing the issue," Mozilla said.

As a result, Mozilla published a list of YouTube's responses to third-party research into its recommendation system. A timeline posted on Mozilla's blog records 14 such responses, evasive or dismissive, issued over a 22-month period, in reverse chronological order.

Top trends in YouTube's excuses

  • YouTube often claims to have fixed the problem by changing its algorithm, but provides little detail on what exactly those changes do.
  • YouTube claims to have data that contradicts independent research, but refuses to share this data.
  • YouTube dismisses independent research on the topic as flawed or anecdotal, yet refuses to give third parties access to the data that would prove it.

In a blog post, YouTube stated that "borderline content" accounts for 1% of the content viewed by users in the US. But with 500 hours of video uploaded to YouTube every minute, how many hours of such content are still being offered to users? After hundreds of stories in recent months about the impact of these videos on people's lives, it is clear that the issue is too important to be played down with statistics, especially when outside experts are given no opportunity to see and study the problem as a whole.
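
A back-of-envelope calculation shows why "only 1%" is not reassuring. The watch-time figure below is YouTube's own publicly cited statistic of roughly one billion hours watched per day (announced in 2017); it is an assumption brought in for illustration, not a number from this article:

```typescript
// Back-of-envelope only; the daily watch-time figure is an assumption
// based on YouTube's publicly cited ~1 billion hours watched per day.
const hoursWatchedPerDay = 1_000_000_000;
const borderlineShare = 0.01; // the 1% YouTube cites

const borderlineHoursPerDay = hoursWatchedPerDay * borderlineShare;
console.log(`~${borderlineHoursPerDay.toLocaleString("en-US")} hours/day`);
// => ~10,000,000 hours of borderline content per day, by YouTube's own numbers
```

Even if the true share were several times smaller, the absolute volume would remain enormous, which is exactly the point: the percentage alone hides the scale.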

What's the bottom line?

In the time since Wojcicki took the stage to defend YouTube, new high-profile conspiracy theories have appeared on the platform, including ones about childhood vaccination and about Hillary Clinton's connection to a satanic cult, angering lawmakers who want to regulate technology companies. A year later, YouTube has become even more strongly associated with the dark corners of the web. Perhaps Mozilla's new plan for dealing with inappropriate content will help clarify how recommendations are generated. At the very least, the project once again raises the important issue of manipulating public opinion through social networks and media.
