The UN wants to introduce a moratorium on the use of AI: why is it necessary and where will it lead?

How did it all start?

A new UN report was released in September 2021. Experts studied how countries and companies use artificial intelligence systems that affect people's lives and livelihoods without putting adequate safeguards in place to prevent discrimination and other harms.

Yesterday UN Human Rights Chief Michelle Bachelet called for a moratorium on the use of artificial intelligence technologies that pose a serious threat to human rights, including facial-scanning systems that track people in public places. She noted that countries should immediately ban AI applications that do not comply with international human rights law.

Where is the use of AI unacceptable?

Applications that should be banned include government "social scoring" systems that evaluate people by their behavior, as well as certain AI-based tools that sort people into categories such as ethnicity or gender.

Technologies based on artificial intelligence "can be a force for good" but can also lead to "negative, even catastrophic, consequences" if used without consideration of how they affect human rights, Bachelet said in a statement.

“It’s not about the absence of artificial intelligence,” Peggy Hicks, director of thematic engagement at the UN human rights office, explained to reporters while presenting the report in Geneva. “The point is that if AI is going to be used in these very important functional areas related to human rights, then it must be done correctly. And we simply haven’t built the structure that ensures that.”

Bachelet did not call for a complete ban on facial recognition technology, but she noted that governments should stop scanning people's behavior in real time, for example to evaluate staff performance, until they can ensure that the technology is accurate, that the AI is unbiased, and that it meets privacy and data protection standards.

The report also raises concerns about tools that try to infer people's emotional and mental states by analyzing their facial expressions or body movements. Experts emphasize that such technologies are prone to bias and misinterpretation and lack a scientific basis.

"The use of emotion recognition systems by state authorities, for example to single out individuals for police detention or arrest, or to assess the veracity of statements during interrogations, risks undermining human rights such as privacy, liberty, and a fair trial," the report says.

Where is the problem of using AI especially acute?

Although the report does not name specific countries, China is worth mentioning. Recall that in May the authorities launched a program to test emotion recognition cameras on Uyghurs in the western region of Xinjiang, alongside systems for recognizing the Uyghurs themselves. Back in 2018, Human Rights Watch published a report on the persecution of the Muslim population of Xinjiang. According to the organization, Uyghurs there have in recent years been detained en masse, often without cause, and placed in prisons and re-education camps; millions of people are under constant video surveillance, and their social status and fate depend on points accrued in the "social credit" system.

However, key authors of the report said naming specific countries was not within their authority and could even be counterproductive.

“In the Chinese context, as in other contexts, we are concerned about transparency and discriminatory applications that target specific communities,” Hicks explained.

She cited several court cases in the United States and Australia where artificial intelligence was misused.

One of the most prominent cases in the United States involves Amazon, a corporation that is among the leading centers of AI development. In 2017, the company shut down an experimental AI-based recruiting project it had been running for nearly three years. One of the key problems was gender discrimination: the algorithm systematically gave lower scores to female candidates.

Meanwhile, a court in Australia set a groundbreaking precedent: artificial intelligence systems can now be legally recognized as inventors in patent applications. However, not everyone agreed with this decision.

What's the bottom line?

The report's recommendations reflect the views of many political leaders in Western democracies: it is important not only to harness the economic and social potential of AI, but also to address growing concerns about the reliability of tools that track and profile people.

US Commerce Secretary Gina Raimondo echoed the concerns of UN officials during a virtual conference in June. “It’s scary to think about how it can be used to further intensify discriminatory tendencies,” she said. “We need to make sure we don't allow this.”

European regulators have already taken steps to curb the riskiest AI applications. Proposed rules laid out by European Union officials this year would prohibit certain uses of AI, such as real-time facial scanning, and tightly regulate other uses that could threaten human safety or human rights.
