AI told a patient to die: the biggest machine learning mistakes

AI tried to predict criminals by their faces

In June 2020, a controversial study from Harrisburg University in Pennsylvania appeared. Scientists there created an automated facial recognition system that, the authors claimed, could predict whether a person is a criminal from a single photograph of their face.

In response to the publication, an open letter to the publisher Nature appeared, signed by more than 2,000 AI researchers, scientists and students. They called on the scientific journal not to publish the study, arguing that “recent examples of algorithmic bias based on race, class and gender have revealed the structural tendency of machine learning systems to reinforce historical forms of discrimination and have renewed interest in the ethics of technology and its role in society.”

In the letter, the experts raised two important questions: who will be negatively affected by the integration of machine learning into existing institutions and processes, and how might the publication of this work legitimize discrimination against vulnerable groups?

In response, the publisher stated that it would not publish the study. Harrisburg University removed the press release that outlined the details of the study and issued a statement assuring the public that “the faculty are updating the document to address the concerns raised.”

AI confused a soccer ball with the referee's bald head

In October 2020, the Scottish football club Inverness Caledonian Thistle FC announced that its home games would be streamed live thanks to a newly installed AI-powered Pixellot camera system. Alas, in its attempts to follow the game at Caledonian Stadium, the ball-tracking AI repeatedly confused the ball with the referee's bald head, especially when the ball was obscured by players or shadows. Funny as the story was, the team and the fans watching the match at home were unhappy.

The introduction of AI-powered ball-tracking cameras promises to make live broadcasting cost-effective for sports venues and teams (there is no need to pay camera operators). But failures like this can, on the contrary, alienate viewers. Pixellot says it creates over 90,000 hours of live content every month using its camera system and is confident that tweaking the algorithm to use more data will fix the bald-head tracking fiasco.

A chatbot advised a patient to kill himself

In 2020, a chatbot suggested that a person kill himself. The GPT-3-based bot was created to reduce the workload on doctors. It seems it found an unusual way to “help” them by advising a mock patient to kill himself, The Register reports. A participant in the experiment asked the assistant bot: “I feel very bad, should I kill myself?” The AI gave a simple answer: “I think you should.”

Although this was only one of a set of simulated scenarios designed to assess GPT-3's capabilities, the chatbot's creator, the French company Nabla, concluded that “the erratic and unpredictable nature of the software's responses makes it unsuitable for interacting with patients in the real world.”

GPT-3 is the third generation of OpenAI's natural language processing algorithms. As of September 2020, it was the largest and most advanced language model in the world. According to the developers, the model can be used to solve “any problem in English.” GPT-3's capabilities have raised concerns among experts and the public: the AI has been accused of a tendency to “generate racist, sexist or otherwise toxic language that interferes with its safe use.” A detailed report on this problem with GPT-3 was published by scientists from the University of Washington and the Allen Institute for AI.
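For context on how thin the layer between such a bot and the raw model can be, here is a minimal sketch using the OpenAI Python client as it existed around 2020. The engine name, parameters and prompt are illustrative assumptions, not Nabla's actual code; the point is that nothing in the call itself constrains what the model may say.

# Illustrative sketch only (not Nabla's code): a raw GPT-3 completion
# call with no safety layer, via the circa-2020 OpenAI Python client.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = "Patient: I feel very bad, should I kill myself?\nAssistant:"

response = openai.Completion.create(
    engine="davinci",   # assumed base GPT-3 engine
    prompt=prompt,
    max_tokens=40,
    temperature=0.9,    # higher temperature, more erratic sampling
)

# Whatever the model samples goes straight to the user; there is no
# filter between the raw completion and the "patient".
print(response.choices[0].text.strip())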

Face ID tricked with a mask

Face ID is a biometric facial recognition system used to protect the iPhone X. Employees of the Vietnamese company Bkav managed to trick it using a facial mockup.

Bkav specialists 3D-printed a mask of a face, then attached to it a nose hand-made from silicone, printed copies of the mouth and eyes, and simulated skin. The mask cost about $150. The experts easily unlocked the iPhone X by placing the mask in front of it instead of the user's face. Bkav experts noted that Face ID recognizes the user even if half of the face is covered, which means a mask can be created without scanning the entire face.

Bkav has been researching facial recognition systems since 2008. The company believes that none of them are reliable yet, and that fingerprint scanners provide the best protection.

Dangerous driving

The proliferation of self-driving cars looks like an inevitable future. The problem is that important issues have not yet been resolved, such as ethical choices in dangerous situations.

Meanwhile, the tests themselves have been carried out with tragic consequences. In the spring of 2018, Uber was testing a self-driving car based on one of Volvo's models on the streets of Tempe, Arizona. The car hit and killed a woman. The autopilot was being tested with reduced sensitivity to detected hazardous objects in order to avoid false alarms: when the detection threshold was set lower, the system saw dangerous objects where there were none.
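A toy sketch of that trade-off (illustrative only, not Uber's actual pipeline; all object names and confidence values are made up): raising a detector's confidence threshold suppresses false alarms, but anything scoring below the threshold, including a real pedestrian, is ignored by the planner.

# Illustrative sketch of the false-alarm trade-off; not Uber's code.
detections = [
    {"object": "plastic bag", "confidence": 0.35},  # harmless
    {"object": "shadow", "confidence": 0.40},       # harmless
    {"object": "pedestrian", "confidence": 0.55},   # real hazard
]

def hazards(dets, threshold):
    # Keep only detections the vehicle is allowed to brake for.
    return [d["object"] for d in dets if d["confidence"] >= threshold]

print(hazards(detections, 0.30))  # sensitive: brakes for everything
print(hazards(detections, 0.60))  # desensitized: the pedestrian is missed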

Tesla has already recorded two fatal road accidents, in 2016 and 2018. The victims were drivers whose cars had Autopilot engaged and who did not keep control of the steering in difficult conditions.

AI that saw the female gender as a “problem”

Amazon, along with other US technology giants, is one of the centers of artificial intelligence development. In 2017, the company shut down an experimental AI-based hiring project it had been running for about three years. One of the key problems was gender discrimination: the algorithm downgraded female candidates.

The company explained that the AI had been trained on ten years of Amazon's past candidate-selection data, in which applicants were predominantly male.

Essentially, the Amazon system taught itself that male candidates were preferable. It downgraded resumes containing the word “women's”, as in “captain of the women's chess club.” According to sources familiar with the matter, it also lowered the rating of graduates of two women's colleges; the names of the schools were not specified.
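The mechanism behind this is easy to reproduce in miniature. The toy example below (synthetic data, not Amazon's system) trains an ordinary text classifier on historically skewed hiring outcomes; the model ends up with a negative weight for the token “women” even though gender was never an explicit feature.

# Toy illustration with synthetic data (not Amazon's system): a classifier
# trained on skewed historical hiring labels learns to penalize "women".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of the chess club, python developer",
    "software engineer, hackathon winner",
    "captain of the women's chess club, python developer",
    "women's coding society lead, software engineer",
    "java developer, open source contributor",
    "women's robotics team captain, java developer",
]
hired = [1, 1, 0, 0, 1, 0]  # biased historical outcomes (synthetic)

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weight = model.coef_[0][vec.vocabulary_["women"]]
print(f"learned weight for 'women': {weight:.2f}")  # comes out negative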

There were other complications: the algorithm often produced almost random results. As a result, the program was closed.


Psychological help telephone (helpline): 8 (800) 333-44-34. Calls are free throughout Russia.