Researchers at the University of Maryland have created a practical "invisibility cloak": a sweater whose printed pattern was used to test whether a wearer could evade AI-based person detectors.
The adversarial pattern was generated using a large set of training images. Each time the system detected a person, the result was fed back to measure how much the pattern lowered the subject's detection score. Ultimately, the pattern was refined to the point where it reliably prevents the detector from recognizing people.
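A minimal sketch of this kind of optimization loop is shown below, assuming a PyTorch setup. The paper attacked YOLOv2; torchvision's Faster R-CNN is used here as a stand-in detector, and the patch size, placement, and loss are illustrative simplifications, not the authors' exact method.

```python
# Sketch of the adversarial-patch optimization described above: paste a
# learnable pattern onto training photos and minimize the detector's
# "person" confidence. Faster R-CNN stands in for the paper's YOLOv2.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
for p in detector.parameters():
    p.requires_grad_(False)  # only the patch is optimized

# The learnable pattern, e.g. a 100x100 RGB tile initialized randomly.
patch = torch.rand(3, 100, 100, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

def paste_patch(image, tile, top=50, left=50):
    """Overlay the tile at a fixed spot (real attacks vary pose and scale)."""
    patched = image.clone()
    patched[:, top:top + tile.shape[1], left:left + tile.shape[2]] = tile
    return patched

# Stand-ins for a large set of photos containing people (e.g. COCO crops).
training_images = [torch.rand(3, 300, 300) for _ in range(4)]

for step in range(100):
    for image in training_images:
        patched = paste_patch(image, patch.clamp(0, 1))
        # torchvision detectors return dicts with 'boxes', 'labels',
        # 'scores'; label 1 is "person" in the COCO class list.
        output = detector([patched])[0]
        person_scores = output["scores"][output["labels"] == 1]
        if person_scores.numel() == 0:
            continue  # the detector already misses everyone in this image
        loss = person_scores.sum()  # drive person confidence toward zero
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```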
An adversarial attack is a technique for deceiving a neural network so that it produces an incorrect result.
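The textbook example of such an attack, not described in the article but useful for illustration, is the fast gradient sign method (FGSM): each pixel is nudged in the direction that most increases the classifier's loss, changing the prediction while leaving the image visually unchanged. The input and class index below are arbitrary stand-ins.

```python
# Fast gradient sign method (FGSM): a classic adversarial attack on a
# classifier. The random input and class index are stand-ins for a real
# photo and its true label.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="DEFAULT").eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)
label = torch.tensor([207])

# Gradient of the loss with respect to the input pixels.
loss = F.cross_entropy(model(image), label)
loss.backward()

# Step each pixel a small amount in the loss-increasing direction.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# The two predictions may now differ, although the images look alike.
print(model(image).argmax(dim=1), model(adversarial).argmax(dim=1))
```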
The pattern on the sweater looks a bit like a bad Impressionist painting of people buying pumpkins at the market.
In their experiments, the scientists easily fooled the YOLOv2 detector with a pattern trained on the COCO dataset using a carefully constructed objective.
COCO (Common Objects in Context) is a large-scale image dataset. It consists of 330,000 photographs and illustrations depicting more than 1.5 million objects.
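For readers who want to inspect the dataset themselves, a common way to load it in Python is via torchvision's CocoDetection wrapper. The paths below are placeholder assumptions; the data must be downloaded separately from cocodataset.org, and pycocotools must be installed.

```python
# Loading COCO with torchvision; paths are placeholders for a local copy.
from torchvision.datasets import CocoDetection

dataset = CocoDetection(
    root="coco/train2017",  # directory containing the images
    annFile="coco/annotations/instances_train2017.json",
)

image, targets = dataset[0]  # a PIL image and its list of object annotations
print(len(dataset), len(targets))
```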