April 25, 2019 12:16 pm PDT

A 40cm-square patch that renders you invisible to person-detecting AIs

Researchers from KU Leuven have published a paper showing how they can create a 40cm x 40cm "patch" that fools a convolutional neural network classifier, otherwise a good tool for identifying humans, into thinking that a person is not a person -- something that could be used to defeat AI-based security camera systems. They theorize that they could simply print the patch on a t-shirt and get the same result.

The researchers' key point is that the training data for these classifiers consists of humans who aren't trying to fool them -- that is, the training is non-adversarial -- while the applications for these systems are often adversarial (such as evaluating security camera footage whose subjects might be trying to defeat the algorithm). It's like designing a lock that reliably opens with the key, but never testing whether it can also be opened without one.

The attack reliably defeats YOLOv2, a popular machine-learning object detector, and the researchers hypothesize that it could be adapted to other detectors and classifiers as well.

I've been writing about these adversarial examples for years, and universally, they represent devastating attacks on otherwise extremely effective classifiers. It's an important lesson about the difference between adversarial and non-adversarial design: the efficacy of a non-adversarial system is no guarantee of adversarial efficacy.

In this paper, we presented a system to generate adversarial patches for person detectors that can be printed out and used in the real world. We did this by optimising an image to minimise different probabilities related to the appearance of a person in the output of the detector.
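To make that description concrete, here is a minimal sketch, in PyTorch, of what such an optimisation loop could look like. This is not the authors' code: the detector callable, the fixed patch location, and the hyperparameters are all assumptions for illustration. The idea is simply that the patch pixels are the only trainable parameters, and gradient descent pushes the detector's "person" scores toward zero.

```python
# A minimal sketch of adversarial-patch optimisation (not the authors' code).
# `detector` is a hypothetical callable that takes a batch of images and
# returns one "person" objectness score per image.
import torch


def apply_patch(images, patch, box):
    """Paste the patch onto each image at a fixed (hypothetical) location."""
    x, y, h, w = box
    patched = images.clone()
    patched[:, :, y:y + h, x:x + w] = patch  # broadcasts across the batch
    return patched


def optimise_patch(detector, images, steps=1000, lr=0.03, box=(200, 150, 40, 40)):
    # Start from random noise; only the patch is optimised, never the images.
    patch = torch.rand(3, box[2], box[3], requires_grad=True)
    optimiser = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = apply_patch(images, patch.clamp(0, 1), box)
        scores = detector(patched)      # per-image "person" scores
        loss = scores.mean()            # minimising this hides the person
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
    return patch.detach().clamp(0, 1)
```

The paper's full pipeline goes further: the patch is placed over detected people under random transformations across many training images, and extra loss terms keep the result smooth and printable, which is what lets the optimised patch survive being printed and photographed.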



Original Link: http://feeds.boingboing.net/~r/boingboing/iBag/~3/fyPVqG8l800/tee-of-invisibility.html
