December 26, 2017 12:00 pm

Researchers Fooled a Google AI Into Thinking a Rifle Was a Helicopter

An anonymous reader shares a Wired report: Algorithms, unlike humans, are susceptible to a specific type of problem called an "adversarial example." These are specially designed optical illusions that fool computers into doing things like mistaking a picture of a panda for one of a gibbon. They can be images, sounds, or paragraphs of text. Think of them as hallucinations for algorithms. While a panda-gibbon mix-up may seem low stakes, an adversarial example could thwart the AI system that controls a self-driving car, for instance, causing it to mistake a stop sign for a speed limit sign. Adversarial examples have already been used to beat other kinds of algorithms, such as spam filters.

Those adversarial examples are also much easier to create than was previously understood, according to research released Wednesday by MIT's Computer Science and Artificial Intelligence Laboratory. And not just under controlled conditions: the team reliably fooled Google's Cloud Vision API, a machine-learning system used in the real world today. For example, in November another team at MIT (with many of the same researchers) published a study demonstrating how Google's InceptionV3 image classifier could be duped into thinking that a 3-D-printed turtle was a rifle. In fact, the researchers could manipulate the AI into thinking the turtle was any object they wanted.
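The attacks described above work by nudging an input just enough to push it across a model's decision boundary while leaving it visually unchanged to a person. The MIT attack on Cloud Vision is black-box and query-based, so the snippet below is only a minimal white-box sketch of the general idea, using the well-known fast gradient sign method (FGSM) against a stock pretrained classifier; the model choice, the epsilon value, and the function name are illustrative assumptions, not the researchers' actual method.

import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative white-box sketch (FGSM); not the black-box attack from the MIT study.
model = models.resnet18(pretrained=True).eval()  # stand-in classifier (assumption)

def fgsm_adversarial(image, true_label, epsilon=0.01):
    # image: 1x3xHxW float tensor, already normalized for the model
    # true_label: 1-element LongTensor holding the correct class index
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss,
    # yielding an image that looks unchanged to a human but can flip the prediction.
    return (image + epsilon * image.grad.sign()).detach()

In practice the perturbed image is clamped back to the valid pixel range and re-checked against the classifier; the MIT work goes further, crafting perturbations robust enough to survive rotation, lighting changes, and even 3-D printing, as in the turtle-as-rifle example.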

Read more of this story at Slashdot.


Original Link: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/jngL36D11AE/researchers-fooled-a-google-ai-into-thinking-a-rifle-was-a-helicopter


Slashdot

Slashdot was originally created in September 1997 by Rob "CmdrTaco" Malda. Today it is owned by Geeknet, Inc.
