April 1, 2018 07:00 pm

To Protect AI From Attacks, Show It Fake Data

AI systems can sometimes be tricked into seeing something that isn't actually there; recall when Google's software "saw" a 3-D-printed turtle as a rifle. At an event earlier this week, Google Brain researcher Ian Goodfellow explained how AI systems can defend themselves against such attacks. From a report: Goodfellow is best known as the creator of generative adversarial networks (GANs), a type of artificial intelligence that makes use of two networks trained on the same data. One network, called the generator, creates synthetic data, usually images, while the other, called the discriminator, uses the same data set to determine whether a given input is real or synthetic. Goodfellow walked through nearly a dozen examples of how different researchers have used GANs in their work, but he focused on his current main research interest: defending machine-learning systems from being fooled in the first place. [...] GANs are very good at creating realistic adversarial examples, which in turn are an effective way to train AI systems to develop a robust defense. If systems are trained on adversarial examples that they have to spot, they get better at recognizing adversarial attacks, and the better those adversarial examples are, the stronger the defense becomes.
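The attack-and-defense loop the summary describes can be sketched in a few lines. The toy below is illustrative only, not Goodfellow's code: a tiny linear logistic classifier is attacked with the fast gradient sign method (FGSM), which perturbs an input in the direction that increases the loss, and is then hardened by adversarial training, i.e. gradient descent on adversarially perturbed copies of its data. All weights, inputs, and the epsilon values are made up for the demo.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    """P(label = 1 | x) under a linear logistic model with weights w."""
    return sigmoid(w @ x)

def fgsm(w, x, y, eps):
    """Fast gradient sign method: move x by eps in the direction
    that increases the logistic loss for true label y."""
    grad_x = (predict(w, x) - y) * w   # d(loss)/dx for logistic loss
    return x + eps * np.sign(grad_x)

def adversarial_train(w, xs, ys, eps=0.2, lr=0.5, steps=300):
    """Train on adversarially perturbed copies of the data, so the
    model learns to classify attacked inputs correctly."""
    w = w.copy()
    for _ in range(steps):
        for x_i, y_i in zip(xs, ys):
            x_i = fgsm(w, x_i, y_i, eps)              # attack current model
            w -= lr * (predict(w, x_i) - y_i) * x_i   # descend on that copy
    return w

w = np.array([1.0, -2.0, 0.5])   # hand-picked demo weights
x = np.array([0.4, -0.3, 0.2])   # clean input, true label 1

# A large perturbation flips the model's decision on x.
x_adv = fgsm(w, x, 1.0, eps=0.6)

# Harden the model on a two-point toy data set.
xs, ys = [x, -x], [1.0, 0.0]
w_robust = adversarial_train(w, xs, ys)
```

On this toy problem `predict(w, x)` is above 0.5 while `predict(w, x_adv)` falls below it, and the adversarially trained `w_robust` keeps classifying eps=0.2 perturbations of both training points correctly, mirroring the article's point that training on adversarial examples strengthens the defense.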

Read more of this story at Slashdot.


Original Link: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/KfAyejpwlOE/to-protect-ai-from-attacks-show-it-fake-data


Slashdot

Slashdot was originally created in September 1997 by Rob "CmdrTaco" Malda. Today it is owned by Geeknet, Inc.
