October 2, 2019 08:13 pm PDT

Researchers think that adversarial examples could help us maintain privacy from machine learning systems

Machine learning systems are pretty good at finding hidden correlations in data and using them to infer potentially compromising information about the people who generate that data. For example, researchers fed an ML system a collection of Google Play reviews written by users whose locations were explicitly listed on their Google+ profiles; based on this, the model was able to predict the locations of other Google Play reviewers with about 44% accuracy.
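
As a rough illustration of that kind of attribute-inference attack (not the researchers' actual pipeline or dataset), the sketch below trains a toy text classifier on reviews from users who disclosed a location and then guesses the location of a reviewer who didn't. The reviews, labels, and scikit-learn setup are all hypothetical.

```python
# Toy sketch of an attribute-inference attack: learn review text -> location
# from users who disclosed a location, then infer it for everyone else.
# Data and labels here are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Reviews by users who disclosed a location (hypothetical examples).
labeled_reviews = [
    "great app, works fine on the subway downtown",
    "crashes every time I open it at the beach",
    "love using this on my morning ferry commute",
    "useless without cell coverage out on the ranch",
]
locations = ["new_york", "los_angeles", "seattle", "montana"]

# Fit a simple text classifier that maps review text to disclosed location.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(labeled_reviews, locations)

# Infer a location for a reviewer who never disclosed one.
unlabeled_review = ["app keeps freezing on the subway"]
print(model.predict(unlabeled_review))  # e.g. ['new_york']
```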

This has grave implications for privacy, because such inferences can be used to de-anonymize or re-identify anonymized datasets (for example, similar statistical methods can make very accurate guesses about which pages are being visited over encrypted Tor connections).

But machine learning systems are also plagued by adversarial examples: tiny perturbations to their input that would not confuse a human but that sharply degrade a model's accuracy (from fake road signs flashed for an eyeblink to a change in a single pixel of an image).
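
To make "adversarial example" concrete, here is a minimal fast-gradient-sign-style sketch against a toy logistic-regression classifier. The weights, input, and step size are made up, and real attacks target far larger models, but the mechanics are the same: nudge the input a small amount in the direction that most increases the model's error.

```python
# Minimal adversarial-example sketch against a linear classifier.
# All numbers are toy values chosen for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "trained" logistic-regression classifier (hypothetical weights).
w = np.array([3.0, -4.0, 2.0])
b = 0.0

x = np.array([0.2, -0.1, 0.15])  # original input, true label y = 1
y = 1.0

# Gradient of the log-loss with respect to the input for logistic regression.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# Small per-feature step in the sign of the gradient, chosen to push the
# prediction toward the wrong class; in this toy case the predicted class flips.
epsilon = 0.15
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction :", sigmoid(w @ x + b))      # ~0.79 (class 1)
print("adversarial pred :", sigmoid(w @ x_adv + b))  # ~0.49 (class 0)
```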

Wired's Andy Greenberg reports on two scholarly efforts to systematize a method for exploiting adversarial examples to make re-identification and de-anonymization attacks less effective.

The first is AttriGuard, which introduces small amounts of noise into user activity so that "an attacker's accuracy is reduced back to that baseline" (roughly, what the attacker could guess without the data). The second is Mockingbird, designed to make it much harder to guess which encrypted pages and sites are traversing the Tor network; Mockingbird requires a much larger injection of noise, incurring a 56% bandwidth penalty in Tor's already bandwidth-starved environment.
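
The sketch below captures the flavor of that first defense rather than AttriGuard's actual algorithm: given an attacker's classifier, it greedily adds the smallest noise that makes the classifier output a decoy location instead of the real one. The classifier, feature vectors, and decoy choice are all hypothetical.

```python
# Flavor-of-the-defense sketch (not AttriGuard itself): perturb a user's
# public activity just enough that an attacker's classifier predicts a decoy
# attribute value. All data and parameters are invented for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: per-user app-usage feature vectors and the
# locations an attacker has scraped for those users.
X_train = rng.normal(size=(200, 5)) + np.repeat(np.arange(4), 50)[:, None]
y_train = np.repeat(["nyc", "la", "seattle", "austin"], 50)

attacker = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# A user whose true profile would be classified as "nyc".
x_user = np.array([0.1, -0.2, 0.3, 0.0, 0.1])
decoy = "seattle"
decoy_center = X_train[y_train == decoy].mean(axis=0)

# Step toward the decoy class in small increments, stopping as soon as the
# attacker's classifier flips, so the added noise stays as small as possible.
x_noisy = x_user.copy()
for _ in range(100):
    if attacker.predict([x_noisy])[0] == decoy:
        break
    x_noisy += 0.05 * (decoy_center - x_user)

print("attacker sees:", attacker.predict([x_noisy])[0])
print("noise added  :", np.round(x_noisy - x_user, 2))
```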


Original Link: http://feeds.boingboing.net/~r/boingboing/iBag/~3/E3m1g6dronI/mockingbird-and-attriguard.html
