March 8, 2019 05:13 pm PST

Towards a general theory of "adversarial examples," the bizarre, hallucinatory motes in machine learning's all-seeing eye

For several years, I've been covering the bizarre phenomenon of "adversarial examples" (AKA "adversarial perturbations"), these being often-tiny changes to data that can cause machine-learning classifiers to totally misfire: imperceptible squeaks that make speech-to-text systems hallucinate phantom voices, or tiny shifts to a 3D image of a helicopter that make image-classifiers hallucinate a rifle.
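To make that concrete, here's a minimal sketch of the fast gradient sign method (FGSM), one of the standard recipes for cooking up these perturbations: nudge every input value a tiny amount in whichever direction makes the classifier's loss go up. The model, labels and epsilon value below are hypothetical stand-ins for illustration, not anything from the research discussed here.

```python
# Minimal FGSM sketch (assumes a PyTorch classifier `model` and labeled inputs).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Return x plus a tiny perturbation that pushes the classifier toward error."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # how wrong the model is on the true labels
    loss.backward()                       # gradient of the loss w.r.t. the input pixels
    # Step each pixel slightly in the direction that increases the loss,
    # then clip back to the valid [0, 1] image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even with epsilon small enough that the change is invisible to a human, perturbations like this routinely flip the model's prediction.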

A friend of mine, a very senior cryptographer of long-standing esteem in the field, recently changed roles to manage information security for one of the leading machine-learning companies. He told me he suspects that all machine-learning models have lurking adversarial examples and that it may be impossible to eliminate them, meaning that any use of machine learning where the system's owners are trying to do something that someone else wants to prevent might never be secure enough for use in the field -- that is, we may never be able to make a self-driving car that can't be fooled into mistaking a STOP sign for a go-faster sign.

What's more, there are tons of use-cases that seem non-adversarial at first blush but have potential adversarial implications further down the line: think of how the machine-learning classifier that reliably diagnoses skin cancer might be fooled by an unethical doctor who wants to generate more billings, or nerfed down by an insurer that wants to avoid paying claims.

My MIT Media Lab colleague Joi Ito (previously) has teamed up with Harvard's Jonathan Zittrain (previously) to teach a course on Applied Ethical and Governance Challenges in AI, and in reading the syllabus, I came across Motivating the Rules of the Game for Adversarial Example Research, a 2018 paper by a team of Princeton and Google researchers that attempts to formulate a kind of unified framework for talking about and evaluating adversarial examples.


Original Link: http://feeds.boingboing.net/~r/boingboing/iBag/~3/m6L6McW84XA/hot-dog-or-not.html
