November 10, 2019 05:34 am

The Dangers of 'Black Box' AI

PC Magazine recently interviewed Janelle Shane, the optics research scientist and AI experimenter who authored the new book "You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place." At one point Shane explains why any "black box" AI can be a problem:

"I think ethics in AI does have to include some recognition that AIs generally don't tell us when they've arrived at their answers via problematic methods. Usually, all we see is the final decision, and some people have been tempted to take the decision as unbiased just because a machine was involved. I think ethical use of AI is going to have to involve examining AI's decisions. If we can't look inside the black box, at least we can run statistics on the AI's decisions and look for systematic problems or weird glitches... There are some researchers already running statistics on some high-profile algorithms, but the people who build these algorithms have the responsibility to do some due diligence on their own work. This is in addition to being more ethical about whether a particular algorithm should be built at all... [T]here are applications where we want weird, non-human behavior. And then there are applications where we would really rather avoid weirdness. Unfortunately, when you use machine-learning algorithms, where you don't tell them exactly how to solve a particular problem, there can be weird quirks buried in the strategies they choose."

Describing a kind of worst-case scenario, Shane contributed to the New York Times "Op-Eds From the Future" series, channeling a behavioral ecologist in the year 2031 defending "the feral scooters of Central Park" that humanity had been co-existing with for a decade. But in the interview, she remains skeptical that we'll ever achieve real, fully autonomous self-driving vehicles:

"It's much easier to make an AI that follows roads and obeys traffic rules than it is to make an AI that avoids weird glitches. It's exactly that problem -- that there's so much variety in the real world, and so many strange things that happen, that AIs can't have seen it all during training. Humans are relatively good at using their knowledge of the world to adapt to new circumstances, but AIs are much more limited, and tend to be terrible at it. On the other hand, AIs are much better at driving consistently than humans are. Will there be some point at which AI consistency outweighs the weird glitches, and our insurance companies start incentivizing us to use self-driving cars? Or will the thought of the glitches be too scary? I'm not sure."

Shane trained a neural network on 162,000 Slashdot headlines back in 2017, coming up with alternate-reality headlines like "Microsoft To Develop Programming Law" and "More Pong Users for Kernel Project." Reached for comment this week, Shane described what may be the greatest danger from AI today: "For the foreseeable future, we don't have to worry about AI being smart enough to have its own thoughts and goals. Instead, the danger is that we think that AI is smarter than it is, and put too much trust in its decisions."
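Shane's suggestion to "run statistics on the AI's decisions" when the model itself can't be inspected is easy to picture in code. The sketch below is not from the interview or the book; it is a minimal, hypothetical audit in Python. It assumes a log of past decisions in a file called model_decisions.csv, with an "approved" column holding the model's output and a "group" column for whatever category is being checked, and it simply compares approval rates across groups to surface the kind of systematic problem she describes.

# audit_decisions.py -- a minimal sketch of "running statistics on an AI's decisions".
# The file name and column names are illustrative assumptions, not from the article.
import pandas as pd

def approval_rates(df, decision_col="approved", group_col="group"):
    """Approval rate of the black-box model within each group."""
    return df.groupby(group_col)[decision_col].mean()

def flag_disparities(rates, max_gap=0.1):
    """Groups whose approval rate trails the best-treated group by more than max_gap."""
    best = rates.max()
    return [group for group, rate in rates.items() if best - rate > max_gap]

if __name__ == "__main__":
    decisions = pd.read_csv("model_decisions.csv")  # hypothetical log of past decisions
    rates = approval_rates(decisions)
    print(rates)
    print("Groups with a large approval gap:", flag_disparities(rates))

Real audits are more involved (sample sizes, confidence intervals, and the choice of fairness metric all matter), but even a comparison this crude is the kind of due diligence Shane argues model builders owe their own systems.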

Original Link: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/mvThOidCXhg/the-dangers-of-black-box-ai

Slashdot

Slashdot was originally created in September 1997 by Rob "CmdrTaco" Malda. Today it is owned by Geeknet, Inc.
