April 11, 2017 04:00 pm

A Big Problem With AI: Even Its Creators Can't Explain How It Works

Last year, an experimental vehicle developed by researchers at the chip maker Nvidia was unlike anything demonstrated by Google, Tesla, or General Motors. The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it. Getting a car to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the car makes its decisions, argues an article in MIT Technology Review. From the article: The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won't happen -- or shouldn't happen -- unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur -- and it's inevitable they will. That's one reason Nvidia's car is still experimental.
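The approach described above is often called end-to-end behavioral cloning: a deep network maps raw camera frames directly to a steering command, trained only on examples of a human driving, with no hand-written rules an engineer could later point to. Below is a minimal sketch of that idea in PyTorch. The layer sizes, input resolution, loss, and training loop are illustrative assumptions for this sketch, not Nvidia's actual system.

```python
# Minimal behavioral-cloning sketch: a CNN regresses a steering angle from a
# camera frame, imitating steering angles recorded while a human drove.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor over a 3x66x200 camera frame
        # (the input size is an assumption for this sketch).
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Flatten(),
        )
        # Regression head: a single continuous output, the steering angle.
        self.head = nn.Sequential(
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 10), nn.ReLU(),
            nn.Linear(10, 1),
        )

    def forward(self, frames):
        return self.head(self.features(frames))

def train_step(model, optimizer, frames, human_steering):
    """One gradient step: nudge the network toward the human's steering."""
    optimizer.zero_grad()
    predicted = model(frames).squeeze(1)
    loss = nn.functional.mse_loss(predicted, human_steering)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = SteeringNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Dummy batch standing in for logged camera frames and steering angles.
    frames = torch.randn(8, 3, 66, 200)
    human_steering = torch.randn(8)
    print(train_step(model, optimizer, frames, human_steering))
```

Nothing in the trained weights explains *why* a given frame produces a given steering angle, which is exactly the interpretability gap the article is describing.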

Read more of this story at Slashdot.


Original Link: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/w6sSwgAaQn4/a-big-problem-with-ai-even-its-creators-cant-explain-how-it-works


Slashdot

Slashdot was originally created in September 1997 by Rob "CmdrTaco" Malda. Today it is owned by Geeknet, Inc.
