September 24, 2018 11:20 pm
Original Link: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/iJp4z88K9VI/machine-learning-confronts-the-elephant-in-the-room
Machine Learning Confronts the Elephant in the Room
A visual prank exposes an Achilles' heel of computer vision systems: Unlike humans, they can't do a double take. From a report: In a new study [PDF], computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease. "It's a clever and important study that reminds us that 'deep learning' isn't really that deep," said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work. The result takes place in the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: As computers take over critical tasks like automated surveillance and autonomous driving, we'll want their visual processing to be at least as good as the human eyes they're replacing. It won't be easy. The new work accentuates the sophistication of human vision -- and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene -- an image of an elephant. The elephant's mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen. "There are all sorts of weird things happening that show how brittle current object detection systems are," said Amir Rosenfeld, a researcher at York University in Toronto and co-author of the study along with his York colleague John Tsotsos and Richard Zemel of the University of Toronto. Researchers are still trying to understand exactly why computer vision systems get tripped up so easily, but they have a good guess. 
It has to do with an ability humans have that AI lacks: the ability to understand when a scene is confusing and thus go back for a second glance.
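The probing protocol the article describes -- run a detector on a scene, transplant an out-of-context object into it, re-run the detector, and check which of the original detections vanish or switch labels -- can be sketched in outline. Everything below is an illustrative stand-in: the helper names (`compare_detections`, `iou`) and the toy detection lists are assumptions, not code or data from the actual study.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def compare_detections(before, after, iou_thresh=0.5):
    """Match each original detection to the best-overlapping box in the
    perturbed scene and classify it as kept, relabeled, or missing."""
    report = {}
    for label, box in before:
        best = max(after, key=lambda d: iou(box, d[1]), default=None)
        if best is None or iou(box, best[1]) < iou_thresh:
            report[label] = "missing"
        elif best[0] != label:
            report[label] = "relabeled as " + best[0]
        else:
            report[label] = "kept"
    return report

# Toy detections mimicking the reported failure: after the elephant is
# pasted in, the chair is called a couch, the books disappear, and the
# elephant itself is mislabeled as a chair.
before = [("chair", (10, 10, 60, 80)), ("person", (100, 20, 150, 120)),
          ("books", (200, 30, 240, 70))]
after = [("couch", (12, 11, 61, 82)), ("person", (100, 20, 150, 120)),
         ("chair", (300, 40, 380, 130))]  # the pasted elephant

print(compare_detections(before, after))
# {'chair': 'relabeled as couch', 'person': 'kept', 'books': 'missing'}
```

The comparison only measures the side effect the study highlights: how detections of objects that were never touched change once an anomalous object enters the scene.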
Slashdot
Slashdot was originally created in September of 1997 by Rob "CmdrTaco" Malda. Today it is owned by Geeknet, Inc.