January 13, 2022 04:01 pm

Deep Learning Can't Be Trusted, Brain Modeling Pioneer Says

During the past 20 years, deep learning has come to dominate artificial intelligence research and applications through a string of useful commercial successes. But underneath the dazzle are some deep-rooted problems that threaten the technology's ascension.

IEEE Spectrum reports: The inability of a typical deep learning program to perform well on more than one task, for example, severely limits application of the technology to specific tasks in rigidly controlled environments. More seriously, it has been claimed that deep learning is untrustworthy because it is not explainable -- and unsuitable for some applications because it can experience catastrophic forgetting. Said more plainly, if the algorithm does work, it may be impossible to fully understand why. And while the tool is slowly learning a new database, an arbitrary part of its learned memories can suddenly collapse. It might therefore be risky to use deep learning for any life-or-death application, such as a medical one.

Now, in a new book, IEEE Fellow Stephen Grossberg argues that an entirely different approach is needed. Conscious Mind, Resonant Brain: How Each Brain Makes a Mind describes an alternative model for both biological and artificial intelligence based on cognitive and neural research Grossberg has been conducting for decades. He calls his model Adaptive Resonance Theory (ART).

Grossberg -- an endowed professor of cognitive and neural systems, and of mathematics and statistics, psychological and brain sciences, and biomedical engineering at Boston University -- based ART on his theories about how the brain processes information. "Our brains learn to recognize and predict objects and events in a changing world that is filled with unexpected events," he says. Building on that dynamic, ART uses supervised and unsupervised learning methods to solve problems such as pattern recognition and prediction. Algorithms using the theory have been included in large-scale applications such as classifying sonar and radar signals, detecting sleep apnea, recommending movies, and computer-vision-based driver-assistance software.

[...] One of the problems faced by classical AI, Grossberg says, is that it often built its models on how the brain might work, using concepts and operations that could be derived from introspection and common sense. "Such an approach assumes that you can introspect internal states of the brain with concepts and words people use to describe objects and actions in their daily lives," he writes. "It is an appealing approach, but its results were all too often insufficient to build a model of how the biological brain really works." The problem with today's AI, he says, is that it tries to imitate the results of brain processing instead of probing the mechanisms that give rise to those results. People's behaviors adapt to new situations and sensations "on the fly," Grossberg says, thanks to specialized circuits in the brain. People can learn from new situations, he adds, and unexpected events are integrated into their collected knowledge and expectations about the world.
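
For readers curious how ART's "resonance" idea differs in spirit from a standard deep network, the Python sketch below is a minimal, illustrative take on an ART-1-style scheme for binary patterns: each input is compared against stored category prototypes, and only if the best match passes a vigilance test does that prototype get refined; otherwise a fresh category is recruited, so earlier memories are not arbitrarily overwritten. This is not Grossberg's published algorithm as used in the applications above; the function name art1_cluster, the vigilance value, and the 0.5 constant in the choice function are illustrative assumptions made for this sketch.

import numpy as np

def art1_cluster(patterns, vigilance=0.6, max_categories=10):
    # Minimal ART-1-style clustering of binary patterns (illustrative sketch).
    # Prototypes start as all-ones "uncommitted" nodes; a prototype is only
    # ever intersected with inputs it resonates with, so it can be refined
    # but is never wholesale overwritten by unrelated inputs.
    n_features = patterns.shape[1]
    prototypes = np.ones((max_categories, n_features), dtype=int)
    labels = []
    for x in patterns:
        overlap = (prototypes & x).sum(axis=1)
        # Choice function: prefer prototypes that match much of the input
        # without being overly general (0.5 is a small tie-breaking constant).
        scores = overlap / (0.5 + prototypes.sum(axis=1))
        for j in np.argsort(-scores, kind="stable"):
            # Vigilance test ("resonance"): the winning prototype must account
            # for at least `vigilance` of the active input features.
            if overlap[j] / x.sum() >= vigilance:
                prototypes[j] &= x      # fast learning: intersect prototype with input
                labels.append(j)
                break
        else:
            labels.append(-1)           # no category resonated and none are left
    return prototypes, np.array(labels)

# Tiny demo: two groups of binary patterns settle into two categories.
data = np.array([[1, 1, 0, 0, 0],
                 [1, 1, 1, 0, 0],
                 [0, 0, 0, 1, 1],
                 [0, 0, 1, 1, 1]])
_, assignments = art1_cluster(data)
print(assignments)   # [0 0 1 1]

Raising the vigilance parameter forces finer-grained categories, while lowering it lets the system generalize more broadly; that tunable trade-off between plasticity and stability is the kind of mechanism ART uses to sidestep catastrophic forgetting.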

Read more of this story at Slashdot.


Original Link: https://slashdot.org/story/22/01/13/166211/deep-learning-cant-be-trusted-brain-modeling-pioneer-says?utm_source=rss1.0mainlinkanon&utm_medium=feed


Slashdot

Slashdot was originally created in September 1997 by Rob "CmdrTaco" Malda. Today it is owned by Geeknet, Inc.
