August 14, 2017 02:00 am

Why AI Won't Take Over The Earth

Law professor Ryan Calo -- sometimes called a robot-law scholar -- hosted the first White House workshop on AI policy and has organized AI workshops for the National Science Foundation (as well as the Department of Homeland Security and the National Academy of Sciences). Now an anonymous reader shares a new 30-page essay in which Calo "explains what policymakers should be worried about with respect to artificial intelligence. Includes a takedown of doomsayers like Musk and Gates."

Professor Calo summarizes his sense of the current consensus on many issues, including the danger of an existential threat from superintelligent AI:

Claims of a pending AI apocalypse come almost exclusively from the ranks of individuals such as Musk, Hawking, and Bostrom who possess no formal training in the field... A number of prominent voices in artificial intelligence have convincingly challenged Superintelligence's thesis along several lines. First, they argue that there is simply no path toward machine intelligence that rivals our own across all contexts or domains... Even if we were able eventually to create a superintelligence, there is no reason to believe it would be bent on world domination, unless this were for some reason programmed into the system. As Yann LeCun, deep learning pioneer and head of AI at Facebook, colorfully puts it, computers don't have testosterone... At best, investment in the study of AI's existential threat diverts millions of dollars (and billions of neurons) away from research on serious questions... "The problem is not that artificial intelligence will get too smart and take over the world," computer scientist Pedro Domingos writes, "the problem is that it's too stupid and already has."

A footnote also finds a paradox in the arguments of Nick Bostrom, who has warned of the dangers of superintelligent AI -- but also of the possibility that we're living in a computer simulation.
"If AI kills everyone in the future, then we cannot be living in a computer simulation created by our decedents. And if we are living in a computer simulation created by our decedents, then AI didn't kill everyone. I think it a fair deduction that Professor Bostrom is wrong about something."



Original Link: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/ErRLYcXEiWM/why-ai-wont-take-over-the-earth


Slashdot

Slashdot was originally created in September 1997 by Rob "CmdrTaco" Malda. Today it is owned by Geeknet, Inc.
