September 12, 2018 01:25 am

Safe AI Requires Cultural Intelligence

An anonymous reader shares an excerpt from a report written by Gillian Hadfield via TechCrunch. Hadfield is a professor of law and strategic management at the University of Toronto; a faculty affiliate at the Vector Institute for AI; and a senior policy advisor at OpenAI. From the report: Building machines that can perform any cognitive task means figuring out how to build AI that can learn not only about things like the biology of tomatoes but also about our highly variable and changing systems of norms about things like what we do with tomatoes. [...] For AI to be truly powerful, machines will need to comprehend that norms can vary tremendously from group to group, which may make them seem unnecessary, yet following them in a given community can be critical. [...] Norms concern not only things as apparently minor as what foods to combine but also things that communities consider tremendously consequential: who can marry whom, how children are to be treated, who is entitled to hold power, how businesses make and price their goods and services, and when and how criticism can be shared publicly. Successful and safe AI that achieves our goals within the limits of socially accepted norms requires an understanding not only of how our physical systems behave but also of how human normative systems behave. Norms are not just fixed features of the environment, like the biology of a plant. They are dynamic and responsive structures that we make and remake on a daily basis, as we decide whether or when to let someone know that "this" is the way "we" do things around here. These normative systems are what we rely on to solve the challenge of ensuring that people behave the way we want them to in our communities, workplaces and social environments. Only with confidence about how everyone around us is likely to behave are we all willing to trust and live and invest with one another. Ensuring that powerful AIs behave the way we want them to will not be so terribly different. Just as we need to raise our children to be competent participants in our systems of norms, we will need to train our machines to be similarly competent. It is not enough to be extremely knowledgeable about the facts of the universe; extreme competence also requires wisdom enough to know that there may be a rule here, in this group but not in that group, and that ignoring that rule may not just annoy the group; it may lead them to fear or reject the machine in their midst.

Read more of this story at Slashdot.


Original Link: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/JR2FOeoWIs4/safe-ai-requires-cultural-intelligence


Slashdot

Slashdot was originally created in September 1997 by Rob "CmdrTaco" Malda. Today it is owned by Geeknet, Inc.
