December 14, 2019 04:02 pm PST

AI Now's annual report: stop doing "emotion detection"; stop "socially sensitive" facial recognition; make AI research diverse and representative -- and more

Every year, the AI Now Institute publishes a deep, thoughtful, important overview of the state of AI research and the ethical gaps in AI's use, along with a list of a dozen urgent recommendations for industry, the research community, regulators, and governments.

This year's report is especially important, as algorithmic discrimination, junk science, bad labor practices, and inappropriate deployments have gained salience and urgency.

The Institute's top recommendations are:

* Stop deploying "emotion detection" systems ("governments should specifically prohibit use of affect recognition in high-stakes decision-making processes"). These systems are junk science ("built on markedly shaky foundations"), yet they're being used for everything from medical care to insurance to student performance evaluation.

* Stop using facial recognition in "sensitive social and political contexts" ("including surveillance, policing, education, and employment where facial recognition poses risks and consequences that cannot be remedied retroactively").

* Fix the industry's diversity problem "to address systemic racism, misogyny, and lack of diversity."

* Expand bias research beyond technical fixes: "center 'non-technical' disciplines whose work traditionally examines such issues, including science and technology studies, critical race studies, disability studies, and other disciplines keenly attuned to social context" (see: "second-wave algorithmic accountability").

* Mandate disclosure of the AI industry's climate impact: "Disclosure should include notifications that allow developers and researchers to understand the specific climate cost of their use of AI infrastructure."

* Give workers the right to "contest exploitative and invasive AI" with the help of trade unions: "Workers deserve the right to contest such determinations [by "AI-enabled labor-management systems"], and to collectively agree on workplace standards that are safe, fair, and predictable."

* Give tech workers the right to know what they're working on and to "contest unethical or harmful uses of their work": "Companies should ensure that workers are able to track where their work is being applied, by whom, and to what end."

* Expand biometric privacy rules for governments and private actors: A call to universalize Illinois's world-beating Biometric Information Privacy Act (BIPA).


Original Link: http://feeds.boingboing.net/~r/boingboing/iBag/~3/DUqAeR0egA4/oppenheimers-r-us.html
