August 16, 2019 07:25 pm

The Algorithms That Detect Hate Speech Online Are Biased Against Black People

An anonymous reader shares a report: Platforms like Facebook, YouTube, and Twitter are banking on developing artificial intelligence technology to help stop the spread of hateful speech on their networks. The idea is that complex algorithms that use natural language processing will flag racist or violent speech faster and better than human beings possibly can. Doing this effectively is more urgent than ever in light of recent mass shootings and violence linked to hate speech online. But two new studies show that AI trained to identify hate speech may actually end up amplifying racial bias.

In one study [PDF], researchers found that leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English (which is commonly spoken by black people in the US). Another study [PDF] found similar widespread evidence of racial bias against black speech in five widely used academic data sets for studying hate speech, which together totaled around 155,800 Twitter posts.

This is in large part because what is considered offensive depends on social context. Terms that are slurs when used in some settings -- like the "n-word" or "queer" -- may not be in others. But algorithms -- and the content moderators who grade the test data that teaches these algorithms how to do their job -- don't usually know the context of the comments they're reviewing.

Both papers, presented at a recent prestigious annual conference for computational linguistics, show how natural language processing AI -- which is often proposed as a tool to objectively identify offensive language -- can amplify the same biases that human beings hold. They also show how the test data that feeds these algorithms has baked-in bias from the start.
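The "one-and-a-half times more likely" figure above is a disparity ratio: the rate at which a classifier flags one group's posts divided by the rate for another group. A minimal sketch of that calculation, using fabricated flag decisions (the names, sample sizes, and labels here are illustrative assumptions, not data from either study):

```python
# Toy illustration of the disparity ratio described above. All flag
# decisions below are fabricated for illustration; the studies measured
# real classifier outputs on annotated Twitter corpora.

def flag_rate(flags):
    """Fraction of items a classifier marked as offensive/hateful (1 = flagged)."""
    return sum(flags) / len(flags)

# Hypothetical classifier decisions on two equal-sized samples of tweets
group_a_flags = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # e.g., tweets in African American English
group_b_flags = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # e.g., other tweets

rate_a = flag_rate(group_a_flags)  # 0.3
rate_b = flag_rate(group_b_flags)  # 0.2
disparity = rate_a / rate_b        # ~1.5: group A flagged 1.5x as often

print(f"flag rate A: {rate_a:.2f}, flag rate B: {rate_b:.2f}, ratio: {disparity:.2f}")
```

A ratio of 1.0 would mean both groups are flagged at the same rate; the studies report ratios of roughly 1.5 and 2.2 for the groupings they examined.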

Read more of this story at Slashdot.


Original Link: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/dJnpkrGMM7g/the-algorithms-that-detect-hate-speech-online-are-biased-against-black-people


Slashdot

Slashdot was originally created in September of 1997 by Rob "CmdrTaco" Malda. Today it is owned by Geeknet, Inc.
