Researchers in one study discovered that leading AI models for processing hate speech were 1.5 times more likely to flag tweets as offensive or hateful when they were written by African-Americans, and 2.2 times more likely to flag those written in the dialect known as African-American English, commonly used by black people in the US.

A second study found racial bias against African-Americans’ speech in five academic data sets for studying hate speech, which together included some 155,000 Twitter posts.

Some words that are considered slurs in most settings, such as the N-word, may not be in others, depending on who says them and in what context. As of now, most machine-learning systems can’t parse that kind of nuance.

AI trained to detect hate speech online found to be biased against black people
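
For the non-programmers, here is a minimal sketch of why these filters trip over context. The word list and function below are hypothetical illustrations, not the actual models from the studies; they just show how a keyword-style flagger tags a term no matter who uses it or how.

```python
# Minimal sketch of a naive keyword-based "hate speech" flagger.
# OFFENSIVE_TERMS and flag_tweet are hypothetical stand-ins, not
# taken from either study discussed above.

OFFENSIVE_TERMS = {"slur_a", "slur_b"}  # placeholders for real slurs

def flag_tweet(text: str) -> bool:
    """Flag a tweet if it contains any listed term, ignoring context."""
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return bool(words & OFFENSIVE_TERMS)

# The same word is flagged whether it is hurled as an insult or used
# as in-group, reclaimed speech; the classifier cannot tell the difference.
print(flag_tweet("You are a slur_a."))          # True
print(flag_tweet("what's up slur_a, my dude"))  # True (in-group usage)
print(flag_tweet("Have a nice day."))           # False
```

A classifier like this has no notion of speaker or intent, only of word presence, which is roughly the failure mode the studies describe.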

While enjoying my cup of Schadenfreude, I cannot help but think how beautiful the chickens look when they come home to roost. First they establish that one race in particular can use a word freely, but if the rest of the world uses it, it is the worst of insults and anyone who uses it is a racist maggot. And when the ultimate in technology is finally used to exert a little mind control and silence the wrongthinkers, the machine actually attacks the protected group.

Irony, Thou art Beautiful to Me.

Hat Tip Robert E.


By Miguel.GFZ

Semi-retired like Vito Corleone before the heart attack. Consigliere to J.Kb and AWA. I lived in a Gun Control Paradise: It sucked and got people killed. I do believe that Freedom scares the political elites.

5 thoughts on “Artificial Intelligence is Racist!”
  1. One must ask … Is the AI really biased, or are proportionately more POCs actually using hate speech?

    (And how does that correlate to major political party membership statistics by race?)

    1. It doesn’t, so it just used the word choices to determine “hate speech”. Since the “n-word” is slung around like it’s punctuation in some groups, the code correctly picks that out as “hate speech”.

      And, of course, this isn’t “artificial intelligence”. It’s just data processing. But saying “AI is racist” sells more papers than “blacks use racial slurs”.

  2. The problem is the programmers haven’t been able to build “POC Can NEVER Be Racist!” and reliable race detection into the programming yet. Then they will need to get to the edge cases and incorporate a proper victim hierarchy too.

    I can just imagine the consternation and confusion when the AI update on the victim hierarchy is rolled out. What happens when the relative victimness of Disabled Lesbian Women versus Sub-Saharan Muslim Women conflicts with the AI programming? Pure chaos as their robot allies fight over, “Who is the Bigger Victim?”

    1. Except it will never happen. The language police CANNOT let the rules be stable enough to be written into code — they’d lose their power to arbitrarily change the rules.

