AI isn’t really intelligent; it is a system of trained responses. “Trained” is the key word here.
The gist of AI and deep learning is that you have a set of inputs and a set of outputs. The outputs are generally restricted, because too many outputs makes things complicated. You take a sample set of inputs, feed it to the AI, and it guesses at what to do. If the guess is good, then that decision, along with its inputs, is remembered. Random numbers are thrown in as well, and bad decisions are sometimes randomly kept. Over time, the AI makes better and better decisions.
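That trial-and-error loop can be sketched in a few lines. This is a minimal toy, not any real system: the inputs, outputs, and “correct” answers are all made up, and the learner is a simple value table rather than a deep network.

```python
import random

# Toy task: the "AI" must map each input to one of a few allowed outputs.
INPUTS = ["a", "b", "c"]
OUTPUTS = [0, 1, 2]
CORRECT = {"a": 0, "b": 2, "c": 1}  # hypothetical "good" decisions

def train(steps=5000, explore=0.1, seed=42):
    rng = random.Random(seed)
    # Remembered value of each (input, output) decision, nudged by reward.
    value = {(i, o): 0.0 for i in INPUTS for o in OUTPUTS}
    for _ in range(steps):
        inp = rng.choice(INPUTS)
        if rng.random() < explore:        # random numbers thrown in:
            out = rng.choice(OUTPUTS)     # sometimes try (and keep) bad moves
        else:                             # otherwise use the best guess so far
            out = max(OUTPUTS, key=lambda o: value[(inp, o)])
        reward = 1.0 if out == CORRECT[inp] else 0.0
        # Good guesses are "remembered" by moving the value toward the reward.
        value[(inp, out)] += 0.1 * (reward - value[(inp, out)])
    # The trained behavior: the best-valued output for each input.
    return {i: max(OUTPUTS, key=lambda o: value[(i, o)]) for i in INPUTS}

policy = train()
```

The occasional random choice is what keeps the learner from getting stuck on an early bad guess; over enough steps the remembered values pull each input toward its rewarded output.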
The problem is that AIs are goal-driven. When you set the goals, the AI will make whatever decisions cause it to reach those goals.
As an example, if your goal is to have an AI evaluate resumes to determine who is the best fit for the job you are offering, you need to provide it with a training set and a set of rewards.
In the video included, for example, the rewards are based on distance traveled. The programmer changes the goals over time to get different results, but the basic reward is distance traveled. Other rewards could be considered. One such reward could be based on “smoothness”: the less the inputs change, the better the reward. This is sort of cheating, since we can guess that smooth driving will give better results overall.
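A reward like that might be shaped as distance minus a penalty for jerky control. This is only a sketch of the idea; the weight and the steering values are assumptions, not anything from the video.

```python
# Hypothetical reward shaping for the driving example: the base reward is
# distance traveled this step, and a "smoothness" term penalizes big changes
# in the control input between steps. The weight 0.5 is a made-up assumption.
def reward(distance_delta, steering, prev_steering, smooth_weight=0.5):
    smoothness_penalty = abs(steering - prev_steering)
    return distance_delta - smooth_weight * smoothness_penalty

# A jerky step earns less than a smooth one covering the same distance.
jerky = reward(1.0, steering=0.8, prev_steering=-0.8)   # 1.0 - 0.5 * 1.6
smooth = reward(1.0, steering=0.1, prev_steering=0.0)   # 1.0 - 0.5 * 0.1
```

The point of the extra term is exactly the “cheating” mentioned above: the programmer bakes a human guess about good driving directly into the reward.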
I don’t do a lot of work with AIs; I’ve got experts that I call upon for that.
In the case of judging resumes, the AI is given rewards based on picking candidates who were successful by some metric. Let’s assume that the metric is “number of successfully resolved calls” or “number of positive feedback points on calls”. There are hundreds of different metrics that can be used to define “successful”, and those are used to create the feedback on what is a “good” choice.
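Turning a success metric into training feedback might look something like this. The field names and the threshold are hypothetical, just to show how past hires become labeled examples:

```python
# Hypothetical sketch: a past hire's call metrics become a training label.
# The field names and the 0.8 threshold are assumptions, not a real system.
def label_hire(metrics, threshold=0.8):
    """Return 1 ("good" hire) if enough positive feedback per resolved call."""
    if metrics["resolved_calls"] == 0:
        return 0
    ratio = metrics["positive_feedback"] / metrics["resolved_calls"]
    return 1 if ratio >= threshold else 0

# Each past hire's resume, paired with a label like this, forms the
# training set the resume-screening AI learns from.
training_labels = [
    label_hire({"resolved_calls": 120, "positive_feedback": 110}),  # "good"
    label_hire({"resolved_calls": 80, "positive_feedback": 30}),    # not
]
```

Whatever metric goes into the label is, in effect, the goal the AI will chase when ranking new resumes.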
The AI is then given the resumes. Those resumes might be pre-processed in some way, but just consider each to be the full resume.
They did this. And after the AI was trained, they started feeding it new resumes. The AI consistently picked people who were not BIPOC. Yep, the AI became “racist”.
When this was discovered, the AI was discarded. Having a racist AI was taken as a sign that the programmers/developers who created it were racist themselves — that racism inherent in the system caused the AI to be racist.
The reality is that the AI isn’t racist. It was just picking the resumes that best matched the resumes of “good” hires. This implies that there are characteristics associated with race that lead to better outcomes. It also implies that those characteristics remain in resumes that are stripped of identifying marks.
When I was hiring for a government contract, by the time I saw a resume all personally identifying marks had been removed. You could not know whether the applicant was male or female, white or black or purple. You couldn’t tell how old or how young they were.
Out of a set of 100 resumes, 10 would be from women. Of those 100 resumes, no more than 20 would be forwarded to me for final evaluation. In general, the final 20 would contain more than 10% female candidates.
Those female candidates were rejected time after time, even though I had no way of knowing they were female. This was bad for the company, because we needed female hires to help with our Equal Employment Opportunity numbers. It didn’t seem to matter who was choosing or when the cut was made; there was some characteristic in their resumes that caused them to not make the final cut.
We did hire two women, but the question remained: why were so many female candidates rejected?
The AI is even worse, as it doesn’t care about race or sex. It cares about the predicted outcome. And for whatever reason, it was showing its bias.
In a paper that was blocked from publication by Google and led to Gebru’s termination, she and her co-authors forced the company to reckon with a hard-to-swallow truth: that there is no clear way to build complex AI systems trained on massive datasets in a safe and responsible way, and that they stand to amplify biases that harm marginalized people.
Perhaps the film’s greatest feat is linking all of these stories to highlight a systemic problem: it’s not just that the algorithms “don’t work,” it’s that they were built by the same mostly-male, mostly-white cadre of engineers, who took the oppressive models of the past and deployed them at scale. As author and mathematician Cathy O’Neil points out in the film, we can’t understand algorithms—or technology in general—without understanding the asymmetric power structure of those who write code versus those who have code imposed on them.
“I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin, but by the content of their character.” — Martin Luther King, Jr.
We can’t judge people by the content of their character. We can’t judge people by their skills. We can’t judge people by their success.
Judging people by merit or ability causes certain groups to be “underrepresented”.
This is your problem. This is my problem. We just need to stop being judgmental and racist.
Maybe, at some point, those underrepresented groups will take it upon themselves to succeed in larger and larger numbers.