Sometimes I write a post and when I’m done I decide to sit on it.  Maybe it’s too controversial and I decide not to publish it.

This one has been tough.

One of my bugaboos is the racial violation of Hanlon’s razor, which states: “never attribute to malice that which is adequately explained by stupidity.”

We should never attribute to racism that which is adequately explained by other logical means.

Unfortunately that violates the current political zeitgeist, which puts racism at the heart of everything.

That’s where this post begins:

In a previous post, I covered Congresswoman Rashida Tlaib’s accusation that facial recognition technology is racist.

Tlaib isn’t the only Progressive who has come out against facial recognition technology for law enforcement.  Bernie wants to ban it.  Fellow Squad member Congresswoman Alexandria Ocasio-Cortez also has issues with facial recognition, for racial reasons.

There is a lot that we can discuss about the pros and cons of facial recognition technology.

As a small-government person, I will acknowledge that there are all sorts of privacy issues with the widespread use of facial recognition software.  I have a real problem with facial recognition in the private sector, like how Facebook was scanning users without their knowledge and selling that information to the government.

There is a reason all the cameras in my computers are covered with electrical tape.

On the other hand, there is some benefit to limited public use of facial recognition software.  Using the software to scan for criminals with active warrants, especially violent ones, on public streets is something I can support.

I’d like to see it require a warrant.  The police would have to get a judge to sign off on uploading a mug shot and then having all the cameras in a city search for that face.

But this is all politics.  I want to focus on the technology for a moment, and that gets tricky.

The expert AOC called to testify is Joy Buolamwini, who is a computer scientist with the MIT Media Lab.  She’s also a political and social justice activist, and the founder of a group called the Algorithmic Justice League.

So despite her academic credentials, I think her activist bias taints her opinion a little too much.  She’s not alone in this.  The Guardian actually published an article titled:

How white engineers built racist code – and why it’s dangerous for black people

The thesis of this article, like Buolamwini’s opinion, is that white programmers are bigoted and write their bigotry into the algorithms they use for facial recognition.

Digital cameras operate very much like the human eye.  In the eye, light passes through a lens and hits photoreceptor cells at the back of the eye, which convert the energy of the photons into electrical signals that the brain turns into an image.  Charge-coupled devices (CCDs) work the same way, but use semiconductors instead of cells as the photoreceptors and a computer chip instead of a brain to turn the electrical signals into an image.

For a digital camera to work, light has to hit the CCD.  This is where facial recognition technology starts to go wonky.
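To put a number on why the amount of light hitting the sensor matters, here is a minimal Python sketch.  It’s a toy model (the photon counts are made up, and no real camera pipeline is this simple): photon arrival is random, so a pixel’s signal-to-noise ratio grows with the square root of how many photons it catches.

```python
import numpy as np

# Toy model (assumptions mine, not any real camera's pipeline):
# photon arrival at a sensor pixel follows Poisson statistics, so the
# signal-to-noise ratio of a pixel scales as sqrt(photon count).
rng = np.random.default_rng(0)

def pixel_snr(mean_photons: float, exposures: int = 100_000) -> float:
    """Simulate many exposures of one pixel and return its SNR."""
    counts = rng.poisson(mean_photons, size=exposures)
    return counts.mean() / counts.std()

print(f"bright patch (10,000 photons): SNR ~ {pixel_snr(10_000):.0f}")  # ~100
print(f"dark patch   (   100 photons): SNR ~ {pixel_snr(100):.0f}")     # ~10
```

A patch of the scene that sends the sensor 1% of the photons gets measurements that are ten times noisier, and everything downstream has to work from those numbers.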

Facial recognition works by taking a picture of the face, targeting certain identifiable points (the corners of the eyes, mouth, nose, cheeks, etc.), and creating a point cloud map of the face.  That point cloud map is assumed to be unique to each face.  The computer takes this point cloud map and searches for faces with nearly identical point cloud maps.
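As a bare-bones illustration of that matching step (a sketch of the general idea, not any vendor’s algorithm; the five landmark coordinates are invented), here is roughly what comparing point cloud maps can look like:

```python
import numpy as np

# Minimal sketch of landmark matching: represent each face as a set of
# landmark points, normalize away position and scale, and compare faces
# by total landmark distance.

def normalize(landmarks: np.ndarray) -> np.ndarray:
    """Center the point cloud and scale it to unit size."""
    centered = landmarks - landmarks.mean(axis=0)
    return centered / np.linalg.norm(centered)

def face_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Smaller distance = more similar faces (same landmark order assumed)."""
    return float(np.linalg.norm(normalize(a) - normalize(b)))

# Hypothetical 5-point clouds (eye corners, nose tip, mouth corners), in pixels.
face_a = np.array([[30, 40], [70, 40], [50, 60], [38, 80], [62, 80]], float)
face_b = face_a * 1.5 + 10   # same face, photographed closer
face_c = face_a + np.array([[0, -5], [0, -5], [0, 8], [4, 0], [-4, 0]])  # different face

print(face_distance(face_a, face_b))  # ~0.0: scale and position normalized away
print(face_distance(face_a, face_c))  # clearly larger: different geometry
```

Real systems use many more points (or learned embeddings) and more robust alignment, but the spirit is the same; and if the camera mislocates the landmarks because the image is poor, the distances become garbage.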

This video does a good job explaining how facial recognition works and the issue with lighting and depth.

If you want an extreme example of how facial recognition is “racist,” we need to look at carbon nanotubes.

Vantablack is not a paint; it is a plasma-deposited coating, and the name is an acronym for Vertically Aligned Nanotube Arrays.  It is the darkest material made by man, absorbing 99.96% of visible light.  It is so dark that the eye cannot pick up contours on an object coated in Vantablack, making it look like a 2-D black spot… and kind of creepy.  See the two busts below, one as cast and one coated in Vantablack.

Now, take this and scale it back to humans.

Melanin is a broad-spectrum photo-absorber.  It absorbs UV light best, but it also absorbs other wavelengths of light.

Here is a very good video by a black photographer on the difficulty of shooting photographs of black models, especially dark-skinned models, and the importance of good lighting.

This is a truly excellent video.  The photographer explains really well the importance of light balance and how to shoot black skin and hair.  He never mentions racism, just what the camera does, what its limitations are, and how to get around them.

A perfect example of this comes from Bored Panda.  Khoudia Diop is a model from Senegal known for her dark skin.  The photo below is horrible.

The photographer couldn’t balance the light off her shiny gold dress, and her face was rendered nearly featureless.

This is why, as the video above demonstrates, lighting is key.

Now think about security cameras and facial recognition.  There is no studio lighting and no photographer.  Just a camera.

If the camera cannot capture a good image, it cannot do good facial recognition; the computer is left to “guess” to the best of its ability.

One fundamental issue with facial recognition software and race is that darker skin tones have harder-to-detect features in low, bad, or off-angle lighting.
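Here is a toy, numbers-only version of that claim.  The reflectance values and the linear 8-bit model are assumptions of mine, not measurements; the point is only how quantization behaves:

```python
import numpy as np

# Toy model, assumptions mine: linear 8-bit encoding, a facial feature
# that is 10% brighter than the skin around it, and two made-up skin
# reflectances under good and dim lighting.
def feature_code_values(reflectance: float, exposure: float) -> int:
    """How many 8-bit code values separate the feature from its surround."""
    surround = int(np.clip(reflectance * exposure, 0, 255))
    feature = int(np.clip(reflectance * 1.10 * exposure, 0, 255))
    return feature - surround

for label, reflectance in (("lighter skin", 0.45), ("darker skin", 0.12)):
    for light, exposure in (("good light", 255.0), ("dim light", 60.0)):
        print(f"{label}, {light}: feature spans "
              f"{feature_code_values(reflectance, exposure)} code value(s)")
```

In good light both faces have measurable contrast (12 vs. 3 code values in this toy).  In dim light the darker face’s feature lands on zero code values: as far as the data is concerned, it isn’t there.  No bigotry required, just quantization.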

If we are going to create facial recognition that works on all skin tones under all lighting conditions, we are going to have to entirely rethink facial recognition.

The point is, there are issues here other than “white people are inherently racist and so write racist code.”

That’s where that post ended.

I never published it; I’m not sure why.

Then the YouTube algorithm hit me with the Google Pixel 6 Super Bowl ad.


This was exactly what I was talking about.

I tried to do some research into what Google had done and Google promptly violated my adage on attributing racism.


I wonder how the white engineers at Google feel about being called racist for all their previous generations of phone cameras?

Actually, they probably like it, being Leftists.  They probably get off on being called unworthy bigots, because so much of Leftist behavior reminds me of the subordinate in a humiliation-domination fetish.

But I digress.

When you take out the race-speak, the reality is that previous generations of phone AI and processing capacity couldn’t do this.

The phone has to readjust the white balance pixel by pixel, for every one of the roughly 50 million pixels coming off a 50-megapixel camera.
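Google hasn’t published how Real Tone works internally, so treat this as a sketch of the shape of the problem rather than of their solution.  The jump is from one white balance gain for the whole frame to a separate gain for every pixel:

```python
import numpy as np

# Sketch only (Real Tone's internals are not public): global white
# balance applies one (r, g, b) gain everywhere; a per-pixel correction
# applies a different gain at every pixel, e.g. one predicted by a model.

def global_wb(image: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """One gain triple for the entire frame."""
    return np.clip(image * gains, 0.0, 1.0)

def per_pixel_wb(image: np.ndarray, gain_map: np.ndarray) -> np.ndarray:
    """An HxWx3 gain map: every pixel gets its own correction."""
    assert gain_map.shape == image.shape
    return np.clip(image * gain_map, 0.0, 1.0)

h, w = 4, 6                          # tiny stand-in for a 50 MP frame
image = np.random.default_rng(1).random((h, w, 3))

corrected_global = global_wb(image, np.array([1.1, 1.0, 0.9]))

gain_map = np.ones((h, w, 3))
gain_map[:, : w // 2] *= 1.3         # e.g. lift only the shadowed left half
corrected_local = per_pixel_wb(image, gain_map)
```

The hard part is not the multiply; it’s producing a good gain map for 50 million pixels in real time, which is where the trained model and the dedicated silicon come in.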

For the AI to do that accurately, they had to train it on thousands of pictures of all sorts of different people of different colors in different lighting, expanding the range of balance and contrast so that a wider range of tones and shades is optimized.

If you have ever played with digital photo manipulation software and had one part of your image too dark and another too bright, and so had to compromise, you can appreciate how much of an advance in image manipulation this system is.
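That compromise is easy to demonstrate (a toy example, not the Pixel’s pipeline): a global gain that rescues shadow detail clips the highlights, while correcting each region by its own statistics keeps both.

```python
import numpy as np

rng = np.random.default_rng(2)
shadows = 0.05 + 0.03 * rng.random(6)      # subtle detail in the dark half
highlights = 0.75 + 0.20 * rng.random(6)   # subtle detail in the bright half
scene = np.concatenate([shadows, highlights])

# Global fix: one gain for everything. Shadows become visible, but the
# highlights all clip to 1.0 and their detail is destroyed.
global_fix = np.clip(scene * 6.0, 0.0, 1.0)

# Local fix: stretch each region by its own min/max; both keep detail.
def stretch(region: np.ndarray, lo: float, hi: float) -> np.ndarray:
    span = region.max() - region.min()
    return lo + (hi - lo) * (region - region.min()) / span

local_fix = np.concatenate([stretch(shadows, 0.10, 0.45),
                            stretch(highlights, 0.55, 0.95)])

print(np.round(global_fix, 2))   # right half is all 1.0 -- blown out
print(np.round(local_fix, 2))    # variation preserved in both halves
```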

They had to make the camera do what the photographer did by hand in the above video, for every person in the photo.

And this is where I need to take a giant shit on Google.

If I were a marketing director, I’d push articles and ads that say “we created a new level of digital camera technology that allows us to optimize lighting and shading for all people of all colors so that everyone can look their best.”

Instead they went with “white people make racist cameras because they want to make black people ugly so we got a bunch of black people to design a camera phone just for us.”

Amazing digital processing technology.

Terrible racial grievance mongering advertising.

And that’s sad, because this improved digital-processing AI could be used for a lot of things, from art to forensics to scientific photo-documentation.

It’s frustrating that a corporation can come up with such exciting technology that I want to love, then turn around and make me hate it by beating me over the head with a radical Progressive, racism-based ad.



By J. Kb

10 thoughts on “A Google Super Bowl ad has finally gotten me to publish a post I’ve been sitting on forever: On the racism of optical physics”
  1. I spent years working on writing and improving CAM. As part of that I became very good at image processing.

    Part of the task is learning to visualize things that your eye discounts.

    I have a love-hate relationship with JPEG images. You can extract two images from each JPEG. One is the luminance, which is a gray scale; the other is the color.

    When you combine the two you get the finished results.

    There were so many JPEGs where the gray scale was OK but the color was a blocky mess. Combine the two and the human eye thought the entire image was “good”.
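    To make the two-images idea concrete, here is the standard JPEG/BT.601 split in a few lines of Python (the image is random data, and the 4x4 block averaging is just a stand-in for chroma subsampling):

    ```python
    import numpy as np

    # Standard JPEG (BT.601) split: luma Y is the grayscale image, Cb/Cr
    # carry the color. Degrade only the chroma, as subsampling does, and
    # the recombined image still looks fine to the eye.
    def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.168736 * r - 0.331264 * g + 0.5 * b
        cr =  0.5 * r - 0.418688 * g - 0.081312 * b
        return np.stack([y, cb, cr], axis=-1)

    img = np.random.default_rng(3).random((8, 8, 3))
    ycc = rgb_to_ycbcr(img)

    for c in (1, 2):  # average each 4x4 block of chroma: "a blocky mess"
        blocks = ycc[..., c].reshape(2, 4, 2, 4).mean(axis=(1, 3))
        ycc[..., c] = blocks.repeat(4, axis=0).repeat(4, axis=1)
    # Luma (ycc[..., 0]) is untouched, and the eye judges sharpness almost
    # entirely from luma, so the combined result still reads as "good".
    ```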

    As a “fun” project we got some of the early digital x-rays from a doctor; we then went about doing our thing to extract as much information as we could.

    During this we spotted something, brought it to the attention of the source, and asked “what is this?”

    That doctor refused to look at any of our enhanced images and only looked at the original. He told us “it’s nothing”.

    Two weeks later he called to update us. The patient had gotten another set of x-rays. The thing we spotted had grown and he could now see that it was “bad” and they were treating it.

    The last time I got x-rays of my teeth, my dentist pushed some buttons and used false-color maps, the same thing we used to spot that cancer early. He told me he didn’t like using it but it sometimes helped.

    This kind of image enhancement had been in use for years. Using AI to detect where it should be applied is sort of cool.
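    For the curious, the false-color trick looks roughly like this in Python (a toy color ramp of my own, not the dental software’s actual map): cycling the hue across the gray range turns a tiny density difference into a big hue difference.

    ```python
    import colorsys
    import numpy as np

    # Toy version: cycling hue several times across the gray range
    # amplifies small differences the eye can't separate in grayscale.
    def false_color(value: float, cycles: float = 10.0) -> tuple:
        """Map a gray value in [0, 1] to an RGB color with exaggerated hue."""
        return colorsys.hsv_to_rgb((value * cycles) % 1.0, 1.0, 1.0)

    xray = np.full((4, 4), 0.50)
    xray[1:3, 1:3] = 0.53   # a 3%-denser spot, nearly invisible in grayscale

    print(np.round(false_color(xray[0, 0]), 2))  # (1.0, 0.0, 0.0) -- red
    print(np.round(false_color(xray[1, 1]), 2))  # (0.2, 1.0, 0.0) -- green: it pops
    ```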

  2. How do children react when things are not perfect? They cast blame. They do not search out the cause of the problem, they seek someone to blame.
    .
    If you ever wondered why racism is the root cause of all problems in the world today, it is because we listen to children.

  3. Some of this reminds me of the current fad spreading around the software industry of making people waste time modifying existing software. Not to make it better, but merely to replace words that are on today’s “not PC” list. For example, in source control systems the primary version being tracked was commonly referred to as the “master” branch; it’s getting renamed. Some communication protocols have since time immemorial been described as “master/slave” protocols; that too is no longer permitted. I don’t remember if “red-black trees” (a data structure for maintaining indexes of large numbers of things) are outlawed now. Last I heard, computer “servers” (big powerful computers) are still allowed, too. Of course, these things are subject to change at the whim of the PC police.
    The craziest part is when people are told to make these changes in source code that is proprietary to the company, i.e., source code that will never be seen by anyone outside.

    1. I have a love-hate relationship with the idea of facial recognition software. I would love to have an at-home facial recognition system to tie into a smart-home setup, as long as such a system was self-contained. I absolutely oppose and despise public facial recognition systems; they are literally the ultimate tool of the surveillance state when combined with basic AI and big data systems. If you think it is bad now, when you are tracked by your phone, just wait until you are tracked everywhere by your face.

    2. My mother was a facility manager; the computer that ran the plant (HVAC, security, power) was modified to remove “slave terminal” from the system. Amusingly, any time she tried to use a “secondary” terminal, she got a message: “NO ADMIN ACCESS FROM SLAVE TERMINAL.”

  4. A bit off topic but thanks for the pointer to vantablack. I didn’t know about it, but can think of some immediate potential uses for it at work.

