The founder of a Thousand Oaks dog and cat shelter has enacted a new policy forbidding some gun owners from adopting pets there, triggering numerous threats against her.
Kim Sill, 61, announced the new rules in the Shelter Hope Pet Shop’s weekly newsletter in late May.
“We are pro-gun control,” the newsletter says. “If your beliefs are not in line with ours, we will not adopt a pet to you.”
This one isn’t as bad as the last one. You can still adopt if you are an NRA member or a gun owner. You just have to compromise your rights and make sure Kim knows you support her viewpoint.
In other news, I just read an article about shelters being overwhelmed with puppies and cats that they can’t adopt out fast enough.
USA Today said it has deleted 23 articles from its website after an investigation found that the reporter who wrote them used fabricated sources.
The journalist who is said to have used the fabricated sources was identified as Gabriela Miranda, a breaking news reporter who resigned from the Virginia-based newspaper weeks ago, the paper confirmed Thursday.
USA Today was contacted by someone requesting a correction. When the paper started looking into it, it found that Miranda had attributed quotes to people who did not work at the organizations she said they did, attributed other quotes to the wrong people, and quoted individuals who could not be located for confirmation.
(LOUISVILLE, Ky.) — According to surveillance video obtained by ABC News Louisville affiliate WHAS, the mayor of Louisville, Kentucky, Greg Fischer, appears to fall to the ground after being hit.
The mayor of Louisville, Kentucky, was punched by an assailant over the weekend while out attending community events. Police are still investigating and have yet to make any arrests.
AI isn’t really intelligent; it is a system of trained responses. Trained being the key word here.
The gist of AI and deep learning is that you have a set of inputs and a set of outputs. The outputs are generally restricted; too many outputs and things get complicated. You take a sample set of inputs, feed it to the AI, and it guesses at what to do. If the guess is good, that decision and the inputs that produced it are remembered. Random numbers are thrown in as well, and bad decisions are occasionally kept, so the system keeps exploring. Over time the AI makes better and better decisions.
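A toy sketch of that loop might look like the following. The “problem” here (guessing a hidden set of numbers) and every name in the code are made up for illustration; real deep-learning systems use gradient descent rather than blind mutation, but the keep-the-good-guesses-with-some-randomness idea is the same.

```python
import random

# Toy illustration of the "guess, keep the good guesses, sprinkle in randomness"
# loop described above. The problem (match a hidden target vector) is invented;
# production systems do not work this way, but the feedback idea is the same.

TARGET = [0.2, -0.7, 0.5]          # the "right answers" the system is trying to learn

def score(guess):
    # Higher is better: negative squared distance from the target outputs.
    return -sum((g - t) ** 2 for g, t in zip(guess, TARGET))

best = [random.uniform(-1, 1) for _ in TARGET]   # initial random guess
best_score = score(best)

for step in range(10_000):
    # Make a small random change to the current best guess.
    candidate = [g + random.gauss(0, 0.1) for g in best]
    candidate_score = score(candidate)

    # Keep good guesses; occasionally keep a bad one so the search doesn't get stuck.
    if candidate_score > best_score or random.random() < 0.01:
        best, best_score = candidate, candidate_score

print(best, best_score)
```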
The problem is that AIs are goal-driven. This means that when you set the goals, the AI will make decisions that move it toward those goals.
As an example, if your goal is to have an AI evaluate resumes to determine who is the best fit for the job you are offering, you need to provide it with a training set and a set of rewards.
As an example, in the video included, the rewards are based on distance traveled. The programmer changes the goals over time to get different results, but the basic reward is distance traveled. Other rewards could be considered. One such reward could be based on “smoothness”: the less the inputs change, the better the reward. This is sort of cheating, as we can guess that smooth driving will give better results overall.
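As a rough illustration (the function name and weight below are mine, not anything from the video), a reward that mixes distance traveled with a “smoothness” bonus could look something like this:

```python
# Hypothetical reward for the driving example above. Distance traveled is the
# basic reward; a "smoothness" term penalizes large changes in the control
# inputs (steering, throttle) between one time step and the next.

def reward(distance_traveled, inputs_now, inputs_prev, smoothness_weight=0.1):
    # How much the controls changed since the last time step.
    jerk = sum(abs(a - b) for a, b in zip(inputs_now, inputs_prev))
    return distance_traveled - smoothness_weight * jerk

# Example: 42 units traveled, small steering/throttle change -> small penalty.
print(reward(42.0, [0.3, 0.8], [0.1, 0.9]))
```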
I don’t do a lot of work with AIs; I’ve got experts that I call upon for that.
In the case of judging resumes, the AI is given rewards based on picking candidates that were successful by some metric. Let’s assume that the metric is “number of successfully resolved calls” or “number of positive feedback points on calls”. There are hundreds of different metrics that can be used to define “successful”, and those are used to create the feedback on what is a “good” choice.
The AI is then given the resumes. Those resumes might be pre-processed in some way, but just consider it to be the full resume.
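To make the idea concrete, here is a toy sketch of that setup. Everything in it, the resumes, the “resolved calls” numbers, and the bag-of-words scoring, is invented for illustration; a real system would use an actual machine-learning model, but the feedback loop is the same: resumes that most resemble the resumes of past “good” hires score highest.

```python
# Toy sketch of the resume-scoring idea above. All data is invented.

def words(text):
    return set(text.lower().split())

# Training set: past hires' resumes paired with the chosen success metric
# (here, "number of successfully resolved calls").
past_hires = [
    ("java backend five years rugby club captain", 950),
    ("python support desk three years chess club", 400),
    ("java services four years lacrosse team", 900),
]

# Build a "profile" of words that appear in the resumes of the most successful hires.
threshold = 800
good_words = set()
for resume, metric in past_hires:
    if metric >= threshold:
        good_words |= words(resume)

def score(resume):
    # New resumes are ranked by how closely they match the "good hire" profile.
    return len(words(resume) & good_words)

candidates = [
    "java backend six years rowing team",
    "python help desk two years knitting circle",
]
for c in sorted(candidates, key=score, reverse=True):
    print(score(c), c)
```

Notice that incidental words like “team” earn points simply because they happened to appear in the resumes of successful past hires. That is exactly how proxies for race or sex can leak into the scoring even when the obvious identifiers have been stripped out.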
They did this. And after they got the AI trained they started feeding it new resumes. The AI consistently picked people that were not BIPOC. Yep, the AI became “racist”.
When this was discovered, the AI was discarded. Having a racist AI was taken as a sign that the programmers/developers who created the AI were racist themselves, that it was racism inherent in the system that caused the AI to be racist.
The reality is that the AI isn’t racist. It was just picking the resumes that were the best fit with the resumes of “good” hires. This implies that there are characteristics associated with race that lead to better outcomes. It also implies that those characteristics remain in resumes that have been stripped of identifying marks.
When I was hiring for a government contract, by the time I saw a resume all personal identifying marks had been removed. You could not know whether the applicant was male or female, white or black or purple. You couldn’t tell how old or how young they were.
Out of a set of 100 resumes, 10 would be female. Of those 100 resumes no more than 20 would be forwarded to me for final evaluation. In general, the final 20 would contain more than 10% female candidates.
Those female candidates were rejected time after time, even though I had no way of knowing they were female. This was bad for the company because we needed female hires to help with the Equal Opportunity Employment numbers. It didn’t seem to matter who was choosing or when the cut was made. There was some characteristic in their resumes that caused them not to make the final cut.
We did hire two women, but the question was: why were so many female candidates rejected?
The AI is even worse, as it doesn’t care about race or sex. It cares about the predicted outcome. And for whatever reason, it was showing its bias.
In a paper that was blocked from publication by Google and led to Gebru’s termination, she and her co-authors forced the company to reckon with a hard-to-swallow truth: that there is no clear way to build complex AI systems trained on massive datasets in a safe and responsible way, and that they stand to amplify biases that harm marginalized people.
Perhaps the film’s greatest feat is linking all of these stories to highlight a systemic problem: it’s not just that the algorithms “don’t work,” it’s that they were built by the same mostly-male, mostly-white cadre of engineers, who took the oppressive models of the past and deployed them at scale. As author and mathematician Cathy O’Neill points out in the film, we can’t understand algorithms—or technology in general—without understanding the asymmetric power structure of those who write code versus those who have code imposed on them.
“I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin, but by the content of their character.” — Martin Luther King, Jr.
We can’t judge people by the content of their character. We can’t judge people by their skills. We can’t judge people by their success.
Judging people by merit or ability causes certain groups to be “under represented”.
This is your problem. This is my problem. We just need to stop being judgemental and racist.
Maybe, at some point, those groups that are under-represented will take it upon themselves to succeed in larger and larger numbers.
World swimming’s governing body has effectively banned transgender women from competing in women’s events, starting Monday.
FINA members widely adopted a new “gender inclusion policy” on Sunday that only permits swimmers who transitioned before age 12 to compete in women’s events. The organization also proposed an “open competition category.”
“I don’t remember ever touching that trigger on the gun so I don’t know what happened, to be honest,” Hartin, whose ex-husband is the son of billionaire Lord Michael Ashcroft, said in the interview, an excerpt of which was published by the Sun.
Ms Hartin is the ex-daughter-in-law of some important person in England. Wealthy, too. She shot an important cop [claiming it happened] while attempting to clear her firearm. Channeling her inner Alex, she [says she] didn’t even touch the trigger. It must be a faulty firearm that magically went off at just the right moment.
I’ve had ONE negligent discharge. I was attempting to lower the hammer on a Marlin lever action with a scope and hammer extension. Live round in the chamber but pointed down range. My thumb slipped off the hammer extension and the hammer struck the firing pin causing the rifle to go off.
My normal method of lowering the hammer on a live round is to put my left thumb under the hammer, hold the hammer back with my right thumb, release the hammer (pulling/touching the trigger), lower the hammer onto my left thumb, then get my left thumb out of the way and continue lowering the hammer. With the Marlin and its scope I couldn’t comfortably get my left thumb into place, and so “bang”.
This is why we have the four rules and why we follow them. If you think there is an exception to following the four rules, rethink your position. These rules save lives. Failing to follow them might mean you are on trial for unintentional homicide.
All guns are always loaded. Even if they are not, treat them as if they are.
Never let the muzzle cover anything you are not willing to destroy.
Keep your finger off the trigger till your sights are on the target.
Identify your target, and what is behind it.
We have tweaked those rules. If I’m handed a firearm, it is loaded. If I’ve confirmed that the firearm is indeed unloaded, then I’m willing to treat the weapon as if it is unloaded, within limits. I still won’t point it at anything I’m not willing to destroy. So I will dry fire a firearm that I’ve confirmed to my satisfaction is indeed unloaded, but it will be pointed in a safe direction when I pull the trigger.
Regardless, know the rules, follow them, be safe.
Updated to make clear that this version of the accidental/negligent discharge is her claim. I don’t know anything about this case outside of her reported claims.