Gaston spent years as a human rights lawyer in Venezuela defending the political opponents of Nicolás Maduro’s regime — mostly students jailed for speaking out against a government plagued by corruption.
Gaston, whose full name is being withheld over fears for his safety, planned to surrender to border officials and seek asylum in the United States. Instead, he was arrested by troopers with the Texas Department of Public Safety upon his arrival and sent to an immigration detention center.
— Human rights attorney’s worst fears realized in Operation Lone Star arrest
A Venezuelan citizen decided to escape his third world shit hole. He had been doing real good work in Venezuela, and because of this he feared for his safety. He feared he would be arrested and disappeared, or worse.
So he headed north to the land of milk and honey.
He traveled from Venezuela to Colombia, but that wasn’t good enough, so he continued to Panama, where he didn’t stop. Further north through Costa Rica, Nicaragua, El Salvador or Honduras, to Guatemala, and into Mexico. At no time did he stop his travels, even though he was no longer in fear of being arrested and imprisoned by Maduro’s goons.
He then crossed the border into Texas and was stopped by members of the Texas Department of Public Safety.
Gaston is very, very upset about this. He was arrested when he crossed into the United States illegally. He was only arrested because he happened to cross at a point where the land on the US side of the river was private property.
On private property, the Texas DPS can make arrests. On public property, only federal law enforcement can make arrests. If he had been arrested by federal law enforcement, he could then have claimed he was seeking asylum.
ABC news wants you to know that he is very upset.
“There wasn’t any there. No notice that said that was private property, or what,” Gaston said. “Neither that I have knocked down a wall nor that I have even penetrated a fence.”
There was a great big river that he had to swim across. Gaston knew darn well that he was crossing illegally. And he is very upset that nobody warned him that he happened to be crossing onto US private property.
The short of it is that Gaston is an illegal alien that was caught crossing into these United States illegally. He spent five weeks in detention before being released in the US where he is now seeking asylum.
I leave you with Mr. Gaston’s own words regarding being detained for five weeks:
“I can tell you that this is the most terrible discrimination that a human being deprived of his liberty can suffer,” he added.
USA Today said it has deleted 23 articles from its website after an investigation found that the reporter who wrote them used fabricated sources.
The journalist who is said to have used the fabricated sources was identified as Gabriela Miranda, a breaking news reporter who resigned from the Virginia-based newspaper weeks ago, the paper confirmed Thursday.
USA Today was contacted by somebody requesting a correction. When USA Today started looking into it, they found that Miranda had attributed quotes to people who didn’t work at the organizations she said they did, that other people she quoted couldn’t be located for confirmation, and that she had attributed quotes to the wrong people.
AI isn’t really intelligent; it is a system of trained responses. Trained is the key word here.
The gist of AI and deep learning is that you have a set of inputs and a set of outputs. The outputs are generally restricted; too many outputs and things get complicated. You take a sample set of inputs and feed it to the AI, and it guesses at what to do. If the guess is good, then that decision, with its inputs, is remembered. Random numbers are thrown in as well, and bad decisions are sometimes kept at random. Over time the AI makes better and better decisions.
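To make that loop concrete, here is a minimal sketch of the guess-score-remember cycle just described. The action set, the reward function, and the exploration rates are all invented for illustration; a real system uses far more sophisticated machinery.

```python
import random

# A toy version of the loop: the "AI" maps inputs to a small, fixed
# set of outputs, guesses at first, and remembers guesses that scored
# well. Everything here is made up for illustration.

ACTIONS = ["left", "straight", "right"]   # restricted output set
policy = {}                               # remembered decisions
EPSILON = 0.1                             # chance of a random guess

def reward(state, action):
    # Stand-in for whatever scoring the real system uses.
    return 1.0 if action == "straight" else 0.0

def decide(state):
    # Random exploration: sometimes guess even when we "know" an answer.
    if state not in policy or random.random() < EPSILON:
        return random.choice(ACTIONS)
    return policy[state]

for step in range(10_000):
    state = random.randrange(5)           # a sample input
    action = decide(state)
    current = policy.get(state)
    # Keep good decisions; occasionally keep a bad one anyway, which
    # mirrors the "randomly keeping bad decisions" mentioned above.
    if current is None or reward(state, action) > reward(state, current) \
            or random.random() < 0.01:
        policy[state] = action

print(policy)   # over time this converges toward the rewarded action
```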
The problem is that AIs are goal-driven. This means that when you set the goals, the AI will make decisions that will cause it to reach those goals.
As an example, if your goal is to have an AI evaluate resumes to determine who is the best fit for the job you are offering, you need to provide it with a training set and a set of rewards.
As an example, in the video included, the rewards are based on distance traveled. The programmer changes the goals over time to get different results, but the basic reward is distance traveled. Other rewards could be considered. One such reward could be based on “smoothness”: the less the inputs change, the better the reward. This is sort of cheating, as we can guess that smooth driving will give better results overall.
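As a hedged sketch of what that could look like, here is a distance-plus-smoothness reward. The weight and the inputs (a distance and a history of steering values) are my assumptions, not anything taken from the video.

```python
# Reward shaping sketch: base reward is distance traveled, minus a
# penalty for how much the control inputs jump around. The weight is
# an assumption and would need tuning.

def reward(distance_traveled: float, steering_inputs: list[float],
           smoothness_weight: float = 0.5) -> float:
    base = distance_traveled
    # "Jerk": total change between consecutive steering inputs.
    jerk = sum(abs(b - a) for a, b in zip(steering_inputs, steering_inputs[1:]))
    return base - smoothness_weight * jerk

# A smooth run scores higher than a twitchy run of the same distance:
print(reward(100.0, [0.1, 0.1, 0.1]))   # 100.0
print(reward(100.0, [0.9, -0.9, 0.9]))  # 98.2
```

The weight trades raw progress against twitchy control: set it too high and the AI learns to drive smoothly into a wall.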
I don’t do a lot of work with AIs; I’ve got experts that I call upon for that.
In the case of judging resumes, the AI is given rewards based on picking candidates that were successful by some metric. Let’s assume that the metric is “number of successfully resolved calls” or “number of positive feedback points on calls”. There are hundreds of different metrics that could be used to define “successful”, and those are used to create the feedback on what counts as a “good” choice.
The AI is then given the resumes. Those resumes might be pre-processed in some way but just consider it to be the full resume.
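A minimal sketch of such a pipeline, assuming a simple bag-of-words model in scikit-learn. The resumes, the labels, and the feature choice are all made up; this illustrates the shape of the thing, not the actual system that was built.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Past resumes, labeled 1 if the hire turned out "successful" by the
# chosen metric (resolved calls, positive feedback points, ...).
past_resumes = [
    "ten years of call center support, team lead",
    "recent graduate, retail experience",
    "five years technical support, high resolution rate",
    "warehouse logistics, forklift certified",
]
successful = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()          # crude stand-in for pre-processing
X = vectorizer.fit_transform(past_resumes)
model = LogisticRegression().fit(X, successful)

# New resumes are scored by how much they resemble past "good" hires.
# The model has no notion of race or sex, only of text that
# co-occurred with good outcomes.
new_resumes = ["seven years support experience, customer satisfaction awards"]
print(model.predict_proba(vectorizer.transform(new_resumes))[:, 1])
```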
They did this. And after they got the AI trained, they started feeding it new resumes. The AI consistently picked people that were not BIPOC. Yep, the AI became “racist”.
When this was discovered, the AI was discarded. Having a racist AI was a sign that the programmers/developers who created the AI were racist themselves. It was racism inherent in the system that caused the AI to be racist.
The reality is that the AI isn’t racist. It was just picking the resumes that best fit the resumes of “good” hires. This implies that there are characteristics associated with race that lead to better outcomes. It also implies that those characteristics remain in resumes that are stripped of identifying marks.
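Here is a toy demonstration of that, on synthetic data I invented: the model is never shown the protected attribute, but a correlated proxy feature (a made-up flag standing in for a club, a school, a phrasing habit) leaks it anyway, and the selection rates between the two groups diverge.

```python
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)
rows, labels, groups = [], [], []
for _ in range(2000):
    group = random.randint(0, 1)          # hidden protected attribute
    # A proxy feature that merely correlates with the group --
    # entirely made up here.
    proxy = 1 if random.random() < (0.8 if group else 0.2) else 0
    skill = random.random()
    # Historical outcomes favored group 1 independent of skill:
    good_hire = 1 if skill + 0.5 * group > 0.75 else 0
    rows.append([proxy, skill])           # the model never sees `group`
    labels.append(good_hire)
    groups.append(group)

model = LogisticRegression().fit(rows, labels)
picks = model.predict(rows)
for g in (0, 1):
    rate = sum(p for p, gg in zip(picks, groups) if gg == g) / groups.count(g)
    print(f"group {g}: selected {rate:.0%}")   # group 0 is picked far less
```

Strip out the proxy and another correlated feature can take its place. That is the whole difficulty.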
When I was hiring for a government contract, by the time I saw a resume all personally identifying marks had been removed. You could not know whether the applicant was male or female, white or black or purple. You couldn’t tell how old or young they were.
Out of a set of 100 resumes, 10 would be from women. Of those 100 resumes, no more than 20 would be forwarded to me for final evaluation. In general, the final 20 would contain more than 10% female candidates.
Those female candidates were rejected time after time, even though I had no way of knowing they were female. This was bad for the company because we needed female hires to help with the Equal Opportunity Employment numbers. It didn’t seem to matter who was choosing or when the cut was made; there was some characteristic in their resumes that caused them to not make the final cut.
We did hire two females, but the question remained: why were so many females rejected?
The AI is even worse, as it doesn’t care about race or sex. It cares about the predicted outcome. And for whatever reason, it was showing its bias.
In a paper that was blocked from publication by Google and led to Gebru’s termination, she and her co-authors forced the company to reckon with a hard-to-swallow truth: that there is no clear way to build complex AI systems trained on massive datasets in a safe and responsible way, and that they stand to amplify biases that harm marginalized people.
Perhaps the film’s greatest feat is linking all of these stories to highlight a systemic problem: it’s not just that the algorithms “don’t work,” it’s that they were built by the same mostly-male, mostly-white cadre of engineers, who took the oppressive models of the past and deployed them at scale. As author and mathematician Cathy O’Neil points out in the film, we can’t understand algorithms—or technology in general—without understanding the asymmetric power structure of those who write code versus those who have code imposed on them.
World swimming’s governing body has effectively banned transgender women from competing in women’s events, starting Monday.
FINA members widely adopted a new “gender inclusion policy” on Sunday that only permits swimmers who transitioned before age 12 to compete in women’s events. The organization also proposed an “open competition category.”