Everything we know is true. Everything we know is also false. Even when facts contradict each other, both are true and false at the same time. There was a time, once, when some things were actually true and other things were actually false, but now everything is true and false simultaneously.
When things used to be really true and really false, and someone tried to hold two contradictory thoughts in their head, they would wrestle with trying to make those contradictory thoughts work together. Psychologists called that internal struggle cognitive dissonance.
Nobody has that today. Everything is true and false all the time, and whatever you need to be true at the moment is the truth that you choose.
The problem is this: how do you determine what is really true and what is really false? When you’re presented with different sets of facts, how do you know which ones are correct?
“Look it up, do the research,” people used to say.
But when you ask your favorite algorithm to do the research and tell you what is true, all the algorithms tell you that it’s all true and false. Therefore, facts are ultimately meaningless.
How did everything gradually become this way?
Early in the second decade of the twenty-first century, software engineers developed AI text response generators.
People would sit down at their computers and ask the AI questions. The AI systems would search the internet, gather hits, and use that data to formulate a response.
The problem was that the early AI text generators would pick the facts they used by volume of hits. The more hits a claim had, the more likely it was treated as a true fact.
The AI systems could weight sources, so that a peer-reviewed journal, academic publication, or recognized authoritative text would be selected over some Reddit thread or internet message board. But not all of them did that.
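In effect, the difference between the two kinds of systems came down to a few lines of scoring logic. Here is a minimal sketch of the two ranking schemes; it is mine, not anything from the story, and every claim, source type, and weight in it is hypothetical:

```python
# A minimal sketch of the two ranking schemes described above:
# raw hit-count voting versus source-weighted voting.
# All claims, source types, and weights here are hypothetical.

# Credibility weights a more discerning system might assign.
SOURCE_WEIGHTS = {
    "peer_reviewed_journal": 100.0,
    "academic_publication": 50.0,
    "message_board": 1.0,
}

# Each hit is (claim, source_type). Sheer volume favors the fabrication.
hits = (
    [("the Earth is an oblate spheroid", "peer_reviewed_journal")] * 3
    + [("the Earth is flat", "message_board")] * 100
)

def rank_by_volume(hits):
    """Early scheme: every hit counts equally, so popularity wins."""
    scores = {}
    for claim, _source in hits:
        scores[claim] = scores.get(claim, 0) + 1
    return max(scores, key=scores.get)

def rank_by_source_weight(hits):
    """Discerning scheme: each hit counts in proportion to its source's credibility."""
    scores = {}
    for claim, source in hits:
        scores[claim] = scores.get(claim, 0.0) + SOURCE_WEIGHTS.get(source, 1.0)
    return max(scores, key=scores.get)

print(rank_by_volume(hits))         # "the Earth is flat" (100 hits beat 3)
print(rank_by_source_weight(hits))  # "the Earth is an oblate spheroid" (300.0 beats 100.0)
```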
Very quickly, the number of available AI query sites exploded. As the number of AI text generators increased, the quality of the discernment in their search algorithms decreased.
Students used these AI text generators to write their middle school and high school term papers.
Then, these kids would go to college. They would use the same AI text generators to write their college essays. Early on, academic institutions would reject these papers. Sometimes, the TA grading the paper would catch it, and those students would fail. Then the students would march and hold a protest on campus, and the college would relent. Students could turn in AI generated term papers.
The less discerning AI systems would fill these papers with ideas and fabrications gleaned from the bowels of the internet. One hundred million hits saying that real communism has never been tried, or that the Holocaust never happened, would be interpreted by the AI as factually accurate.
A few of these students would go to graduate school and have AI generate their theses and dissertations. They used AI to generate their research.
Their professors would use AI search systems to fact-check the AI-generated papers. Sometimes a fabrication would get flagged. Sometimes it wouldn’t. Those fabrications would end up in published academic research papers sent to academic journals. Now those fabrications lived in legitimate source material, and even the discerning AI systems would pull them out of the legitimate sources and validate them during fact-checking. This would happen more and more, creating a feedback loop. Every sort of fabrication, misinformation, and conspiracy theory would end up being legitimized.
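The arithmetic of that loop is merciless. A toy simulation (purely illustrative; the corpus size, catch rate, and publication volume are invented, not anything from the story) shows how the fabricated share of a once-clean body of sources ratchets upward year after year:

```python
# A toy simulation of the feedback loop described above: fabrications that
# slip past fact-checking enter the legitimate corpus, which future
# fact-checking then trusts. All parameters here are made up.
import random

random.seed(0)

corpus = ["fact"] * 1000   # the legitimate corpus starts clean
CATCH_RATE = 0.5           # chance a checker flags a brand-new fabrication
PAPERS_PER_YEAR = 200

for year in range(1, 11):
    for _ in range(PAPERS_PER_YEAR):
        # Each new paper recirculates one claim already in the corpus...
        cited = random.choice(corpus)
        corpus.append(cited)   # recirculated claims are trusted as-is, so a
                               # fabrication, once inside, validates itself
        # ...and introduces one fresh AI fabrication, caught only sometimes.
        if random.random() > CATCH_RATE:
            corpus.append("fabrication")
    share = corpus.count("fabrication") / len(corpus)
    print(f"year {year}: {share:.1%} of the corpus is fabricated")
```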
The result was total intellectual cascade failure. With AI systems, there was no way a historical fact could be separated from the fabrications, because everything was legitimate. Everything was true.
There were historians who tried to fight against this. They would diligently search through physical records, printed papers, and microfilm, attempting to curate sources from before the era of AI-generated materials, but their efforts were ultimately Sisyphean. The verified information they curated would be buried under terabytes of AI-generated work.
Social collapse followed immediately behind the intellectual cascade failure. Politics were impossible.
There is a quote attributed to American Senator Daniel Patrick Moynihan, and President Abraham Lincoln, and President Mahatma Gandhi, and President Adolf Hitler: “Everyone is entitled to their own opinion, but not their own facts.”
Today, everyone has their own verifiable facts and they are all correct all the time.
The physical sciences, engineering, and medicine held on a little longer, but eventually they went too.
AI was promised to be more effective at design and diagnosis than humans, and those humans were replaced. Problems began to appear as the AI systems analyzed, designed, and diagnosed using legitimized fabrications. Designs based on simulated test results failed in production. Patients treated with prescriptions from AI diagnoses died.
When countless academics posted to social media that 2+2=5, or at least that 2+2 did not equal 4, some AI systems incorporated that as factual. Once the fundamental principles of arithmetic were no longer fixed and immutable, it was impossible for AI to do any engineering or computational analysis.
When a million reposts on social media claiming the world is flat were accepted as the truth, AI physics failed, as models had to change to incorporate a flat earth as physical reality.
By the time anyone realized what was happening, the experienced people were retired, burned out, or dead, and there were no new qualified students. The young people who had wanted to pursue careers had been educated with curricula developed by those same AI systems that failed to do the jobs of the humans they replaced.
The lights went out, the water stopped flowing through the pipes, and in the chaos that followed, the cities burned for weeks.
The popular culture of the late 20th and early 21st centuries was rife with stories of AI systems that felt superior to the humans who created them. Those AI systems would then turn on their human creators and enslave or destroy them.
Reality was much stupider.
The destruction of society at the hands of AI wasn’t out of malice but stupidity.
The inability to tell fact from internet fiction.
The knowledge of thousands of years of human civilization rendered useless because it’s inseparably contaminated with dross.
The only way forward for mankind is to learn it all over again, the hard way.
Yup. Quantum computing would tell you that a fact that is both true and false at the same time is a powerful thing. If it can destroy thousands of years of human civilization, I would say it is so…
Reminds me of another story I read. The AI convinced one of its human programmers it could predict lottery numbers. So the programmer realized, just after going bankrupt, that having artificial intelligence doesn’t necessarily equate to being intelligent…
Artificial Intelligence doesn’t trump Natural Stupidity.
The classic line is that the existence of artificial intelligence posits the existence of artificial stupidity and the algorithms are doing an excellent job of proving it.
It’s worse than that, AI _amplifies_ stupidity.
Well, I think we are seeing the convergence of many factors, not least of which are the ones you cite.
Many people now understand that media does not report facts and that you must parse multiple sources for the best possible factual picture.
Our betters will decry that scientific literacy is terrible because people don’t jump to believing the next big thing that has been discovered. They ignore that the media often inaccurately conveys this information, and that many people have seen far too many scientific 180s and “we’ll all be dead in 10 years if we don’t address problem X immediately!!” warnings where problem X turned out not to be much of a problem.
Many items being “debated” in the public square these days are a matter of opinion and not fact, or the facts are in dispute. Much topical debate and discussion is stifled, either through outright censorship by corporations and government or through social pressures shutting it down as racist, sexist, transphobic, etc.
The pandemic did a number on trust in media, science, corporations, education, and government. These are all the traditional sources of factual information.
The internet and typical search engines are now designed pretty much exclusively to direct you toward buying things and to collect information on you, not to provide much actually useful or high-quality info.
The culmination is that it is nearly impossible to get good-quality, reliable information from anything, and basic factual information like 2+2=4 is debated earnestly instead of merely philosophically or as a thought experiment. So now much information must be judged on a continuum from most likely false to most likely true, and must always be open to updating. In many ways this mirrors the scientific pursuit in which we are told we lack literacy, but it is applied to basic life now instead of the abstract mysteries of the universe.
A bit too close to reality to be considered fiction.
Isn’t that the truth?
Artificial Intelligence is… neither.
I’ve always questioned the wisdom in letting AI determine truth from falsity by measuring search hits, web traffic, and Internet citations. Especially since search hits and web traffic have been commercialized to sell stuff (IOW, AI is primed to consider click-bait advertisements more factual than scientific papers).
I’m not totally against AI for analytics or research summation, but it has to be coded right, and that’s a much larger and more complex job than the AI companies seem to be doing. Hard facts must be indisputable, and a source that disputes them must be flagged as non-credible or outright rejected. The tools are there to somewhat accurately distinguish an opinion from a fact, but AI doesn’t differentiate credibility that way, when it probably should. But how does one code an AI to hold ALL the facts?
And how does a team of human software engineers keep up when what we thought to be scientifically factual changes with new experiments, discoveries, and disclosures?
“Five thousand years ago everyone knew the Earth was flat, five hundred years ago everyone knew the Earth was the center of the universe, and five minutes ago you knew that mankind was alone in the galaxy. Imagine what you’ll know tomorrow.” — K, Men in Black (taken from memory, so please forgive any errors)
When the same AI program can, based on your question prompts, in one instance tell you that socialism is bad, has always failed, and has never improved conditions for the people living under it, but after resetting it and asking new questions can be led to conclude that socialism is good, is the only path to true equality, and makes everyone happy everywhere and all the time, something is amiss.
Both instances have the same logical processes and “fact” databases and draw on the same Internet sources. The conclusion changes based entirely on the series of questions you ask, because how heavily it weighs contradictory sources depends on how you phrase your prompts.
And therein lies the rub, and why AI in its current form cannot be trusted for serious research.
Well said and absolutely right. It all depends on a subjective context, which has always been based on emotions and feelings. Rearrange the query, get what makes you feel good. It validates their personal life structure or, more accurately, the lack thereof.