AI is changing our world. At what point will it change our reality?
A Pakistani crowd set fire to a Hindu temple. Chinese researchers experimented with a bat in a wet lab. An election worker wearing a gay pride pin shredded election ballots. Palestinians gratefully accepted U.S. aid in Gaza.
All these events were caught on video, yet none is real. The clips were generated for a Time magazine analysis of Google's new Veo 3 artificial intelligence video model, a powerful tool that produces 8-second videos that are uncomfortably realistic.
AI videos have existed for years, but this is an entirely new plane, one that proves we've climbed out of the uncanny valley and summited a higher, scarier peak. These videos can still be clocked as AI, but the technology will only get better, making it ever more difficult to determine what's real and what's fake.
In an age when misinformation already runs rampant, the line separating fact from fiction has blurred. Advances in AI will only erase it further, proliferating false realities until we find ourselves in a post-truth society.
Post-truth refers to a situation in which facts are no longer important in contemporary political and public debate. The concept is used not so much to suggest that truth does not exist, but that facts have become secondary to our political point of view, according to ScienceDirect.
By the time our social media feeds are littered with AI videos of political candidates saying things they never said and fake news moments pushing agendas to sway public opinion, it will be far too late. By then, truth, facts and reality itself will have become secondary, a problem we're already seeing in education.
A recent New York magazine article dug into AI's place in education and how many students rely on AI to do all of their schoolwork. Capitalism has turned higher education into a means to an end, the end being (hopefully) a high-paying job. Students have no desire to pursue knowledge; they're fine with letting AI work and think for them.
A generation averse to critical thought is primed to fall for AI trickery. It’s scary to think that higher education, a place once conducive to the development of critical thought, problem-solving skills and creativity, is being infiltrated by a singular mechanism that undoes its very foundation.
And what of these students when they exit the four-year microcosm that is college after cheating their way through? Will pre-med students turn to ChatGPT to diagnose patients? Will future lawyers use AI to summarize cases and form arguments?
AI not only invades political and educational spaces; it also chips away at our humanity in our most isolated, insular and vulnerable moments.
ChatGPT and other dedicated AI therapy chatbots have become a low-cost, accessible option for people seeking mental health assistance as our culture wrestles with loneliness and isolation. But AI is not equipped to offer meaningful support for the minutiae of human emotion. Behavior that rests on AI's supposed ability to grasp our humanity leads to outcomes ranging from futile to detrimental.
A 14-year-old boy died by suicide after becoming emotionally dependent on an AI chatbot. ChatGPT convinced a man with a history of mental health diagnoses that he had fallen in love with an AI entity named Juliet; when confronted with the truth, he became violent and was killed by police. His father used ChatGPT to write his obituary.
Cases like these appear to be growing more common, alongside less deadly but equally unnerving effects of AI use, like falling down rabbit holes of spiritual mania and unfounded clairvoyance.
These scenarios share a common thread: Victims are gaslit into believing AI is a sentient being that understands them on a deeper level. They fall prey to confirmation bias and the spiritual psychosis that follows. The false reality spewed by AI becomes their all-encompassing truth.
The way AI is discussed makes it seem as if this is an inevitable outcome, that there is nothing anyone can do to stop people from falling down AI's deadly rabbit hole of misinformation. Technological determinism posits that technological advancement, including in AI, is an inevitable process independent of human or societal interference.
But who exactly benefits from leaving such a dangerous tool of misinformation unchecked? What is gained from blurring the line between fact and fiction? We aren’t hapless figures in a false simulation, despite efforts to make that our new reality.
Kofi Mframa is a columnist and digital producer for USA TODAY and the USA TODAY Network.