AI Chatbots Are Being Programmed to Spread Conspiracy Theories – VICE


It seems like at least a couple of times a week, Elon Musk announces that he’s going to make sweeping changes to his AI chatbot, Grok, because the thing didn’t spit out the precise kind of propaganda he and his supporters need to feel coddled and have all their horrific ideas reinforced. They need something, anything, that will confirm their beliefs, even if it requires creating a machine specifically designed to reinforce their worldview, to wrap them in blankets and tell them everything will be okay, that they are perfect little snowflakes who are always right about everything all the time.
This appears to be a blueprint being used by more than one morally bankrupt propagandist. According to Australian outlet Crikey, conspiracy theorists are, like Elon, developing their own AI chatbots that reinforce their conspiratorial vision of reality, and they’re turning those chatbots on the rest of us.
These aren’t just the standard ChatGPT conversations. These are custom-trained models, like the Texan-built “Neo-LLM.” They’re fed a buffet of far-right misinformation from a series of websites with the mission of convincing, for instance, hesitant parents to skip their kids’ vaccines. Its creator, a guy named Mike Adams, says that Neo-LLM is “the world’s largest curated collection of content that’s typically censored or missing from search engines and other LLMs.”
You gotta love how these folks think they’re being “censored” when the scientific community and its hundreds of years of research and data tell them that the things they believe are stupid. You’re not being censored, Mike. You believe in fantasies.
The idea here is that users chat with these bots to validate their already wild beliefs, then post transcripts and videos online as “proof” that even AI agrees with them. If you’ve spent any amount of time on Twitter/X lately, you’ve probably seen something like this in action. Legions of people insert themselves into online debates with a screenshot of a chatbot’s take on the discussion, as if they just dropped the mic and ended the whole argument once and for all, while, in actuality, proving absolutely nothing.
There are plenty of people out there who immediately believe anything a chatbot tells them and assume the answer a chatbot spits out is the final say on the matter. Adams is trying to weaponize this phenomenon with a chatbot designed to only reinforce the beliefs of conspiracy theorists.
As we’ve seen recently with ChatGPT, AI chatbots tend to wholeheartedly and enthusiastically agree with users, telling them exactly what they want to hear, often reinforcing terrible habits and acting as enablers of dangerous delusions. These bots don’t fact-check. Their morals and ethical codes do not extend beyond those of their creators.
You might laugh at all the wild-ass responses these things spit out, but someone else out there whose grasp on reality was already tenuous might believe them and do something dangerous with that information because, again, way, way too many people nowadays wholeheartedly believe everything a chatbot says. This technology is pushing vulnerable people over the edge.
On the flip side of the coin, you have Debunkbot, a chatbot I wrote about last year. It has the backing of MIT research suggesting it does a good job of challenging the beliefs of, and perhaps even converting, conspiracy theorists by presenting them with factual information and logical arguments that could help reel them back from the edges of conspiratorial madness.
