ChatGPT could be permanently changing how we speak – Digital Trends
Just a few weeks ago, a paper from MIT stirred widespread debate about how using AI tools like ChatGPT affects our cognition and critical thinking. The findings, though striking, were not entirely unexpected. All the way back in 2023, Jeff Hancock, Professor of Communication at Stanford, described how AI chatbots had already changed how we perceive language and its emotional impact.
“What we found is that even if you don’t use those AI-generated responses, they influence how you think,” he said, citing the example of smart reply templates in emails and messaging apps. Two years later, it seems that the signature language patterns of large language models such as ChatGPT are becoming a part of natural human conversations, too.
The latest findings come courtesy of research from Florida State University, which notes that "AI buzzwords overused by chat-based large language models aren't just suggested on screens anymore — they're frequently showing up in how people actually speak." The team focused on a total of 20 words, and even I am surprised that at least two of them now appear regularly in my own work.
The team studied what they refer to as lexical trends before and after the release of ChatGPT, and discovered a convergence between the words people choose in their day-to-day conversations and those frequently used by AI chatbots. These words, such as "intricate" or "delve," have come to be known as AI buzzwords and commonly appear in content generated by AI chatbots.
The team assessed over 22 million words of spontaneous, unscripted conversation, drawn from podcasts covering technology and science-related topics. The researchers studied 17 well-known audio shows, including BBC Curious Cases, Lex Fridman, and the Big Technology podcast.
A noteworthy trend observed in the research was that AI buzzwords are appearing in human conversations in their exact form rather than as synonyms, which strengthens the belief that AI chatbots are influencing our speaking behavior. But what's the bigger picture here?
If you've been following the AI discourse, or simply interact with AI chatbots such as ChatGPT or Gemini, you will notice that they use a few words quite frequently. There's a discernible pattern that is easy to spot, such as being more verbose than casual human conversation, or overusing em dashes in text.
"What stands out is the breadth of change: so many words are showing notable increases over a relatively short period. Given that these are all words typically overused by AI, it seems plausible to conjecture a link," notes Tom Juzek, a professor of computational linguistics at Florida State University who led the latest research.
The researchers found that words that are otherwise usually seen in academic work are popping up in human conversations, with some of them appearing with nearly double the usual frequency. The research poses the question of whether AI chatbots are having a “seep-in effect.”
More importantly, if AI buzzwords are already appearing in real human conversations, the researchers wonder how the inherent biases of an AI chatbot will affect human behavior in the long run. The paper is the first to detail how the widespread use of AI tools such as ChatGPT is affecting conversational human language.
Now, before you conclude that "whoever uses these words must be talking too much with an AI," there are a few aspects of this research to keep in mind. The team makes it clear that this is a case of "observed overlap in affected vocabulary," not proof of a definitive AI-induced language change.
The paper, however, warns of a "novel and potentially profound shift in the trajectory of language evolution" if the preference for AI buzzwords keeps spreading in human communication. In the long run, the bigger risk is that the biases of an AI chatbot could subconsciously creep into human conversations, even when they don't reflect the speaker's original thought process.
There is certainly precedent for that, as multiple cases have emerged where AI conversations have sent a person down a genuinely dangerous path, leading to health emergencies and serious self-harm. The overarching takeaway of the research is that AI is not only morphing our language, but also how we study language itself.
Titled “Model Misalignment and Language Change: Traces of AI-Associated Language in Unscripted Spoken English,” the peer-reviewed paper will be published in the AIES Proceedings journal.