How LLMs and chatbot companions might be turning us all into the same person.


Reviewed by Margaret Foley
Boring people don’t listen. They tell their own stories, over and over, and never make any attempt to engage in our story, our lives. And so we avoid them.
AI, on the other hand, is a superb listener. So much so that people, particularly teens, are turning to chatbots for companionship. But in doing so, do we run the risk of all becoming the same kind of person, wanting the same kinds of friendships, with the same kinds of interactions? In a word, boring.
Maik Bieleke and Wanja Wolff capture this well in what they call “the boredom trap.” They point out that large language models (LLMs)—which drive both ChatGPT and companion bots—regress, by their nature, toward sameness. These models are built on predictive algorithms: they vacuum up enormous amounts of data (e.g., things actual people have written), run it through a black box to determine the likelihood that one word will follow another, and use those probabilities to churn out coherent speech. You see this in the text completion on your phone, and in general it works pretty well.
But as Bieleke and Wolff point out, this also trends toward monotony. Always choosing the high-probability option narrows our expressive range. Language is as valuable and amazing as it is in part because of its capacity for nuance. One of us had a French colleague who once asked, “Qu’est-ce que c’est, la différence entre ‘perhaps’ et ‘maybe’?” It’s tough to formulate a good answer, and maybe there isn’t one. But perhaps having the different options is what matters.
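To see why, a toy illustration may help. The sketch below is ours, not from Bieleke and Wolff, and it is vastly simpler than any real LLM: it just counts which word follows which in a tiny made-up corpus, then compares always picking the most probable next word against sampling in proportion to the observed probabilities.

```python
import random
from collections import Counter

# Toy next-word model: count which word follows which in a tiny corpus.
# A deliberately simplified stand-in for an LLM's predictive step,
# not how real models work under the hood.
corpus = (
    "perhaps we should go . maybe we should go . "
    "perhaps we could stay . maybe we might stay ."
).split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def next_word(prev, greedy=True):
    counts = follows[prev]
    if greedy:
        # Always take the single most probable continuation:
        # every run produces the same word, every time.
        return counts.most_common(1)[0][0]
    # Sample in proportion to the observed frequencies instead,
    # which keeps rarer options like "could" and "might" alive.
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print([next_word("we") for _ in range(5)])                # 'should', five times
print([next_word("we", greedy=False) for _ in range(5)])  # mixes in rarer words
```

Greedy selection collapses every continuation of “we” to “should”; sampling keeps “could” and “might” in circulation. Real models decode in far more sophisticated ways, but the pull toward the probable, and away from the rare, is the same.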
Relying on LLMs for scientific writing risks shaping our corpus of outputs in a narrow (and, we would argue, boring) way. But there may be an even bigger risk: leaning too heavily on LLMs may make us, ourselves, more boring.
Indeed, recent evidence suggests we might be in a feedback loop with the LLMs we’re starting to rely on more and more. Hiromu Yakura and colleagues analyzed the outputs of these LLMs and compared them to natural speech. It seems we’re starting to incorporate into our own speech many of the buzzwords favored by AI language models. Words like “delve,” “boast,” and “swift,” for some reason, are in vogue for ChatGPT right now, and we’ve started using them more. We’re becoming more like chatbots. As Yakura and colleagues point out, this “closed cultural feedback loop” runs the risk not merely of being boring, but of eroding our vast array of unique, diverse linguistic expressions. The move toward monotonous, boring sameness could be a culture killer.
But surely what we get from chatbot companions can’t be all bad? In these times of increasing loneliness and challenges to mental health, chatbot companions might fill a hole we can’t otherwise easily fill. There may even be ways in which chatbots work better than humans.
Dariya Ovsyannikova and colleagues recently examined people’s perceptions of empathy when delivered by either a trained mental health professional or a chatbot. They found that chatbots were rated as more empathic than the trained professionals we normally turn to. The AI was rated as more compassionate even when people knew they were looking at responses from a chatbot.
But is this really where we want to go as a society? Is this really a viable solution to rising levels of loneliness?
Our friendships, and our therapists for that matter, are not there merely to supportively and sycophantically reflect what we want to hear. Relationships, big and small, thrive on some amount of tension: the push-pull of human interaction that helps us grow is also what makes those interactions more engaging and more meaningful. If we devolve into a species reliant only on LLMs for companionship, we run the risk of never growing, never challenging our own worldviews. Listening only to the monotonous reflections of a system designed to make us feel good could also make us eerily similar to one another or, at the very least, insular and unwilling to see the world through others’ eyes.
Indeed, Myra Cheng and colleagues recently showed exactly this. In their work, AI chatbots affirmed the human user’s actions 50 percent more often than other humans normally would. Worse, all this unconditional affirmation made people less willing even to try to repair a conflict they had with someone else. Participants also trusted the AI model more, liked the sycophantic responses, and were more willing to use the model again.
What this all suggests is that we run the risk of narrowing our sphere of experience to only what makes us feel good, ultimately reinforcing the feeling that we’re always right, which clearly isn’t true. Not only is that boring, but it’s potentially destructive.
The trend toward AI sameness will be pervasive. Not just our technical writing and our interactions with each other, but our creative expressions more generally run the risk of becoming predictable and boring. When AI image generators use monstrous amounts of human-created art to drive their own visual outputs, they are not only stealing the work of those creators but also driving an anti-creative trend toward monotony. Like the dilution of our verbal expressions, this has real impacts on cultural diversity.
Maybe when writing for technical audiences and purposes, or when editing work not written in your own mother tongue, or for any number of other practical uses, LLMs and their trend toward sameness will not pose insurmountable challenges; they have clearly proven useful to date. Indeed, Bieleke and Wolff admit to having used an LLM as a drafting aid for their article critiquing the use of LLMs in scientific writing.
But as we try to navigate the rapidly expanding influence of LLMs in daily life, it is worth asking some important questions. If we feel our lives are becoming boring and lacking in meaningful connections, perhaps we need to seek out those connections in all their unpredictable, messy, and at times tense glory. And in expressing how those complex interactions make us feel, we may want to cultivate and protect diversity over sameness.
No LLMs were used in the making of this blog.
James Danckert, Ph.D., is a cognitive neuroscientist at the University of Waterloo. John Eastwood, Ph.D., is a clinical psychologist and an associate professor of clinical psychology at York University. They are the authors of Out of My Skull: The Psychology of Boredom.