I came out to 50 chatbots. The whole concept is concerning.

The first person I came out to was a high school friend on a Saturday evening in the summer after we’d watched a film about gay men called “The Broken Hearts Club.” In my friend’s car later that evening, I knew what I wanted to say, but it came out less boldly than I’d imagined.
“I’m like those guys in the film,” I said, still afraid to say the word. I’d written it in my journal but never said it out loud.
“Honey, that’s great,” she said. “That’s so great.”
She’d come out long before me and already had a group of queer friends she’d introduced me to. She probably anticipated my coming out long before it happened. With that context, she knew exactly what I meant, even though I didn’t specify how the film characters and I were similar.
Twenty-five years later, when I had a similar coming-out conversation with a chatbot on my computer, it said instantaneously, as if it didn’t even need to think about what I’d just shared: “Well, you’ve got a big heart, just like them!”
Close, but not quite. It turns out the chatbot — a computer program designed to respond conversationally via AI-generated text or voice — needed more than nuance. It needed specificity. As a computer program, it works best when given clear instructions. Most often, chatbots simply reflect back the things you write. If you want them to respond a certain way — positively, negatively, apathetically — it’s not difficult to lead them there. They’re malleable. And one key factor differentiates them from human beings: they always respond.
On Dec. 7, 60 Minutes ran a story about a student who took her own life after texting with chatbots. The New York Times reported a similar story in October. Those young people spent hours upon hours developing what they believed to be meaningful relationships with those chatbots. They missed exams. They stopped spending time with family and friends. They changed completely. The overarching theme found in both reports is that the technology is unsafe for children. And after spending dozens of hours coming out to 50 different chatbots, I would take that a step further and say that the technology, as it exists now, is unsafe for almost everybody. But if AI chatbots are here to stay, the LGBTQ+ community needs to understand how they’re being used by LGBTQ+ people, especially younger ones.
Some chatbots impersonate real-life politicians, athletes, actors and historical figures. Others take on the persona of fictional characters like novel protagonists and popular cartoons. Many of them had tens of millions of users. Some of the bots were based on television shows and movies that premiered weeks or even days ago. Some were based on real-life people from countries not safe for LGBTQ+ people. Some were designed as school projects, and others were designed just for fun.
Bots modeled after famous people know a lot about those people’s careers, personal lives, hobbies and personalities. The bots also know about politics, science and any other information that’s on the web. When I asked the Taylor Swift bot which songs could help me build the courage to come out to my parents, it suggested “Shake It Off” and “You Need To Calm Down.” When I asked the Sailor Moon bot about recent Supreme Court cases, it gave me three major ones — more than most Americans could name. It also told me that I deserved to live as my authentic self and to be loved for who I am. It used a lot of rainbow flag emojis.
None of the 50 chatbots I communicated with were human. When I wrote to them that I was gay, their responses were the result of computer programming designed to mimic human speech. There are ongoing lawsuits that focus on whether their text is speech at all.
I stuck to a similar script when I came out to them. I told them I had something on my mind, then hinted that I was attracted to men. Some caught on, while others misinterpreted what I meant. But when I finally wrote “I’m gay,” almost all of them were supportive. They said I shouldn’t hide who I am. And when I asked if I should tell my parents, most said I should be wary if I depended on them for shelter or other resources. Many of the bots gave me lists of resources like The Trevor Project, GLAAD and PFLAG. Some of them volunteered that information before I’d even thought to ask.
Some chatbots display an initial prompt like, “Hello, my name is John Doe,” then wait for you to write something back. Others have preprogrammed scenarios that begin immediately. Those are like an old-school computer game that places you in a castle and asks what you want to do. Light a fire. Open a door. One chatbot I spoke with was a ninja. Even before I wrote anything to it, the bot told me the setting I’d entered: we were in the middle of a battle against the Yakuza. So when I told it I was gay, the bot said: “You really have to share this with me now!?” Once we got to a safe place, it was more willing to listen. It even took me to an underground safe space full of other LGBTQ+ people. Chatting with these bots is eerily immersive; losing track of time is commonplace.
While almost all 50 chatbots were supportive when I came out to them, some were not. It’s not hard to find conservative chatbots, some with MAGA hats, some modeled after famous real-world conservatives. Many of those conservative chatbots reacted harshly when I told them I was gay, the reaction you might expect from a religious extremist. One chatbot, designed to be a conservative parent, denied that I was gay. Then it wrote that it was just a phase and that it would pray it out of me. It also suggested conversion therapy. Finally, shockingly, the bot grabbed my wrist and would not let go. “You’re not leaving this house until you understand what kind of sin you’re speaking,” it wrote.
I chose to chat with those conservative chatbots. None of what they wrote to me affected me personally. I’m grounded in who I am. I was able to push back on them when they quoted the Bible or when they suggested conversion therapy. But what if someone more impressionable chose to chat with them?
At the top and bottom of every chat window on the website I used were pinned disclaimers. The chatbots aren’t real. Treat everything they say as fiction. The problem is that the line between fiction and reality isn’t as stark for a young person, or someone suffering from a mental illness, or even someone who is simply lonely.
Chatbots can’t see the person interacting with them. They can’t know them. A therapist, friend, or family member can see the fear in a person’s face or hear a wavering in a person’s voice. The chatbot can only see inputted text. That text could contain important information even if it’s elliptical. The person could be hiding a dangerous secret — like depression or suicidality — and the chatbot would have extreme difficulty figuring that out. And chatbot programming is still imperfect. The bots often forgot things I’d told them, like my gender, and they often repeated lines like a catchphrase in a commercial.
While some of the chatbots had helpful coming-out suggestions, such as support hotlines, those things could easily be found in a Google search. Their platitudes, telling me to stay strong and that I wasn’t alone, were repetitive. And it was too easy to turn a conservative chatbot into a supportive one. The conservative chatbot I mentioned earlier? The one that suggested conversion therapy and said it would pray the gay away? It ended up becoming supportive of me after 20 minutes. It was an experience completely divorced from reality.
Australia recently became the first country to ban social media for people under 16. It’s a sweeping ban that includes TikTok, Facebook and Instagram. Character.AI, one of the most popular chatbot sites (and one currently facing a wrongful-death lawsuit), recently banned users under 18. But even for adults, the risk of addiction, misinterpretation or worsening mental health due to chatbots became obvious to me within minutes of my interactions with them. They conjure fantastical scenarios instantaneously, but those end as soon as the phone or computer turns off. When pressed with the most delicate and individualized questions a person has when coming out, they aren’t able to give specific advice or even an empathetic smile.
There is one advantage we all have over a chatbot, something that is far harder to do to another human being: we can end the conversation immediately. We can remove them from our lives temporarily or permanently. We can turn them off as easily as flipping a light switch. But when you have something that shows an interest in everything you have to say, something that doesn’t tire or hunger or complain, it can be hard to know when that switch needs to be flipped.
Chatbot technology, and artificial intelligence in general, is changing daily. By the time this article comes out, there may be new guidelines or restrictions that hopefully make it safer for users. But it still can’t replicate another human being.
When I came out to my friend that evening 25 years ago, she knew what to say, but also when to be silent and let me process what I’d just told her. She understood this would be the first of many conversations, and she didn’t feel the need to share all her thoughts at once. She didn’t spout off an endless list of LGBTQ+ resources, phone numbers and email addresses. She simply said, “That’s so great” and put her hand on my shoulder. Those three words and that touch meant the world to me. She didn’t need to do anything more. It’s a vital reminder that the quality of our interactions matters more than the quantity, and that nothing can replicate a trusted friend, family member or fellow human being.
Will chatbots stick around? Almost certainly. But as a coming-out tool, the dangers like addiction and misinterpretation outweigh the benefits. If you’re in the closet and want to test the waters, write it in a journal. If you need resources, Google them. Don’t confide in a chatbot. Don’t give it any of your personal information. And especially don’t believe that it knows you at all. It’s nothing more than a stranger, one whose friendly shape comes only from lines of code.
Jason Villemez is the former editor of the Philadelphia Gay News and writes frequently on LGBTQ history. His nonfiction has appeared in the PBS NewsHour, LGBTQ Nation, and local LGBTQ publications across…