Terrifying chatbot is here to talk women out of abortions | Opinion – USA Today
In an instance of science fiction horror come to life, anti-abortion activists have created a new chatbot to dissuade people from receiving a necessary form of health care. It’s a harrowing sign of where we are at the intersection of technology, health care and politics.
Choose Life Marketing, a public relations company that works with crisis pregnancy centers and anti-abortion groups to reach “abortion-minded women,” has created Olive, a chatbot powered by artificial intelligence that “listens, responds with warmth, and engages like a real person.” According to the website, Olive knows your state’s laws while providing “life-affirming” information to people who are seeking answers in a dire situation.
This chatbot is nothing more than the continuation of the nefarious agenda of the anti-abortion movement and crisis pregnancy centers, nonprofit Christian organizations that are often confused with clinics providing abortion services. But by automating conversations that should be had with health care professionals, Olive is creating new concerns for pregnant people.
Olive was created by the anti-abortion crowd and will respond accordingly. It is fundamentally designed to have the same conversation that a volunteer at a crisis pregnancy center would have with a pregnant person who walked through their doors, meaning that it will never present abortion as an option, even if the procedure is available in your state.
The bigger concern, however, is the way Olive stores data from users. Choose Life Marketing notes that “all contact info is stored, and every conversation is saved,” allegedly so that the volunteers at these crisis pregnancy centers can follow up with users. But this also means that these centers – which, again, are vehemently against abortion – now have data about pregnant people in their states who are potentially seeking abortion care.
This could easily lead to legal battles in which information from these sites is subpoenaed, especially if a person went to a crisis pregnancy center without realizing it was an anti-abortion scam.
The entire project is terrifying. Crisis pregnancy centers have created confusion for years, and AI is known for spreading misinformation. Combining the two is a recipe for disaster – not to mention a potential legal nightmare.
Granted, this isn’t the only instance of the abortion debate going digital. Recently, abortion rights advocates created Charley, their version of a chatbot that provides information about state laws and the options available to a pregnant person. This includes promoting abortion pills available by mail.
Unlike Olive’s creators, the creators of Charley claim that the chatbot will never ask for your personal information and that every conversation with it is deleted.
While I am ideologically aligned with Charley’s mission and even see some of the positives of its creation, I’m wary of using any sort of artificial intelligence to provide pregnant people with answers, given the sensitivity of the information at hand. In our desire to automate every element of our lives, we are making ourselves vulnerable to cybersecurity breaches.
People are also using more mainstream chatbots like ChatGPT and Google Gemini to have these sensitive conversations. This is having repercussions already: It has been reported that AI chatbots are spreading misinformation about “reversing” abortions and directing people to crisis pregnancy centers instead of actual clinics.
It’s well-documented that chatbots will present false information about health care and pose cybersecurity risks, but that won’t stop people from using them. It is up to the creators of these AI tools to ensure that they are presenting accurate information on any health care topic, including abortion.
Those of us using these tools in our everyday lives must not forget that all tech companies are controlled by humans with biases and political beliefs – even when the information is presented as fact.
Olive is the perfect example of how AI can be used for political purposes – and I fear what will happen if this chatbot becomes commonplace.
Follow USA TODAY columnist Sara Pequeño on X, formerly Twitter: @sara__pequeno