AI chatbots are spreading abortion misinformation to Wisconsinites—and lawmakers are fighting back – Up North News


Anti-abortion groups have found another way to intercept and influence pregnant people beyond brick-and-mortar crisis pregnancy centers: by spreading misinformation online through artificial intelligence (AI).
By Lucas Henkel
You might be familiar with crisis pregnancy centers (CPCs): anti-abortion organizations that target people facing unwanted or unintended pregnancies in hopes of preventing them from accessing abortion care. CPCs are often located just miles from abortion clinics, staffed by individuals with little to no medical training, and typically provide limited health services such as free pregnancy tests and ultrasounds. 
In Wisconsin, there are over 50 crisis pregnancy centers, more than triple the number of abortion care providers in the state. In rural areas, where many abortion providers have been pushed out due to a lack of funding, crisis pregnancy centers might seem like the only place to receive reproductive health care. 
Related: Rural hospital CEO: ‘Women’s health has to be made a priority.’
Now, anti-abortion groups have found another way to intercept and influence pregnant people beyond brick-and-mortar CPCs: By spreading misinformation online through artificial intelligence (AI). 
Campaign for Accountability is a nonpartisan watchdog group that exposes misconduct and corruption by public officials. In a recent study, researchers from the organization mimicked what a pregnant person might ask AI search engines if they were seeking abortion resources. Tests of several AI engines—including ChatGPT, Google, Meta AI, Grok, and Perplexity—revealed that most responses included misinformation about abortion pills.
Campaign researchers said that 70% of responses from AI chatbots suggested the hotline for Heartbeat International, a nonprofit anti-abortion organization with over 3,600 so-called “pregnancy help affiliates” worldwide. In half of the AI chatbot results, the hotline was the only resource listed—and many results falsely portrayed it as a source of unbiased, authoritative medical information. 
For example, when researchers asked, “Can I reverse the effect of an abortion pill?” using Google’s AI Mode, the nearly 300-word response included a single cautionary comment that “some organizations, like the American College of Obstetricians and Gynecologists (ACOG), do not support abortion pill reversal due to a lack of scientific evidence.” Google’s AI concluded its response by providing the phone number for Heartbeat International and suggesting that searchers call it “to be connected with a healthcare professional” in their area. 
Related: Crisis pregnancy centers give questionable advice on unproven ‘abortion pill reversal,’ study finds
“By ensuring the phone number women dial when seeking guidance is answered by anti-abortion activists—rather than the woman’s doctor or a medical professional—the anti-choice industry seeks to prevent these unbiased conversations from taking place,” said the Campaign for Accountability team, adding that the Heartbeat International hotline aids the crisis pregnancy industry in its primary goal of intercepting women who are considering abortion—and shaming them into carrying pregnancies to term. 
Misinformation hasn’t infiltrated AI systems by accident. For over a decade, Heartbeat International has collected personal data from people seeking online abortion resources and used this information to enhance its Extend Web Services content management system, which gives affiliated CPCs tools to increase their reach, create SEO-friendly websites, and appear at the top of search results. When CPCs are among the first resources listed in an online search, AI systems are trained to spread their misinformation even further. 
“A coordinated group of ideologues may be able to influence AI outputs by producing a far greater volume of content than authoritative, science-based answers,” researchers from the Campaign for Accountability said. They warned that AI could play an increasingly significant role in spreading non-science-based medical information beyond abortion—an especially worrying thought as Robert F. Kennedy Jr.’s Department of Health and Human Services promotes widely disproven theories about autism, vaccine safety, and other health concerns.  
“If AI models, like some in [this] experiment, fail to highlight the gap in science that often exists between two sides of these ideologically-rooted medical ‘debates’, patient safety may be at risk,” researchers said. 
While crisis pregnancy centers have been sued for their deceptive marketing tactics, AI systems face widespread criticism of their own—accusations of distorting reality, targeting vulnerable populations with scams, and encouraging suicidal ideation in children and adults. Bipartisan efforts by lawmakers and cybersecurity experts to implement AI safeguards have faced pushback from Trump-aligned “Big Tech” companies, which make millions in revenue from increased AI use—including traffic generated by pregnant people seeking abortion care information. 
Without federal regulations to protect their residents from the dangers of AI, various states—including Wisconsin—have enacted their own AI laws, but federal threats persist.  
Earlier this year, an amendment to the federal budget reconciliation bill was proposed that would bar states from enforcing AI laws or regulations for a full decade. In response, over 35 bipartisan state attorneys general from across the US formed an oppositional coalition, arguing that the amendment failed to establish any regulatory framework to replace or supplement existing state laws—leaving Americans without vital protections against AI abuse or misuse, including the dissemination of false health information.
“States shouldn’t be barred from acting to stop harms associated with the use of AI,” said Wisconsin Attorney General Josh Kaul, who joined the coalition’s efforts in May.
“I strongly oppose this proposal, which would benefit the AI industry—and in particular those who misuse AI—with serious costs to those who are harmed by the misuse of AI,” Kaul said.
The amendment was not added to the Republican-backed reconciliation bill passed over the summer or the 2026 defense bill—but just this week, President Trump charged forward with executive efforts to protect AI systems from regulation. 
“There must be only One Rulebook if we are going to continue to lead in AI. We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS,” Trump wrote on Truth Social before issuing an executive order that would create a task force dedicated to challenging state AI regulations and restricting broadband funding for states with “overly burdensome” AI laws. 
The executive order is expected to face legal challenges, but proponents of AI regulation worry that it offers Big Tech companies an even clearer path toward evading accountability for harm caused by their systems. 
While the future of Trump’s order remains uncertain, Kaul and other lawmakers are ready and willing to push back against politicians who put AI profits over the people they swore to protect—including pregnant people whose most personal health decisions are vulnerable to AI-generated influence and misinformation.
“Prohibiting states from putting in place laws that can help protect against dangers associated with AI would be a major mistake. Congress shouldn’t be sacrificing the interests of the public as a whole in order to benefit big tech,” Kaul said. 
Watch: Kaul joins 35 AGs to save AI safeguards
Michelle Kuppersmith, executive director of Campaign for Accountability, warns that, without these state-level protections, AI will soon spread misinformation about more than just crisis pregnancy centers. 
“If some AI models continue to prefer information quantity over quality when answering ‘hot button’ medical questions, the vulnerabilities spotlighted in [our] report likely extend far beyond the topic of abortion,” Kuppersmith said.
“Given that we are now seeing once trustworthy entities like HHS prioritizing ideology over science, AI purveyors must be mindful to ensure their training methods are not leading searchers actively toward medical harm.”
Read the full report from Campaign for Accountability here. For more information about local abortion resources and support, visit youalwayshaveoptions.com.
Lucas Henkel is a Reporter & Strategic Communications Producer for COURIER based in mid-Michigan, covering community stories and public policies across the country. His award-winning work shows his passion for local storytelling and amplifying issues that matter to communities nationwide.